6 The Urban Problem at the Interface: Reading Governance

Civic feedback systems have received considerable attention as instruments of accountability, data proxies for studying urban phenomena, tools for civic engagement, forces of anti-corruption, and conduits for participatory infrastructure governance (Desouza and Bhagwatwar 2012; Gordon and Baldwin-Philippi 2013; O’Brien, Sampson, and Winship 2015; Zinnbauer 2015). In addition to allowing citizens to have a say in which issues should be addressed by local government, they provide tools for reading a city through instantly published reports that can be aggregated and analyzed. Given the complexity of infrastructure governance, one might wonder how such simple feedback mechanisms can be expected to accomplish so many things.

This chapter investigates how the different premises of infrastructure governance are enacted in the design of citizen feedback mechanisms and how this design influences users, who can include city agencies as well as residents. It looks at the assumptions embedded in the design of citizen feedback systems, assessing how these decisions shape the interaction between the citizen and the city. It makes a case for critically scrutinizing the political nature of these interfaces and argues that technical standards, protocols, and applications should be considered critical components for democratic discourse.

Analyzing reports submitted in the metropolitan area of Boston over four years between 2010 and 2014, this chapter compares the design aspects of two citizen feedback systems, CitizensConnect1 and SeeClickFix. While the two systems are very similar in functionality and purpose, and their reported issues pertain to the same municipal departments, they have different histories: the former is an initiative by the city of Boston, while the latter is a private product. Because their only visible differences are the interface and the community of users, the two systems offer an ideal opportunity to study how interfaces and the system legibility they afford shape the interaction between citizens and local governments. The first part of the chapter focuses on questions of categorization—how different cities and their constituents frame urban problems.2 The second part investigates the design paradigms and their assumptions about the user that guide design decisions for civic feedback systems.3

CitizensConnect

Since 2008, Boston has operated what it describes as a constituent relationship management (CRM) system—a database framework for managing complaints and requests submitted through the city hall’s telephone hotline. In 2009, based on an initiative by its Office of New Urban Mechanics in collaboration with the mobile startup ConnectedBits, Boston was among the first cities to launch a smartphone application that allows people to submit reports directly from incident locations. Reports can be submitted anonymously and are referenced by the CRM with a case number that allows reporters to follow up on their resolution. Requests are made publicly accessible through the city’s open data portal. CitizensConnect further supports the Open311 standard, launched in the same year as an interoperability initiative between different cities.4 CitizensConnect represents a civic feedback system from the perspective of the government, enabling a direct connection to the municipal service providers. Through its simple and straightforward design, CitizensConnect has proved itself to be an exemplar of a successful mobile reporting system managed by a municipality.
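The mechanics of the Open311 standard can be sketched briefly. Under the GeoReport v2 specification, a client first fetches a city’s service list (GET <baseurl>/services.json) and then submits a report (POST <baseurl>/requests.json) referencing one of the listed service codes. The following Python sketch parses a service list and assembles the fields of a report; the service codes, names, and sample payload are invented for illustration and do not reflect Boston’s actual categories or endpoint.

```python
import json

# Minimal sketch of an Open311 GeoReport v2 client, offline: it parses a
# service list (the payload a city would serve at GET <baseurl>/services.json)
# and builds the form fields for POST <baseurl>/requests.json.
# All codes, names, and descriptions below are invented for illustration.

SERVICES_JSON = """
[
  {"service_code": "POTHOLE", "service_name": "Pothole",
   "description": "Pothole in a city street", "metadata": false,
   "type": "realtime", "group": "Streets"},
  {"service_code": "GRAFFITI", "service_name": "Graffiti Removal",
   "description": "Graffiti on public property", "metadata": false,
   "type": "realtime", "group": "Sanitation"},
  {"service_code": "OTHER", "service_name": "Other",
   "description": "Anything not covered above", "metadata": false,
   "type": "realtime", "group": "General"}
]
"""

def service_catalog(payload: str) -> dict:
    """Map service_code -> service_name from an Open311 service list."""
    return {s["service_code"]: s["service_name"] for s in json.loads(payload)}

def new_request(service_code: str, lat: float, lon: float,
                description: str) -> dict:
    """Build the form fields for POST /requests.json (GeoReport v2)."""
    if service_code not in service_catalog(SERVICES_JSON):
        raise ValueError(f"unknown service_code: {service_code}")
    return {"service_code": service_code, "lat": lat, "long": lon,
            "description": description}

report = new_request("OTHER", 42.3601, -71.0589,
                     "Trash bags left on sidewalk, not a pickup day")
print(report["service_code"])  # -> OTHER
```

Because the standard fixes only the protocol, not the categories, each city’s `services.json` answers the question “what is an urban problem?” differently, a point the next section takes up.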

SeeClickFix

The second popular system that Bostonians can use, SeeClickFix, is a website and smartphone app released in 2007 and 2009, respectively (SeeClickFix 2007; Berkowitz 2009). Unlike NYC311 and CitizensConnect, SeeClickFix originated from a private social accountability initiative meant to improve communication between citizens and local government. It was reportedly inspired by founder Ben Berkowitz’s frustration at trying to get graffiti removed from a neighboring building in his hometown of New Haven. In his words, “At first, we thought of calling it Little Brother, like ‘Little Brother is Watching,’ but then we realized we needed to be a bit more kind to government” (Roth 2009).

SeeClickFix is built around the idea of control from below. As in most accountability initiatives, collective action is a central instrument for creating public pressure. Citizens can create “watch areas” and assign them to public officials, who would then automatically receive all reports with or without their consent. In an ingenious second step, the startup then pursued service contracts with municipalities, offering to integrate SeeClickFix with the cities’ internal operations and provide tools for managing and scheduling service requests (Harless 2013). Since 2011, SeeClickFix has been integrated with Boston’s CRM, and in 2012, the Commonwealth of Massachusetts announced a collaboration under the name Commonwealth Connect with the company and Boston’s Office of New Urban Mechanics (SeeClickFix 2011; State of Massachusetts 2012).

10453_006_fig_001.jpg

Figure 6.1 Most frequently used service categories in 569 U.S. cities using the Open311 standard, May 2015.

What Exactly Is an Urban Problem?

What constitutes an urban problem beyond common nuisances like streetlight outages, potholes, or litter? CitizensConnect and SeeClickFix are just two examples of many similar projects connected through the Open311 standard, all of which answer this question differently. The question turns on subtle differences in terminology. The Open311 standard does not offer default categories for logging issues. Neither do commercial systems such as SeeClickFix or CitySourced. Boston uses “service requests” and “reports” interchangeably, while other cities draw a sharp distinction between the two. New York’s NYC311 reporting system uses the word “complaints,” while SeeClickFix speaks more neutrally of “issues.” Baltimore provides separate categories for City Employee Praise and City Employee Complaint. Notably, only Bloomington, Indiana, offers a category explicitly called “suggestions.” Comparing the categories of over five hundred cities reveals a broad range in their number, specificity, and purpose (figure 6.1).

When launching a citizen reporting system, a city faces the task of routing the submitted reports to the correct recipient and dispatching work orders. In 311 call centers, an operator assigns the caller’s concerns to an internal category. When using a digital interface, the citizen has to select a category from a predefined list of options. Ideally, the service categories listed in the interface reflect the city’s own taxonomy. But this approach can quickly become impractical and opaque for an outside user. At some point during 2014, the city of Toronto offered five service categories for reporting graffiti: Graffiti on a City road, Graffiti on a City bridge, Graffiti on private property, Graffiti on a City sidewalk, and Graffiti on a City litter bin. Daniel O’Brien of the Boston Area Research Initiative notes that the city internally defines a single broken streetlight as an “outage,” but four broken streetlights in a row is a “large system failure.”5 While these distinctions help resolve departmental responsibilities, citizens read landmarks differently, use different concepts to describe issues, and have no insight into how each department categorizes its responsibilities. The physical affordances of the device also play a role. One hundred categories may be manageable on a desktop monitor but unusable on a smartphone touchscreen.
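The gap between a citizen-facing category and a city’s internal taxonomy can be pictured as a two-layer mapping: one external category fans out to several internal work-order types, each owned by a department, with an operator or triage rule mediating between the two. The Python sketch below is purely illustrative; the category codes and department names are invented and do not correspond to Toronto’s or Boston’s actual classifications.

```python
# Hypothetical sketch of the mismatch between citizen-facing categories and a
# city's internal work-order taxonomy. All codes and department names are
# invented for illustration.

# One external category fans out to several internal work-order types, which
# is why an operator (or a triage rule) sits between the app and dispatch.
EXTERNAL_TO_INTERNAL = {
    "Graffiti": [
        "GRAF-ROAD",      # graffiti on a city road
        "GRAF-BRIDGE",    # graffiti on a city bridge
        "GRAF-PRIVATE",   # graffiti on private property
    ],
    "Streetlight": [
        "SL-OUTAGE",          # a single light out
        "SL-SYSTEM-FAILURE",  # several adjacent lights out
    ],
}

INTERNAL_TO_DEPARTMENT = {
    "GRAF-ROAD": "Transportation",
    "GRAF-BRIDGE": "Transportation",
    "GRAF-PRIVATE": "Inspectional Services",
    "SL-OUTAGE": "Street Lighting",
    "SL-SYSTEM-FAILURE": "Street Lighting",
}

def triage(external_category: str, internal_code: str) -> str:
    """Validate a triage refinement and return the responsible department."""
    valid = EXTERNAL_TO_INTERNAL.get(external_category, [])
    if internal_code not in valid:
        raise ValueError(f"{internal_code} is not a valid refinement "
                         f"of {external_category!r}")
    return INTERNAL_TO_DEPARTMENT[internal_code]

print(triage("Graffiti", "GRAF-PRIVATE"))  # -> Inspectional Services
```

Exposing the internal layer directly in the app would present the citizen with dozens of near-duplicate options; hiding it entirely pushes the classification work back onto city staff. The design question is where along this mapping the citizen’s choice should stop.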

Based on technological changes and the feedback from users, cities continually refine the categorizations presented to users. Until 2013, the NYC311 desktop interface used a structured menu of roughly eighteen hundred different service categories. The smartphone app offered a similarly high number, forcing users to traverse a hierarchy of services to find a suitable category. A year later, the NYC311 app provided a much shorter list of twenty general categories. The desktop interface now guides the user through multiple steps, offering additional information and instruction.

Unlike the highly specific NYC311, which covers a broad range of services, CitizensConnect initially offered only three case types: potholes, graffiti, and streetlight problems. Since then, the Office of New Urban Mechanics has faced constant demand from users and city departments to expand the categories. Resisting the idea of overly specific categories, it introduced an Other category that covered any type of concern.

Just as external categories are constantly refined, internal categories are frequently modified, subject to what Geoffrey Bowker and Susan Leigh Star describe as the “practical politics of classifying and standardizing” (Bowker and Star 1999, 44). In this process, citizens instigate changes to the taxonomy. As one analyst involved in NYC311 recounted, citizen requests led to internal discussions such as how deep a pothole has to be to become the responsibility of the Department of Sanitation as opposed to the Department of Transportation.6 By resolving citizen requests, the departments renegotiated their boundaries and relationships.

A similar reversal took place with Boston’s CitizensConnect due to its simplicity and the mobility inherent in a smartphone app. Noticing that city employees often used the citizen app themselves, New Urban Mechanics created a version for city workers. Adapting the city’s service categories for effective use in the field turned out to be a complex usability issue, requiring careful calibration of internal categories to the app’s interface.

The question of standardizing the definitions of urban problems has frequently been discussed on the Open311 developer mailing list. Standardization would allow citizens to report issues regardless of administrative boundaries. For example, the greater Boston area includes the nearby municipalities of Brookline, Cambridge, and Somerville, which independently established their own reporting systems with their own apps, websites, and service categories. In a continuous metropolitan area, however, most residents are not aware of where one city ends and another begins. A program manager from Massachusetts noted on the Open311 mailing list:

The challenge is that every municipality thinks about each issue differently, including prioritization, sub-categories, who’s responsible for service delivery, etc. … While pretty much everyone deals with streetlight outages, potholes, and missing street signs, there is no common understanding of what each encompasses. Any vendor with a standard, fixed classification would be setting themselves up for a struggle to convince a municipality to abandon their existing classifications (no matter how informal). I’m not saying it couldn’t be done, but it would be a struggle. (Heatherley 2012)

Citizens often have little patience for explanations that the issue does not fall under the city’s responsibility because it is on private land, on a federal highway, or just outside the city’s boundary. Cities mitigate these issues differently. Boston forwards outside requests to the relevant municipality or federal or state agency. The location-independent SeeClickFix app dynamically adjusts service categories and recipients based on user location. By offering geolocation tools and interfaces to third-party services like Twitter, software platforms are more malleable compared to the interfaces of physical infrastructures with baked-in standards. The consequence of this malleability is a constant renegotiation of interfaces between cities, city departments, companies, and individuals, including the occasional breakdown.

10453_006_fig_002a.jpg10453_006_fig_002b.jpg

Figure 6.2 Latent topics (selection) within the general “Other” category, Boston CitizensConnect, probabilistic topic models, figure by the author.

The “Other” Issues—Implicit Themes in the General Category

As of early 2015, around one thousand cities in the United States accepted digital citizen reports. Larger cities tended to offer more service categories while small towns frequently used a single catch-all category. Most cities offered between three and twelve categories (including “Other”) with the median at seven. In the case of Boston, the majority of reports submitted to CitizensConnect are categorized as Other. Is this a failure? Are the categories offered by the city inadequate for capturing the citizens’ perceptions of issues? Or is this a desirable feature for keeping reporting informal?

One approach to investigating these questions is to examine whether the reports submitted in the general category contain salient, recurring themes that might as well be grouped into their own service category. Between September 2010 and August 2014, over forty thousand reports were submitted under the Other category via the CitizensConnect smartphone app in Boston. Methodologically, several approaches are available for identifying themes in large collections of unstructured text documents such as citizen reports. Grounded theory offers a systematic, iterative approach to developing conceptual models through qualitative comparative text analysis (Glaser and Strauss 1967). However, it would be very difficult to analyze forty thousand documents manually in this way, and a small random sample might not be sufficient to account for variations over time and the relative differences in the saliency of the identified themes. To supplement my qualitative analysis based on the grounded theory approach, I decided to use an unsupervised machine learning technique called probabilistic topic models (Blei 2012). In this context, “topics” are lists of words inferred from patterns of word co-occurrence, building on an assumption pioneered by latent semantic analysis (LSA) (Deerwester et al. 1990): that words appearing in close proximity across multiple documents indicate related meanings. Topic models are based on the assumption that collections of text documents contain multiple salient themes. For example, the archive of the New York Times might contain topics such as baseball, finance, or the conflict in the Middle East (Blei 2012).

In the context of probabilistic topic models, a “topic” is a collection of terms such as {red, traffic, accident, light, car}, which is likely to refer to reports about (potential) accidents and traffic lights. Meaning is expressed strictly in the relations between words, not in the terms themselves. For instance, the terms appearing alongside “park” likely resolve whether it refers to a public garden or a stationary vehicle. As with any machine learning technique, the resulting topic models have to be taken with a grain of salt. Not every discovered topic represents a meaningful theme. Meaningful themes may be split across multiple topics, or a single topic may combine multiple unrelated meanings (Schmidt 2013).
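The co-occurrence intuition behind these models can be illustrated with a toy sketch. The Python fragment below is not a probabilistic topic model (a real analysis would use an LDA implementation such as those in gensim or scikit-learn); it merely counts which word pairs recur across reports, which is the signal such models exploit. The example reports are invented.

```python
from collections import Counter
from itertools import combinations

# Toy illustration of the co-occurrence intuition behind topic models: words
# that repeatedly appear together across reports are likely related. This is
# NOT a probabilistic model (LDA); a real analysis would use a library such as
# gensim or scikit-learn. The reports below are invented.

REPORTS = [
    "car parked blocking fire hydrant",
    "car illegally parked blocking lane",
    "trash bags left on sidewalk",
    "garbage and trash bags on sidewalk again",
    "parked car blocking driveway",
    "overflowing trash barrel on sidewalk",
]

STOPWORDS = {"on", "and", "again", "the", "a"}

def cooccurrences(reports):
    """Count how often each pair of words appears in the same report."""
    pairs = Counter()
    for text in reports:
        words = sorted(set(text.split()) - STOPWORDS)
        pairs.update(combinations(words, 2))
    return pairs

# Pairs such as ('blocking', 'parked') and ('sidewalk', 'trash') top the
# counts, hinting at a "parking" theme and a "garbage" theme.
for pair, n in cooccurrences(REPORTS).most_common(3):
    print(pair, n)
```

Even at this toy scale, the two latent themes (parking and garbage) emerge without any predefined categories; a probabilistic model generalizes this idea to thousands of documents and fuzzy, overlapping topics.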

The most frequently identified topic {parked, parking, cars, blocking, lane, car, fire, illegally} concerns issues of cars obstructing and blocking traffic. Example reports include “Bus blocking fire hydrant and street. Issue not resolved” or “Driver parked several feet off curb, obstructing traffic.” Another top-ranking topic concerned vehicles parking in residential areas without a permit. The second most popular topic {trash, garbage, street, sidewalk, left, bags, week, days} concerned garbage bags left on the sidewalk; for example, one citizen reported that “[redacted] has dumped many bags and loss (sic) bit of trash on street. This is not a trash pickup location and it is not trash day. This is a frequent issue with this address please cite them. Mission Hill is not a dump.”

Other garbage-related topics include overflowing public waste bins and concerns about rodents (“neighbor across the street is continuously dumping rice and other food here, attracting rats, mice, and other pests”). Dangerous traffic situations are also frequently reported, as well as issues of overgrown weeds and fallen trees. Other salient themes include noise (“huge backyard student party at [redacted] VERY loud, underage?”) as well as dog owners and homeless people. The topic {illegal, park, Segway, tour, city, gliders, plaza, hall} is an interesting case that captures a group of reports from a coordinated protest against a Segway tour operator: “Illegal Boston Gliders. Boston by Segway tour monopolizing Long Wharf pedestrian park.”

How do the identified themes compare to the established categories? When run across all reports, not just those filed under “Other,” the algorithm correctly identified themes that correspond to the specific categories the respective reports are associated with. Analyzing the Other category separately revealed a high incidence of reports related to garbage and parking, which were reported more frequently overall than issues assigned to specific categories such as “damaged sign.” Why does the city define a category for signs, but not one for parking or garbage? One answer might be that the city did not know better; in fact, a trash category was added shortly after this analysis. However, there is also an important argument for why the city would offer categories for relatively unpopular issues while excluding issues reported frequently. A damaged sign is fairly straightforward to fix, while issues related to garbage require closer examination. Other matters such as traffic violations are not the responsibility of public works departments, and the city has good reason for not offering such categories.

How Classification Shapes Interaction

How do the categories offered for use influence the kinds of issues reported? Categories do not just organize information; they also encourage particular types of reports and discourage others. As Bowker and Star observe, “Each standard and each category valorizes some point of view and silences another” (Bowker and Star 1999, 5). Categories can nudge users toward submitting reports that are, in managerial parlance, more actionable. A complaint about a damaged sign can be added to a work queue immediately. Complaints that a park should look prettier or that the streets should be cleaner might be better suited for starting a broad conversation.

A few hints exist about how the choice of categories influences reporting. In September 2011, SeeClickFix announced a partnership with the city of Boston with the goal of integrating the SeeClickFix service into the city’s CRM. As a result, the SeeClickFix interface adopted the same categories offered in CitizensConnect. Until then, SeeClickFix did not prescribe categories, and users could freely choose how to categorize an issue. Before the integration, graffiti was the subject of less than 1 percent of all submitted reports. After graffiti became a category, the proportion of graffiti reports rose to approximately 4 percent, which is still much lower than the 17 percent reported to CitizensConnect (Offenhuber 2014). Notably, the city of Boston has operated a Graffiti Busters program with the support of volunteers since the late 1990s.

In designing categories for a feedback app, a city has to resolve the trade-off between using categories that reflect its internal operation and finding those that capture user perceptions. Going with the former lowers the friction of interpreting results and allows the city to be more responsive. Choosing the latter path can mean using no categorization or adopting categories created by users, known as folk-categorizations (Bowker and Star 1999, 59) or folksonomies (Voss 2007). However, the divide between the reports from citizens and public officials is less clear than it might seem. Many city employees use CitizensConnect in their daily inspections, and many citizens possess professional expertise in maintenance and repair issues.

The ambiguities inherent in defining categories are not necessarily a negative quality. They can encourage personal engagement and emphasize existing uncertainties (Gaver, Beaver, and Benford 2003). Commenting on their resistance to creating overly specific categories and incorporating all user suggestions, New Urban Mechanics noted that the “feature creep” of constantly adding new categories and functionalities ultimately diminishes a tool’s usability.7

10453_006_fig_003.jpg

Figure 6.3 Screenshot of two smartphone civic-issue trackers used in Boston, both 2011 versions. Left: SeeClickFix (notice buttons “neighbors” and “my profile”), right: CitizensConnect.

Design Paradigms of Feedback Systems

Categorization in a reporting system is just one means through which design influences interaction and data collection. Visual languages, system architectures, and the functionalities of the interfaces are equally important. Despite their different histories and goals, CitizensConnect and SeeClickFix have a remarkable number of similarities, which they share with the growing number of reporting platforms. Almost all systems offer smartphone apps with the same basic set of functionalities for submitting geocoded images and descriptive messages. All systems include mechanisms for tracking submitted requests and receiving responses. Frequently, users are able to browse other reports by time or location on a map. Despite these similarities, important differences remain that can be summarized in two different design paradigms that I will call the “direct route” and the “community-centered” approach.

The direct route model is characterized by restraint. The interfaces are limited to essential functions, and categories are fixed. The focus is on the submitted issue, not the reporter, who remains anonymous. Communication channels are one to one between the user and a government recipient. Direct communication among citizens is rarely offered. With some exceptions, the direct route design is favored in city-developed systems focused on service delivery, with reporting categories closely corresponding to city services.

The community-centered model offers a rich palette of tools for many-to-many communication, aiming to foster a community of users who can evaluate and comment on other requests, even reopen issues that have been closed by the city. Users are encouraged to create self-descriptive profiles, often using pseudonyms. As in online forums, registered users are acknowledged for their contributions through a reputation system. Categories are less fixed, and users often have the opportunity to create their own or challenge existing categories.

The community-centered approach is more common in nonmunicipal systems focused on social accountability, such as SeeClickFix or the open source platforms Ushahidi and FixMyStreet (Okolloh 2009; King and Brown 2007). Lacking endorsement by the city, these approaches rely on an active community to attract participants and strengthen the group’s voice. A purpose secondary to reporting civic issues is to increase participation and encourage coordination through explicit and implicit channels. The many-to-many discussion resembles a town hall meeting more than a request for service.

Community-centered goals are not unique to volunteer-driven systems. Cities too are interested in encouraging participation and engaging citizens in infrastructure as a common good. However, their agenda is different, and ethical questions arise. According to Nigel Jacob from New Urban Mechanics, it is not appropriate for local governments to directly engage in building communities; their role is to listen to communities’ concerns. Furthermore, when public services are involved, discussions are never open ended; there is always a filter of which issues are relevant in the context of urban maintenance. He frequently receives suggestions such as allowing people to vote on priorities, an idea the office resists.

Vibrant many-to-many conversations also present an interface problem. A mobile interface limits what can be accomplished through the size of the screen, the methods of user input, and the ability and willingness of users to learn complex interfaces. These constraints differ for private initiatives and municipalities that need to integrate their interfaces with an existing information infrastructure subject to legacy standards and historical contingencies.

The audiences for municipal and volunteer-driven systems are in many respects different. While social accountability initiatives seek to reach like-minded people who are willing to engage with a more complex system, municipalities need to maximize accessibility to reach the more casual user. Complex features that make an interface more expressive can come at the expense of accessibility. An IBM applications engineer advocated in an interview for a more minimalist approach, challenging the philosophy of rich social interfaces as “the web way of thinking” that does not translate well into mobile applications and urban space.8

Social Presence and Operational Transparency

Can interface design shape interactions? An important factor is how participants are represented. Earlier in the book, I discussed the concept of social presence, which refers to the capacity of a medium to convey verbal, nonverbal, and contextual information (Short, Williams, and Christie 1976). How a message is interpreted depends on the reputation of the speaker as much as on the words. Is the person a notorious complainer or promoting an agenda? In online communities, contributors are represented in terms of authority as much as by authorship. How frequently does a person contribute? Are his or her contributions appreciated by others? What are his or her areas of expertise?

Media scholar Judith Donath refers to the representations that combine self-description and a track record of activities as “data portraits” (Donath 2014, 187). She conceptualizes online communication in terms of nonverbal and implicit signals that can be implemented through gestures of acknowledgment or support rather than through explicit messages. Online representations of oneself can also be a powerful motivator when reporters see their issues being acted upon and receive feedback from the community.

The differences between the design of the two reporting interfaces are striking in terms of how participants are represented. CitizensConnect offers no direct communication among users. A city department can respond to a user request with a standard reply or a customized message. Not only does SeeClickFix offer direct interaction among participants, but it also represents citizens and city officials the same way, drawing no principal distinctions between them. The Open311 standard supports little social presence and is strictly limited to one-to-one communication between a submitter and a department. To circumvent this limitation, systems such as CitizensConnect post reports on Twitter, enabling more channels of interaction among users.

Beyond the representations of users and governors, operational transparency involves the legibility of the city’s actions and priorities, for example, by showing where its workforce is currently active (Buell, Porter, and Norton 2014). The city’s website fulfills this function to some extent, but examples where users can watch maintenance transpire in real time are rare. The city of Boston has experimented with action shots in its mobile app that show how a submitted issue is being fixed, and it currently shows a picture of the respective city unit next to resolved requests. During the 2015 snowstorms, the city created a temporary website9 showing the real-time location of all municipal snowplows. It can be hypothesized that cross-indexing the locations of municipal work crews on a map of submitted reports conveys a more realistic image of what is involved in urban maintenance than seeing an issue isolated in the context of an overall task list.

Besides the elements that increase the visibility of users and issues, some elements limit visibility. All feedback platforms must deal with reports that misuse public visibility and anonymous reporting for the purposes of advertising, data collection, harassment, or vandalism through floods of unrelated, often automatically generated reports. The boundaries between moderation and censorship can be difficult to draw. However, a city usually faces a more basic problem: a lack of resources to review submissions. It therefore has to rely on platform design to manage report visibility and add friction to the submission process. Submitting a report has to be convenient enough to encourage citizen participation while being inconvenient enough to deter spamming and other forms of abuse.

Even if reports are published immediately without review, they can be made more or less visible. For SeeClickFix, the website plays an important role; it shows all reports in the context of similar issues reported by others, and the responses from officials and other users. The textual descriptions, which often take a critical tone toward the city, therefore have a high degree of public visibility. In the municipal system, this is less so. As of this writing, the home page for the city of Boston prominently features its 311 system, but the page for browsing the submitted requests is no longer directly accessible through the city’s home page. In previous versions of the home page, the reports page was better integrated, but the visitor still had to traverse a series of links in order to read the submitted requests. While the city’s open data portal offers access to the real-time data set of 311 service requests comprising more than thirty data and metadata columns, the column containing the text of the actual complaints submitted by citizens is missing.10 Users of the mobile app can still read reports on their device, and technically savvy users can still access the text descriptions through the Open311 API. Overall, however, the submitted requests have become less visible to the public over time.

Unrestricted public accessibility in itself, however, does not ensure high visibility, which can be demonstrated through the website for browsing reports submitted to the city of Boston.11 The site uses a Twitter-like interface that displays a real-time stream of reports as they come in. The display is public but ephemeral: it offers little assistance to search for a specific report or to compare reports from different times. With about a hundred reports submitted daily, it becomes difficult to locate a specific report after a few months through this particular interface. One could call such a design principle “opacity through transparency”; as all reports are immediately published, information is obfuscated precisely because of—not despite—the amount of information.

These design decisions may introduce enough friction to discourage spammers and vandals, but the limited visibility also has implications for critics of the government’s priorities and decisions who use the system to voice their concerns. Every design decision, whether accidental or deliberate, has consequences for the politics of visibility.

10453_006_fig_004.jpg

Figure 6.4 Screenshot of the CitizensConnect website used to browse reports in its version from 2010.

Effects of the Interface on Submitted Reports

To what extent are these considerations reflected in user submissions? I addressed this question through a comparative text analysis of a sample of two thousand reports submitted to SeeClickFix and CitizensConnect (Offenhuber 2015). Again, Boston is a suitable case study because it has used the popular CitizensConnect for a number of years and has also integrated its services with SeeClickFix. Because only the interfaces differ, it is possible to investigate the effects of design decisions on the submitted reports. These reports were largely consistent across both interfaces, with a few exceptions. Comparing reports submitted to both systems shows that infrastructure repair issues are more prominent in SeeClickFix, while issues concerning graffiti and litter are notably absent there. In CitizensConnect, these issues account for more than a third of all reports (figure 6.5).

10453_006_fig_005.jpg

Figure 6.5 Relative differences between reports submitted by CitizensConnect (CC) and SeeClickFix (SCF) users based on a randomly drawn sample of reports.
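The comparison underlying figure 6.5 can be sketched as follows: normalize each system's category counts into shares of its sample, then subtract the shares per category. The counts below are purely illustrative placeholders, not the actual figures from the study, and the category labels are hypothetical.

```python
from collections import Counter

def relative_shares(counts):
    """Convert raw category counts into shares of the sample total."""
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def compare(cc_counts, scf_counts):
    """Relative difference (SCF share minus CC share) per category."""
    cc, scf = relative_shares(cc_counts), relative_shares(scf_counts)
    cats = set(cc) | set(scf)
    return {c: scf.get(c, 0.0) - cc.get(c, 0.0) for c in cats}

# Illustrative counts only -- not the actual sample from the study.
cc_counts = Counter({"pothole": 220, "graffiti": 180, "litter": 190, "streetlight": 110})
scf_counts = Counter({"pothole": 340, "graffiti": 20, "litter": 40, "streetlight": 200})

diffs = compare(cc_counts, scf_counts)
# Positive values: category more prominent in SeeClickFix;
# negative values: more prominent in CitizensConnect.
```

Using relative shares rather than raw counts keeps the comparison meaningful even when the two samples differ in size.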

Although most reports are written in a neutral and factual tone on both systems, SeeClickFix reports tend to be more critical, meaning that they emphasize the importance of the issue and urge the city to act: "3rd report of crumbling stairway. Getting very dangerous" or "Light goes out periodically then comes on slowly. Dangerous area for drugs, assaults. Please fix. Thanks." Very critical reports involving blame and shame occur in about 5 percent of reported cases in both systems. For example: "Paint the white lines. It’s horrible that the lines have been missing here for over 1 year. You are on notice, if someone gets hurt the city is liable. Shame that there is a school 20 feet away..." Not just the city is shamed; frequently, fellow citizens are as well: "Our neighbor always brings her daughter and dogs to poop in front of our house and they live in 433 in the 1st and 2nd apartment. I called the Animal Control for 2 years and nothing changed." SeeClickFix has a smaller proportion of reports that directly accuse other citizens. Irate reports in CitizensConnect are often triggered by trash and litter issues, which do not play a major role in SeeClickFix. Conversely, infrastructure repair issues frequently trigger critical reports on SeeClickFix, while such matters are among the most neutrally discussed on CitizensConnect.

The observed differences are consistent with the more private, service-oriented, one-to-one nature of CitizensConnect versus the more public, social accountability-centered, and discursive nature of SeeClickFix. Users might hesitate to report personal grievances such as graffiti, litter, and traffic violations to a publicly visible forum. A review of the SeeClickFix website shows that this concern is not baseless. SeeClickFix users are united in opinions about larger infrastructural issues, but when private issues emerge, so do multiple controversies. A report about a “stolen” parking spot quickly turned into a broad discussion about social norms for these types of situations.

Users on the two platforms tend to explain issues differently. To justify urgency, a larger proportion of SeeClickFix users invoke public safety: “This is a terrible intersection. Constant beeping every 5 mins disturbs the neighborhood. I’m afraid there will be an accident here all the time. I’ve almost been hit several times.”

Again, the more public nature of reports displayed in SeeClickFix and the desire to mobilize other users offer a possible explanation. But not all reports were critical of the city’s services. Frequently, reporters offered ideas and suggestions regarding how to resolve a specific situation. For example: “Google maps says this area is a park. Doesn’t look like a park to me. This area has one of the best water views in Boston and looks awful. There should be a park bench or something nice there. Also the guardrail is very old looking and beat up. Makes the neighborhood look disgusting. The whole area is very un-looked after.”

Or: “Fallon (sic) field playground climber has come undone. Requires big-ass tamper-proof Torx bits. I think that’s all that’s needed.”

Many issues are of a similarly technical nature, but sometimes social tensions become apparent in the reports: “PANHANDLER/BEGGAR …, holding door open (to tracks 1 and 3), implying he’s asking for money. I shouldn’t have to put up with this while I’m paying $235 a month for my commute. Please have him removed and reinforce he should seek assistance elsewhere.”

In about 5 percent of reports in both systems, accountability is demanded: “Whoever got paid to close this report ripped off the taxpayers TWICE.” A reporter in East Boston complained about unequal service provision, writing, “Does one have to live in a posh neighborhood to get something done? Isn’t an abandoned U-Haul truck a security concern?”

Despite its integration with city services, SeeClickFix presents itself as relatively independent. It therefore receives a higher percentage of critical reports, and its service requests tend to be less straightforward. The higher public visibility and lower expectation of privacy likely contribute to the different style of reports, which are more open ended and frequently emphasize the public good and safety implications.

Prioritization

Any feedback system is only as good as the city’s response in fixing the problem. A feeling that the city’s capacity to respond to requests does not match the convenience of the tool leaves users frustrated: “72 days ago I posted this under case id 101000405068 city forward info and details to DCR and forgot about it 72 days later nobody even care (sic) about this. What is the purpose of this citizens connect if we voters are not taken in consideration by just simply being ignored …”

Not surprisingly, issues such as actual infrastructure damage and violations receive a faster response than reports that raise open-ended, more diffuse issues such as suggestions for infrastructure improvements and discussions of civic issues (table 6.1). However, the tone of the report makes a difference. Reports using highly critical language were resolved most quickly—in other words, the squeaky wheel gets the grease. Response time is, of course, not an appropriate measure of service quality, but it does indicate the priorities of the city or, more precisely, the city’s perception of citizen priorities.

Table 6.1 CitizensConnect: average response time by the city in days for closed issues (N=849)

10453_006_T6.1
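The calculation behind such a table is straightforward: group closed issues by category and average the elapsed days between opening and closing. The sketch below uses hypothetical records and field names, not the city's actual data schema.

```python
from collections import defaultdict
from datetime import datetime

def avg_response_days(reports):
    """Average days between open and close per category, closed issues only."""
    buckets = defaultdict(list)
    for r in reports:
        if r["closed"] is not None:  # skip issues that are still open
            days = (r["closed"] - r["opened"]).total_seconds() / 86400
            buckets[r["category"]].append(days)
    return {cat: sum(d) / len(d) for cat, d in buckets.items()}

# Hypothetical records; dates and categories are illustrative only.
reports = [
    {"category": "pothole", "opened": datetime(2013, 3, 1), "closed": datetime(2013, 3, 4)},
    {"category": "pothole", "opened": datetime(2013, 3, 2), "closed": datetime(2013, 3, 7)},
    {"category": "suggestion", "opened": datetime(2013, 3, 1), "closed": datetime(2013, 4, 10)},
    {"category": "suggestion", "opened": datetime(2013, 3, 5), "closed": None},
]

averages = avg_response_days(reports)
```

Note that restricting the average to closed issues, as in table 6.1, introduces a survivorship bias: categories whose reports are often ignored outright will look faster than they are.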

Beyond questions of response time, both the city of Boston and SeeClickFix frequently emphasize the value of citizen feedback systems for facilitating coordination and self-service. The late Boston mayor Tom Menino frequently cited the following exchange between two citizens over the CitizensConnect platform. A report from February 2011 reads: “Possum in my trash can. Can’t tell if it’s dead. Barrel in back of 168 west 9th. How do I get this removed?” Before the city’s animal control was able to respond, neighbor Susan Landibar submitted another report: “Walked over to West Ninth Street. It’s about three blocks from my house. Locate trash can behind house. Possum? Check. Living? Yep. Turned the trash can on its side. Walked home. Good night, sweet possum” (Gaffin 2011). However, such interactions between citizens are not directly supported by the interface, which does not allow citizens to comment on each other’s questions without submitting a separate report. Through its more community-oriented interface, SeeClickFix actively encourages such coordination among citizens when cases fall outside the city’s responsibility. During a Boston snowstorm, a neighbor initiative used the platform to organize snow removal for private cars and driveways within the community (Snowcrew 2012).

Conclusion: The Designer as Regulator

Open data portals and real-time information feeds do not mean that a city or a provider has no control over how these data are perceived by the public. As these case studies from Boston make clear, digital interfaces do not merely augment public discourse; they produce and increasingly regulate it. The design of feedback systems determines the visibility of the reported issues, the people submitting and discussing them, and the response and actions taken by the city. The language and categories used in the interfaces frame what can be reported, and the modes of communication available govern the interaction among the actors. Through the design of the interfaces, interactions can be encouraged or discouraged, as well as steered toward a specific issue or an open-ended discussion.

It may well be that, despite the angry tone of some reports, discussions about snow removal and garbage on the street are business as usual for the city and hardly controversial in the larger picture of infrastructure governance. However, it is easy to imagine that a city struggling with the impact of a massive snowstorm might be tempted to remove a pile of embarrassing complaints about insufficient snow removal. The idea of local government as an open-source “urban operating system,” sketched out by Government 2.0 advocates, can be misleading, since even a technically mediated system is still negotiated by human agency. Contrary to the idea of an urban operating system governed by incorruptible algorithms, design gestures are not subordinate to a totalitarian algorithmic logic, but instead are products of countless human decisions and negotiations that do not necessarily follow a comprehensive scheme. Often, design decisions are accidental, implemented by different people without coordination and without awareness of their implications. Components might be unintentionally broken by upgrades and consequently removed. By focusing our attention on design instead of the abstract logic of the algorithm, we can foreground governance as an ongoing conversation and negotiation.

The Politics of Interface Design

While interface designers stress the importance of responding to user needs, the influence often runs in the opposite direction. The interface configures the user, structuring his or her behavior according to the intentions of the designer and the constraints of technology (Woolgar 1991). Design choices can guide a conversation between a citizen and a city official toward either open-ended deliberation or efficient problem solving. By regulating and framing the interaction between citizens and the city, they have consequences for the governance of an infrastructure, assuming a role that is deeply political.

Although designers object to the view that interface design is a cosmetic task, they are frequently unaware of their role as political mediators. Often unknowingly, interface designers find themselves responsible for regulating and governing behavior. Not all submissions are desired by the city: some are spam, personal attacks, or reports that touch on matters officials prefer not to discuss. Design offers a way to manage the torrent of submissions and nudge it in a particular direction.

Using the example of air travel, geographers Robert Kitchin and Martin Dodge describe how the governance of physical space is contingent upon interactions mediated through digital interfaces, which are increasingly exposed to the user through ticket booking, check-in, and, more recently, self-service passport control (Kitchin and Dodge 2011). In these cases, interface design assumes a crucial role in guiding, informing, and shaping data entry. As this chapter has demonstrated, the political implications of interface design for infrastructure governance are present at multiple levels or registers:

  • Terminology and categorization, whether institution-centered or user-centered, general or specific. Comparing the categorizations used in different cities reflects not only local characteristics and issues, but also different philosophies of citizen-city interaction.
  • How digital interfaces constrain and facilitate communication on the input side, and how they govern the public visibility of contributions on the output side. As a consequence, how these arrangements encourage or discourage certain expressions and behaviors.
  • The seamful and seamless aspects of a system—can an amateur or expert appropriate and extend the system? Which visual aspects help the novice user get a better sense of what happens inside the black box of the system, even if not all technical details are accessible? Conversely, which aspects of the system should remain invisible so as not to obscure crucial information?
  • Specifications of standards and protocols such as Open311. Protocols and open standards should be considered part of the democratic discourse as well as the basis of open-source software ecologies.
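To make the last point concrete: the Open311 GeoReport v2 specification defines a small set of endpoints, such as `GET /services.json` for discovering a city's service categories and `POST /requests.json` for submitting a report. The sketch below assembles such calls without performing them; the base URL is a hypothetical placeholder, since actual Open311 endpoints vary by city.

```python
from urllib.parse import urlencode

# Hypothetical endpoint; actual Open311 base URLs differ per city.
BASE = "https://example.gov/open311/v2"

def service_list_url(jurisdiction_id=None):
    """URL for the GeoReport v2 service discovery call (GET /services.json)."""
    query = urlencode({"jurisdiction_id": jurisdiction_id}) if jurisdiction_id else ""
    return f"{BASE}/services.json" + (f"?{query}" if query else "")

def new_request_payload(service_code, lat, lon, description, api_key):
    """Form fields for creating a service request (POST /requests.json)."""
    return {
        "api_key": api_key,           # issued by the endpoint operator
        "service_code": service_code, # taken from the services.json listing
        "lat": f"{lat:.6f}",
        "long": f"{lon:.6f}",         # the spec names this field "long"
        "description": description,
    }

payload = new_request_payload("POTHOLE", 42.3601, -71.0589,
                              "Pothole on Summer St.", "demo-key")
```

The significance for democratic discourse lies in exactly this machine-readable vocabulary: whichever `service_code` values a city publishes define what can be reported at all.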

The Digital and the Human Interface

The interfaces of both SeeClickFix and CitizensConnect are not stable over time, but subject to constant evolution. While I was working on the case studies, the user interfaces, APIs, and underlying data structures changed multiple times, making data collection challenging. SeeClickFix at some point abandoned its open-ended categories and adopted Boston’s service types, facilitating better integration with the city. The city of Boston has continuously iterated the design of the websites and apps that display citizen reports. These design decisions have consequences for the visibility of reports. The submitted texts of reports were initially accessible through the site, which is no longer the case. In return, the city has expanded the scope of its data API, making more sophisticated queries possible. At any given time, both services have offered a suite of different protocols, file formats, and interfaces to access and work with data, but this multiplicity has also changed over time.

While no universal interface can exist that is equally accessible to all users, a multiplicity of channels (phone, letters, emails, tweets, and so on), interfaces (websites, apps), and standards (Open311, SeeClickFix’s own API) can exist in parallel, offering different alternatives for access.

No discussion of digital governance can avoid the digital divide. Not everyone owns a smartphone or has Internet access. Nor is everyone comfortable with or capable of using digital interfaces to access government services. But at the same time, this divide is not binary—access or no access to digital services—but instead manifests itself in a more nuanced manner, in different expectations and attitudes toward services. The squeaky wheel fallacy assumes that the absence of negative feedback means that there is no problem, which results in the concentration of public resources on the most vocal neighborhoods, creating a system that captures wants rather than needs.

Access remains a challenge, and the city of Boston responds by offering many different channels of access rather than prioritizing one channel. The multilingual version of Boston’s CitizensConnect feedback system used simple SMS text messages, but it was not successful due to the cumbersome user experience. Inspired by popular food truck services, City Hall to Go brings government services to remote neighborhoods by truck (New Urban Mechanics 2014). According to Nigel Jacob and Chris Osgood, City Hall to Go reflects New Urban Mechanics’ goal of providing a space for civic experiments without relying on technological mediation as a universal solution. As Jacob noted, “When people come to us with ideas, there is no interface in-between; they are already inside.”12

Through the design of different feedback systems, cities are walking the line between managing criticism and inspiring engagement. It might seem counterintuitive that city departments would enthusiastically embrace social accountability mechanisms that can potentially put them under public pressure. But as the example of GuttenPlag has shown, if governments do not actively support building these systems, citizens might do so anyway, with or without their approval.

Notes