CHAPTER 7
Extending 311 across Massachusetts
IN SPRING 2015, the White House Office of Science and Technology Policy (OSTP) convened representatives from a handful of cities and their university partners in the Eisenhower Executive Office Building in Washington, D.C. The question for the day was: How do we build a national consortium that supports city-university partnerships in their efforts to create public good through the utilization of modern digital data and technology? The meeting was inspired by an emerging tension. On the one hand, there was a great opportunity for urban informatics to transform cities, but on the other hand, the full realization of this opportunity was hindered by the tendency for each city to become its own isolated laboratory. The hope was that the proposed consortium would facilitate learning and collaboration across cities, whereby advances in one locale could be implemented in another, or large-scale projects could be pursued simultaneously in multiple cities.
Shortly thereafter, OSTP and Carnegie Mellon University formally announced the MetroLab Network (or MetroLab), signed onto by 25 mayors and university presidents and deans. Founding members included some of the usual suspects mentioned repeatedly in this book, such as New York and NYU’s Center for Urban Science and Progress, Chicago and the University of Chicago’s Urban Center for Computation and Data, and Boston and the Boston Area Research Initiative, but also less obvious hubs for civic innovation, such as South Bend, Indiana, and Notre Dame, Houston and Rice University, and Memphis, Tennessee, and the University of Memphis. The network has since grown to more than 40 members and has hosted conferences on water, big data and human services, and sensor implementation. In its effort to expand the reach of technology-driven innovations in urban policy and practice, MetroLab has come up against a newly emerging digital divide between cities for which such advances are within reach and those for which they are not. As major metropolises forge the way, translating novel digital data and technology into new tools and policies, it is unclear whether and how smaller municipalities will be able to do the same. MetroLab sees these distinctions even among its membership, which is to say nothing of the disparities between those 40-plus member municipalities and the hundreds of cities nationwide that have not yet joined.
In common parlance, the term “digital divide” typically refers to how the poor are less able to afford computers, high-speed internet plans, and other digital tools necessary to keep up with their more affluent counterparts. This is the theme we observed in Part III as we examined penetration of the 311 system and the BOS:311 app across populations. It can also be used as an umbrella term for any case in which one group is seeing marked gains in its ability to utilize digital technology while others are being left behind. Part IV focuses on two such divides that have opened up in the world of urban informatics. In this chapter, I attend to the divide facing the MetroLab between technologically advanced cities and those that are trying to catch up, despite fewer resources and opportunities for collaboration. Chapter 8 then turns to the second divide, which is present among the institutions of a single city. Even where urban informatics is thriving through the efforts of city governments, universities, and private corporations, there has been little incorporation of the community. This is implicit in MetroLab’s emphasis on city-university partnerships, and the general lack of discussion on what a city-university-community partnership would look like. Community organizations, the institutions that sit closest to the public, are largely unable to tap the deep intelligence contained in modern digital data, meaning they are missing an opportunity to better understand and advocate for the communities they serve. We will explore the difficulties created by both of these digital divides and possible mechanisms for solving them through empirical case studies, effectively using the tools of urban informatics to identify and address the challenges the field faces.
I will examine the first digital divide of urban informatics through a case study of a single program that tests the ability to transfer data- and technology-driven programs across municipalities. Commonwealth Connect, an effort of the Massachusetts government, sought to extend the city of Boston’s BOS:311 app to any municipality in the state, regardless of its size or existing technological infrastructure. The study occurs in two parts. First, the data from the Commonwealth Connect app are largely analogous to Boston’s 311 data, and the methodologies developed in previous chapters will help us to evaluate levels of adoption across municipalities and in turn assess the exogenous factors that might determine the local effectiveness of such efforts. As we will see, an initial lesson from these analyses is that effectiveness lies far more with government implementation and utilization of the tool than with the characteristics of the local population. For this reason, the second part of the study is a survey of public officials who helped to implement or manage the system. From this, we are able to derive a deeper understanding of how and when such programs are likely to succeed and the hurdles they might encounter.
The Digital Divide
Anxiety about a digital divide first arose in the late 1980s and early 1990s, though in a different form than it typically takes today. Policymakers and practitioners were concerned that the accelerating advancement and proliferation of computer technology in industrial countries would dramatically increase their economic and social advantages over developing nations. 1 The concept was quickly reoriented to describe the same disparity in technological access emerging within industrial countries, where the well-off were buying computers and subscribing to internet plans but the poor could not afford the same amenities. For example, data from the United States National Telecommunications and Information Administration demonstrated that between 1984 and 2000 possession of computers rose from ∼20 percent to ∼75 percent of households with incomes over $50,000, compared to a rise from effectively 0 percent to ∼20 percent for those with incomes of $25,000 or less. 2 As recently as 2015, these same two income brackets had, respectively, 80 percent and 41 percent in-home broadband access. 3 Consequently, the disadvantaged are not able to access many forms of information that have become central to our modern society. The internet is an important tool for finding health services, browsing job postings, or finding directions (or bus schedules) for going to the grocery store. Although one may argue that people operated effectively without these channels of communication until just a few years ago, they have in many ways supplanted previous technologies for gathering the same information. For example, whereas pay phones made it easy for anyone with a quarter to make a call while away from home, the same action now requires ownership of a cell phone. Even when explicitly not necessary, the internet promises benefits that the user would not come by otherwise.
Income has not been the sole factor responsible for imbalances in the use of digital technology. Those who are older and less educated have also been less likely to utilize digital technology. 4 This has inspired a critique of the relatively simplistic model of the digital divide that dominated the rhetoric of the 1990s, which almost exclusively emphasized material access. It was believed that once people could afford computers and broadband, they would then avail themselves of all that the internet had to offer. This assumption was rooted in an earlier theory surrounding mass media known as the knowledge gap hypothesis. Proponents of the knowledge gap hypothesis posited that when advances in mass media occur, only those with higher socioeconomic status are able to afford it, and therefore they have sole access to the information therein. 5 Alexander van Deursen and Jan van Dijk have argued, however, that this fails to acknowledge the complexity of the internet, which, unlike print media or television, requires more active engagement from the user. 6 Therefore, material access is only one part of the digital divide. This has led multiple authors to propose more detailed models of the elements that lead to disparities in the utilization of modern technology. These generally converge on four main components: attitudes of the users, including the intrinsic desire or motivation to engage; material access; skills to effectively utilize digital technology; and the types of usage they undertake, thereby determining what advantages they reap. 7 This is generally cast as a progressive model, with attitudes preceding the decision to gain material access (or not), provided one has the necessary resources; access in turn makes relevant the need for skills, which then enable the choice of types of usage.
Early advances of the more sophisticated model of the digital divide focused on how disparities in skills could create inequality in the benefits gained from the internet. For example, Eszter Hargittai asked participants to complete 17 tasks involving information retrieval through the internet—including finding sports scores, places to do volunteer work, and prices for a 1995 Ford Escort—evaluating how many of these each person could accomplish. More recently, the emphasis has transitioned to gaps in usage. 8 A somewhat striking study of internet usage in the Netherlands found that people with low levels of education and disabled people were using the internet more than those who had higher education levels and were employed. 9 This apparent increase in material access was facilitated in part by what is now nearly universal internet access at public libraries. These different demographic groups were not using the internet in the same way, however. Whereas higher socioeconomic status was associated with greater use of the internet for information gathering, news, and personal development (e.g., seeking out job postings), those with lower education and the disabled were more likely to use the internet for games and social interaction.
The differences observed in the forms of internet usage across socioeconomic groups in the Netherlands have been reinforced by other studies. For example, those with lower education are less likely to utilize “eHealth” tools that leverage internet technologies to provide information about health care, facilitate patient-doctor communication, and permit tracking of health data. 10 Additionally, the Pew Research Center found that many less affluent individuals are replacing computers and broadband with smartphones, which are less expensive. 11 However, this decision in turn makes it more difficult to access web sites that provide information about and opportunities for career development. This is a notable case because a decision centered on material access is creating distinctions in the types of usage that can be effectively pursued. In sum, even if we achieved universal access to broadband, skills and personal decisions about usage continue to perpetuate a divide in the benefits that different groups derive from modern technology.
A Digital Divide across Cities
As urban informatics promises to transform urban policy and practice, we see analogs to the four components of the digital divide: attitudes or motivations, material access, skills, and types of usage. The middle two would appear to feature most prominently. Some cities, such as New York, Chicago, and Boston, have had access to the material resources necessary to implement modern data and technology infrastructures. They also have access to extensive skill sets through local academic and private sector partners, making it easier to translate these material resources into insights and tools that allow them to better understand and serve the city. A few city-university partnerships in smaller cities, such as South Bend, Indiana, and Notre Dame, have also undertaken well-constructed collaborations. Although these municipalities presumably have less capital on hand for such projects, the initial expenditures are justified by the cost savings that come from solutions to local problems. Furthermore, the projects were still made possible by the presence of a world-class institution of higher learning. Similar advances have been slower to arrive in other places, and even many of those that have joined MetroLab are still just “trying to figure out” what they can learn and pilot. In this sense, one of the challenges facing the network is to lower the prerequisite levels of material access and skills necessary to implement such projects.
The most common approach for expanding technology-driven policy innovation is to pursue scaling and diffusion; that is, if the technology is developed in a city with high levels of material access and skills, the final product can be implemented for less cost and effort in other municipalities. This approach makes certain assumptions, however. The first is that it is straightforward to take a policy or program from one city and replicate it in another with its own characteristic demographics and physical and social context. This is inevitably going to fall apart for certain cases, but I will set this assumption aside for our purposes here, in part because initial efforts to overcome disparities between municipalities will have to focus on programs that transfer naturally. The second assumption, which is more relevant to the current discussion, is that these innovations are even relevant or useful for other cities. At an objective level, I would argue that the simple answer to this question is, “Yes, they are.” While not every innovation in one city might be applicable to another—for example, South Bend’s sensor-driven sewer system might not be relevant to cities that do not sit on a river and therefore do not experience backflow during rainstorms—every city could benefit from one or more technologically savvy solutions that can help them better manage and track the systems and services of their community. This is validated by the implementation of 311 systems in over 400 municipalities of varying sizes. In fact, a nonacademic policy analysis of Buffalo, New York’s 311 system concluded that the extra cost of the system paid for itself in efficiencies and community benefits. 12 That said, there may be a limit to this logic as we consider towns of only a few thousand people. Furthermore, what if some cities simply do not see these sorts of programs and policies as interesting? Analogous to motivations and attitudes in digital divide parlance, this would be a significant if not irreconcilable barrier to their effective adoption, even given the necessary access and skills.
Here I present a study of a program called Commonwealth Connect, which sought to scale 311 to all interested municipalities in the Commonwealth of Massachusetts. The program leveled material access by subsidizing contracts with a private vendor to implement a single standardized 311 app and database infrastructure for all participating municipalities. By removing this particular hurdle, we can gain greater insight into the influence the other components of the digital divide can have on the effective implementation of a technologically driven program. In doing so, we can pursue three particular questions: (1) To what extent are implementation and adoption determined by the objective benefits that a program promises? (2) Will attraction to such a program be driven by these objective benefits, or can other motivations and attitudes intervene? (3) If municipalities differ in the skills necessary to thoroughly leverage the 311 system and its data, will that influence the perceived benefits of the program over time?
Commonwealth Connect: Scaling BOS:311
There have been many efforts in recent years to bring the advantages of 311 systems to smaller municipalities that do not have the resources to make the initial investment in a new call center and associated data system. One of the more common approaches is to construct a county-level call center that collects service requests for all municipalities and then communicates them to the appropriate authorities. Examples include Baldwin County in Alabama, Los Alamos County in New Mexico, and Henrico County in Virginia. This has mirrored a similar consolidation in 911 call centers, especially in rural areas. 13 The company SeeClickFix offers an alternative, private sector solution to this challenge. They have built a free smartphone app that allows users to report the sorts of issues typically handled by a 311 system. SeeClickFix then contracts with municipalities to organize and transmit all reports within the city or town as a work queue for an annual fee. Whether the mechanism of choice is the consolidation of public sector resources or a private sector product, the result is the same: the implementation of a basic technology in a manner that equalizes material access across municipalities.
Commonwealth Connect merges the public-consolidation and privatization approaches, creating a public-private partnership in the expansion of 311 to municipalities throughout the Commonwealth of Massachusetts. It is a collaboration of the state government, the Metropolitan Area Planning Council (MAPC), the city of Boston’s Mayor’s Office of New Urban Mechanics, and SeeClickFix. The state invited applications for municipalities to receive subsidized SeeClickFix contracts for an initial year, in the hopes of demonstrating the value of a 311 system and thereby leading the same cities and towns to incorporate the contracts into their annual budgets. SeeClickFix agreed to implement locally customized apps modeled after the city of Boston’s Citizens Connect (the precursor of BOS:311), maintaining the public sense of a statewide program. It would also accommodate local rebranding of the app (e.g., Salem Connect in Salem, Massachusetts). Effectively, this setup generalizes the ability to implement the same technology in use in Boston.
Seventy-six municipalities signed up for the first round, and BARI partnered with the MAPC to evaluate the program’s effectiveness, a project that occurred in large part thanks to the extensive efforts of Matt Blackburn, who was then a student in the master’s program in urban and regional policy at Northeastern University. Here I report the results of this evaluation, with particular attention to what it teaches us about the digital divide between major metropolises such as Boston and the smaller municipalities that might also benefit from the adoption of modern technologies in their policies and practices. Importantly, because Commonwealth Connect seeks to equalize material access across municipalities, we need to consider how skills and preferences for particular uses of digital technology influence adoption by both governments and their communities. As we embark, it is important to note that it is unclear whether our interest should be in the municipality or the individuals living therein. As we have seen in Boston, there are clear differences across individuals in their motivations to utilize 311, and this creates disparities in the utilization of the system across neighborhoods. This would suggest that individual attitudes (often proxied by demographics) drive the impact of technology adoption. That said, it is also possible that each municipality has characteristics that independently influence whether adoption there will be effective. We start here with the first question, examining how patterns of public use across municipalities are predicted by demographic characteristics. This will allow us to assess the extent to which individual-level skills and usage are an effective frame for considering transfers in technology across municipalities and in turn guide our examination of whether and how municipality-specific policies or practices play an additional role.
Data Sources
SeeClickFix provided three complementary data sets for the program evaluation: (1) the report database, analogous to the 311 database generated by the city of Boston; (2) a list of reporter IDs pertaining to municipal officials; and (3) information on how each municipality implemented and promoted Commonwealth Connect. For those who are interested, these are also available as both a downloadable, documented database 14 and interactive map 15 through the Boston Data Portal, the latter including some of the metrics used here to capture levels of adoption across municipalities. To give context to these data, we also accessed population and demographic data for all participating municipalities from the U.S. Census Bureau’s American Community Survey, utilizing 2008–2012 five-year estimates.
The Commonwealth Connect platform received 69,994 reports between January 2013 and October 2015. The structure and content of this database were sufficiently similar to those of Boston's 311 system to support the same methodologies we have seen throughout this book. Because SeeClickFix, and Commonwealth Connect by extension, is largely focused on issues with public infrastructure, nearly all case types (and therefore reports) qualified as expressions of custodianship (67,533 reports, or 96 percent). As with Boston's 311 system, SeeClickFix users are able to make requests either anonymously or through a registered user ID, enabling individual-level measures of reporting. SeeClickFix accounts do not include home address as a form of contact, however, so we approximated each registered user's home municipality as the municipality in which they reported most often.
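To make this step concrete, the following is a minimal sketch in Python (pandas) of how a registered user's home municipality might be approximated from a report-level table. The file name and column names (user_id, municipality, is_custodianship) are illustrative assumptions, not the actual structure of the SeeClickFix export.

```python
import pandas as pd

# Hypothetical export: one row per report, with the reporter's registered
# user ID (missing for anonymous reports), the municipality of the report,
# and a flag for whether the case type counts as custodianship.
reports = pd.read_csv("commonwealth_connect_reports.csv")

# Share of reports qualifying as custodianship (roughly 96 percent here).
print(f"Custodianship share: {reports['is_custodianship'].mean():.0%}")

# Approximate each registered user's home municipality as the municipality
# in which they reported most often (ties broken arbitrarily).
home_municipality = (
    reports.dropna(subset=["user_id"])
    .groupby("user_id")["municipality"]
    .agg(lambda s: s.value_counts().idxmax())
)
```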
As part of its contract with municipalities, SeeClickFix maintains a list of user IDs associated with municipal officials, enabling those officials to directly add new cases to their department's queue. When we merged this list with the database of requests, we were able to distinguish cases submitted by constituents from those created by public officials. This list contained 258 municipal officials from 64 municipalities, who made 24,749 reports.
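The merge with the list of official accounts is equally simple in principle. The sketch below, again with hypothetical file and column names, flags each report as constituent- or official-generated.

```python
import pandas as pd

reports = pd.read_csv("commonwealth_connect_reports.csv")
officials = pd.read_csv("municipal_official_ids.csv")  # one row per official account

# Any report from an account on the official list is treated as a report by
# a public official; all others are treated as constituent reports.
reports["by_official"] = reports["user_id"].isin(officials["user_id"])
official_total = int(reports["by_official"].sum())        # ~24,749 in these data
constituent_total = int((~reports["by_official"]).sum())
```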
Finally, the third data set tracked the status of each municipality's implementation of Commonwealth Connect, providing an indication of the extent to which each local government had embraced the program. Status was described at two levels. First, some municipalities were in the implementation phase, meaning they were in the process of configuring Commonwealth Connect and had yet to hold an official launch or promote it publicly. The seven municipalities still in the implementation phase as of October 2015 were excluded from the following analyses because they had not formally launched the program. Municipalities that had moved past the implementation phase were described as having some combination of public launch, defined as one or more events to introduce Commonwealth Connect to constituents; promotion to the public through one or more media following its introduction; and city buy-in, or internal adoption of the platform. Of the original 76 municipalities, 38 (50 percent) had all three of these elements, 4 (5 percent) had buy-in but limited promotion, 16 (21 percent) had limited buy-in and promotion, and 11 (14 percent) never publicly launched (or had a limited launch); the remaining 7 (9 percent) were those still in the implementation phase.
Public Use across Municipalities
Between the initiation of Commonwealth Connect at the beginning of 2013 and October 2015, there was a notable and consistent rise in usage of the system by constituents. In fact, the rate of increase of typical and exemplar custodians seemed to still be accelerating in 2015, as the absolute increase in users from the 10 months of available data in that year was greater than the total number of users from the previous two years combined. 16 This trend was quite varied in its manifestation across municipalities, however. Malden, Massachusetts, a city of ∼60,000 people about five miles north of Boston, led the way with its extensive use of Commonwealth Connect, generating over 16,000 public requests in this time. The municipality with the next highest level of activity, in contrast, produced just over 5,000 reports, and 80 percent produced fewer than 500 reports.
Custodians per capita in a given year, which captures the total penetration of the program in a municipality, also revealed stark differences in usage. In 2015, Salem had the highest one-year density of custodians, with 15 per 1,000 residents; Malden, its neighbor Randolph, and Nantucket (an affluent island community where the Obamas vacation) came next with five to six custodians per 1,000 residents. Only one-third of municipalities had as many as two custodians per 1,000 residents. This is particularly striking when one considers that the same value for Boston in 2015 was 42 custodians per 1,000 residents. For a fairer comparison, after about two years with a 311 system, Boston had 18 custodians per 1,000 residents. 17 Though this number is more comparable to the Commonwealth Connect municipalities, it is still greater than even the most active of those. It also indicates that many of these municipalities had sufficient time to reach a similar point of maturation in their use of the system. The low penetration of Commonwealth Connect might also be explained by the reliance on smartphones. Even in 2015, six years after its implementation, the BOS:311 app had the more modest penetration of 1.5 users per 1,000 residents. Nonetheless, the limited level of participation has practical implications because it is often the municipality’s only formalized 311 channel.
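The custodians-per-capita metric referenced throughout this section can be computed directly from the report and census tables. The sketch below, assuming the same hypothetical columns as above plus an ACS population table, counts unique registered custodians per municipality and year and scales them per 1,000 residents.

```python
import pandas as pd

reports = pd.read_csv("commonwealth_connect_reports.csv", parse_dates=["created_at"])
population = pd.read_csv("acs_population.csv")  # columns: municipality, population

reports["year"] = reports["created_at"].dt.year

# A custodian is any registered user with at least one custodianship report
# in a given municipality and year.
custodians = (
    reports[reports["is_custodianship"]]
    .dropna(subset=["user_id"])
    .groupby(["municipality", "year"])["user_id"]
    .nunique()
    .reset_index(name="custodians")
)

rates = custodians.merge(population, on="municipality")
rates["custodians_per_1000"] = 1000 * rates["custodians"] / rates["population"]
```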
Explaining Differences across Municipalities
As we look to better understand why municipalities saw different levels of adoption of Commonwealth Connect by the public, we examine the relationship between demographic composition and custodians per capita across municipalities. This measure is our focus because it is the best proxy for a community's efficacy in identifying and reporting issues (see Chapters 2 and 4 ). An important question for this analysis is whether comparisons across municipalities are the same as comparisons across tracts within a single municipality. Whereas the latter exist within a single administrative context, the former are subject to the manner in which the government promotes and embraces the system. If the activity of the government plays a major role in adoption, then we might expect demographics to be less important. The regressions that follow test both demographic and administrative variables. In addition, because municipalities joined the program at different times, we use participation in 2015 as the point of comparison and control for the total time since implementation. The full results for these models are reported in Appendix F .
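As a rough illustration of these models (the full specifications and results are in Appendix F), the sketch below regresses 2015 custodians per 1,000 residents on standardized demographic and administrative predictors using statsmodels. The variable names are placeholders for the ACS and implementation measures described above, not the exact coding used in the evaluation.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical municipality-level table combining 2015 adoption, ACS
# demographics, and implementation records; column names are illustrative.
df = pd.read_csv("municipality_level.csv")

outcome = "custodians_per_1000_2015"
predictors = [
    "median_income", "pct_minority", "pct_bachelors", "pct_homeowners",   # demographics
    "publicly_promoted", "official_reports_per_1000",                     # government embrace
    "months_since_implementation",                                        # exposure control
]

# Standardize all variables so coefficients can be read as betas
# (the dichotomous promotion flag is standardized here as well, for simplicity).
z = df[[outcome] + predictors].apply(lambda col: (col - col.mean()) / col.std())

model = sm.OLS(z[outcome], sm.add_constant(z[predictors])).fit()
print(model.summary())
```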
Surprisingly, demographic characteristics were thoroughly unrelated to the number of custodians per capita. This was true for median income and proportion of minorities in the population, which best predicted adoption by the public across the census tracts of Boston; education level, which was associated with greater adoption of BOS:311; and home ownership, which is theoretically associated with custodianship. In contrast, two measures of the government’s embrace of Commonwealth Connect played a major role. Municipalities had more custodians per capita if the government had promoted Commonwealth Connect publicly (β = .27, p < .05) and if they had more reports by public officials (per 1,000 residents; β = .29, p < .05).
These results offer three lessons. First, and somewhat obviously, for the public to utilize a coproduction program, they need to know about it, and this knowledge comes from government promotion. Second, and somewhat less obvious, the government needs to model the utilization of the program for its constituents. Members of the public might be more likely to use the app if they observe officials doing the same. Such observation could occur through the app interface's real-time list of local cases, through conversation with friends and family who work for the government, or through some other method. In municipalities that have strongly embraced the app, government employees might also inform members of the public about the convenience of the app when they are in need. All of these relatively minor interactions would increase not only knowledge of but also confidence in the system and further spread its use. The third lesson is methodological in nature. As noted at the opening of the book, the Boston 311 data are limited to a single municipality, and any comprehensive study of custodianship in the commons should consider behavior across a wide range of contexts, from rural to urban. The Commonwealth Connect data might have offered that opportunity, but it would seem that any such cross-city analysis would be hampered by differences between governments in their implementation of the program. This highlights a broader need to establish equivalence between administrative data from different municipalities rather than assuming that it exists a priori.
Why Would a Municipality Implement a 311 System?
The analysis thus far has told a single clear story: adoption by the public depends heavily on the extent to which the government embraces and models its usage. This stands out in the current case because of the broad variation in how much local governments embraced Commonwealth Connect. Some utilized it as the main mechanism for collecting and tracking all work orders, including those identified by city employees, and others failed to even announce that they had signed up for the program. The next logical question, then, is why this variation in utilization and promotion exists. By combining the digital divide literature with perspectives on public administration, I put forward three different hypotheses. First, municipalities might differ in their attitudes toward such programs, based on the objective likelihood of reaping benefits from them. Elinor Ostrom’s writings offer a model of when and where there will be greater complementarity between the activities of the government and the public, thereby making a coproduction program more beneficial. 18 If municipal governments operate rationally, then this model should explain which ones would be more likely to embrace Commonwealth Connect. We might call this the coproduction model. Two alternative models come from a growing body of literature on “e-governance” that has examined how municipal governments vary in their inclination toward or against the use of technological innovations, with a theoretical foundation in earlier work on policy diffusion. The first of these models emphasizes administrative culture and the internal dynamics that make a municipality more or less likely to adopt technological innovations. The second is an imitation model, by which new policies diffuse thanks to learning or competition across governments, as each seeks to be at the forefront of current standards in governance. Here we focus particularly on the potential for this to operate via geographic contiguity. The following subsections will expand on each of these in turn and then determine how we might compare them in the case of Commonwealth Connect.
Rational Assessment of Coproduction
As the reader will likely recall from Chapter 5 , Ostrom defined coproduction as “one way that synergy between what a government does and what citizens do can occur.” 19 Critical to this synergy is that these two forms of “doing” complement each other. As I have already described for 311, the collaborative maintenance of the urban commons requires that urbanites observe and report instances of deterioration or denigration in public spaces and that city agencies provide the professional expertise and equipment for fixing them. Each fulfills their own role with an efficiency that the other cannot. The strength of this complementarity might vary across municipalities, however. To see when this might be the case, we turn to insights from Chapter 4 and the division of labor that arises between government officials and private residents. The results there suggest that we should attend to population density, which increases the effective level of custodianship and lowers the dependence on public employees, and the proportion of residential streets, which are more effectively monitored by the local population.
A second consideration is the ratio between the overhead cost of the program and the benefits it provides. If the optimal point of collaboration does not create sufficient savings to offset the overhead cost, we would not expect the program to be of interest to the municipality. Though Commonwealth Connect has largely removed the initial cost, it has not addressed the multiplier effect, where larger municipalities may see greater total benefits and therefore a lower cost-benefit ratio. This would suggest that larger municipalities are likely to gain more from the program.
Administrative Culture
Recent years have seen considerable research on “e-governance” and when and why governments do (or do not) incorporate digital communication technologies into their work. An influential theory in this domain has argued that e-governance must emerge in stages, progressing from the use of online media to keep constituents informed to more participatory, two-way forms of interaction between government agencies and the public. 20 From this perspective, leveling material access may not be sufficient for fostering effective adoption by municipalities that are technological newcomers. Although such newcomers are in a literal sense capable of implementing Commonwealth Connect, they may be taking an untenable shortcut. Because the administrative culture has not traversed the earlier stages of e-governance, it is unprepared for the responsiveness demanded by the tool.
Evidence suggests that the initial decision to adopt e-governance depends largely on the amount of available resources. Some have found that access to flexible, or “slack,” resources makes it possible to innovate. 21 Others have made the broader point that total revenue is a key determinant because it indicates a more complex apparatus that can incorporate new efforts effectively. 22 Caroline J. Tolbert and her colleagues have also found some evidence that wealth of the community, reflected in overall revenue, can play a critical role in the success of the later stages of e-governance, at least at the state level. 23 They also found that the presence of a dedicated IT department was a major predictor of the adoption of e-governance tools.
Imitation
The literature on e-governance also emphasizes the potential role of policy diffusion, or the extent to which governments learn from, copy, and compete with each other, particularly those with similar political, cultural, or geographic features. 24 As Frances Stokes Berry and William D. Berry argue, these external forces might occur through professional networks or geographical contiguity. 25 There is some evidence for each, though primarily at the level of U.S. states. Hyun Jung Yun and Cynthia Opheim, for example, have found that states whose leaders are part of professional networks geared toward technology were more likely to adopt new innovations. 26 For geographic contiguity, there is some historical evidence that states are more likely to adopt policy innovations if surrounding states have already done so, though the effect may differ by the context or nature of the innovation. 27 That said, one study at the municipal level found no such effect for cities in Turkey. 28 Taken together, these various studies suggest that, at least in some cases, municipalities may seek out innovations based on the decisions of their neighbors and colleagues, independent of objective benefits and administrative culture.
Data and Operationalization
To examine the factors associated with the overall implementation of Commonwealth Connect, we will again utilize the data provided by SeeClickFix. The analysis will also include municipal-level census information for population size and density. Additionally, MassGIS records on roads and tax units provided measures of road lengths and their zoning, and the Massachusetts Department of Revenue's Municipal Databank provided details on each municipality's budget. 29 From these sources, we are able to operationalize each of the three sets of hypotheses. First, Ostrom's model of complementarity is assessed using population size (in 1,000s), density, total road length, and the proportion of streets that are residential. Second, the incorporation of technology within administrative culture was difficult to assess directly, as municipalities are not required by the commonwealth to report expenditures or employees working in IT in their annual budget. Because others have found that such features are generally best predicted by overall budget size and annual surplus, I use those measures as proxies. 30 In order to avoid collinearity between the first measure and total population, I instead use expenditures per capita. Third, I did not have direct access to the professional networks of municipal leaders, so the analysis focuses specifically on geographical contiguity, assuming this will capture many of the cultural and political connections between municipalities as well. For this purpose, the models include spatial lag parameters 31 to examine whether the number of bordering towns with Commonwealth Connect influenced adoption and implementation. 32
The analysis examined three different outcomes. The first was whether a municipality had joined Commonwealth Connect at all (N = 351 municipalities in Massachusetts). The latter two were specific to municipalities that implemented the program (N = 64): whether they formally promoted the program (per SeeClickFix records) and the extent to which city employees used it (i.e., city employee reports per 1,000 residents). All results from the regressions are reported in Appendix F .
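The contiguity-based imitation measure and the logistic model for joining can be sketched as follows, using geopandas for the boundary work and statsmodels for the regression. The shapefile, table, and column names are hypothetical stand-ins for the MassGIS and Municipal Databank sources described above.

```python
import numpy as np
import pandas as pd
import geopandas as gpd
import statsmodels.api as sm

towns = gpd.read_file("ma_municipalities.shp").set_index("town_name")
df = pd.read_csv("municipality_covariates.csv").set_index("town_name")
# df includes: joined (0/1), population_1000s, density, road_km,
# pct_residential_roads, expenditures_per_capita, surplus

# Spatial lag via geographic contiguity: for each municipality, count the
# bordering municipalities that enrolled in Commonwealth Connect.
def neighbors_joined(geom):
    touching = towns.index[towns.geometry.touches(geom)]
    return int(df.loc[df.index.intersection(touching), "joined"].sum())

df = df.join(pd.Series(
    {name: neighbors_joined(geom) for name, geom in towns.geometry.items()},
    name="neighbors_joined",
))

# Logistic regression for whether a municipality joined the program;
# exponentiated coefficients can be read as odds ratios.
X = sm.add_constant(df[["population_1000s", "density", "road_km",
                        "pct_residential_roads", "expenditures_per_capita",
                        "surplus", "neighbors_joined"]])
logit = sm.Logit(df["joined"], X).fit()
print(np.exp(logit.params))
```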
Testing the Three Models
I analyzed the three models for interpreting adoption in order. First, there was limited support for Ostrom's complementarity model. The only relationship of note was that municipalities with a greater population were more likely to join Commonwealth Connect (O.R. = 1.03, p < .01); otherwise, population density, road length, and zoning did not predict joining, and none of the complementarity measures, total population included, predicted promotion or employee utilization of Commonwealth Connect. Second, there was no evidence for the administrative culture model, as neither expenditures per capita nor surplus predicted joining, promoting, or utilization of Commonwealth Connect. Finally, we do see some support for the imitation model. Municipalities with more neighbors enrolled in the program were also more likely to have participated in Commonwealth Connect (O.R. = 1.39, p < .05). Notably, this measure, like those before it, had no relationship with promotion or utilization of the tool after joining the program.
Summary: The Crucial Role of Government in Driving Public Uptake of 311
This section has presented a two-part analysis of differences across municipalities in the adoption of Commonwealth Connect. The first focused on levels of use among the public, and the lesson was pretty simple: at least at this early stage of implementation, public adoption depended on the municipal government’s commitment to the program in word and practice. Municipalities that promoted the system and used it regularly as part of the course of daily service provision also saw active publics. These administrative impacts completely overrode any effect of material access or skills present in the public, at least as proxied by demographic characteristics.
The natural next question was when and why municipalities adopted, promoted, and thoroughly utilized Commonwealth Connect. There were three candidate models for this: the coproduction model, based on Ostrom’s 33 work, argued that municipalities would be more likely to join based on the likelihood that it would facilitate a collaborative effort between the public and the government to maintain public spaces and infrastructure; the administrative culture model emphasized the role of internal capacity for and attraction to technological innovations; and the imitation model highlighted the desire of local governments to keep up with the programs offered by neighboring municipalities. The findings were limited but telling. Only total population (a component of the coproduction model) and the number of surrounding municipalities also joining Commonwealth Connect (a component of the imitation model) were predictive of participation in the program. Furthermore, no measured variable was predictive of promotion and use of the system once it had been adopted.
We might derive three conclusions from the limited results of the second analysis. First, municipal governments are rational, but only to a certain degree of sophistication. Larger municipalities probably could envision the greater multiplier effect of investing in a program like this. Beyond that, it is likely that few if any took into consideration the particular physical form of their town and its zoning and how that might determine the magnitude of benefits the program could provide. (To be fair, Chapter 4 of this book is the first formal demonstration of those variations in potential benefits, so it would be hard to blame municipalities for not considering this.) Second, it does seem that learning and competition between neighboring municipalities is an effective mechanism for accelerating the adoption of such programs. The third conclusion arises from the inability to predict promotion and utilization of Commonwealth Connect. This suggests that the actual dynamics surrounding these higher levels of adoption are more complex than what can be accessed through quantitative analysis. It may be that more specific, localized processes are determining how the program is implemented in the end. To further explore this, the next section describes a series of interviews conducted with public officials who were responsible for Commonwealth Connect in their municipality.
Implementing Commonwealth Connect: The View from the Inside
The implementation of a program like Commonwealth Connect can be a complex process. It necessarily consists of the decisions and actions of many government employees from multiple departments and agencies, making it difficult to understand through regression models and related tools. For this reason, BARI, the MAPC, MONUM, and SeeClickFix undertook a second phase of evaluation of the program, in which we interviewed the public officials designated as the primary contact for Commonwealth Connect in 17 participating municipalities. Through a structured questionnaire, we asked each one to tell us the story of Commonwealth Connect in their municipality, in the hope that the corpus of these interviews would shed greater light on why such programs either flourish or falter.
The Interview Methodology
We conducted the interviews in summer 2015. For each of the 17 participating municipalities, we interviewed either the individual indicated as the primary contact for Commonwealth Connect or a colleague designated by that contact as the best person to interview. We initially generated a random sample that was stratified by three variables: city usage, a dichotomous variable regarding whether the municipality had any reports from identified public officials; high or low constituent adoption, based on the median number of custodians per capita over the course of the program (two per 1,000 residents for 2013–2014); and urban / suburban / rural. 34 This made for 12 possible categories (2 × 2 × 3). We created an ideal random sample with two municipalities from each category, but owing to low response rates, we eventually invited additional participants until we had at least one respondent from each category, except those with high constituent adoption and no city usage, as these were very uncommon (owing to the correlation we have already seen between the two forms of adoption). 35 Participation was reasonably well distributed across urban (n = 4), suburban (n = 5), and rural municipalities (n = 8). Not surprisingly, there was greater participation from municipalities that had used Commonwealth Connect internally (12 of 17 participants). Representation for each of the nine categories is reported in Table 7.1 .
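A small sketch of the stratified draw may help clarify the design: each municipality is classified on the three variables and up to two are sampled from each of the twelve cells. The column names and thresholds below are illustrative assumptions, not the exact coding used in the evaluation.

```python
import pandas as pd

frame = pd.read_csv("participating_municipalities.csv")

# Stratification variables: any reports by identified officials, constituent
# adoption above/below the 2013-2014 median (two custodians per 1,000
# residents), and an urban / suburban / rural classification.
frame["city_usage"] = frame["official_reports"] > 0
frame["high_adoption"] = frame["custodians_per_1000_2013_14"] >= 2
# frame["density_class"] takes the values "urban", "suburban", "rural"

# Draw up to two municipalities from each of the 12 (2 x 2 x 3) cells.
sample = (
    frame.groupby(["city_usage", "high_adoption", "density_class"], group_keys=False)
    .apply(lambda cell: cell.sample(n=min(2, len(cell)), random_state=0))
)
```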
TABLE 7.1 Classification of municipalities participating in the Commonwealth Connect survey by population density, constituent adoption level, and city employee usage

                                        City employee usage
Density       Constituent adoption      Yes     No
Rural         High                       1       0
Rural         Low                        4       3
Suburban      High                       3       0
Suburban      Low                        1       1
Urban         High                       2       0
Urban         Low                        1       1
The survey contained a series of multiple-choice items regarding the adoption, implementation, and future goals for Commonwealth Connect within the municipality, including anticipated benefits from Commonwealth Connect before implementation; whether the program was supported by others within the government; whether it had been adopted as a work-order system or integrated with existing work-order systems; additional uses of the data generated by the program; the mechanisms used for launching the program both internally and within the community; perceived adoption of the app by community members; and overall satisfaction. Most items had a Yes / No structure followed by open-ended responses for elaboration, permitting a qualitative analysis.
Interview Results
Overview of Responses
An initial look at the responses indicates that implementation generally started strongly, with nearly all municipalities describing an internal launch (16 of 17) followed by training for employees in how to use the system (15 of 17). 36 All respondents indicated that they had anticipated some benefit from Commonwealth Connect from the outset, though the nature of those anticipated benefits varied. We asked specifically about the potential to increase constituent engagement, efficiency and responsiveness in service delivery, and the government’s technological sophistication. Each of the three benefits was endorsed by more than half of the sample, but increasing engagement was the most prominent (12 of 17 respondents vs. 10 of 17 for the two others). Six municipalities had anticipated all three as potential benefits, whereas eight had only anticipated a single benefit. These two groups seemed equally likely to have utilized the program internally (five of six vs. six of eight). Notably, regardless of initial perceptions, all respondents believed that all three benefits were apparent at the time of the interview, reflecting the effectiveness of the program’s stated goals, at least to those overseeing implementation.
Despite these strong beginnings, the program hit some speed bumps as it started to depend on collaboration across departments. Just over half of respondents indicated either “full” or “general” support from others in the government (10 of 17). Similarly, just over half of the municipalities surveyed utilized Commonwealth Connect as part of a work-order system (11 of 17), though unevenly across departments. All of those using it as a work-order system had done so for public works (11 of 17 municipalities), making it the unsurprising linchpin to demonstrating the utility of the system. Some who had success with this initial step extended this function to the mayor’s office (five), parks and recreation (three), and public safety (four). The vast majority of those using Commonwealth Connect as a work-order system had city usage (9 of 11), whereas the same was true for only half of the other municipalities (three of six). This would stand to reason, as city usage was defined as having at least one confirmed public official making reports, which would almost automatically follow if the municipality was using Commonwealth Connect as a work-order system. Finally, we can go a step further and see how many municipalities made additional use of the information generated by a 311 system. Eleven of 17 municipalities used Commonwealth Connect data in meetings to assess performance. Seven municipalities indicated that they use it at an even higher level, examining trends and geographic patterns to allocate resources more strategically. More, however (11 of 17), thought they would do so in the future.
Though these findings are suggestive, they are limited in two ways. First, it is difficult to come to robust statistical conclusions based on such a small sample size. Second, the findings make it difficult to distinguish cause and effect; for example, did limited support from others in government reflect limited buy-in, did it inhibit efforts throughout the government to use the system, or both? The open-ended responses from the interviews create the opportunity to further probe the factors that influenced implementation across municipalities. To do this most effectively, I will divide the sample into three groups: (a) those with city usage and high constituent engagement; (b) those with city usage but low constituent engagement; and (c) those without city usage.
Group A: City Usage, High Public Adoption
The six municipalities with city usage and high constituent engagement offer a window into how such programs work at their best. Of these, four had incorporated Commonwealth Connect as part or all of a work-order system, and a fifth was in the process of connecting it to their existing 311 system. In addition, four explicitly indicated further uses of the data. These included spending projections for infrastructure; assessment of performance, such as how long it takes a department to close a particular type of case; “[telling] the story” of the city and the services it provides; and statistical analyses of community needs. Among these, one municipality had a partnership with University of Massachusetts Boston’s Collins Center for Public Management to utilize the data more effectively. Another municipality specifically noted that it helped the highway department target impassable roads during snowstorms, generating a big “win” for the program locally.
The successes of Commonwealth Connect in these municipalities do not, by themselves, explain how they were attained in the first place. What we see through the surveys is that in each case there was a strong and consistent internal push for implementation. All six municipalities utilized all three of the internal launch mechanisms our survey asked about: announcements, training, and additional informational meetings. Community launches were similarly robust, with all six municipalities utilizing at least three different media, including an event, web site, social media, local publications, announcements in public spaces, and mailings. One suburban municipality suggested how this shared enthusiasm might arise. "The town manager [was] very clear" that Commonwealth Connect was a priority, and "leadership made sure all departments participated." This aligns well with Goldsmith and Crawford's argument that technology-focused transformation of municipal governance depends on strong leadership at the top. 37 As we will see, though, this top-down approach is not universally effective.
Group B: City Usage, Low Public Adoption
The six municipalities with city usage but low public adoption gave insight into how initial city uptake might fail to translate into a fully successful implementation. Effectively speaking, however, the category only rightly contains five municipalities, as I exclude one that had recently joined the program at the time of the interview and in the following year had one of the highest levels of public adoption. As we move forward, it is important to note that our definition of city usage—that any reports were generated by an account associated with an identified public official—is quite liberal, meaning we can have a city with “usage” but very little overall uptake within the government. This distinction plays a prominent role here, as half of the municipalities in this group indicated little or no internal support from others within government.
Two of the municipalities in this group shared stories that were telling about the critical role that municipal departments play in sustaining the program. First, a rural municipality indicated that the department of public works already had a work-order system and was not interested in utilizing Commonwealth Connect, either independently or as an adjunct to the current system. The representative went on to point out that the “citizens did not pick it up.” Though this was true, it speaks to a general assumption on the part of this municipality that uptake was going to happen shortly after the launch or not at all, and that usage did not require further modeling by the government. An official from a second rural municipality told a story that starkly contrasted with the narrative in Group A that a mayor’s leadership drove adoption. In her municipality, she reported, use of the system “was mandated by the town manager, and when something is mandated there can be resistance.” This municipality reported robust internal and community launches, and the official with whom we spoke described the system’s value in glowing terms. She indicated she was “completely satisfied” with the program, but added that she “would like to see the [municipal] staff take it more seriously. [It is] getting acknowledged, but follow-through is tough.”
Other municipalities described similar conflicts around the full utilization and adoption of the system. None indicated that they were currently using the data generated by Commonwealth Connect for additional purposes, and only two had any interest in exploring such options in the future. One rural community stated that there was little need for a true work-order system, limiting the overall value of the program. An aging industrial “city” (classified here as suburban) felt that the population simply was not sufficiently technological for the program to have much impact.
Group C: No City Usage
The five municipalities without city usage illustrated in even blunter terms the sorts of mechanisms that can undermine a program like Commonwealth Connect. In this category, I again set aside one of the municipalities because it had a marked increase in constituent adoption in the following year. Notably, it continued to have no city usage, for which reason I return to it later as its own distinct case. Of the four others, one indicated no internal support, and another described a situation in which support was strong “at the top, but not at [the] implementation level.” In both cases, “[Public Works] did not come to the table.” A third said the program was only “somewhat supported,” though with no elaboration. The fourth did not formally launch the system internally.
The fifth municipality in this category provides an interesting example, as it was the only one we interviewed that had no city usage and, at least eventually, a high level of constituent engagement. The interviewee indicated that the "City Manager was a big champion [of the program]" but that "one department head was not supportive." Interestingly, Commonwealth Connect was used as a work-order system for the Mayor's Office and Department of Public Works but had not been integrated with any internal technology. Work orders were printed out and handed to the worker who would complete them, and no issues were being followed on smartphones or tablets within the department. At the same time, the city had a reasonably strong community launch and had been trying to maintain "word of mouth" campaigns; the interviewee even asked the interviewer, while completing the survey, for additional advice on how to publicly promote the system.
Summary: The Many Ways to Undermine Full Adoption
The main lesson of the survey was that there was no single reason why some municipalities were successful in the implementation of Commonwealth Connect and others were not, but that there were many points at which the process might be interrupted. Some rural communities indicated that the basic need for such a system was limited given the low volume of maintenance needs, echoing results from the quantitative analysis. In less extreme cases, almost all of the municipalities with no city usage or limited internal support had little interest in or understanding of how the data might support other uses, such as performance management or learning about community needs. In some cases, low levels of technological savvy made the system operationally irrelevant. In others, we saw intriguing internal power dynamics, as in the three municipalities where a strong directive from the top was either ineffective or counterproductive in galvanizing broad-based adoption across departments. At times, the overall utility of Commonwealth Connect was undercut by an unwillingness to use it as a work-order system, and in some municipalities, it seemed as though the internal launch was only half-hearted. In sum, internal adoption could break down at any point, from joining the program to the internal launch to the continued reinforcement of its use. This is not all that surprising, but it illustrates the many challenges that might arise in the implementation of such a program, making any given failure largely contingent on the particulars of the local context. This also explains why the initial quantitative analysis failed to discover very much.
A second lesson from the survey offered some explanation for the correlations between usage of Commonwealth Connect by city employees and adoption of the tool by the public. A few municipalities with city usage but low constituent engagement appeared to disregard the need to model use of the system for their constituents, which distinguished them from colleagues in other municipalities who had successfully translated internal adoption into effective use by the public. The converse was visible in the final example above, a municipality that had made little internal use of the system but had built sufficient outreach around it to elicit strong community participation. Such mismatches between internal usage and marketing are uncommon for political and logistical reasons, but they illustrate the point: even though custodianship is primarily motivated by care for the local space, 311, as a coproduction program, is still a collaboration between the public and the government. Members of the public need to be convinced that the municipality itself believes in the system. Without that reassurance, they may suspect that the city is not keeping up its end of the bargain and come to view their own efforts as largely ineffectual, whether or not they believe in the cause.
Conclusion
The rapid growth of urban informatics has created disparities across cities in their capacity to leverage technology-driven policies and programs. In simple terms, this emerging digital divide separates large metropolises, which have the financial and human resources to develop, implement, and effectively leverage such tools, from their counterparts that do not. Placed within the framework put forward by scholars of the digital divide, this would largely point to differences in material access and skills, and less to attitudes and choices. The Commonwealth Connect program gave us an opportunity to examine this assumption by making a smartphone-based 311 system attainable for all municipalities, effectively eliminating material access as a variable. As a result, we were able to see the extent to which the other components of the digital divide distinguish technology “haves” from “have-nots” at the municipal level. Doing so also permitted us to evaluate the effectiveness of the scaling approach embraced by consortia like MetroLab and the types of challenges it might encounter.
Attitudes and skills each played prominent roles in determining the effectiveness of Commonwealth Connect across municipalities. Some attitudes were based on objective observations, such as the possibly prudent calculation by many rural municipalities that they would derive limited benefit from a 311 system. Others were more subjective, such as a general resistance to technological solutions or, in contrast, a tendency to adopt the program because neighboring municipalities had already done so. In terms of skills, many municipalities failed to utilize the data in ways that would have been beneficial. Just over half took the natural next step of translating the requests into a work-order system, and about the same number used them for performance management. Fewer still used the data in other creative, proactive ways, such as examining resource allocation, communicating with the public about government services, or quantifying public engagement. One might argue that these uses of the data should be classified as choices in the digital divide model, but the interviews suggested that they depended less on deliberate choice than on internal capacity for, or even awareness of, such opportunities.
The story of Commonwealth Connect, however, does not fit neatly into existing models of the digital divide, suggesting the need for a model that more effectively addresses policy implementation. Put somewhat glibly, it is easier to generalize why the program worked than why it did not. When it was successful, all the necessary actors and institutions were engaged and supportive. When it was not, something did not go as planned, but that something could have occurred at any decision point, resting with any key official or department. Technological solutions are as subject to the complexities of bureaucracy as any other government program. This is especially true for something like a 311 system, which requires the buy-in and participation of multiple agencies. In terms of the digital divide framework, the bureaucracy is an additional force that must be considered, one capable of strengthening or weakening any of the model’s four components. For example, the attitudes of some leaders might be critical in that their endorsement is required for success, whereas the skills and vision of those closer to daily operations are essential for the full realization of the program’s potential. As such, the bureaucracy is not so much a fifth, stand-alone component of the digital divide model as the very context within which the other four must operate during municipal technology adoption.
The practical implications of this chapter are promising. There is evidence that scaling and transferring technological innovations in policy and practice can bridge this particular digital divide, provided the necessary actors are supportive and the bureaucracy is properly navigated. Such challenges accompany any change in governance and should not be taken as unique to the current case. Unfortunately, as discussed at the beginning of this chapter, this addresses only one of the two digital divides arising from urban informatics. Throughout this book, I have treated modern digital data as a public resource, and yet the true public, meaning community members and the grassroots organizations and nonprofits that seek to support and serve them, rarely has the analytic skills necessary to utilize those data effectively. Chapter 8 takes up this second digital divide facing the field.