10 Intelligently Steering Toward More Communitarian Technological Societies

Despite all the risks and undesirable unintended consequences produced by technological innovations,1 rarely have they been consciously and intelligently governed to lessen or more quickly respond to deleterious effects. In contrast to discourse that frames technological change as a natural evolutionary process, a more apt metaphor is that of a driver, who can steer, accelerate, and brake an automobile in response to where he or she wants to go. Often it seems like innovation proceeds, as Neil Postman and Charles Weingartner described, as if important decision makers were “driving a multimillion dollar sports car, screaming, ‘Faster! Faster!’ while peering fixedly into the rearview mirror.”2 Although the two authors were describing the state of the mid-twentieth-century education industry rather than technological innovation writ large, the image remains apt. Massive technological undertakings such as nuclear energy and the space shuttle, as political analysts have shown, largely proceeded with the assumption that the relevant experts had it all figured out, only to be proved wrong by catastrophic or near-catastrophic accidents and products that came in over budget, past due, and nowhere close to delivering on early promises.3 Today, one thinks of Toyota’s attempts to ignore sticky electronic accelerator pedals, which subjected some drivers (and their passengers) to uncontrolled accelerations, or British Petroleum’s negligence in the blowout of the Deepwater Horizon oil rig, which gushed some five million barrels of oil into the Gulf of Mexico over eighty-seven days in 2010.

Much the same tends to be true of technological innovation with respect to what could be termed psychocultural goods like community, although such goods are seldom considered by those studying the consequences of emerging technologies. As Edward Woodhouse noted, Philo T. Farnsworth could hardly foresee the broader cultural changes that would result from his invention of television tubes, namely their role in civic decline as they became omnipresent in North American homes.4

The failure to adequately govern sociotechnical change partly stems from the belief that technological development evolves autonomously, “progressing” societies all on its own. Consider historian Daniel Boorstin’s assertion that “the advance of technology brings nations together” with “crushing inevitability” and his celebration of TV for “its power to disband armies, to cashier presidents, to create a whole new democratic world.”5 Boorstin’s attitude no doubt reflected the broader reluctance of mid-twentieth-century citizens to seriously consider the potential of technologies to enable undesirable or unwanted changes. Similar sentiments were common a century ago regarding electrification and have remained prevalent—consider the contemporary belief that the Internet will automatically democratize despotic governments.6

Although considerable advances in thickening different dimensions of community life would be enabled by addressing the sociopolitical barriers outlined in the last three chapters, any gains would likely be partial and short-lived without more foresighted public decision making regarding technological innovation. How would new innovations need to be governed, given that “as we invent new technical systems, we also invent the kinds of people who will use them and be affected by them?”7 How exactly might citizens more intelligently steer technological development in ways that protect, sustain, or enhance the experience and practice of thick community? How could intelligent trial-and-error strategies be applied to technological innovation to protect against the potential risks to community life?

Intelligent trial and error, however, is as much a political framework for avoiding unintended consequences as a set of strategies for assuring organizational success. How could trial-and-error strategies help improve the implementation of communitarian technologies? Finally, how can the framework be extended to already built and obdurate technologies? Indeed, many of the technologies that I have described thus far, such as suburbia and limited-access freeways, are massive sociotechnical barriers to thicker community life. Communitarians would not only benefit from the more intelligent steering of new innovations but also from the ability to prudently dismantle or reconstruct existing technologies.

Intelligent Trial and Error

The intelligent trial-and-error (ITE) framework has been developed by political scientists engaged in the study of risky technologies and organizational mistakes.8 ITE begins with a call for early deliberation by well-informed and diverse participants. Decision-making processes would, in turn, fairly represent those who might be affected by an innovation, be highly transparent, and broadly distribute both the burden of proof and the authority to decide. If deliberations result in the choice to proceed, innovation would continue prudently and with adequate preparations to learn from experience. That is, activities including premarket testing, establishing redundant back-up systems, building in flexibility, and scaling up gradually would help to ensure that mistakes would be either averted or as small as possible. Moreover, the existence of diverse, well-funded monitoring groups and disincentives for mistakes, such as fees and fines that establish victim compensation funds, would create a policy environment that encourages error correction. Finally, deliberation, testing, and monitoring would involve relevant experts, and their results would be broadly communicated. Advisory assistance, especially concerning environmental and social effects, would also be readily available to have-nots, ensuring equity. The above reforms, furthermore, would be unlikely to be effective unless polities instituted significant protections against the conflicts of interest widely shared among technical experts, business executives, politicians, and other elites.

Some degree of trial-and-error learning no doubt occurs for most technological innovations; the problem is that learning frequently happens too late for easy error correction. Consider the Internet. A lack of ITE during its development has harmed cyberasocials—people who are unable or unwilling to feel a sense of social copresence or belonging through digital devices—and others whose needs for thick community are poorly served by contemporary communication networks. Any political deliberations regarding the psychocultural risks of the emerging Internet appear to have been largely ineffective, given that today’s Web is ambivalent at best with regard to thick community. This ineffectiveness is further clear from the absence of direct steering of the Internet’s development by governments with respect to social values and goals, despite substantial humanistic concerns being raised throughout its history.9 The relative lack of controversy surrounding governmental decisions, such as the one to shift from the research-focused NSFNET to the commercial Internet and the choice to subsidize high-speed Internet infrastructures, gives the impression that such moves were made without any conscious consideration of the potential unintended consequences.

Table 10.1 Major Components and Requirements of Intelligent Trial and Error

Major Components of ITE and Their Requirements

Deliberation
- Started early
- Fair representation of those potentially affected
- Transparent
- Burden of proof and decision authority shared

Precaution
- Initial testing
- Gradual scale-up
- Redundant safety measures
- Built-in flexibility

Error reduction
- Recognition of need to learn
- Well-funded, diverse monitoring groups
- Disincentives for resistance to error correction
- Incentives for error correction

Analysis and communication
- Sophisticated social and environmental analysis
- Protections against conflicts of interest
- Findings readily available through appropriate means
- Extra assistance to have-not participants

As with other similarly hyped emerging technologies, diverse and divergent partisan positions regarding the Internet were frequently dismissed. As journalist Lee Siegel wrote in 2008, “Anyone who does challenge Internet shibboleths gets called fuddy-duddy or reactionary.”10 A debate framed much in the same way former U.S. president George W. Bush presented the second Iraq War, “You’re either for [the Internet] or against [the Internet],” was woefully unprepared to head off the unintended consequences. Most societies jumped headlong into the process of restructuring their informational and communicative infrastructures vis-à-vis the Internet.

Active preparation for learning about the potential social effects of digital mediation would have entailed a very different evolution of the Internet. Experimental testing uncovering the existence of cyberasocials could have been performed in 1994 rather than in 2014.11 Doing so would not only have tempered hyperbolic rhetoric about the coming “global village” but also could have identified and perhaps even energized a clear set of stakeholders likely to be harmed by the increasing digital mediation of social life. Cyberasocials could have then worked alongside advocates for local, embodied community, and against Silicon Valley cyberutopians, in deliberations regarding how to best proceed with computing technologies. Rather than pursue a poorly controlled, market-led diffusion of Internet technologies, with psychological and sociological research being performed largely too late to influence decision making, deployment could have proceeded more gradually. Governments might have pushed for certain cities and neighborhoods to be wired earlier than others and funded dozens of longitudinal studies similar to that performed in a Toronto suburb (Netville) during the 1990s.12 Such studies, moreover, might have had an explicit focus on the consequences for cyberasocials and involved staunch critics, not just apologists. Results from these studies could have then been funneled back into venues for deliberation.

If the decision had been made to proceed, funds could have been set aside to compensate the people harmed if and when existing modes of community life became partly undermined by digital technologies. Those unable to feel a close social connection through online social networks would have benefited from a tax on Internet access and digital devices that, in turn, subsidized more communitarian neighborhood design, publicly oriented pubs and cafés, and technologies that could have helped them to limit the intrusion of Internet technologies into their social lives. Internet firms might have been incentivized to experiment with ways of providing Internet access that coincided better with the geographic reach of local community. Internet service providers could have been encouraged to set up networks that afforded substantially higher local data speeds and special applications that work only within neighborhoods and districts, incentivizing the use of such technologies to connect with more proximate friends and loved ones.

ITE, Community, and Emerging Technologies

Looking back on the past is only helpful, however, for illustrating how the strategies that make up intelligent trial and error could have been useful. The more important matter is the application of ITE to emerging technologies. How might research and development of driverless cars and companion robots be steered to protect against potential undesired effects on community?

Driverless Cars, Sprawl, and Job Loss

As is the case for most hyped innovations, public debate on driverless cars in various media outlets rarely includes a careful consideration of the full spectrum of potential risks. Some observers, such as a blogger for Freakonomics, breathlessly describe the autonomously driving automobile as a “miracle innovation” and a “pending revolution.” Even the more circumspect take by ethicist Patrick Lin was focused fairly narrowly on the ethical issues arising in the process of adaptation rather than on conscious governing, namely how to assign blame when accidents occur, the possible effects on the insurance industry, and the risks of hacked driverless cars. Moreover, Lin ended his analysis with unmitigated technological determinism: “The technology is coming either way. Change is inescapable.” Authors of a Mercatus Center white paper likewise contended that “social progress” will be best served by opening up space for driverless cars to be an area of “permissionless innovation.”13 They argued that governments need to limit regulation of autonomous automobiles so that firms can “innovate” with abandon, working under the presumption that everything always works out for the best when innovators proceed largely unhindered.

Hence, the first major barrier to more prudently steering driverless car technology is the lack of recognition of the need for ITE learning. Despite some forty years of scholarly activity within science and technology studies, many people continue to believe that technological development proceeds autonomously, that technical change automatically results in humanistic advancement, and that everyone benefits equally from technological innovation; even otherwise intelligent journalists, politicians, engineers, and professional ethicists have not stopped embracing these incorrect ideas about technological change. Getting over this initial hurdle will be necessary before ITE-like governance could be feasibly applied to driverless cars or any other emerging technology.

The self-driving car is still, nevertheless, in its incipient stages, the ideal time for effective deliberation regarding the potential consequences. Although Google technicians have logged thousands of hours of test driving, their cars have not yet been fully deployed—although forms of automated assistance have recently become features in high-end automobiles. Besides the harms accruing to car salesmen, cab drivers, and long-distance truckers potentially put out of work, there are real concerns about potential secondary and tertiary effects of autonomous vehicles on urban landscapes. Positive unintended consequences are, of course, possible. If driverless cars were deployed as inexpensive cabs, decreased rates of private car ownership might lead to denser urban environments—given the lessened need for copious amounts of parking. On the other hand, self-driving automobiles might help spur an increase in hypercommuting and domestic cocooning. Freed from the need to pay attention while driving, people seeking to reduce their rents and mortgages might be enabled to extend their commutes, helping to create additional sprawl as they seek to leapfrog their fellow citizens in pursuit of an ever more bucolic suburban home.

Any debate forum regarding these potential harms would need to have suitable precautions against the discussion being premised on the assumption that the technology is inevitable and against other scientistic or technocratic framings. Ensuring that the technology’s inevitability is not taken for granted would ensure a more appropriate distribution of the burden of proof between advocates and critics. Likewise, deliberation would need to avoid being overly tied to quantitative risk assessment or notions of “efficiency” and remain open to debates over values. As scholars who study scientific controversy have shown, a narrow focus on quantitative risk assessment or technical expertise in public deliberation often biases decision making toward the unexamined cultural values of the technoscientific experts involved.14 Scientistically or technocratically framed debate on genetically modified crops, for instance, would primarily consider the effects on crop yields, cost, and potential human harms via ingestion. Little attention would be devoted to how such crops might reinforce a food politics based on industrial or factory farming, centralize economic power in large agricultural conglomerates and biotech firms, or otherwise influence consumers’ relationship with what they eat. Hence, inquiries into autonomous automobiles would not simply examine questions concerning speed, cost effectiveness, and calculable risk of accident, but these vehicles’ effect on the good life and well-being as well. Is the mode of living offered by driverless cars actually desirable, or might a large constituency prefer steering toward a world less shaped by automobility?

If the decision were made to proceed with autonomously driving automobiles, governments would need to require and help fund initial precautionary measures, extensive testing, gradual scale-up, and the flexible implementation of the technology. One fairly straightforward initial precaution against sprawl would be to shift road and highway funding from gas surcharges and general taxation to a tax on vehicle miles traveled.15 Doing so would force riders and drivers to bear more of the costs of automobility, and make it easier to implement congestion charges and other fees. By preventing the user-borne costs of automobile travel from decreasing too much, a vehicle-miles-traveled tax would help avoid certain anticommunitarian outcomes like increased hypercommuting or the abandonment of multimodal transit. Moreover, implementing such a tax on driverless cars would be simple from a technical standpoint, given that they already depend on global positioning satellites for navigation. Indeed, experiments suggest that it would be relatively straightforward to assign close to 99 percent of the tax receipts to the correct jurisdictions.16
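The jurisdictional accounting such a tax would require is computationally simple, since a navigation log already records where each mile was driven. The following is a minimal sketch of allocating vehicle-miles-traveled receipts by jurisdiction; the tax rate, jurisdiction names, and function name are purely illustrative, not drawn from any actual proposal:

```python
from collections import defaultdict

# Hypothetical per-mile rate, in dollars; a real rate would be set by statute.
RATE_PER_MILE = 0.015

def allocate_vmt_tax(trip_segments):
    """Allocate vehicle-miles-traveled tax receipts to jurisdictions.

    trip_segments: iterable of (jurisdiction, miles) pairs, as could be
    derived from a vehicle's GPS navigation log.
    Returns a dict mapping each jurisdiction to the tax owed to it.
    """
    receipts = defaultdict(float)
    for jurisdiction, miles in trip_segments:
        receipts[jurisdiction] += miles * RATE_PER_MILE
    return dict(receipts)

# A round-trip commute crossing two hypothetical jurisdictions:
log = [("Springfield", 12.0), ("Shelbyville", 8.0), ("Springfield", 12.0)]
print(allocate_vmt_tax(log))  # tax owed to each jurisdiction
```

The hard problems for such a tax are political (rate setting, privacy of location logs) rather than computational, which is consistent with the experimental finding cited above that receipts can be assigned to the correct jurisdictions with near-perfect accuracy.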

Testing requirements could go far beyond the fairly narrow examinations of safety currently being conducted by Google and a few academics.17 Evaluators would further need to consider the potential effects on individual and social well-being. Indeed, the easily recognized advantage accruing to Google, if driverless cars were implemented, is that automobile travel would become yet another corner of contemporary living colonized by digital devices and advertising-driven surveillance. Would the ability to play Candy Crush or check Facebook in a driverless car do much to counter the psychological effects of gridlock or the civic deficits produced by commuting?18 Would companies expect greater output from their nondriving commuting workers? Would the interior environments of driverless cars encourage passengers to interact with one another, or would they exacerbate the problem of digital cocooning?

Secondary and tertiary effects of driverless cars could be discerned before too much financial and physical capital has been invested, if their broader deployment were limited to a relatively small set of cities or regions for several years or a decade. It would be far easier to change or reverse course on driverless cars if there were hundreds of thousands on roads instead of several million. Flexibility, moreover, could be maintained by not permitting hype over autonomously driven automobiles to lead to the neglect or dismantling of sidewalks, bike lanes, and public transit. Decision makers, in their excitement over driverless cars, risk repeating the same mistakes made in the mid-twentieth century when many people saw the original automobile as the end of transportation history. Might the driverless car be viewed by some observers to be a “miracle innovation” partly because it continues rather than opposes the sociotechnical momentum of the status quo of automobility? In any case, it would be wise to build in flexibility. Future generations might decide that a transportation system built around the driverless car is just as undesirable as traditional automobility. Increasing energy and resource scarcity coupled with rising global demand, barring some miraculous innovation like nuclear fusion, might render infeasible the vast communication networks, highway systems, high-end electronics manufacturing, and stable electricity grids that driverless cars need to function. Intelligent trial-and-error strategies would help ensure that driverless cars do not end up being the twenty-first-century analog to the abandoned roadside statues of Easter Island: a testament to the consequences of a myopic pursuit of unsustainable notions of progress.19

Besides effective deliberation, publicly accountable and fair decision-making processes, and a gradual deployment process, ITE steering could be encouraged by funding monitoring efforts and incentivizing error correction. It is already quite clear, from cases like Ford’s negligence regarding the Pinto’s exploding gas tank or Toyota’s delayed recall on sticky accelerator pedals, that automobile technology firms will often disavow responsibility or drag their feet in dealing with the harms resulting from their products.20 The tendency of automobile companies to shirk responsibility and fight regulation rather than engineer solutions frequently stymies sensible and largely beneficial innovations.

Governments might avoid a similar situation for driverless cars by assigning some accident liability to the firms designing the product by default, placing part of the burden of proof on producers rather than exclusively on drivers/occupants. Firms would likely respond by carrying “technical malpractice” insurance, similar to the policies doctors keep in order to compensate victims of medical mistakes. Indeed, many architects and engineers in the construction industry already carry professional liability or errors and omissions insurance.21 This legal arrangement would encourage firms to more quickly correct mistakes, because they would want to keep their insurance premiums low. Some legal scholars, however, contend that potential civil penalties are weak deterrents of irresponsible design, arguing that they should be supplemented with criminal penalties for negligent engineers and corporate officers.22

As important as such policies would be, legal liability tends to extend only to primary undesirable consequences, namely accidents, and not to secondary and tertiary sociocultural effects. For example, it is a near certainty that autonomous automobiles would lead to job losses among taxi drivers and truckers, and in other professions. As observers of similar economic changes have noted, such “economic dislocation [generally] entails the wholesale destruction of civic networks.”23 To incentivize driverless car firms to correct the “errors” wrought on working-class families and their communities, rather than run roughshod over them, an additional sales tax on rides and driverless car purchases could be used to force consumers and producers to subsidize the welfare benefits and retraining of people put out of work. Alternatively, funds might be raised by charging driverless car firms for the access to the public wireless band upon which their innovations depend. Such charges could be scaled alongside increases in unemployment, which might incentivize some large firms to devise their own schemes to get the unemployed back to work. Finally, urban scholars and geographers would be employed to monitor changes to residency, work, and social patterns. They would be tasked to investigate questions concerning the effects of driverless cars on social inequity and the practice of community.

Companion Robots and Psychocultural Risk

Attempting to address the risks of driverless cars is similar to preventing the harms of toxic chemicals or automation. Working hard to lessen the effects on the urban environment, accidents, and job losses probably would not seem too controversial to governments and many citizens. Technologies such as companion robots, on the other hand, pose what could be called psychocultural risks. The possible harms have less to do with the environment or people’s livelihoods than with potentially insidious changes to culture and users’ psychological dispositions. A case in point is how relying on surveillance and security technologies to induce moral behavior in children, as well as to protect them from perceived outside threats, creates new risks at the same time that it fails to address the underlying social causes of violence.24 Such techno-fixes for the problem of safety and moral development not only direct attention and money away from better social support programs, but also risk producing a false sense of security in adults, creating feelings of depersonalization and the loss of privacy in youth, and discouraging adults from intervening in children’s moral development through empathic dialogue.

Psychologist Sherry Turkle has described the dangers of companion robots as emerging from the fact that they “promise a way to sidestep conflicts about intimacy” as well as the possibility that users will begin to judge their human relationships in terms of what robotic companions appear to offer.25 Some of her interviewees expressed a preference for robotic relationships, citing how they would ostensibly have a bigger “database” from which to give “better” advice and that relationships with them would entail fewer demands and risks. Such robots, moreover, are unlikely to have subjectivities of their own; they would be programmed to be maximally compatible with the whims and desires of users.26 In all likelihood, they would not be beings with their own imperfections, needs, and desires but objects to be consumed, hence posing particular risks for people’s sexual development. Research on pornography has associated it with declines in the quality and quantity of sexual intimacy within romantic relationships and lower scores on measures of marital well-being.27 Although couples for whom pornography use is not so damaging no doubt exist, there is good reason to worry that the promise of riskless robotic surrogates for human intimacy will not be a boon to some people’s aspirations for lovingly fulfilling sexual relationships. Many partners would not want to be measured against an always ready, willing, and compliant sex robot. Lastly, as Turkle points out, the ability of robots to serve as alluring anodynes for loneliness might enable children to feel less guilty about not visiting their elderly parents, and might reduce governmental and charitable organizations’ concerns about the chronically isolated.28

Despite these risks, there are those who want to charge forward with the development and deployment of companion and sex robots. AI expert David Levy has written an entire book expounding the ostensible inevitability, normality, and advantages of relationships with sex robots, advantages he appeared to attribute partly to their presumed greater knowledge of the mechanics of the human orgasm.29 MIT robotics researcher Cynthia Breazeal has contended that her work is “not about … replacing people” even as she released a robot called JIBO that, along with other features, reads bedtime stories to owners’ children.30 To be fair, Breazeal appears more cognizant of the potential unintended consequences than advocates such as Levy. Nevertheless, her misreading of the history of automation as having merely eliminated “the jobs that people don’t necessarily want to do anyway” and as “[empowering] people to do more interesting work” suggests a great deal of naiveté about the politics of technological change. If societies are to avoid the potentially undesirable unintended consequences produced by companion robots, techno-enthusiasts should not be alone at the helm—given their clear biases and conflicts of interest.

If technological societies were to intelligently steer the development of sociable robots, products such as JIBO would likely not be on the market without some form of premarket testing. Technological societies might rise to the challenge posed by such devices by instituting a version of the U.S. Food and Drug Administration (FDA) to evaluate technologies with psychocultural effects. Technologies under its purview would include devices promised to help users realize happier and less harried lives, education technologies claimed to produce smarter children, and social robots declared to “strengthen human relationships” rather than detract from them. Firms might be forced to have independent scholars perform the kind of clinical and observational studies that Sherry Turkle has done for decades, asking questions including, “How does the use of this technology influence and interface with users’ expectations, dispositions, desires, and anxieties?”

Different populations undeniably embrace and use any given technology very differently. Technologies with potential psychocultural effects do not pose the same risks for everyone. Nevertheless, citizens and policy makers currently have little to no information to guide their decisions about community-influencing technologies. Drawing conclusions about potential psychocultural consequences, moreover, is less straightforward than determining whether an anti-inflammatory medicine contributes to heart attacks. The desirability of these technologies depends on the vision of the good life embraced by those evaluating them. Robotic love advocate David Levy, for example, has seemed guided by a vision of the good sex life characterized by a more mechanistic than emotionally thick understanding of intimacy.31 Some of Sherry Turkle’s interviewees explicitly privileged relational convenience over emotional depth in their considerations of possible robotic companionship.32 Nevertheless, improved premarket assessment of these technologies, at a minimum, could enforce greater honesty by the firms producing them. Imagine a storytelling social robot coming with a label stating, “Studies have shown that this robot enables busy parents to substitute time with their children with digital surrogates, which could negatively affect the emotional depth of familial relationships. Children, in turn, may be taught that their needs for belonging and intimacy are best met through gadgets rather than by relating to other human beings. They may even come to lag behind their peers in social and emotional development.”

Other aspects of FDA-style governance might be difficult to map onto psychocultural devices. At what point would a recall of a social robot be made? Unlike drugs such as Vioxx, which contributed to heart attacks and strokes in users,33 the potential harms of psychocultural devices may not manifest in clear physiological symptoms. Moreover, due to entrenched patterns of thought concerning “personal responsibility,” people tend to assign more blame to users of digital devices than to those making them. Problems are taken to emerge from improper use rather than improper design. Consider Internet sociologist Zeynep Tufekci’s recent tweet: “Do all these people who write about fake & wasteful social media ever ponder if this relates to their choices in friends & media usage?”34 Professor Tufekci seemed to suggest that those who are concerned that social media too easily encourages shallow social practices should be putting the blame on themselves, not the technology.

Clearly the relationship between design and effect is more complex for gadgets than it is for pharmaceuticals. Users of digital devices no doubt coshape their effects in ways unavailable to people taking medicines: a patient’s attitude about medicine is unlikely to change whether his or her heart suffers an infarction in response to taking Vioxx. On the other hand, people regularly struggle to follow through with their decisions regarding the use of both substances and technologies. Consider those who struggle to abstain from alcohol or electronic gambling machines, despite the existence of those who can more easily enjoy both with moderation.35 People can and do use technologies in ways that are at odds with their longer-term or more highly valued goals and struggle to alter their entrenched sociotechnical habits. Any argument that people’s usage of technologies “reveals” their “true preferences” or is reflective of “rational” choosing has little logical or empirical basis. Consider a study of classroom instant messaging, which concluded that students “seem to be aware that divided attention is detrimental to their academic achievement; however, they continue to engage in the behavior.”36 Similarly, many readers have probably experienced and regretted a night spent at home on Facebook or Netflix rather than going out to socialize. The agency of users regarding the effects of technologies on their lives is always limited and circumscribed.

The same would almost certainly be true of companion robots: many users would enjoy them and continue to use them despite an awareness of how they conflict with other aspects of their well-being. Although technological libertarians are likely to protest, many citizens would welcome policies that provided helpful nudges toward behaviors they consider more desirable. Taxing devices that encourage cocooning and other anti- or asocial behaviors and using the funds to develop communitarian alternatives, as I have suggested throughout, would be a good first step.

Another potentially helpful move by a psychocultural technology assessment organization would be to encourage developers to make their products more flexible by allowing users to lock out some aspects of their functionality. Indeed, reflective, intelligent choice making when using cell phones and other screen technologies is frequently difficult, because these devices too easily cater to users’ whims and anxieties. Their apparent functionality hides how they push users toward undesired and inflexible habits. Users’ own behavior reflects this fact. Consider the popular computer program Freedom, which allows users to shut off their Wi-Fi access until they reboot their computers. Technology writer Evgeny Morozov has even gone so far as to lock his Wi-Fi card in a timed safe (along with the screwdrivers he might use to circumvent the timer).37 Users would not be driven to such lengths if their ostensibly empowering devices merely enhanced their own agency. Such lockout features, in any case, ought to be built into risky devices. Breazeal’s company, for instance, would be required to provide users with an inexpensive service through which they could lock out certain functions of JIBO, such as storytelling, until they requested that those functions be restored. Other social robots might be sold only under the condition that they be nonfunctional between the hours of five and eight, discouraging users from cocooning with them in the evening. In the same way that many European countries do not allow most stores to remain open on Sundays, recognizing that closing encourages friends and family to gather over meals and drinks at least once a week, new gadgets could be limited so as to leave similar openings for or nudges toward embodied togetherness.

Implementing ITE Governance

The above suggestions only outline what the ITE-like governance of technologies has to offer in the abstract. Where would ITE exist? At what scale? Who would be responsible? I will not pretend that the next few paragraphs provide an entirely satisfactory answer. Nevertheless, some of the more concrete possibilities are easily imaginable by looking to the governance structures that already exist in the United States and elsewhere.

I have already suggested that something like the U.S. FDA could regulate products with psychocultural risks at a state or national scale. Yet, given the greater complexities in ascertaining psychological and cultural—as opposed to physiological—harm and the cultural barriers to regulating consumer goods outside the realm of health and human bodies, the more recently formed Consumer Financial Protection Bureau (CFPB) may be a more apt model. An institution modeled after the FDA would serve as a gatekeeper, having the power to prevent the entry of new innovations into consumer markets and take them off store shelves. Even though such capacities might be highly desirable in certain cases, it may be too politically difficult to establish an institution with such powers in the foreseeable future. Citizens may be too well accustomed to the status quo of innovation without permission or representation to demand it, and the resistance of powerfully connected technology firms may prove too great. An institution modeled after the CFPB, in contrast, would be tasked with educating consumers, enforcing greater transparency by firms regarding potential harms, and conducting behavioral research. Moreover, it would have the ability to use appropriate mechanisms (e.g., taxation/subsidy) to incentivize more communitarian technologies and disincentivize potentially harmful ones. Regardless, in the United States’ political context, the creation of either such organization would entail an act of Congress, involving congressional approval of a presidentially appointed director. Given these political challenges and barriers, other nations might be the first place to look for psychocultural technology assessment institutions. Indeed, countries such as Denmark are much further along in this regard.

What about technologies whose effects on community life are not merely psychocultural but simultaneously economic, material, and political? With the demise of the Office of Technology Assessment, which advised Congress on new technoscientific innovations, evaluation of technological effects happens in a far more decentralized and ad hoc fashion.38 Components of assessing and regulating driverless cars might fall under the purview of the Departments of Transportation, Labor, and Housing and Urban Development as well as the Federal Communications Commission, without any clear means of coordinating among these disparate agencies. The visions of individual governmental agencies are often too narrow to adequately envision the range of possible desirable and undesirable consequences. The National Highway Traffic Safety Administration is concerned primarily with the possible consequences of autonomous vehicles for individual safety, emissions, route-planning data, and the mobility of the disabled, which number among but far from exhaust the technology’s possible sociopolitical implications.39 Reinstituting an improved version of the Office of Technology Assessment, perhaps modeled after the Danish Board of Technology, which has advised Denmark’s public and parliament since 1986, would be the most obvious pathway for ensuring that some minimal level of ITE technological governance is performed.

With any sort of institutional design, the devil is in the details. Careful attention ought to be paid to establishing institutional independence. Without it the bureaucracy faces a significant risk of regulatory capture, wherein the actors being regulated by an institution manage to manipulate it to serve their own interests.40 Consider how hydraulic fracturing methods for drilling for natural gas are not subject to the standards of the Clean Water Act. This exception was implemented at the behest of then-Vice President Dick Cheney, arguably on behalf of the oil industry, and it has in turn tied the hands of the Environmental Protection Agency. The FDA, moreover, is itself often criticized for being a “captured” agency and having too close a relationship with industry.41

The benefit of providing a bureaucratic institution with a large amount of independence, although commonly framed as “protection from politics,” is that a commitment to serve a particular set of partisan interests can be rendered obdurate. A feature that seems antidemocratic on its surface can actually enhance democracy by ensuring that the needs and wants of less empowered groups continue to be served despite opposition by political elites. As the political scientists Charles Lindblom and Edward Woodhouse have pointed out, “business groups … tend to have the advantage of better organization and finance compared with most other organized interests.”42 Hence, any organization tasked with helping to prevent consumers from becoming psychocultural or socioeconomic victims of innovation would need some assurances of financial and other kinds of independence, protecting it from capture by the powerful business firms that produce new gadgets.

ITE-style governance, at the same time, need not occur only at such large scales. Towns, cities, and regions could more often regulate the technologies existing within their borders. Doing so would have the added benefit of enhancing the practice of local political community, as do social movements more generally (see box 10.1). Such efforts might draw lessons from the Amish. Contrary to the widespread belief that they are simply technology-fearing Luddites, the Amish have been practicing something similar to ITE within their district communities for generations.43 The introduction of any technology is heavily debated and voted on by the adult residents of each district, with new technologies being given trial periods. As a result, life in most Amish districts is far from a replica of the seventeenth century: one sees pneumatic power tools, diesel generators for charging batteries, and neighborhood telephones alongside horse-drawn buggies and gas lighting. This selective embrace of technology reflects the cultural momentum of Amish ways of life as much as their practice of evaluating new technologies against highly revered collective values: humility, community, equality, and simplicity. Technologies, including automobiles, television, and personal phones, have been rejected because the Amish believe that they lead users to be prideful or neglectful of their ties to their families and neighbors. My point here is not that the Amish are perfect or that societies should reject electricity. Rather, I mean only that other communities could employ a similar willingness to govern technologies with regard to their values.

Box 10.1
From Tahrir Square to Zuccotti Park: Community, Place, and Politics

For a few months in 2011, thousands gathered daily at an obscure Manhattan park to protest American inequality. Inspired by Egypt’s Arab Spring protests in Tahrir Square, the Occupy Wall Street movement went further than most demonstrations: hundreds literally occupied Zuccotti Park, overnighting in tents and sleeping bags. The occupation’s level of self-organization rivaled that of many small towns. A finance working group managed a budget that eventually reached almost seven hundred thousand dollars. Comfort stations maintained and distributed a stock of clean clothes, blankets, and tents. Regular meals were prepared for participants and for a portion of the local homeless population. Dishes were washed and the park cleaned. A sizable library as well as a communications center offering Wi-Fi and livestreaming capabilities were located on-site. Occupiers, moreover, practiced thick political community through deliberative processes and near-consensus voting. Although eventually taken down by police, the protest has lived on through spin-off movements—including the fight for a fifteen-dollar minimum wage—and organizations such as Rolling Jubilee, which buys delinquent debts in order to forgive them.

The distinguishing feature of Occupy and Arab Spring, contrary to hype about them being Internet-based revolutions, is that they coalesced in a place. Both no doubt relied somewhat on Internet technologies and national networks of supporters. However, it was their sheer physical presence that gave them strength and worried those in power. Indeed, compare their influence and effects to the online petitions and other purely networked political activities often derided as mere “clicktivism.” Occupiers hashed out strategies and aims through face-to-face deliberation, though not always constructively. Prodemocracy agitation in Egypt benefited as much from local mosques, amenable urban spaces, and the tradition of the post-Friday-prayer protest as from Twitter. Both cases illustrate how, even in the purported network society, place and copresence are still vital to social movement politics.47

In the eyes of some participants, the failure of the Occupy movement to sustain itself or leverage its energy into more concrete and substantive changes was partly the product of communitarian deficits. Indeed, Yotam Marom has argued that the movement “tore at the seams” under enough state pressure because participants “weren’t … grounded in communities enough.” The collective struggled to productively steer internecine disputes into shared agreements and was further hampered by a “call-out” culture frequently more focused on tearing off strips than collaboratively addressing the underlying roots of perceived wrongdoings. Most of all, the protest faltered when it came to concretely outlining a shared future for both the participants and the 99 percent.48

Seen from a different angle, Occupy was more than a protest: it was a prototype. If a group of people without electricity and running water can run a small tent village for months as a democratic political community in the chaos of lower Manhattan, then the same could be done for other organizations, infrastructures, and towns. Occupy’s lost promise lies not so much in its failure to sustain an exhausting long-term demonstration as in never shifting from a political gesture to generating the building blocks for an alternative technological society.

Rather than wait for the federal government to act, communities could begin to regulate new technologies on their own. I have already suggested that cell phone or Wi-Fi signal is one possible area: communities might designate certain areas to be free of cell phones and other devices, apart from emergency calling, to cultivate the public sociability they desire. Towns and cities might choose to prohibit driverless cars, whatever their state governments decide. Imagine local sheriffs impounding Google’s self-driving automobiles! Indeed, recent moves by cities to step in where national governments are laggard, such as Seattle’s banning of plastic grocery bags and implementation of minimum wage increases, suggest a growing willingness to turn to municipal-level politics in dealing with twenty-first-century problems.44 In any case, devolving technology assessment to the local level has the added benefit of further thickening political community. Indeed, studies of deliberative forums, such as citizen panels, find that they strengthen participants’ sense of community and practice of citizenship.45

The barriers to acting more like the Amish, however, are significant. Localities working in such a mode risk drawing themselves into legal battles with large corporations and higher levels of government. Fortunately, the Amish already provide lessons regarding how cities might respond: they have maintained a National Amish Steering Committee, which is tasked with hammering out compromises with governments concerning military service and a range of regulations that conflict with their communities’ rules and ways of life.46 For instance, mid-twentieth-century Amish objected to paying into or receiving government-funded Social Security. In an analogous fashion, likeminded cities could band together to resist, or press for changes to, federal and state rules that would otherwise stand in the way of experimentation with technological governance.

Most towns and cities, at the same time, would likely elect to be more selective than the Amish regarding the technologies they would choose to regulate. Since most localities lack the cohesiveness and insularity of Amish settlements, many of the informal sanctions (e.g., shunning) used by the Amish to regulate personal technologies would be ineffective and probably opposed by most residents anyway. Community-level ITE would likely focus on the technologies most clearly operating in the public realm, and towns and cities would discourage the use of certain personal technologies through a variety of indirect means, such as by supporting public alternatives or eliminating perverse subsidies.

Widespread disbelief in the feasibility of technological steering is likely, however, to remain a significant barrier. Officials in many localities appear more interested in the twenty-first-century equivalent of chasing smokestacks: wooing creative-class hipsters rather than ensuring that technological developments do not threaten important social values. Trying to keep up with “technologically hip” cities inhibits more reflective thinking about ends and means. Citizens too frequently parrot, albeit often as a lament, the thought-stopping cliché “You can’t stop progress.” Equally important as the search for ways to institute ITE are interventions into the patterns of thought that can thwart even easily achievable actions.

ITE as a Set of Strategies

The benefits of an intelligent trial-and-error approach do not consist solely in the avoidance of undesirable unintended consequences but also in enhanced organizational effectiveness. ITE is not merely a mechanism for better steering the introduction of toxic chemicals, the application of genetic engineering, and the mitigation of global climate change; deficits in ITE likewise lead to cost overruns, lengthy delays, and inflexible technologies that fail to deliver on their promised benefits. Organizations that overestimate their expertise and their ability to plan for every eventuality, rather than anticipating that they will learn partly through experience, are more likely to experience these results. Twentieth-century nuclear power, for instance, was not a failure merely because it resulted in disasters such as Three Mile Island, but because it was scaled up too quickly and without enough input from skeptical voices. Only after the construction of dozens of reactors was it realized that neither the plants nor the resulting electricity would be as inexpensive as hypothesized and that citizens would come to see meltdown-prone light water reactors as insufficiently safe. The Space Shuttle program not only produced the Challenger disaster but left NASA with an inflexible and expensive system of reusable shuttles and launch centers. Large initial capital investments led NASA to “stay the course,” despite the shuttle’s many inadequacies.49 The tactics of sensible initial precautions, gradual scale-up, inexpensive trials, quick feedback, diverse participation, and shared political power are as useful for more successful innovations as they are for averting disaster. How might they guide the implementation of thick communitarian technologies?

Trial-and-Error Urbanism

Despite the promise of new urbanist designs to enhance the communality of neighborhoods, their actual implementation frequently leaves much to be desired. Indeed, critics mobilize a litany of complaints that sound eerily similar to those arising in case studies of organizational failure. First, the design process can fail to enable fair or diverse participation. Although most new urbanist developments involve a public charrette, many contend that charrettes are more often a ritualistic performance of participation than a substantive exercise in shared governance. Participation is further limited in cases where the original designer’s architectural vision is written into the local code or ownership covenants.50

The typical new urbanist project is a massive greenfield or infill development undertaken by large institutions or firms, entailing long lead-ups, high initial capital investments, and few mechanisms for eliciting timely feedback prior to attempting to sell large swaths to potential homebuyers. Consider Mesa del Sol, a twenty-square-mile development south of Albuquerque. Taken on by Forest City Enterprises in a public-private partnership with the city and publicly subsidized through a tax-increment financing scheme,51 the massive planned community is projected to take some forty years to complete. Major errors are already apparent. Forest City has recently tried to free itself of the project as homebuyers have proved to be apprehensive about purchasing housing in an experimental neighborhood in the middle of the New Mexico desert.52 Likewise, the city of Edmonton appears to be forging ahead—without the involvement of a private developer—with a similarly massive twenty-five-year redevelopment of its former city airport: Blatchford. As with Mesa del Sol, one wonders if proposed sustainable and communitarian features will last over the long development period or in the face of unforeseen errors. The city has already dropped the idea of a pneumatic garbage system.53

Analysis of similar projects points to several foreseeable shortcomings of large new urbanist developments.54 These planned communities tend to fail to deliver on promises of increased walking, often being “transit ready” but neither well integrated with public transportation nor cutting back enough on parking to discourage driving. Because developers tend to be fairly risk averse when it comes to deviating from the status quo, resulting neighborhoods often fail to substantially increase densities, lending credence to the critique that new urbanism is merely more photogenic sprawl. Furthermore, new urbanist developments are usually much more expensive, reflecting higher-quality building along with the markup typically associated with “niche” goods. Indeed, few new urbanist developments are affordable for lower- and middle-class wage earners.55

Compare such results with those of a more ITE-style process implemented during the development of the pedestrian-friendly, sustainability-focused neighborhood of Quartier Vauban in Freiburg, Germany.56 Deliberation began early on and with diverse interests represented. Citizens were not merely consulted or informed but participated continuously in the project’s planning via the Forum Vauban. Experts in the planning bureau responded to citizen input, although mediation was necessary, and citizens often felt they had to “‘fight’ planners to convince them of the validity of their suggestions.”57 Still, the influence of traditionally powerful actors was lessened. Gradual scale-up and incremental experimentation were attained by subdividing the development into pieces that were purchased by several private developers and around forty citizen-organized “building collectives,” or Baugruppen.58 Selling parcels off to individual Baugruppen took financial pressure off the city, which otherwise would have been incentivized to offload the property as quickly as possible to a single developer.

The overall planning approach was referred to as “Learning while Planning” by locals. The chief urban planner was open to “allowing the development plan to change as a result of continuous learning and evolving standards of the Baugruppen and Forum Vauban. As goals and energy standards evolved, the city was able to incorporate these by putting new restrictions on builders via sale contracts, thus steering development.”59 Though far from perfect—power could always have been more democratically distributed and monitoring improved—the planning process at Vauban displayed a degree of preparation for learning that is incredibly rare in residential development projects of that scale.

The promise of the planning approach pursued in Vauban is apparent in how participants succeeded where similar efforts elsewhere failed. The neighborhood contains a much higher percentage of renewable-energy and passive-design homes as well as a much lower rate of driving and car ownership than the rest of Germany—and even compared to elsewhere in Freiburg. Indeed, citizens managed to achieve quite radical features, including designating many areas as car-free and requiring owners to pay to park their automobiles in garages at the edges of the neighborhood. Over the course of the planning process the vision for Vauban’s development became less and less conventional. This starkly contrasts with the North American experience with new urbanism and other alternative urban designs, where grand visions are frequently watered down as neighborhoods actually get built and sold.60

This result is probably not solely due to the more incremental and deliberative planning process. Certainly, Germany, and especially Freiburg, has a favorable environment for these kinds of developments. Moreover, Vauban’s successes could be at least partly attributed to parceling out pieces of the neighborhood for soon-to-be residents to design themselves. There is an undeniable psychic value to the ability to participate in the design and construction of one’s environment and dwelling space as opposed to purchasing from a list of models imagined by someone else. This no doubt partly explains the greater citizen buy-in at Vauban compared to developments such as Mesa del Sol.

A Tale of Two Groceries

Similar lessons about the utility of intelligent trial-and-error steering can be seen on a smaller scale by comparing two different New York food cooperatives. The Pioneer co-op in Troy opened to much fanfare in October 2010—after some five years of planning—in a newly renovated four-thousand-square-foot building with five hundred members and forty-one workers. The Mohawk Harvest Co-op in Gloversville opened around the same time after a year of planning, crammed into an eight-hundred-square-foot rental space with only around one hundred members. Within a year the Pioneer co-op closed, having amassed some 1.9 million dollars in debt.61 Two years after opening, the Mohawk Harvest Cooperative was consistently earning sufficient revenue to justify moving into a much larger space; it continues to operate to this day.

Given the characteristics of the two towns, one would hardly have expected Gloversville to have been more successful than Troy in producing a flourishing food cooperative. While they are both struggling postindustrial American cities, Troy boasts double the population, a college and a university, and a downtown without a full-service grocery. In contrast, there is a Price Chopper on Main Street in Gloversville, less than a mile from Mohawk Harvest, and a Hannaford not much farther away. Troy, moreover, is located in the capital district of New York (population around one million). Hence, the Pioneer co-op would have been able to attract shoppers from neighboring towns who might have wanted to avoid driving all the way to Albany or Schenectady to patronize a cooperative grocery.

Mohawk Harvest succeeded where the Pioneer co-op failed partly because its organizers emphasized learning over time rather than attempting to imagine and implement the “perfect” cooperative grocery from the outset. This emphasis was driven largely by circumstance. While their compatriots in Troy were able to locate around 2.5 million dollars in grants, bank loans, and member investment, the organizers of Mohawk Harvest had their initial grant and loan applications rejected. They could locate only tens of thousands of dollars in member investments and a personal loan from a local retired dairy farmer.

Easy financing for the Pioneer co-op was a mixed blessing. Organizers spent five years purchasing a somewhat suitable location for around two hundred thousand dollars and then proceeded to sink hundreds of thousands of dollars into renovations and equipment prior to ever making a single sale. Indeed, a recent real estate listing for the vacant building mentioned three walk-in freezers, and the store boasted several brand-new produce, dairy, and meat coolers. Along with an initially bloated payroll, an array of overhead costs, including some sixty-five thousand dollars a year in interest payments alone, meant that turning a profit required a level of patronage that the Pioneer co-op never saw during its first year.62 Moreover, and despite years of planning, the organizers of the Pioneer co-op never seemed sure of their exact mission. They appeared to be trying to compete on price for conventional foodstuffs with nearby supermarkets as well as Troy’s bodegas and corner beer stores, while simultaneously trying to cater to customers interested primarily in organic and local products. This was likely partly the product of the co-op’s debt load, which may have encouraged managers to attempt to reach out to every possible demographic.

The development of the Pioneer co-op displayed several aspects of poor preparation for learning. High initial capital investments made the effort inflexible. The need to recoup these expenses dominated later decision making. Furthermore, important feedback concerning community needs and interest came much too late, arriving after organizers were already locked into owning an expensive building and a surplus of brand-new equipment. The desire to create an ideal cooperative grocery from the outset rather than learn from experience resulted in an expensive and lengthy development process, which limited enthusiasm and led to some unforeseen and undesirable consequences. Members received numerous worried emails from managers fretting over the fact that half of the Pioneer’s members were not shopping there. Because adequate feedback mechanisms were lacking, it was too easy for organizers to overlook the obvious reason: Troy is a college town. The five-year wait meant that many of the initial member-investors had since graduated and moved somewhere else.

In contrast, organizers of the Mohawk Harvest Cooperative moved quickly with a shoestring budget, hoping to grow revenue and get grants as they incrementally learned their community’s needs and wants regarding a cooperative grocery.63 They rented out a small, inexpensive location on Main Street, upgrading their space only after showing consistent revenue. Equipment was bought used whenever possible. Ironically, some of their more recent coolers were purchased from the failed Pioneer cooperative. Furthermore, rather than attempt to compete with Price Chopper, Mohawk Harvest organizers saw their mission primarily as service to local farmers. They viewed their purpose as providing local farmers with a six-day-per-week market and, eventually, acting as a distributor between them and local restaurants and institutions. They arrived at this mission in part because of stakeholder surveys, from which they learned that many farmers neither desired nor considered themselves well suited to selling their own products at local farmers’ markets. Given their long work hours during the growing season and the innumerable tasks still needing to be done over weekends, it is understandable that some farmers might not look forward to a Saturday morning spent bagging produce and counting change.

For all the Mohawk Harvest Cooperative’s ongoing successes, there have also been failures. Although organizers sought community feedback regularly, some patrons were inadvertently excluded when the grocery changed locations. Reflecting the cultural and middle-class background of volunteers and many members, the new location’s interior was decorated with what could be termed a “foodie chic” aesthetic. As a result, organizers soon found that the previously small but significant population of food stamp shoppers ceased to frequent the store. Manager Chris Curro described seeing them stop at the front door, as if they were debating with themselves whether they belonged.64 Yet if organizers had neither proceeded gradually nor monitored their efforts, they might never have recognized the exclusionary effects of subtle changes in store design. Hence, they were well prepared to recognize the error and begin to take corrective action.

Extending and Steering Toward ITE

For all its promise, the intelligent trial-and-error framework suffers from one major limitation and faces a number of barriers. ITE primarily consists of a set of strategies for shaping the trajectories of emerging or developing technologies. It is a precautionary approach focused on preventing new mistakes rather than addressing old ones. This emphasis is no doubt apt, given that it is usually far less costly to fix mistakes early on. Nevertheless, as I have described throughout, several already established and entrenched technologies hinder the seven dimensions of community. Having more compact, walkable neighborhoods filled with third places and vibrant social interaction is a matter not simply of ensuring that new neighborhoods have these features but also of incrementally dismantling and reconstructing the existing transportation networks, zoning laws, and retail networks that make automobility and networked individualism nearly obligatory. How might ITE be extended to the project of eliminating or rebuilding technologies?

Despite any appearance of obduracy or fixedness, even the grandest artifice demands constant maintenance to continue functioning. Major renovations, moreover, are often incredibly expensive. After many decades, highways cannot simply be resurfaced but must be laid anew and have their large concrete supports replaced. Hence, the obduracy of many technologies, especially large infrastructures or urban spaces, is not simply a result of the expense and difficulty of alteration, for major maintenance is often little different in this regard. Rather, it is because maintenance decisions are too often routinized, leaving no opportunity to restart a process of intelligent trial and error. Major renovation and reconstruction of Interstate 787 in Albany, for example, has proceeded with too little discussion or debate regarding whether a transportation network designed in the mid-twentieth century remains desirable. Once a few hundred million dollars have been sunk into the project, short-term prospects for replacing the highway with a boulevard and opening up access to the riverfront are likely to become quite dim.

Standing in the way of substantially refashioning existing technologies are several questionable patterns of thought. No doubt, given the discomfort felt by most people when faced with uncertainty, “staying the course” has a certain psychological appeal. More pernicious are arguments by market conservatives that treat past consumer and political choices as if they were sacrosanct. The current predominance of the suburban built environment, to take one example, is mistakenly believed to be the result of rational actors “voting with their feet” rather than a mixture of government subsidy, lobbying by the construction industry, racially motivated white flight, and regulatory decisions.65 Within such a belief system, more self-conscious and deliberate steering of technology is framed as an attack on freedom writ large. Similarly, established institutions are often defended under the questionable logic that previous generations’ political actors were undoubtedly more thoughtful and objective than today’s activists and politicians. Consider the attitudes of some “originalist” readers of the United States Constitution, who insist on merely trying to “correctly” interpret the intentions of its writers or the then-prevalent understandings of constitutional meaning rather than debate the merits of those intentions and understandings. On a smaller scale, the ultraconservatism of homeowners’ associations regarding rule and design changes seems rooted in the belief that the original developer possessed a near-infallible understanding of how to maintain the coherence, and hence market value, of their neighborhoods.

Such ways of thinking are unlikely to disappear in the short term. Nevertheless, decision-making structures could be set up so that they are more frequently and substantively contested. Major infrastructural decisions typically have public comment periods, but these are sometimes as short as thirty days and leave too little opportunity for substantive revision. Some improvements could be achieved if state regulations forced departments of transportation and planning offices to hold public hearings for revising master planning documents whenever a major component of the infrastructure neared the end of its lifespan. Renovation would be considered by a maximum feasible diversity of stakeholders as if it were an emerging technology and, hence, as if maintaining the status quo were merely one possible trajectory among many. Similarly, federal and state law could require that zoning and planning codes as well as homeowners’ association guidelines and condominium board rules contain sunset provisions. That is, they would be in effect for a period roughly consistent with the time in which buildings and infrastructure would not need major renovation. After that period, they would require the simple, or perhaps super, majority consent of stakeholders to be maintained. If consent is not forthcoming, then a full reappraisal of codes and rules would be required. Such laws would reflect the spirit of Thomas Jefferson’s arguments for holding a constitutional convention for each new generation.66 They would recognize that it is through not only the persistence of laws but also the obduracy of technological structures that decisions made by the dead end up ruling over the living.

What might be done to counter the tendency to make technological decisions without the conscious implementation of the trial-and-error learning necessary to reduce unintended consequences? Rarely is flexibility ensured or the pace and trajectory of innovation quickly changed in response to errors and evolving values. As I alluded to earlier, the reason for this is clear: a lack of widespread recognition of the need for intelligent trial and error. This can be partly attributed to plain old ignorance. If citizens have never been exposed to the idea or practice of politically governing technology, they can hardly be expected to advocate for it.

At the same time, a pervasive governing mentality—“a tacit and often ill-considered pattern of assumptions that fundamentally shapes political relationships, interactions and dialogues”67—concerning technology, politics, and progress appears to often stand in the way of democratically steering innovation. Specifically, many people subscribe to a technocratic viewpoint. They believe that social progress inevitably or even only results from the application of rationalistic means by scientific or technical experts.68 In the mid-twentieth century, technocracy mainly manifested as the faith in bureaucratic management techniques. As is made clear by some observers’ insistence that driverless cars become an area of “permissionless innovation,” today’s technocratic outlook centers around the belief that entrepreneurial innovators, often coming from Silicon Valley, will create “disruptive” technologies that will inevitably lead to economic growth and widely shared increases in well-being.69

Regardless of the form of technocratic governing mentality, the result is a culture in which it is mostly unthinkable to question the motivations, expertise, and abilities of technoscientific elites—or their affluent investors. Democratic steering is seen, in response, as both unnecessary and undesirable, because innovation is believed to inexorably steer itself toward the best of all possible worlds. As technology scholars Jathan Sadowski and Evan Selinger have noted, “By focusing on technology as the dominant force in society—a force that progresses in inevitable ways—technocrats can justify their actions as merely being the outcome of rational, mechanical processes.”70 That is, it is imagined that technoscientific elites are not political partisans likely to steer innovations in directions where they themselves are likely to enjoy the greatest benefits but rather experts simply helping technology evolve toward a preordained, purely rational destination. Left out of this depiction are fundamental political questions: For whom is this innovation good? Who decides?71

Technocratic thinking remains dominant despite decades of research within STS demonstrating that it fails to correspond with reality; indeed, the previous chapters are examples of such research. Possible reasons for this include the fact that many contemporary elites benefit from this state of affairs and that media outlets, such as the Science Channel and Wired magazine, often put forth simplistic, almost mythical portrayals of scientific and technological advancement. Some of the blame, however, can justifiably be placed on STS scholars, who could take their work more seriously as a political and inevitably partisan activity. STS research too often resembles ecology prior to the development of conservation biology: a banal cataloguing of failures rather than proactive advocacy for a better future. There could be more counterexamples to Langdon Winner’s characterization of much of the field as “blasé, depoliticized scholasticism.”72

If the governing members of the Society for Social Studies of Science, the main organizing body for the field of STS, were to measure their success in terms of affecting public controversies involving science and technology rather than, or at least along with, conference attendance and the launching of new journals, people in the field might conduct themselves very differently. Fewer annual prizes would be awarded to largely inconsequential studies, such as a recent student prize for the analysis of the complexities of ornithological field recordings, and more to those focusing explicitly on the barriers to a better technological civilization.73 Furthermore, funds might be set up to support the PhD dissertations of social psychology students attempting to discern the best strategies for undermining technocratic governing mentalities, both within the classroom and in public discourse. Is it reasonable to expect intelligent trial and error (or a similar framework) to be implemented in the foreseeable future without these and similar efforts to enhance the political influence of STS research?

My goal in this chapter has been to outline how intelligent trial-and-error learning strategies could assist in the development of more communitarian technological societies. If technological societies were to embrace such strategies, emerging technologies, such as driverless cars and companion robots, would be subjected to more substantial testing and gradual scaling prior to their wider deployment. Moreover, there would be more careful monitoring and debate, in addition to stronger mechanisms for disincentivizing errors and for providing compensation to victims. In the case of companion robots, special attention would need to be paid to the tension between short-term hedonic gains and longer-term life aspirations. Companion robots could doubtlessly alleviate the pangs of loneliness for individual users, but they are unlikely to help people better integrate themselves into more loving and welcoming social communities of other human beings.

At the same time, ITE is not simply a strategy for governing technologies but a set of tactics aiding organizational success. Communitarian endeavors such as planned communities and food cooperatives could be more successful if they acted like the planners of Quartier Vauban or the managers of the Mohawk Harvest Cooperative: begin with diverse and public deliberation, scale up gradually, monitor efforts to correct for mistakes before they become too costly, and be prepared to change practices in response to lessons learned via experience.

Finally, intelligent trial and error could be improved by applying it to already existing technologies as much as to emerging innovations. Infrastructures such as highways and organizational technologies (e.g., zoning codes) could be made less obdurate if they were treated as if they were emerging technologies whenever some component of the sociotechnical systems they are a part of needed major renovations and repair.

Despite all its potential to help bring about more communitarian technological societies, ITE is unlikely to be applied to emerging or established technologies without addressing the wider barriers to its implementation. I have focused primarily on the cognitive barriers. Dominant patterns of thought frame technological developments as ideally free from conscious steering by governments and depict the consequences of those developments as unavoidable, albeit also sometimes lamentable. What philosopher Langdon Winner noted over thirty-five years ago continues to be largely true today: “People are often willing to make drastic changes in the way they live to accord with technological innovation at the same time they would resist similar kinds of changes justified on political grounds.”74 Given the ongoing problem of citizens being unable to realize the kinds of communities they desire—much less deal with industrial pollution, climate change, job losses from automation, and the decline or stagnation of well-being within technological societies—there may be no question more important today than how to better persuade citizens that the more responsible governance of technological innovation is both possible and desirable.

Notes