10 Fixing Fake News: Self-Regulation and Technological Solutionism

Stephanie Ricker Schulte

After the 2016 presidential election, Barack Obama and Donald Trump did not agree on much, but they agreed on this: fake news threatened American democracy.1 By the end of 2017, fake news was a “fixture of contemporary politics” and a “profitable business.”2 While some ambiguity existed about what the term actually meant and what these leaders meant by the term, the majority of Americans agreed it was a problem.3 Despite these complications and incoherencies, it had become a “national common sense,”4 something everyone just knew, and a dominant cultural framework through which Americans understood the relationship between media and government and assessed the health of their democracy. Under public scrutiny, policy makers, news organizations, and social media platform leaders proposed a variety of solutions—such as flagging, content moderation, and news consumer guides—aimed at controlling content and empowering consumers.5

This piece historicizes both the fake news crisis and these proposed solutions. As Victor Pickard wrote, “Historicizing current media debates allows us to reimagine the present and reclaim the alternative trajectories.”6 This piece, then, maps the field of what was thinkable about the fake news problem by charting the often-unacknowledged assumptions that led us to the current moment. Indeed, neither concerns about fake news nor the proposed solutions emerged in a cultural vacuum. Long-standing American cultural beliefs, or national common senses, about technology, about news, and about government were at play, ideas with the power not only to shape the ways Americans understood fake news as problematic, but also to shape the range of possible solutions. In an effort to understand why the problem and solutions emerged as they did (and did not), this chapter inserts the fake news crisis into two cultural histories—media self-regulation and technological solutionism.

The History of Nonfactual Information

First, important to this conversation is the recognition that the exchange of nonfactual information has played a central role in human history.7 During the formation of the United States, for example, printed rumors or outright lies were not just commonplace, they were key strategies for drumming up support for the American Revolution.8 The fakeness of information was of little concern at the time. In 1731, Benjamin Franklin, then Pennsylvania Gazette editor, “expressed a widely-held sentiment” that “when Truth and Error have fair Play, the former is always an overmatch for the latter.”9 Clearly, much has changed since Franklin’s era. Chief among those changes are evolutions in media technology, including the creation of the internet. This technology intensified a cultural ambivalence about the promiscuous spread of information that dates back to the printing press.10 The ability of international computer networking systems and the platforms operating on them to facilitate the potentially global, instantaneous, and anonymous circulation of information struck a nerve in popular and scholarly circles in the decades after the technology’s popularization. For decades, the internet was predominantly understood as a destabilizing agent, as a “disrupter,” as a technology that challenged established systems and distributions of power.11 Since the 1980s, the internet has caused Americans considerable anxiety.12 Contemporary scholars focus on the ways the internet preys on human cognition to disrupt fact gathering and verification practices.
Social media scholar Kate Starbird described information networks online as “almost perfectly designed to exploit psychological vulnerabilities to rumor” because information mirrored in multiple places online reaffirms its validity.13 She said, “Your brain tells you ‘Hey, I got this from three different sources,’ but you don’t realize it all traces back to the same place and might have even reached you via bots posing as real people.” For Starbird, this could have deep consequences, bringing forth “the menace of unreality—which is that nobody believes anything anymore.”14 With the evolution of communication technologies, Franklin’s “widely held sentiment” lost its dominance; the era of public faith in the “fair play” between truth and error is over.

News media often cited communication technologies as creating, worsening, or enabling the fake news problem. In 2017—for the first time in its sixty-four-year history—The Associated Press Stylebook included the term fake news. This definitive guide for American journalists applied the term to “deliberate falsehoods or fiction masked as news circulating on the internet.”15 News industries marked fake news as primarily an internet problem, a problem caused by the promiscuous circulation of information through online platforms, and not a problem of news itself. One important exception to this “internet problem” narrative is President Donald Trump, who also applied the term to legacy print and television news institutions such as the New York Times and CNN.16 For President Trump, fake news was an internet problem and a news problem. As this illustrates, the debate about fake news engaged historical American narratives that presumed antagonism between news industries and government, which positioned news media as the primary agents of government accountability.17 Americans had lost faith in the fourth estate and the state itself as cynicism took hold.18 The fake news debate positioned the internet as a powerful piece in the ongoing chess match.

Media Self-Regulation

Part of this chess match has involved media regulation, or at least the specter of regulation. Historically, Americans and American media companies have valued self-regulation, or “governance without government,” a model in which industries self-regulate, that is, “establish agreements, standards, codes of conduct or audit programs that address all firms of a particular industry with varying degrees of formalization and bindingness.”19 Self-regulation is a form of “private government” in that industry insiders retain control by enacting “self-imposed and voluntarily accepted rules of behavior,” thereby preventing the loss of control to an external agent, in this case the government.20 Self-regulation shielded the government from the criticisms about news manufacturing and suppression that American leaders so often lobbed at fascist and communist regimes, especially during the Cold War.21 Although self-regulation may result from governmental regulatory threats, it is important to note that self-regulation is “never fully controlled by public authorities.” Self-regulation is driven by private industry’s “interest in shaping the rules that they depend on and by the public-sector’s interest in delegating or leveraging regulatory powers in the face of formidable regulatory challenges.”22 In other words, industry executives like self-regulation because they can set their own rules; legislators like it because it gives them the power to enact change while relieving them of responsibility or accountability for those changes.

The history of American media offers an excellent example of self-regulation at work.23 Government has historically raised the specter of direct media regulation during moments of moral panic about the corruptibility of citizens.24 Time after time, through internal mechanisms, media producers, distributors, and outlets walked back content deemed problematic or outrageous, shifting content on radio channels, film screens, and television broadcasts to diminish public and political outrage. The film industry, for example, faced public concern in the 1920s and 1930s about “the effect movies might have on public morals” and corrupt “political attitudes.”25 William Harrison Hays, an industry trade group leader, created an industry agreement nicknamed the Hays Code, which restricted the content of films to within culturally normed bounds and “avoided social and political issues”; Hays and others “feared that, unless the trend in pictures was curbed, the federal government would step in to censor the movies or break up the industry.”26 In the 1960s, the industry replaced the Hays Code with the Motion Picture Association of America (MPAA) film rating system, a citizen empowerment mechanism designed so consumers could self-select content. The film industry thus effectively staved off government regulation by self-regulating content and by empowering citizens to make informed choices.

The fake news panic was heavily influenced by similar fears of citizen corruptibility surrounding the 2016 presidential election, questions about the potential influence of Russian online disinformation campaigns,27 questions about how (or if) citizens might vet information, and questions about how (or if) platforms might control information online. Social media and search engines like Google were singled out in particular because their “proprietary ‘black box’ technologies, including opaque filtering, ranking, and recommendation algorithms, mediate access to information at the mass (e.g., group) and micro (e.g., individual) communication levels.”28 Up to 126 million Facebook users could have viewed content created by Russian agents; Twitter identified 2,752 accounts under Russian control; Google found 1,108 suspicious videos.29 One study showed that between February and November 2016, “fake news items received 8.7 million ‘engagements’” on Facebook while “‘real news’ received 7.3 million.”30 Political leaders judged these companies responsible for solving the problem because social media and search companies controlled the black box systems that in turn controlled the “misinformation ecosystems.”31 At the same time, only about 20 percent of the accounts spreading fake news were automated bot accounts, meaning that most of the accounts appeared to belong to humans.32 Fake news was presented as an internet problem and a news problem, but also a human problem. Indeed, as media scholar Jonathan Albright wrote, “social interaction is at the center of the ‘fake news’ debate.”33 In the fake news debate, familiar fears about citizen corruptibility mixed with long-standing cultural anxieties about the internet.

Fake News as a National Security Challenge

Following historical patterns, public outrage caused concerned political leaders to investigate fake news, in particular where it came from and how it spread. Members of Congress took their job seriously, presenting their investigation as high-stakes. In his opening remarks at the Senate Judiciary subcommittee investigation, Senator Lindsey Graham said, “This is the national security challenge of the 21st Century.”34 These and other investigations resulted in calls for regulation, most of which focused on internet technologies.35 And, following historical patterns, industries began to self-regulate. Legacy news industries and social media industries reacted differently. Legacy news agencies focused on defending their news products through transparency initiatives and on promoting media literacy. Transparency initiatives informed the public about the news-gathering process. For example, in January 2016, the Washington Post published “Policies and Standards,” which detailed the organization’s seven principles for conduct, ethics policy, fact-checking standards, and source and attribution policy, with the goal of “clarity in our dealings with sources and readers.”36 Literacy initiatives included the development of websites to help news consumers self-regulate their information diets. The New York Times, for example, rolled out a guide to help citizens protect themselves from fake news by learning to “determine the reliability of sources.”37 Such initiatives received support from scholars advocating for the integration of media literacy training into educational curricula, such as those who wrote that fake news in particular made media literacy “vital to the future of democracy in the United States” if expanded to take into account the nature of online economics and the “emerging spreadable ecosystem for information.”38

These self-regulation, transparency, and citizen empowerment strategies made sense in that they followed a path similar to that of the film industry. Legacy news industries self-regulated their content by making the process of news gathering more vigilant and transparent and by deploying citizen empowerment measures. Leveraging these market solutions has thus far shielded them from government regulation. These solutions also allowed legacy news agencies to frame the fake news problem not as a problem with news industries themselves but as a problem of noise in the news ecology, and thereby to push back at Trump’s accusations. They also worked to drive traffic back to legacy news. As an article in the Seattle Times in 2017 noted, “Infowars.com alone is roughly equivalent in visitors and page views to the Chicago Tribune.”39 But, like the self-regulation of the film industry, these solutions served regulators as well. Government regulators could present themselves as having pushed but not regulated news media industries, an important distinction since regulation in general, and media regulation especially, remained taboo within the above-mentioned American watchdog narratives. Instead, regulators could be “change makers” who sparked market solutions that empowered citizens. And legislators understood the value of transparency as a regulatory mechanism. As legal scholar Cass Sunstein noted, transparency regulation, or “regulation through disclosure,” has been “one of the most striking developments in the last generation of American law.”40

Social media outlets were slower than legacy news organizations to react. Mark Zuckerberg’s initial reaction to charges that fake news on Facebook influenced the election was to dismiss them as a “crazy idea.”41 Still, by November 2016, the founder of Facebook posted on his Facebook page that his company would “take misinformation seriously” and was working on plans for, among other measures, “stronger detection,” “easy reporting,” and “disrupting fake news economics.”42 Shortly thereafter, in December 2016, Facebook announced that it would allow users to report fake news by clicking the story and choosing “It’s a fake news story.”43 In August 2017, it announced that it had created a new algorithm to flag stories and divert them to third-party fact-checkers.44 Companies such as Google also funded an initiative to “flag fake news online and remove posts if they are found to violate the companies’ terms of use or local laws.”45 These flagging methods were also a type of transparency initiative, one called “participatory transparency,” a system in which a “community of third parties” acts as “an autonomous body.”46 They addressed concerns about the role of internet technologies in the fake news problem by creating a new algorithm, providing a technological solution to the technological problem, and addressed concerns about the role of users in the process by providing a system of flagging.

Facebook’s plan met with mixed reviews. On the one hand, the plan received accolades for using crowdsourcing techniques to empower users and for using external evaluation systems to judge materials transparently.47 On the other hand, some claimed the system worked too slowly and questioned Facebook’s motives in creating these systems, which were critiqued as more about solving the publicity problem than the actual problem.48 One former operations manager, Sandy Parakilas, wrote in the New York Times that when he was at Facebook, “the typical reaction” was to “try to put any negative press coverage to bed as quickly as possible, with no sincere efforts to put safeguards in place or to identify and stop abusive developers” because the company “just wanted negative stories to stop.” Parakilas remarked that “lawmakers shouldn’t allow Facebook to regulate itself. Because it won’t. The company won’t protect us by itself, and nothing less than our democracy is at stake.”49

Technological Solutionism

The platform self-regulation solutions Facebook pursued made sense given the cultural history of what media scholar Evgeny Morozov has called “solutionism,” the idea dominating Silicon Valley that “recasts” complex problems either as “neatly defined problems with definite, computable solutions” or as “transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place.”50 In short, it is the idea that technology can and should solve humanity’s problems. Technological solutionism figured heavily in the fake news regulation conversation in part because fake news was understood as a technological problem or, at the very least, as a social problem solvable by technology.

These technological solutions emerged in American cultural and historical contexts, which explains why they contrast with the solutions that emerged elsewhere. In Europe, government emerged as a solution to the problem. In Brussels, the East StratCom team “serves as Europe’s front line against this onslaught of fake news”; this team of diplomats, former journalists, and government employees “tracks down reports to determine whether they are fake,” researches them, and then “debunks the stories for hapless readers.”51 Similar state agencies emerged in European Union members Finland and the Czech Republic.52 In Germany, legislators discussed the option of fining American technology companies such as Google and Facebook for allowing “false stories to be quickly circulated.”53 These kinds of solutions did not make common sense in the United States, where the federal government had a public approval rate of 25 percent and technology companies had an approval rate of 71 percent.54 Fixing the problem directly with government agencies or initiatives was not a viable option for political leaders. A direct government solution such as emerged in Europe was not what Paul Pierson and Theda Skocpol would call the “path not taken” in the United States.55 It was the path not even considered.

Conclusion

This chapter illustrated the ways that the U.S. fake news crisis found its roots in larger American cultural understandings of news media, government, and technology. These cultural understandings helped frame the fake news problem as primarily an internet problem that would result in the corruption of citizens. Solutions such as flagging, content moderation, and news consumer guides were put forward only because the problem was defined as primarily a technological one and because the history of media in the United States has been dominated by self-regulation and by reliance on market solutions over government intervention. Longer historical and cultural contexts of self-regulation and technological solutionism illustrated that the self-regulatory efforts put forward by internet platforms and legacy news industries “make sense” because these solutions continued the pattern of market (not government) regulation and because they perpetuated the illusion that technology is the best solution to humanity’s problems, in particular when contrasted with government. Ultimately, the fake news crisis was both a point of historical disjuncture and a point of continuity, although it was generally understood as only the former. It was embedded in the conflicting history of American faith in and fear of technology.

Notes

  1. Olivia Solon, “Barack Obama on Fake News,” The Guardian, November 17, 2016, https://www.theguardian.com/media/2016/nov/17/barack-obama-fake-news-facebook-social-media; Jordan Taylor, “Why Trump’s Assault on NBC and ‘Fake News’ Threatens Freedom of the Press—and His Political Future,” Washington Post, October 12, 2017, https://www.washingtonpost.com/news/made-by-history/wp/2017/10/12/why-trumps-assault-on-nbc-and-fake-news-threatens-freedom-of-the-press-and-his-political-future/.

  2. Uri Friedman, “The Real-World Consequences of ‘Fake News,’” The Atlantic, December 23, 2017, https://www.theatlantic.com/international/archive/2017/12/trump-world-fake-news/548888/; Tom Standage, “The True History of Fake News,” The Economist, June/July 2017, https://www.1843magazine.com/technology/rewind/the-true-history-of-fake-news. Also see John Corner, “Fake News, Post-Truth and Media-Political Change,” Media, Culture and Society 39, no. 7 (2017): 1100–1107.

  3. Among Americans surveyed, 62 percent view fake news as a serious problem with significant impacts on America. Amy Mitchell, Jesse Holcomb, and Michael Barthel, “Many Americans Believe Fake News Is Sowing Confusion,” Pew Research Center, December 15, 2016, http://www.journalism.org/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion/. Survey data show that 47 percent of respondents said fake news was “sloppy or biased reporting,” 39 percent said it was “an insult being over-used to discredit news stories,” and 15 percent called it “a Russian weapon to disrupt democracies” or a “political dirty trick.” Friedman, “The Real-World Consequences.” For a summary of conflated terms, see Caroline Jack, “Lexicon of Lies,” Data and Society, August 9, 2017, https://datasociety.net/output/lexicon-of-lies/.

  4. Karma Chavez, “Common Sense, Conservative Coalitions and the Rhetoric of the HIV+ Immigration Ban” (keynote presented at the Southern Colloquium on Rhetoric, Fayetteville, AR, October 5, 2017).

  5. Elizabeth Dwoskin, “Twitter Is Looking for Ways to Let Users Flag Fake News,” Washington Post, June 29, 2017, https://www.washingtonpost.com/news/the-switch/wp/2017/06/29/twitter-is-looking-for-ways-to-let-users-flag-fake-news/; Casey Newton, “Facebook Is Patenting a Tool That Could Help Automate Removal of Fake News,” The Verge, December 7, 2016, https://www.theverge.com/2016/12/7/13868650/facebook-fake-news-patent-tool-machine-learning-content; Alicia Shepard, “A Savvy News Consumer’s Guide: How Not to Get Duped,” Moyers and Company, December 9, 2016, http://billmoyers.com/story/savvy-news-consumers-guide-not-get-duped/.

  6. Victor Pickard, “Media Reform as Democratic Reform,” in Strategies for Media Reform, ed. Des Freedman, Jonathan Obar, Cheryl Martens, and Robert McChesney (New York: Fordham University Press, 2016), 211.

  7. Joanna Burkhardt, “Combating Fake News in the Digital Era,” Library Technology Reports, November/December 2017, 10–13. For more contemporary examples, see Edward Herman, “Fake News on Russia and Other Official Enemies,” Monthly Review, July–August 2017, 98–111.

  8. Taylor, “Trump’s Assault.”

  9. Ibid.

  10. Leo Marx, The Machine in the Garden (Oxford: Oxford University Press, 2000).

  11. Stephanie Ricker Schulte, “United States Digital Service: How ‘Obama’s Startup’ Harnesses Disruption and Productive Failure to Reboot Government,” International Journal of Communication 12 (2018): 131–151; Michael X. Delli Carpini and Bruce Williams, “Let Us Infotain You: Politics in the New Media Age,” in Mediated Politics: Communication in the Future of Democracy, ed. W. Lance Bennett and Robert Entman (Cambridge: Cambridge University Press, 2001), 160–181.

  12. Stephanie Ricker Schulte, Cached: Decoding the Internet in Global Popular Culture (New York: New York University Press, 2013).

  13. Danny Westneat, “The Information War Is Real, and We’re Losing It,” Seattle Times, March 29, 2017, https://www.seattletimes.com/seattle-news/politics/uw-professor-the-information-war-is-real-and-were-losing-it/.

  14. Starbird, quoted in ibid.

  15. Laura Hazard Owen, “Fake News Might Be the Next Issue for Activist Tech-Company Investors,” Nieman Lab, June 2, 2017, http://www.niemanlab.org/2017/06/fake-news-might-be-the-next-issue-for-activist-tech-company-investors/.

  16. Yen Nee Lee, “Trump and GOP Attack CNN, New York Times and Washington Post in ‘Fake News Awards,’” CNBC, January 17, 2018, https://www.cnbc.com/2018/01/17/fake-news-awards-by-donald-trump-gop-cnn-new-york-times-washington-post.html.

  17. Christian Christensen, “WikiLeaks and ‘Indirect’ Media Reform,” in Freedman et al., Strategies for Media Reform, 58–71.

  18. Priscilla Meddaugh, “Bakhtin, Colbert, and the Center of Discourse: Is There No ‘Truthiness’ in Humor?,” Critical Studies in Media Communication 27, no. 4 (2010): 376–390.

  19. Reinhard Steurer, “Disentangling Governance: A Synoptic View of Regulation by Government, Business and Civil Society,” Policy Sciences 46, no. 4 (2013): 395.

  20. Jean Boddewyn, “Advertising Self-Regulation: True Purpose and Limits,” Journal of Advertising 18, no. 2 (1989): 20.

  21. Doris Graber and Johanna Dunaway, Mass Media and American Politics (Los Angeles: Sage, 2015).

  22. Tony Porter and Karsten Ronit, “Self-Regulation as Policy Process: The Multiple and Criss-Crossing Stages of Private Rule-Making,” Policy Sciences 39, no. 1 (2006): 67.

  23. Pickard, “Media Reform.”

  24. Clayton Koppes and Gregory Black, Hollywood Goes to War: How Politics, Profits, and Propaganda Shaped World War II Movies (Berkeley: University of California Press, 1987).

  25. Ibid., 12–13.

  26. Ibid., 14.

  27. Jim Rutenberg, “RT, Sputnik and Russia’s New Theory of War,” New York Times Magazine, September 13, 2017, https://www.nytimes.com/2017/09/13/magazine/rt-sputnik-and-russias-new-theory-of-war.html.

  28. Jonathan Albright, “Welcome to the Era of Fake News,” Media and Communication 5, no. 2 (2017): 87–88; Albright cites Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA: Harvard University Press, 2016).

  29. Hamza Shaban, Craig Timberg, and Elizabeth Dwoskin, “Facebook, Google and Twitter Testified on Capitol Hill,” Washington Post, October 31, 2017, https://www.washingtonpost.com/news/the-switch/wp/2017/10/31/facebook-google-and-twitter-are-set-to-testify-on-capitol-hill-heres-what-to-expect/.

  30. Corner, “Fake News,” 1102.

  31. Ibid.

  32. Rutenberg, “RT, Sputnik.”

  33. Albright, “Era of Fake News,” 87.

  34. Shaban, Timberg, and Dwoskin, “Facebook, Google and Twitter.”

  35. Elizabeth MacBride, “Should Facebook, Google Be Regulated? A Groundswell in Tech, Politics and Small Business Says Yes,” Forbes, November 18, 2017, https://www.forbes.com/sites/elizabethmacbride/2017/11/18/should-twitter-facebook-and-google-be-more-regulated/; Edward Herman, “Fake News on Russia and Other Official Enemies,” Monthly Review, July–August 2017, 109.

  36. Washington Post Staff, “Policies and Standards,” Washington Post, January 1, 2016, https://www.washingtonpost.com/policies-and-standards/.

  37. Katherine Schulten, “Skills and Strategies: Fake News vs. Real News; Determining the Reliability of Sources,” New York Times, October 2, 2015, https://learning.blogs.nytimes.com/2015/10/02/skills-and-strategies-fake-news-vs-real-news-determining-the-reliability-of-sources.

  38. Paul Mihailidis and Samantha Viotty, “Spreadable Spectacle in Digital Culture: Civic Expression, Fake News, and the Role of Media Literacies in ‘Post-Fact’ Society,” American Behavioral Scientist 61, no. 4 (2017): 450.

  39. Westneat, “The Information War.”

  40. Cass Sunstein, “Informational Regulation and Informational Standing: Akins and Beyond,” University of Pennsylvania Law Review 147 (1999): 613 and 635.

  41. Shaban, Timberg, and Dwoskin, “Facebook, Google and Twitter.”

  42. Mark Zuckerberg, Facebook, November 18, 2016, https://www.facebook.com/zuck/posts/10103269806149061.

  43. Bill Chappell, “Facebook Details Its New Plan to Combat Fake News Stories,” NPR, December 15, 2016, https://www.npr.org/sections/thetwo-way/2016/12/15/505728377/facebook-details-its-new-plan-to-combat-fake-news-stories.

  44. Andrew Bloomberg, “Facebook Has a New Plan to Curb ‘Fake News,’” Fortune, August 3, 2017, http://fortune.com/2017/08/03/facebook-fake-news-algorithm/.

  45. Mark Scott and Melissa Eddy, “Europe Combats a New Foe of Political Stability: Fake News,” New York Times, February 20, 2017, https://www.nytimes.com/2017/02/20/world/europe/europe-combats-a-new-foe-of-political-stability-fake-news.html.

  46. Stephan Dreyer and Lennart Ziebarth, “Participatory Transparency in Social Media Governance: Combining Two Good Practices,” Journal of Information Policy 4 (2014): 533.

  47. Davey Alba, “Facebook Finally Gets Real about Fighting Fake News,” Wired, December 15, 2016, https://www.wired.com/2016/12/facebook-gets-real-fighting-fake-news/.

  48. On the system’s slowness, see Madison Kircher, “Facebook’s Tactics to Stop Fake News Work (After It’s Already Been Spreading for 3 Days),” New York, October 12, 2017, http://nymag.com/selectall/2017/10/facebook-fake-news-flag-works-but-takes-3-days.html.

  49. Sandy Parakilas, “We Can’t Trust Facebook to Regulate Itself,” New York Times, November 19, 2017, https://www.nytimes.com/2017/11/19/opinion/facebook-regulation-incentive.html.

  50. Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: Public Affairs, 2013), 5.

  51. Scott and Eddy, “Europe Combats a New Foe.”

  52. Ibid.

  53. Ibid.

  54. Pew Research Center, “Beyond Distrust: How Americans View Their Government,” November 23, 2015, http://www.people-press.org/2015/11/23/11-how-government-compares-with-other-national-institutions/.

  55. Paul Pierson and Theda Skocpol, “Historical Institutionalism in Contemporary Political Science,” in Political Science: State of the Discipline, ed. Ira Katznelson and Helen Milner (New York: W. W. Norton, 2002), 693–721.