Following the major political events of 2016–2017 and the Facebook/Cambridge Analytica revelations, phrases such as “alternative facts,” “post-fact,” “post-truth,” and “fake news” have deluged global channels of communication. Of these terms, the use of “fake news” is now so commonplace—and vulgarized—that it has been included on the annual “List of Words Banished from the Queen’s English for Misuse, Over-use and General Uselessness” as of January 1, 2018.1 These fuzzy terms point to larger social problems that not only concern the authority, credibility, and believability of information, but its very manipulation.2
Under the guise of facts, shades of false, unvetted information, plagiarized stories, and clickbait from nation-states, contractors, advertisers, social media, news conglomerates, and hidden actors flood public communication spaces, usurping the traditional 24-hour news cycle. The technological ability to influence information choices, as well as to predict, persuade, and engineer behavior through algorithms (a “recipe” or set of instructions carried out by computer) and bots (applications that run automated, repetitive tasks) now blurs the line between potentially meaningful information and micro/targeted messaging.3
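The mechanics behind such bots can be strikingly simple. The following Python sketch is purely illustrative: it stands in for no actual platform’s API (the post_to_platform function and the account and message contents are invented), but it shows how little code is needed to automate the repetitive amplification of scripted talking points across multiple accounts.

```python
import itertools
import random
import time

# Hypothetical stand-in for a real social media client; here it only prints.
def post_to_platform(account: str, message: str) -> None:
    print(f"[{account}] {message}")

# A small pool of scripted talking points to be amplified (fabricated examples).
TALKING_POINTS = [
    "BREAKING: you won't believe what they're hiding... #scandal",
    "The mainstream media won't report this. Share before it's deleted!",
    "Everyone is talking about this. Wake up! #truth",
]

# A handful of sock-puppet account names (fabricated for illustration).
ACCOUNTS = ["@patriot_eagle_1776", "@real_news_now", "@concerned_citizen99"]

def run_bot(posts: int = 9) -> None:
    """Cycle through accounts, posting a random talking point at
    semi-random intervals to mimic organic human activity."""
    for account, _ in zip(itertools.cycle(ACCOUNTS), range(posts)):
        post_to_platform(account, random.choice(TALKING_POINTS))
        time.sleep(random.uniform(0.1, 0.5))  # jitter to evade naive rate checks

if __name__ == "__main__":
    run_bot()
```

Real amplification networks add layers of evasion (account aging, content spinning, coordinated timing), but the underlying loop is the same: automation substitutes volume for credibility.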
Several research studies illustrate the deleterious effects of false information on public communication.4 For example, a survey by Pew Research Center and Elon University’s Imagining the Internet Center found that the “fake news ecosystem preys on some of our deepest human instincts.”5 A Pew Research Center Journalism and Media survey of 1,002 US adults revealed that two-in-three Americans (64 percent) say “fabricated news stories” create confusion about current issues and events, a perception “shared widely across incomes, education levels, partisan affiliations, and most other demographic characteristics.”6 This survey also revealed that “16% of US adults say they have shared fake political news inadvertently, only discovering later that it was entirely made up.”7 Still other research uncovered the influence of fake news, propagated by algorithms and bots, on the 2016 US election8 and Cambridge Analytica’s role in the Brexit and Leave.EU movements.9
But there is something more telling about the influence of fake news, which pertains to global perceptions of the media and trust in their sources. An investigation of approximately 18,000 individuals across the United States, United Kingdom, Ireland, Spain, Germany, Denmark, Australia, France, and Greece found reduced trust in the media due to “bias, spin, and agendas.”10 That is, a “significant proportion of the public feel that powerful people are using the media to push their own political or economic interests”; moreover, “attempts to be even-handed also get the BBC and other public broadcasters into trouble. By presenting both sides of an issue side by side, this can give the impression of false equivalence.”11
In this chapter, I briefly outline the ways fake news is characterized in the research literature. I then discuss how fake news is manufactured and the global entities responsible for its propagation. I close the chapter by reporting on ongoing technological and educational initiatives and suggest several avenues in which to explore and confront this controversial, geopolitical social problem.
The term “fake news” was reported as early as the 6th century CE, and persisted into the 18th century as a means of “diffusing nasty news . . . about public figures.”12 Merriam-Webster, however, situates fake news as “seeing general use at the end of the 19th century,” and defines it as news (“material reported in a newspaper or news periodical or on a newscast”) that is fake (“false, counterfeit”).13 The term “fake news” is “frequently used to describe a political story, which is seen as damaging to an agency, entity, or person . . . [I]t is by no means restricted to politics, and seems to have currency in terms of general news.”14 This expanded description by the esteemed dictionary allows for falsehood to be viewed within the sphere of harm, fallout, and consequence.
In addition to Merriam-Webster’s definition, scholars have divided “fake news” into categories such as commercially-driven sensational content, nation state–sponsored misinformation, highly partisan news sites, social media, news satire, news parody, fabrication, manipulation, advertising, and propaganda.15 The use of the term “fake news” is now so prevalent that it is considered a “catchall phrase to refer to everything from news articles that are factually incorrect to opinion pieces, parodies and sarcasm, hoaxes, rumors, memes, online abuse, and factual misstatements by public figures that are reported in otherwise accurate news pieces.”16
Fake news is often framed as misinformation, disinformation, and propaganda in mainstream and scholarly literature, which complicates collective understanding of what constitutes “counterfeit” information and the actors behind its manufacture. Misinformation, for example, is defined as “information that is initially assumed to be valid but is later corrected or retracted. . . [though it] often has an ongoing effect on people’s memory and reasoning.”17 Misinformation is either intentional (e.g., manufactured for some sort of gain) or unintentional (e.g., incomplete fact-checking, “hasty reporting,” or sources who were misinformed or lying).18 Misinformation has been identified with outbreaks of violence.19 Misinformation is also described in relation to disinformation as “contentious information reflecting disagreement, whereas disinformation is more problematic, as it involves the deliberate alienation or disempowerment of other people.”20
Disinformation, as characterized by the European Commission’s High Level Expert Group (HLEG), “goes well beyond the term fake news.”21 Similar to Merriam-Webster’s definition of “fake news” and its emphasis on damage and resulting harm, the HLEG likens fake news to disinformation to include “all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit.”22
To further muddy the waters, in an analysis of “mainstream and social media coverage” during the 2016 US election, disinformation was linked to “propaganda as the ‘intentional use of communications to influence attitudes and behavior in the target population’”; disinformation is the “communication of propaganda consisting of materially misleading information.”23 But such characterizations are not as seamless as they at first appear. A specific category of propaganda called black propaganda is described as “virtually indistinguishable” from disinformation and “hinges on absolute secrecy . . . usually supported by false documents.”24 These particular definitions perhaps supplement longstanding descriptions of propaganda as a “consistent, enduring effort to create or shape events to influence the relations of the public to an enterprise, idea or group” and as a “set of methods employed by an organized group that wants to bring about the active or passive participation in its actions of a mass of individuals, psychologically unified through psychological manipulations and incorporated in an organization.”25
Perhaps the most dramatic evolution in the study of fake news as it applies to Internet society is the proposal by Samuel C. Woolley and Philip N. Howard, who offer the term computational propaganda as the “assemblage of social media platforms, autonomous agents, and big data tasked with the manipulation of public opinion.”26 Computational propaganda—a mashup of technology, ideology, and political influence, coupled with deception and secrecy—results in calculated messaging designed to influence opinion and beliefs. Here it is critical to interject that some researchers find the term “fake news” “difficult to operationalize” and instead suggest in its place junk news, or content consisting of “various forms of propaganda and ideologically extreme, hyper-partisan, or conspiratorial political news and information.”27
Under certain conditions, “fake news” may be infused with noise or classed as information distortion. Described as “undoubtedly the most damaging to the clarification of a catastrophe,”28 information distortion occurred, for example, during the Las Vegas shootings when Google prominently displayed search results derived from 4chan, Twitter, and Facebook as “news.”29 Distortion also featured in the media coverage of Devin Patrick Kelley, who shot and killed 26 people at a church in Sutherland Springs, Texas, on November 5, 2017. During the early hours of the disaster, Kelley was described as both a “liberal, Antifa communist working with ISIS” and a supporter of Bernie Sanders, creating “noise”—a conspiratorial, inaccurate, rumor-fed, politically-charged news stream run amok.30 A similar scenario occurred during the aftermath of the Parkland shootings. Researchers found that individuals
outraged by the conspiracy helped to promote it—in some cases far more than the supporters of the story. And algorithms—apparently absent the necessary “sentiment sensitivity” that is needed to tell the context of a piece of content and assess whether it is being shared positively or negatively—see all that noise the same.31
Sidestepping well-worn terms such as misinformation, disinformation, and propaganda, truth decay was recently suggested as a vital model for understanding the “increasing disagreement about facts and analytical interpretations of facts and data” and diminishing trust in sources of factual information.32
From this overview, it is apparent that “fake news” and related conditions of information suffer from epistemic fluidity. On one hand, recent attempts to flesh out fake news in relation to misinformation, disinformation, and propaganda potentially advance our previously held notions. On the other hand, the lack of universally-established definitions and refinement of concepts creates Babel-like complexity in the journalism and research communities, resulting in a kitchen-sink approach to the study of falsehood and the disordered information-communication ecosystem where it thrives.
Falsehood—fake news—as misinformation, disinformation, and/or propaganda, is a crisis of collective knowledge. Whatever definitions or concepts with which we choose to categorize the phenomenon, the reality is that falsehood contributes to a cacophonous, polluted information-sharing environment. The infosphere now resembles a light-pollution map. In confronting fake news, we now face the challenge described by Karl Mannheim in his Ideology and Utopia: “not how to deal with a kind of knowledge which shall be ‘truth in itself,’ but rather how man deals with his problems of knowing.”33
It is critical to acknowledge that fake news as counterfeit information is created by humans and/or humans tasking technologies to moderate information and/or disrupt communications. Often this moderation and disruption is conducted in secret by anonymous actors. For example, patented algorithms construct the appearance and content of a search engine results page (SERP) (e.g., ad words, knowledge panels, news boxes, rating stars, reviews, snippets), thus directing an individual’s information-seeking and research gaze; hackers or “pranksters” manipulate search results through the practice of “Google bombing”;34 and algorithms and bots fabricated by troll factories and/or nation-states actively engage in the shaping of information in order to sow confusion and discord. In the following section, I briefly illustrate how fake news is promoted by these search features and global entities.
One product of a SERP, the snippet, is “extracted programmatically” from web content to include a summary of an answer based on a search query. Google describes the snippet as reflecting “the views or opinion of the site” from which it is extracted.35 The DuckDuckGo search engine also supplies snippets in the form of “Instant Answers” or “high-quality answers” situated above search results and ads. Instant Answers are pulled from news sources such as the BBC, but also include Wikipedia and “over 400 more community built answers.”36 The Russian web search engine Yandex also provides “interactive snippets,” or an “island,” to supply individuals with a “solution rather than an answer.”37 Although the snippet is not entirely “counterfeit information,” it is designed to confederate knowledge from across the web to make it appear “more like objective fact than algorithmically curated and unverified third-party content.”38
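To make “extracted programmatically” concrete, here is a minimal sketch of snippet extraction in Python. It is not Google’s, DuckDuckGo’s, or Yandex’s actual method, which are proprietary; it simply scores a page’s paragraphs by word overlap with the query and returns the best match, truncated.

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect the text content of <p> elements from raw HTML."""
    def __init__(self):
        super().__init__()
        self.paragraphs, self._buf, self._in_p = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p, self._buf = True, []

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False
            self.paragraphs.append(" ".join(self._buf))

    def handle_data(self, data):
        if self._in_p:
            self._buf.append(data.strip())

def extract_snippet(html: str, query: str, max_len: int = 160) -> str:
    """Return the paragraph that best matches the query, truncated.
    Note: nothing here checks whether the content is *true*."""
    parser = ParagraphExtractor()
    parser.feed(html)
    terms = set(query.lower().split())
    best = max(parser.paragraphs,
               key=lambda p: len(terms & set(p.lower().split())),
               default="")
    return best[:max_len] + ("..." if len(best) > max_len else "")

# Example: the page's claim is surfaced verbatim, verified or not.
page = "<p>Unrelated text.</p><p>Studies prove the moon landing was staged.</p>"
print(extract_snippet(page, "was the moon landing staged"))
```

Nothing in such a pipeline verifies the claim it surfaces, which is precisely how a snippet can present unverified third-party content “more like objective fact.”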
More to the point, third-party advertising networks connect advertisers with website creators to attract paid clicks by producing often misleading, sensational, tabloid-like information or ads that mimic news stories.39 To stem the rising tide of fake news monetization, Google disabled a total of 112 million ads that “trick to click,” with Facebook following suit with new guidelines that informed “creators and publishers” that they must have an “authentic, established presence.”40 However, Facebook’s “Content Guidelines for Monetization” do not include a category for false or deceptive information that would disqualify this type of content from monetization.41 In the end, “deciding what’s fake news can be subjective, and ad tech tends to shrug and take a ‘who am I to decide’” stance.42
At the Halifax International Security Forum in November 2017, Eric Schmidt, former executive chairman of Alphabet, Google’s parent company, disclosed that “it was easier for Google’s algorithm to handle false or unreliable information when there is greater consensus, but it’s more challenging to separate truth from misinformation when views are diametrically opposed.”43 To address this technological quandary, Google altered its secret patented algorithm to de-rank “fake news,” relegating it to a lower position on subsequent search result pages. The search giant also revised its “Search Quality Evaluator Guidelines” to assist its legion of search quality raters with “more detailed examples of low-quality webpages for raters to appropriately flag, which can include misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories.”44 Referring to Google’s multi-pronged approach, Schmidt stated that he is “strongly not in favour of censorship. I am very strongly in favour of ranking. It’s what we do.”45 In direct response to reports of Russia’s alleged election interference, Google de-ranked content from Russia Today (RT) and Sputnik.46 Contesting Google’s actions, an op-ed published on RT argued “there is nothing good or noble about de-ranking RT. It’s not a war against ‘fake news’ or misinformation. It’s a war of ideas on a much, much wider scale.”47
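Because Google’s algorithm is secret, any illustration of de-ranking is necessarily generic. The sketch below assumes a simple model in which each result’s relevance score is discounted by a penalty for domains flagged by a quality-rating pipeline; the domains, scores, and penalty values here are hypothetical, and Google’s actual signals and weights are not public.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Result:
    url: str
    relevance: float  # hypothetical query-relevance score in [0, 1]

# Hypothetical output of a quality-rating pipeline: domain -> penalty in [0, 1].
QUALITY_PENALTY = {"rt.com": 0.6, "example-hoax.net": 0.9}

def rerank(results: list[Result]) -> list[Result]:
    """Sort by relevance discounted by any quality penalty, pushing
    flagged sources toward later result pages rather than removing them."""
    def score(r: Result) -> float:
        host = urlparse(r.url).netloc
        return r.relevance * (1.0 - QUALITY_PENALTY.get(host, 0.0))
    return sorted(results, key=score, reverse=True)

results = [
    Result("https://rt.com/news/story", 0.95),
    Result("https://apnews.com/article", 0.80),
    Result("https://example-hoax.net/shock", 0.90),
]
for r in rerank(results):
    print(r.url)
```

The design choice matters: flagged content is demoted rather than removed, which is the distinction Schmidt draws between ranking and censorship.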
Google’s techno-fix may have artificially repaired one problem while creating others. First, research on information-seeking behavior suggests that individuals tend to focus on the first page of search results, often not venturing to “bottom” layers or pages that may contain meaningful links to sites representing a diversity of views.48 Second, it is not only reactionary censorship we must guard against in the “war of ideas.” It is gatekeeping as well.49 In allowing behind-the-curtain algorithmic technology—the invisible hand—to distinguish “true” from “false” information without agreement, nuance, or context, the open pursuit of knowledge and ideas without boundaries is challenged on a fundamental level.50 In this regard, “the human right to impart information and ideas is not limited to ‘correct’ statements.” This information right also “protects information and ideas that may shock, offend and disturb.”51
In addition to behind-the-curtain assemblages of information by search engines and algorithmic determination of search results, “fake news” is produced by way of information warfare and information operations (also known as influence operations).52 Information warfare (IW) is the “conflict or struggle between two or more groups in the information environment”;53 information operations (IO) are described in a Facebook security report as
actions taken by governments or organized non-state actors to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome. These operations can use a combination of methods, such as false news, disinformation, or networks of fake accounts (false amplifiers) aimed at manipulating public opinion.54
With its emphasis on damage, it is possible to apply Merriam-Webster’s definition of “fake news” (and possibly the High Level Expert Group’s as well) to IW and IO, as they are conducted by cyber troops, or “government, military or political-party teams committed to manipulating public opinion.”55 These often anonymous actors from around the globe—invisible to oversight and regulation—are involved in delivering weaponized falsehoods to unwitting consumers of information via fake accounts.56 In these cases, deception is a key element in the manipulation of information.57
The earliest reports of organized social media manipulation came to light in 2010, with revelations in 2017 that there are “offensive” organizations in 28 countries.58 Less is known about corporations, such as Facebook’s global government and politics team, and private actors that moderate information and troll critics in order to influence public perceptions.59 With concealed geographic locations and secret allegiances, cyber troops engineer information machines to produce an embattled Internet. For instance, paid disinformation agents at the Internet Research Agency in St. Petersburg operated the “Jenna Abrams” Twitter account, which had 70,000 followers.60 Dubbed “Russia’s Clown Troll Princess,” “Abrams” produced tweets that were covered by the BBC, Breitbart News, CNN, InfoWars, the New York Times, and the Washington Post.61 The Internet Research Agency is also implicated in more than 400 fake Twitter accounts used to influence UK citizens regarding Brexit.62 In addition, the Guardian and BuzzFeed reported that the Macedonian town of Veles was the registered home of approximately 100 pro-Trump websites, many reporting fake news; “ample traffic was rewarded handsomely by automated advertising engines, like Google’s AdSense.”63 And to support the “infowar,” Alex Jones, founder of the alternative media sites infowars.com, newswars.com, prisonplanet.com, and europe.infowars.com, likened purchases from his online store to purchasing “war bonds, an act of resisting the enemy to support us and buy the great products.”64
Both IW and IO are useful in framing widespread allegations of “fake news” and micro/targeted messaging during major political events, including the 2016 US election,65 the French election coverage in 2016–17,66 the 2017 UK and German elections,67 and the 2018 Italian election.68 IW and IO potentially fueled “purported” fake news posted on the Qatar News Agency’s website (which inflamed tensions between Qatar and its Gulf neighbors),69 the spread of a false story about a $110 billion weapons deal between the US and Saudi Arabia (reported as true in such outlets as the New York Times and CNBC),70 and “saber-rattling” between Israel and Pakistan after a false report circulated that Israel had threatened Pakistan with nuclear weapons.71
The conceptual framework of IW/IO has profound implications for the transgressions of Cambridge Analytica, whose techniques influenced the US and Nigerian elections.72 Information warfare and/or information operations as high-octane opposition research certainly describe Donald Trump’s former campaign manager Paul Manafort’s alleged involvement in a covert media operation. This operation included revising Wikipedia entries “to smear a key opponent” of former Ukrainian president Viktor Yanukovych and activating a “social media blitz” aimed at European and US audiences.73 Ahead of the May 25, 2018 referendum on repealing the Eighth Amendment of the Irish Constitution, which restricted a woman’s right to terminate her pregnancy, journalist Rossalyn Warren wrote that “Facebook and the public have focused almost solely on politics and Russian interference in the United States election. What they haven’t addressed is the vast amount of misinformation and unevidenced stories about reproductive rights, science, and health.”74
In 1927, philosopher John Dewey remarked that “until secrecy, prejudice, bias, misrepresentation, and propaganda as well as sheer ignorance are replaced by inquiry and publicity, we have no way of telling how apt for judgment of social policies the existing intelligence of the masses might be.”75 In the age of “post-truth” and “fake news,” Dewey’s comments might be construed as patronizing or perhaps elitist. But embedded in his remarks is respect for democratic values, which recognize the power of openness and the essential role of education in transforming social action. Below I report on ongoing technological and educational initiatives and suggest several interconnected steps to address the fake news challenge.
Echoing John Dewey’s confidence in the formative power of education and literacy, several approaches originating from higher education can be employed across educational settings to address fake news. Since 1976, Project Censored’s faculty–student partnerships have developed the Validated Independent News (VIN) model, subjecting independent news stories to intense evaluation in order to validate them as “important, timely, fact-based, well documented, and under-reported in the corporate media.”76 Supporting multiple literacies, the “Framework for Information Literacy for Higher Education,” from the Association of College & Research Libraries (ACRL), offers tools for librarians and faculty to emphasize “conceptual understandings that organize many other concepts and ideas about information, research, and scholarship into a coherent whole.”77 Both Project Censored’s VIN model and the ACRL’s “Framework” encourage the development of metaliteracies and metacognition, leading to increased awareness and skill regarding the creation of collective knowledge and the reporting of facts.78
Governmental response to the fake news problem varies internationally. In the US, for instance, the State Department’s Global Engagement Center (GEC), established during the Obama administration, aims to “counter the messaging and diminish the influence of international terrorist organizations and other violent extremists abroad.”79 The 2017 National Defense Authorization Act expanded the GEC to
identify current and emerging trends in foreign propaganda and disinformation in order to coordinate and shape the development of tactics, techniques, and procedures to expose and refute foreign misinformation and disinformation and proactively promote fact-based narratives and policies to audiences outside the United States.80
The US Congress also convened several high-profile hearings on combatting fake news and foreign disinformation.81
On a state level, the Internet: Social Media: False Information Strategic Plans (SB-1424), introduced into the California State Legislature in February 2018, would require “any person who operates a social media, as defined, Internet Web site with a physical presence in California” to create a strategic plan to “mitigate” the spread of false news. The proposed legislation, which duplicates numerous ongoing educational-literacy programs and NGO activities that address “fake news,” mandates the use of fact-checkers to verify news stories and outreach to social media users regarding stories that contain “false information” (which the proposed bill does not define), and requires social media platforms to place a warning on news that contains false information.82
Beyond the United States, governmental responses to counterfeit news include legislation and regulatory stopgap measures. For example, Croatia, Ireland, Malaysia, and South Korea proposed legislation to counter fake news; legislation in France includes an emergency procedure that allows a judge to delete web content, close a user’s account, or block access to a website altogether.83 During its 2018 elections, the Italian government launched an online “red button” system for people to report “fake news” to the postal police, the federal agency responsible for monitoring online crime.84 Sweden created a “psychological defence” authority to counter fake news and disinformation,85 while the German Ministry of the Interior proposed a “Center of Defense Against Disinformation.”86 Linking fake news with national security concerns, and subtly with IO/IW, the United Kingdom is in the process of creating a national security communications unit to battle “disinformation by state actors and others.”87 In early February 2018, the UK Parliament’s 11-member Committee on Digital, Culture, Media and Sport traveled to Washington, DC, to convene an “evidence session” with Facebook, Google, and Twitter executives on the subject of fake news, and to meet with members of the US Senate Intelligence Committee regarding Russian influence on social media during the 2016 election.88
Going several steps further, Chinese President Xi Jinping appeared to suggest increasing control and censorship of China’s digital communications system when he remarked that “without Web security there’s no national security, there’s no economic and social stability, and it’s difficult to ensure the interests of the broader masses.”89 The Russian Federation’s Security Council recommended an alternate Domain Name System (DNS) for BRICS countries (Brazil, the Russian Federation, India, China, and South Africa), citing “increased capabilities of western nations to conduct offensive operations in the informational space.”90 If implemented, the new DNS would essentially create another layer of a walled-off, compartmentalized Internet in these countries.
Nongovernmental organizations (NGOs) and social media companies have responded to the spread of counterfeit information by forming fact-checking programs and initiatives. Fact Tank, the International Fact-Checking Network at Poynter, Media Bias/Fact Check, Metabunk, Mozilla Information Trust Initiative, PolitiFact, and Snopes.com analyze specific, often memetic, news stories, while Google News and Google’s search engine now include a “fact check” tag that alerts information consumers that a particular slice of information was scrutinized by organizations and publishers. Google is now partnering with the Trust Project, which—much like food ingredient labels for fat and sugar content—developed a “core set” of indicators for news and media organizations that range from best practices to the use of citations, references, and research methodologies, to having a “commitment to bringing in diverse perspectives.”91 Facebook now requires political ads to reveal their funding sources, and the creation of a nongovernmental, voluntary accreditation system to “distinguish reliable journalism from disinformation” is another proposal toward stemming the creation and dissemination of fake news.92
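The “fact check” tag mentioned above is driven largely by structured data that publishers embed in their pages, typically schema.org’s ClaimReview markup. Below is a minimal ClaimReview record built in Python; the URLs, organization name, claim, and rating scale are invented for illustration.

```python
import json

# A minimal ClaimReview record in schema.org's JSON-LD format, the kind of
# structured data behind fact-check labels in search results. All values
# below are fabricated for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2018-03-01",
    "url": "https://factchecker.example/reviews/viral-claim",
    "claimReviewed": "A viral post claims X caused Y.",
    "author": {"@type": "Organization", "name": "Example Fact-Checking Org"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,       # 1 = false on this outlet's 1-5 scale
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

print(json.dumps(claim_review, indent=2))
```

The markup, in other words, does not verify anything itself; it is a machine-readable assertion by a publisher that a review took place, which search engines then choose to surface.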
Technological approaches to curbing fake news include the formation of the World Wide Web Consortium (W3C)’s Credible Web Community Group, which investigates “credibility indicators” as these apply to structured data formats. MisinfoCon is a “global movement focused on building solutions to online trust, verification, fact checking, and reader experience in the interest of addressing misinformation in all of its forms.”93 Browser add-ons, such as Project FiB, a Chrome-based extension, detect fake news on Facebook, Reddit, and Twitter, while the controversial PropOrNot browser plugin misreports websites as delivering “Russian propaganda” targeted toward Americans. The Hamilton 68 Dashboard, a project of the German Marshall Fund’s Alliance for Securing Democracy, monitors 600 Twitter accounts linked to Russian influence efforts online, “but not all of the accounts are directly controlled by Russia. The method is focused on understanding the behavior of the aggregate network rather than the behavior of individual users.”94
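At bottom, many such add-ons reduce to checking a page’s domain against curated lists, a design that helps explain both their speed and their capacity for misreporting, as in the PropOrNot case. A minimal sketch, assuming hypothetical list contents:

```python
from urllib.parse import urlparse

# Hypothetical curated lists; real tools' lists differ and are contested.
FLAGGED = {"example-propaganda.net", "example-hoax.org"}
VERIFIED = {"apnews.com", "reuters.com"}

def credibility_label(url: str) -> str:
    """Label a URL by exact domain match against curated lists.
    Note the bluntness: every page on a domain inherits its label,
    and an unlisted domain is simply 'unknown'."""
    host = urlparse(url).netloc.lower().removeprefix("www.")
    if host in FLAGGED:
        return "flagged"
    if host in VERIFIED:
        return "verified"
    return "unknown"

for link in ["https://www.apnews.com/article/1",
             "https://example-propaganda.net/story",
             "https://smalltownweekly.example/news"]:
    print(link, "->", credibility_label(link))
```

The verdict such a tool renders can be no better than the quality, and the politics, of the list it consults.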
In the following discussion, I suggest additional interconnected steps to bring clarity to public and academic investigations of false information and to combat fake news on a local, national, and global scale.
The preponderance of fake news traversing public channels of communication challenges established editorial practices, information and media ethics, transparency, publicity, and the public right to know. In addition to the proliferation of fake news, newsbots created and employed by media organizations have shifted human authorship to algorithmic or “robo-journalism.”95 The rise of elected officials claiming that the media spreads “fake news” by “lying” to the public increases “the risk of threats and violence against journalists” and undermines the public’s trust and confidence in members of the Fourth and Fifth Estates.96
In the US, Congress must reconstitute the Office of Technology Assessment (OTA). For 23 years, OTA policy analysis included research on adult literacy, information security, emerging technologies, and US information infrastructure.97 On an international level, a nonpartisan body of stakeholders (e.g., the Tallinn Manual Process) should be formed to address the policy implications of tech platforms, algorithms, bots, and their use in weaponizing information.98 Above all, as Marietje Schaake suggests, “regulators need to be able to assess the workings of algorithms. This can be done in a confidential manner, by empowering telecommunications and competition regulators.”99 This proposed international body might also address the global implications of algorithmic accountability and algorithmic justice in order to confront “dissemination of knowingly or recklessly false statements by official or State actors” within the established framework of international human rights.100 Both OTA v.2 and the proposed international body suggested above would address the influence of fake news and other information conditions that impact the right to communicate and freedom of expression.
Institutional support and funding for inter/multi/transdisciplinary courses and initiatives that address intermeshed literacies across the curriculum, continuing education, and community is imperative. Studies of how editorial practices and ethics contribute to the construction of knowledge in crowdsourced and academic reference works would be an essential part of this proposed curriculum. Such initiatives can only be realized through global partnerships of “intermediaries, media outlets, civil society, and academia.”101
Dewey observed that “tools of social inquiry will be clumsy as long as they are forged in places and under conditions remote from contemporary events.”102 Using Dewey as our North Star, theoretical forays into how “fake news” differs from various information conditions (e.g., misinformation, disinformation, or propaganda) serve to build a common language for deeper studies. Evidence-based research on those factors that influence the public’s information-seeking and media habits is essential. This research would go a long way toward explaining how and why individuals adopt certain ideas and are influenced by specific information. For example, research suggests that “the greater one’s knowledge about the news media . . . the less likely one will fall prey to conspiracy theories.”103 There is also a need for intensive qualitative and quantitative investigations into those actors who manipulate information for sociopolitical gain and to cause damage; one study indicates that “robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.”104
One solution to the problem of fake news, media conglomeration, and prepackaged news is community-based journalism. In forging educational partnerships with communities (see above), universities, libraries, and organizations have the opportunity to cultivate a climate focused on freedom of information and skill building. These partnerships allow citizens, including student journalists, to investigate local perspectives, stories, and events that national, corporate media ignore, marginalize, or abbreviate. According to one account, citizen journalism “ranks low on revenues and readers. It ranks high on perceived value and impact. While it aspires to report on community, it aspires even more to build community.”105
In his book The Public and Its Problems, Dewey established that knowledge is social. As such, knowledge is a function of communication and association, which depend “upon tradition, upon tools and methods socially transmitted, developed, and sanctioned.”106 As discussed in this chapter, “fake news” as falsehood disrupts established ways of knowing and communication. In its wake, fake news destabilizes the very trust in information required to sustain relationships across the social world.
The House Committee on Foreign Affairs’ 2015 hearing, Confronting Russia’s Weaponization of Information (114th Congress, first session, April 15, 2015; Washington, DC: US Government Publishing Office; transcript online at House.gov, https://docs.house.gov/meetings/FA/FA00/20150415/103320/HHRG-114-FA00-Transcript-20150415.pdf), undeservedly received less media scrutiny.