Claire Wardle
The term fake news has become a mechanism for undermining individual journalists and the professional media as a whole. As a result, the term is now almost entirely meaningless: when audiences are asked about the term, they believe it describes poor reporting by the “mainstream media.”1 The term is also grossly inadequate at capturing the variety of information pollution choking public discourse. Misleading content can take many forms: satire; clickbait; inaccurate captions, visuals, or statistics; genuine content shared out of context; manipulated quotes and imagery; and outright fabricated stories. Almost none of this detail is captured by the term fake news. For these two reasons, I argue that the term fake news should be avoided where possible.
Much of the discourse about information pollution conflates two notions: misinformation and disinformation. In our report Information Disorder, Hossein Derakhshan and I argue that it’s important to distinguish true and false messages, as well as messages that are and are not created, produced, or distributed with intent.2 We defined misinformation as false information shared by someone who believes it to be true. Disinformation, by contrast, is false information shared with knowledge of its falsity and thus intention to deceive or otherwise do harm. It is a deliberate, intentional lie. We also defined a third category, malinformation, which is information based in reality that is shared to do harm to a person, organization, or country. This term can refer to instances where private information is made public (e.g., revenge porn) or genuine imagery is reshared in the wrong context. Finally, we chose information disorder as an umbrella term encompassing all forms of disinformation, misinformation, and malinformation.
Figure 5.1
Three types of information disorder.
As I noted above, there are many aspects to this issue, and much of the debate fails to grasp its complexity. If we want to think about remedies to the various kinds of information disorder polluting our social media streams, we need to start thinking about the problem with more care. We also need to give more thought to the people who are creating this content. What is motivating them? What types of content are they producing, and how is it being received by audiences? And when audiences reshare their posts, what’s motivating them? Most fundamentally, however, we still don’t have enough empirical evidence about the scale of the different varieties of information disorder or the impact they have on audiences.
This chapter examines seven different categories of information disorder, underlining the complexity of this ecosystem. They are used as a foundation for discussing the responsibilities of journalists in helping audiences navigate the information circulating in this new environment. If journalists debunk rumors or misleading content too early, they can give unnecessary oxygen to something that might have died out of its own accord. If a rumor is left unchecked, it can take hold and become very hard to debunk effectively. What are the new professional guidelines for reporting on disinformation?
Including satire in a typology of information disorder is uncomfortable. Satire and parody should be considered forms of art. However, in a world where people increasingly receive information via their social media feeds, and where all types of information appear identical, some people fail to realize that content is satirical.
Figure 5.2
A story by the debunking organization Snopes about a piece in the Babylon Bee, a Christian satirical site.
Figure 5.3
A screenshot from the Political Insider using a classic clickbait headline.
When headlines, visuals, or captions don’t support the content, we call this an example of false connection. Unfortunately, the most common example of this type of content is the clickbait headline. Competing for eyeballs, editors increasingly write headlines designed to attract clicks, even when those headlines are not faithful to the content of the article.
Misleading content uses information to frame an issue or individual in a deceptive way, such as by cherry-picking images, quotes, or statistics.
Information doesn’t have to be false to be misleading: genuine content can do damage when it is reshared in a false context. This is one of the reasons why the term fake news is so unhelpful. For example, during breaking news situations, we often see old imagery from similar past events recirculate.
Figure 5.4
An infographic that circulated widely during the U.K. general election in June 2017. The diagram is misleading, as the black “did not vote” bar does not use the same scale as the rest of the graph.
Imposter content is an increasingly common problem: journalists’ bylines appear alongside articles they did not write, or organizations’ logos are attached to videos or images they did not create. For example, in the run-up to the Kenyan election in August 2017, BBC Africa found a video circulating on WhatsApp that it had not created but that used the BBC logo and tagline. In response, BBC Africa made a video warning people not to be fooled by the impostor report and shared it on social media.
Manipulated content is genuine content that is manipulated to deceive. In the example below, two genuine images have been stitched together. The first is a photograph of people waiting in line to vote, captured in Arizona in March 2016, during the primary election. The second is a stock image of an arrest by a U.S. Immigration and Customs Enforcement officer; one can find it online simply by searching “ICE arrest.” This composite image was shared in the weeks leading up to the November 2016 presidential election.
Figure 5.5
This tweet was put out after a Twitter account, @HistoricalPics, shared the photo and claimed that it was taken in Nepal shortly after the earthquake.
Figure 5.6
A video put out by BBC Africa on its Twitter account to explain that a fake video purporting to be a BBC Africa broadcast was circulating.
Fabricated content can be text-based, such as the article suggesting that the Pope had endorsed Donald Trump, published by a completely fabricated “news” site. It can also be visual, as in the case of graphics that incorrectly suggested people could vote for Hillary Clinton via short message service, or SMS. These graphics targeted minority communities on social networks in the lead-up to the U.S. presidential election.
Figure 5.7
A photo that circulated in the lead-up to the 2016 U.S. presidential election. The photo is made up of two images stitched together.
Figure 5.8
An image that circulated on Twitter and Facebook in the lead-up to the 2016 U.S. presidential election.
In our report, Derakhshan and I also argue that, in addition to understanding these different categories of information disorder, we must separately examine its “elements”: the agent, the message, and the interpreter. In the matrix below, we pose questions that need to be answered for each element. As we explain, the agent who creates a fabricated message might be different from the agent who produces that message—who might still be different from the agent who distributes it. We therefore need a thorough understanding of who these agents are and what motivates them. We must also understand the different types of messages being distributed, so that we can estimate the scale of—and address—each. (The debate to date has focused overwhelmingly on fabricated text news sites, although visual content is just as widespread and much harder to identify and debunk.)
Finally, we emphasize the need to consider the three “phases” of information disorder: creation, production, and distribution. It is particularly important to consider the phases alongside the elements, because the agent who creates a piece of content is often fundamentally different from the agent who produces it.
Figure 5.9
The three elements of information disorder.
Figure 5.10
The three phases of information disorder.
For example, the motivations of the mastermind who “creates” a state-sponsored disinformation campaign are very different from those of the low-paid “trolls” tasked with turning the campaign’s themes into specific posts. And once a message has been distributed, it can be reproduced and redistributed endlessly—by many different agents, all with different motivations. Only by dissecting information disorder in this manner can we begin to understand its many nuances.
The bedrock of professional journalism is accurate reporting. As such, journalists have not previously had to think about ways to responsibly report what wasn’t true. When rumors or hoaxes passed across their desks, the newsroom was warned to be on its guard, but rarely would any reporting take place. As gatekeepers to the information that people consumed, journalists felt no responsibility to discuss false information with audiences. The widespread adoption of social media late in the first decade of the twenty-first century changed conversations inside the newsroom.
The Iranian election protests of June 2009 are often cited as the news event that convinced journalists of the value of Twitter. I would argue that the Arab Spring was the moment when questions of how to report false information became much more widespread. Andy Carvin, who was working at NPR at the time, developed a following on Twitter by sharing information and asking his community to help him verify imagery.3
At the time, I was working with BBC News, designing a training course to help journalists find and verify sources and “content” on the social web. Carvin’s tweets were being discussed at length among those interested in this new form of newsgathering. Should a public broadcaster share information that hadn’t been 100 percent verified? Would that confuse the audience? Many journalists also worried that if the BBC began “debunking” rumors and false information, it would then be obliged to do so for all false information; and if the audience got used to the service, would the absence of a “false” stamp suggest that a claim was automatically true? There was a great deal of heated debate, and the decision was taken that the BBC should not report—on either official channels or social media accounts—any information unless it had been fully verified. And while the newsroom’s user-generated content hub played a crucial role in helping the organization discern what was true and false, it was not a public debunking service.
If the Arab Spring was the story that thrust these questions into journalists’ collective consciousness, Hurricane Sandy, the storm that hit the United States’ eastern coast in October 2012, was the story during which newsrooms began to actively help audiences differentiate fact from fiction. Alexis Madrigal published a stream of images in The Atlantic with yellow “true” or “false” stickers, and journalists worked overtime on Twitter to check viral claims, such as @ComfortablySmug’s false reports that the New York Stock Exchange had flooded, manipulated images of the Statue of Liberty shrouded in storm clouds, and old, recirculated imagery from Arlington Cemetery.4
The Boston Marathon bombing in 2013 provided another turning point. Amateur sleuths on Reddit worked to identify the two young men whose images had been shared by investigators. Incorrect names were shared widely, and the dangers of crowdsourcing identifications from social media became clear.5 When CNN made a couple of serious on-air errors, it renewed concern about the role of the mainstream media in helping audiences parse their increasingly chaotic information streams, particularly during breaking news events.6
These conversations seem quaint in the context of the current information ecosystem. Early discussions about whether professional journalists should play a role in helping audiences navigate online hoaxes took place when the only real concern was mistakes made in the heat of a breaking news situation—old photos and false casualty figures. We hadn’t yet recognized the potential for sophisticated, social media–focused campaigns of manipulation.
We now have agents of disinformation—people deliberately creating and disseminating false information to cause harm—who target technology companies’ trending topics and search algorithms.7 Their techniques are designed to deceive, and their ultimate goal is to get journalists to cover their work, even if only to report on the campaign rather than its claims. For agents of disinformation, such coverage is just as valuable, because they see any coverage as amplifying their message.
As Alice Marwick and Rebecca Lewis noted in their 2017 report Media Manipulation and Disinformation Online, “For manipulators, it doesn’t matter if the media is reporting on a story in order to debunk or dismiss it; the important thing is getting it covered in the first place.”8 BuzzFeed’s Ryan Broderick confirmed these concerns when, on the eve of the French presidential vote, he tweeted that 4channers were celebrating sober news stories about #MacronLeaks as a “form of engagement.”9
In actuality, we know little about how reporting on disinformation campaigns and tactics influences audiences. Experiments suggest that conspiracy-style stories can inspire feelings of powerlessness and lead people to report being less likely to engage politically.10 With trust in institutions already in decline, reporting that highlights these campaigns of manufactured amplification runs the risk of weakening that trust further. Again, we need further research into these issues.
Figure 5.11
A tweet by Ryan Broderick highlighting how “debunks are a form of engagement.”
Scott Shane, in an article in the New York Times, points out the new challenges posed when foreign governments are involved in leaking information.11 He states:
The old rules say that if news organizations obtain material they deem both authentic and newsworthy, they should run it. But those conventions may set reporters up for spy agencies to manipulate what and when they publish, with an added danger: An archive of genuine material may be seeded with slick forgeries.
He is right to draw attention to the need for additional protocols for reporting on this form of information. However, the same challenges exist for reporting on any form of disinformation. When getting a falsehood debunked is the ultimate goal of the person pushing the false information, how should journalists respond? Choosing not to report on something makes many journalists feel uncomfortable, as it challenges their professional ethics. Journalists are taught from their first day at journalism school or in the newsroom that transparency is the central tenet of the profession. But this belief is now being used as a powerful weapon against the news industry itself. What does responsible coverage look like here?
As danah boyd argues, newsrooms know how to report on the powerful. Attempts by the Pentagon to suppress reporting result in long meetings with senior editors to assess the potential fallout for national security. Similarly, when presented with public relations stunts by corporations, newsrooms know how to act responsibly. But in this new information environment, where reporting on niche communities and their conspiracy theories can hand them the oxygen and legitimacy they seek, where are the ethics policies that deal with this type of reporting?
At First Draft, when we work on debunking projects, we regularly talk about journalists’ need to understand the tipping point—the point at which debunking a rumor becomes necessary and advantageous.12 If you debunk a rumor too early, you provide unnecessary oxygen and risk playing a role in spreading it further. If you wait too long, the rumor becomes extremely difficult to dislodge.
Identifying this tipping point is a complex task, partly because there is no one tipping point. In different countries, where popular platforms and population sizes vary, news industries have to work collaboratively to discuss when and how to publish. News industries are vulnerable exactly because of their competitive nature. If one news organization reports, it puts pressure on others to do the same—a particularly dangerous fact, as not all newsrooms undertake their own verification checks, seeing another newsroom’s reporting as enough of an insurance policy. This is another reason why disinformation agents are so keen to use the news industry as a source of amplification. If you get one, you can get them all.
Another challenge we face is the limited academic literature on effective debunks. John Cook and Stephan Lewandowsky’s The Debunking Handbook underlines the need for falsehoods to be dealt with thoughtfully: repeating a falsehood in a headline, for example, can strengthen a reader’s memory of the false information.13 And as little as we know about how to write effective text-based debunks, we know far less about how to effectively debunk false or fabricated imagery. A professional norm has developed of stamping false imagery with a red “false” or “fake” label, but we do not know what impact this has on the way people process the image.
The news industry is wholly unprepared for the contemporary information ecosystem. Journalists and platforms are being targeted. We need new ethics guidelines and new training courses. We also need more research to understand the scale and complexity of the challenges before us, as well as the best remedies for them. Silencing coverage is not the answer, but we need to understand the most effective ways of reframing the narrative to minimize unintended consequences.
1. Rasmus Kleis Nielsen and Lucas Graves, “News You Don’t Believe”: Audience Perspectives on Fake News (Oxford, UK: Reuters Institute for the Study of Journalism, 2017), https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-10/Nielsen%26Graves_factsheet_1710v3_FINAL_download.pdf.
2. Claire Wardle and Hossein Derakhshan, Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making (Strasbourg: Council of Europe, 2017), https://firstdraftnews.org/wp-content/uploads/2017/11/PREMS-162317-GBR-2018-Report-de%CC%81sinformation-1.pdf?x11466.
3. Andy Carvin, Distant Witness: Social Media, the Arab Spring, and a Journalism Revolution (New York: CUNY Journalism Press, 2013).
4. Alexis Madrigal, “Sorting the Real Sandy Photos from the Fakes,” The Atlantic, October 29, 2012, https://www.theatlantic.com/technology/archive/2012/10/sorting-the-real-sandy-photos-from-the-fakes/264243; Jack Stuef, “The Man behind @ComfortablySmug, Hurricane Sandy’s Worst Twitter Villain,” BuzzFeed, October 30, 2012, https://www.buzzfeed.com/jackstuef/the-man-behind-comfortablysmug-hurricane-sandys; Meredith Bennett-Smith, “Fake Hurricane Sandy Photos Spread on Internet as Storm Barrels toward Northeast,” Huffington Post, October 30, 2012, https://www.huffingtonpost.com/2012/10/29/fake-hurricane-sandy-photos-internet-northeast_n_2041283.html; Sam Laird, “Incredible Viral Soldier Pic Debunked by Military,” Mashable, October 29, 2012, https://mashable.com/2012/10/29/viral-soldier-pic-debunked/.
5. Chris Wade, “The Reddit Reckoning,” Slate, April 15, 2014, http://www.slate.com/articles/technology/technology/2014/04/reddit_and_the_boston_marathon_bombings_how_the_site_reckoned_with_its_own.html.
6. David Carr, “The Pressure to Be the TV News Leader Tarnishes a Big Brand,” New York Times, April 21, 2013, https://www.nytimes.com/2013/04/22/business/media/in-boston-cnn-stumbles-in-rush-to-break-news.html.
7. Melanie Ehrenkranz, “Google’s Top Stories Promoted Misinformation about the Las Vegas Shooting from 4chan,” Gizmodo, October 10, 2017, https://gizmodo.com/googles-top-stories-promoted-misinformation-about-the-l-1819053288.
8. Alice Marwick and Rebecca Lewis, Media Manipulation and Disinformation Online (New York: Data and Society Research Institute, 2017), 39, https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf.
9. Ryan Broderick (@broderick), Twitter thread about the #MacronLeaks response on 4chan, May 5, 2017, https://twitter.com/broderick/status/860423715842121728?lang=en.
10. Daniel Jolley and Karen M. Douglas, “The Social Consequences of Conspiracism: Exposure to Conspiracy Theories Decreases Intentions to Engage in Politics and to Reduce One’s Carbon Footprint,” British Journal of Psychology 105, no. 1 (2014): 35–56.
11. Scott Shane, “When Spies Hack Journalism,” New York Times, May 12, 2018, https://www.nytimes.com/2018/05/12/sunday-review/when-spies-hack-journalism.html.
12. Claire Wardle, “10 Questions to Ask before Covering Misinformation,” First Draft, September 29, 2017, https://firstdraftnews.org/10-questions-newsrooms/.
13. John Cook and Stephan Lewandowsky, The Debunking Handbook (St. Lucia, Australia: University of Queensland, 2011).