Panagiotis Takis Metaxas
“Fake news,” or online falsehoods that are formatted and circulated such that a reader might mistake them for legitimate news articles,1 is a relatively recent phenomenon born out of the financial attractiveness of “clickbait”—capturing users’ attention online to generate revenue through advertisements. It has expanded into new forms of propaganda, bullshit (as defined by Harry Frankfurt),2 and financial scams, all of which have existed since the creation of early human communities. Since ancient times, people have been exposed to lies, misinformation, and falsehoods. Believing lies can come at grave personal and social costs. In some extreme cases, communities, religions, and cultures were destroyed due to misplaced belief in falsehoods. Even the end of the first democracy came when the Athenian citizens, led by the charismatic populist leader Alcibiades, believed reports that their Syracusan enemies were cowards and that the Sicilian city-states would support them as allies, and entered into a disastrous war they could not win.3
Today, we are exposed to “fake news,” a newer phenomenon in the ever-widening genre of deception. We have faced and overcome such challenges throughout our history, but I argue that this time is different. The volume of (mis)information reaching us comes at speeds and levels that we, as a species, have not evolved to handle. Technology, ethical policies, laws, regulations, and trusted authorities, including fact-checkers, can help, but we will not solve the problem without the active engagement of educated citizens. Epistemological education, the recognition of our own biases, and the protection of our channels of communication and trusted networks are all needed to overcome the problem and continue our progress as democratic societies.
I first became curious about the question of identifying truth as an undergraduate majoring in mathematics at the University of Athens, in Greece. A trusted friend, fluent in Russian, informed me that Soviet mathematicians had devised a model by which one could determine the facts in news announcements. I tried to find more information about the model, but during the Cold War there was not much scientific communication between East and West, there was no internet, and I could not read Russian. It took me years to realize that I had encountered an early instance of “fake news.”
In 1994, when the first search engine appeared, I became interested in the phenomenon of online propaganda. As an educator, my concern developed when I realized that my students were using search engines to retrieve and use information without an understanding of how it was produced. Until the advent of search engines, print information generally had greater validity than information gathered in other ways, so conducting quality research involved discovering printed sources—something that was quite hard, generally requiring hours spent poring over library books. For something to get published meant that it had been successfully scrutinized by professional editors, and challenged in ways that other forms of communication were not. But with the advent of the World Wide Web, in a sudden twist equal parts liberatory and dangerous, achieving publication became trivially easy. Barriers to authorship crumbled; in the internet era virtually anyone can be an author of content, and with the help of search engines, virtually any content authored can be found.
Google Search successfully presented itself as being able to objectively measure information quality using its famous PageRank algorithm. PageRank was listed among the ten best algorithms of data mining,4 and its reputation transferred to Google’s search results. If you find an article in the top-ten search results, the thinking goes, it must be good. Sure, you could encounter a bad search result, but you would attribute that to your lack of searching skills and try again with different keywords. But when you were searching for things about which you had no prior knowledge, there was no way to recognize misinformation. In fact, in one study, “ ‘Of Course It’s True; I Saw It on the Internet!,’” my coauthor and I found that more technically competent students were more likely to be fooled by unreliable search results because they trusted their ability to find information using this technological tool and, thus, applied less critical thinking.5 Most people do not know (even today) that search engines can be gamed to promote the page of an experienced manipulator, or web spammer. “Search Engine Optimization,” a $65 billion industry born in the early 2000s, has made its living by fooling Google and the other search engines on particular search queries. Now, search engines are under continuous attack to promote the agendas of propagandists, advertisers, fanatics, or conspiracy theorists.6
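The idea behind PageRank is that a link from page q to page p is a vote whose weight depends on q’s own rank. A minimal power-iteration sketch in Python makes both the algorithm and its vulnerability concrete; the four-page graph and the damping factor here are illustrative assumptions, not Google’s actual data:

```python
# Toy web graph: each page maps to the pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
damping = 0.85  # probability of following a link rather than jumping randomly
pages = list(links)
n = len(pages)
rank = {p: 1.0 / n for p in pages}  # start with uniform rank

for _ in range(50):  # iterate until the ranks stabilize
    new = {}
    for p in pages:
        # Rank flowing into p: each page q that links to p passes along
        # an equal share of its own rank to each of its outlinks.
        inflow = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - damping) / n + damping * inflow
    rank = new

best = max(rank, key=rank.get)  # "C" collects links from three pages, so it ranks highest
```

Page C wins because three pages point to it, which is exactly the signal that “link farms” of mutually linking spam pages were built to manufacture.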
And while Google has been successful in defending itself from attacks against electoral candidates since 2008, Twitter and Facebook have become the new battleground. We observed the first Twitter bomb during the 2010 Massachusetts special senatorial election, and we observed that the same technique was employed on Facebook during the 2016 U.S. elections.7 Propagandists create fake accounts, then infiltrate and lurk in political groups until the appropriate time to promote their “fake news” to the group and watch it spread throughout the political echo chamber. Afterward, the fake accounts can delete themselves, making it difficult to determine the identity of the propagandist.8
Avoiding this manipulation has to be a coordinated effort. In particular, both social media platforms and citizens need to do a better job of recognizing “fake news” online. Our first collective reaction was to demand the deployment of fact-checking procedures. We want fact-checkers that will reliably determine the truth behind any online rumor and will notify us immediately. We want our social networks to inform us of what is reliable and what is not. We want policies that will enforce trustworthy information and laws that will punish those who violate them. And we want magical technologies to apply immediately, automatically, and correctly every time. In other words, we demand benevolent censorship.
Unfortunately, this cannot happen. There is no universal agreement on what is true and what is not. One person’s spam can be another person’s treasure. Advances in technology are likely to render laws outdated before they even pass. Policies can become a weapon against the public in authoritarian regimes, which we see appearing around the world these days, as the new Malaysian Anti-Fake News Bill 2018 demonstrates.9 Deep learning technologies are as likely to help as to hurt, as the so-called deepfake videos already demonstrate.10 Moreover, machine learning technologies that do not give evidence of how they make decisions can only learn and imitate the biases and injustices we already have in our communities.11
This does not mean we should not deploy dependable, independent, experienced, and educated smart crowds to help evaluate “fake news.” We should. We should encourage academic librarians to take on the role of overseeing fact-checking organizations. Social networks and search engines should diminish the financial incentives that drive the “web spam” and “fake news” producers. User interfaces need to be clearer to help us detect “clickbait.” But nothing will succeed unless we, the consumers of information, take more responsibility. We cannot avoid it any longer.
How easy is it for people to recognize “fake news”? How do we know what we know? This is one of the fundamental questions that we need to answer to evaluate the performance of any fact-checking system, whether operated by humans or machines. There is a whole branch of philosophy, called epistemology, that deals with the establishment of knowledge. Briefly, we know things due to our own experiences, our trust in reliable authorities, and our personal skill in handling logical reasoning, which is also known as critical thinking. Each of these three sources of knowledge (intrinsic, extrinsic, and derivations) is challenged by our technologies today.
We hold some beliefs for extrinsic reasons: because we trust the entity that is providing or supporting the information. Our whole educational system is based on this premise. Our systems of governance are also based on it, especially the “fourth estate,” the press.12 We also hold some beliefs for intrinsic reasons, that is, based on our own experiences realized through our senses and interpreted with our mind. We consider our experiences fundamental, and we rarely question what we learn from them. “I saw it with my own eyes” is an expression that exists in most, if not all, languages. Trusting our eyes is considered equivalent to having absolute confidence, since seeing is the most powerful of our senses. But it is rarely the case that we can make sense of what we see or hear without some thought process that interprets and establishes the factual aspects of our experience. Here is where we need the support of critical thinking skills that will help us determine the validity of our thoughts and observations. And for that, we also need the aid of a sound mind.
Even though in daily conversations we use the term critical thinking as a synonym for common sense, they are quite different, as Duncan Watts explains in the book Everything Is Obvious (Once You Know the Answer).13 Critical thinking means using mathematical logic and rigor to combine things we already know and derive new knowledge. By rigor, we mean applying the scientific method in the derivation, which starts with carefully writing down a hypothesis. Committing the hypothesis to a fixed medium is a crucial step because, without it, our thoughts may drift and we may end up evaluating something quite different. Then, we need to search for evidence, both supporting the hypothesis and discrediting it. Looking for both types of evidence is essential, because searching only for confirming evidence and ignoring discrediting evidence is the basis of most fallacies and conspiracy theories.
Of course, we need to apply mathematical logic in making sense of the evidence we have collected. This technical part of evaluating the hypothesis is hard, as it requires both education and practice. Unfortunately, not all educational systems prepare people for this step. The rules of logic are easily confused. A common logical mistake, often exploited by conspiracy theories, is to treat the lack of evidence against a hypothesis as proof of its correctness. If I cannot see why something could be wrong, the erroneous thinking goes, it must be true. For example, Facebook used flags to denote that some news articles had been fact-checked as true.14 But flagged articles are few, and most articles are never checked. Someone may mistakenly conclude that an article without a flag is false, when in fact no one has checked it.
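The flag example shows why missing evidence must be kept distinct from negative evidence. A minimal sketch in Python, using a hypothetical mini-database of article checks (the names and records are illustrative assumptions):

```python
from enum import Enum

class Status(Enum):
    TRUE = "fact-checked as true"
    FALSE = "fact-checked as false"
    UNCHECKED = "no one has looked at it yet"

# Hypothetical records: only two articles have ever been reviewed.
checks = {"article-1": Status.TRUE, "article-2": Status.FALSE}

def status_of(article_id):
    # Closed-world reasoning would map "no record" to FALSE;
    # the honest answer for a missing record is UNCHECKED.
    return checks.get(article_id, Status.UNCHECKED)
```

Keeping a third value for “unknown” prevents the system (and its users) from silently converting an absence of fact-checking into a verdict.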
Unfortunately, applying the scientific method all the time is not easy, for three reasons. The first reason relates to effort and education: thinking critically is mentally tiring, and it requires training. The good news here is that the more you practice critical thinking, the less tiring it becomes, and you can even come to enjoy it. The second reason relates to prejudices: we need to be aware of our own biases. We need to have what the ancient Greek philosophers termed γνῶθι σεαυτόν, or self-knowledge. We are not, by default, the objective judges we may wish to be. We carry along many cognitive biases, and they are largely invisible to us. One of the most common is confirmation bias: when we are presented with facts, it is easier to cherry-pick those that agree with what we already believe—and discredit those that do not—than to change our opinion.15 The longer we think in a certain way, the more we reinforce the existing neural connections, and the harder it becomes to change them. Changing our thinking requires effort and time.
The third reason relates to the false information we have already accumulated over the years. Not everything we were taught as children by parents, teachers, and communities is true. Some of the “facts” we learned were made-up stories that were easy to comprehend or provided comfort for our worries. Belief in astrology is just one such example. Even though it is relatively easy for people to see that it is invalid, many choose to believe in it and have it guide some of their actions.
We also know “facts” that are related to the brain’s limitations, for example, “facts” that we misunderstood, misheard, or remember incorrectly. The brain is not a reliable database of information that can be accessed accurately and on demand every time. Early efforts to understand the brain compared it to computer memory, but this metaphor is misleading. Every time we remember something, we are reconstructing the memory through an inexact and unreliable process. For example, police detectives and lawyers are sometimes successful in implanting “memories” in unsuspecting witnesses by describing events in detail; witnesses may end up “remembering” these “memories.”
We may also know “facts” because we applied the wrong pattern trying to make sense of something. Our brain is a pattern-matching machine. On the one hand, finding similarities and patterns is fundamental to creativity. We develop solutions by recognizing similarities between a situation we are familiar with and another that is new to us. But, on the other hand, our pattern-matching ability can fail us sometimes: we can see the image of a face on a rock in New Hampshire or on the surface of the moon the moment someone points it out.
We also know “facts” that we observed under emotional stress, such as fear, anger, and passion, as we often feel during important political elections. And, of course, we sometimes know “facts” that we derived when our brain was not working reliably. This could be due to the influence of alcohol or other chemical substances, lack of sleep, mental illness, or even extreme focus, as in the famous case of viewers missing the gorilla appearing in a video of people passing a basketball.16 I am sure we can add more such examples that we have observed in others. It is always far easier to observe such behavior in others; if we realized it was happening to us, we could try to correct it, assuming we had enough mental capacity to do so.
Our brain is impressive, but it is not perfect, and it does not work well every time or at every phase of our life. Our brains are the products of evolution. They are not perfect or complete; they are works in progress. Neuroscientists describe the evolutionary process that started with the so-called reptilian brain, the smaller component that responds immediately to the basic instincts: fear, hunger, sexual desire.17 On top of that we have the mammal brain, then the primate brain, then the human brain occupying much of the neocortex.
Our brain suffers from limitations and errors of construction. Our feelings, our senses, and our environment challenge our perception of reality. We need to feel that we are in control of our environment to survive. The natural world around us is full of randomness, but we do not readily accept randomness in phenomena; we want to “discover” reasons that explain it away. Again, this desire for control and for explanations that discount randomness is a powerful source of conspiracy theories.
We are not stupid; we are lazy thinkers. Our brain has a hard time staying focused for too long.18 When we sit down to study intensely for an hour or two, we end up feeling exhausted, even though we did not move from the chair. Thinking critically is very taxing to the brain. We try to avoid it unless we have to, as when we take a test in school. In most other cases, we create shortcuts to avoid using all of it. We adopt heuristics, stereotypes, and personal ways of “thinking” that serve us well most of the time—but not always.19
It is counterintuitive, but even binary logic does not always help. From a young age, we practice with statements that are either true or false, because they are easier to understand. We become accustomed to understanding our world through a “closed world” assumption: we often assume that if something cannot be shown to be true, it must be false. Yet it may be impossible to determine a statement’s validity under the accepted assumptions.20
We have been dealing with this situation for hundreds of years. Our technologies have helped us make progress in controlling the world around us, but they have also challenged us. Consider one of the most impressive technologies of all time: writing. Using little drawings to form phonemes and words has been one of the more profound inventions in human history. We spend a good part of our lives training to recognize words, form sentences, and compose arguments. Writing enables us to transmit ideas and information across generations. Every time we make it more efficient, as with the invention of the printing press, the effect on human history is profound. But up until the spread of the interconnected networks of social media, we had few books to read. Few, of course, compared to the tsunami of words we read these days on Facebook, in online newspapers, and, well, in fake newspapers produced both by humans and by artificial intelligence. The amount of information that reaches us has exploded. Censorship used to be the act of hiding information from the public. In our new world, overwhelming noise can also function as censorship.21 Our social media shift our attention constantly between issues and topics with such speed and volume that we lose track of what is important.
The past couple of years have seen a massive rise in interest in “fake news.” But although many researchers, politicians, lawmakers, and laypeople are trying to address this issue, it remains an immensely complex problem, challenging the limits of our human intellect. What can we do?
Technology can certainly help, especially if it is used to discover valid evidence that diverges from what we casually set out to find, and to inform us when we find ourselves in an echo chamber. Wiki technology that maintains the evidence considered by fact-checkers and librarians can also help, allowing them to manage, and the rest of us to monitor, the process that led to their decisions. Interfaces that give comparable exposure to a claim and its refutation would also help, so that, for example, the comments refuting a claim on a social media platform are as visible as the claim itself.
But other things will help too: laws limiting the financial incentives that draw pranksters, propagandists, and advertisers into producing misinformation for our attention and clicks; regulations that restrict the collection and exchange of personal information, such as the recent European General Data Protection Regulation (GDPR); ethical policies that protect our limited attention capital so that we do not sink under the constant wave of information that washes over us every day. Policies, laws, regulations, trusted authorities, and technology will help, but they will not solve the problem of propaganda, misinformation, and “fake news” if we rely on them alone. No solution will be complete without the active engagement of citizens, epistemological education, and an active democratic society. We need to be aware of why we believe what we believe and of our own biases. We need to listen to those outside our echo chambers. And we need to apply critical thinking habitually, so that we come to enjoy practicing it on a daily basis. Mastery of (most of) these skills is necessary for a successful life in the twenty-first century and beyond. It is not easy, because we need to change ourselves, but that’s what education has always been about.
1. As others have pointed out, some politicians, populists, and dictators misuse it to describe opinions they dislike; this co-opting of terminology is commonly done, as with other terms such as patriotism and the people. Angie Drobnic Holan, “The Media’s Definition of Fake News vs. Donald Trump’s,” PolitiFact, October 18, 2017, http://www.politifact.com/truth-o-meter/article/2017/oct/18/deciding-whats-fake-medias-definition-fake-news-vs/; Panagiotis Takis Metaxas, “Separating Truth from Lies,” interview by Alison Head and Kirsten Hostetler, Project Information Literacy, Smart Talk Interview no. 27, February 21, 2017, http://www.projectinfolit.org/takis-metaxas-smart-talk.html.
2. Harry Frankfurt, On Bullshit (Princeton, NJ: Princeton University Press, 2005).
3. Mary Lefkowitz, “Do Facts Matter? Redefining Truth Is a Tried and True Method of Taking Control,” The Spoke (blog), Albright Institute for Global Affairs, February 24, 2017, https://www.wellesley.edu/albright/about/blog/3261-do-facts-matter.
4. Xindong Wu, Vipin Kumar, J. Ross Quinlan, Joydeep Ghosh, Qiang Yang, Hiroshi Motoda, Geoffrey J. McLachlan, Angus Ng, Bing Liu, Philip S. Yu, Zhi-Hua Zhou, Michael Steinbach, David J. Hand, and Dan Steinberg, “Top 10 Algorithms in Data Mining,” Knowledge and Information Systems 14, no. 1 (2008): 1–37, http://www.cs.uvm.edu/~icdm/algorithms/10Algorithms-08.pdf.
5. Leah Graham and Panagiotis Takis Metaxas, “ ‘Of Course It’s True; I Saw It on the Internet!’: Critical Thinking in the Internet Era,” Communications of the ACM 46, no. 5 (May 2003): 71–75, http://bit.ly/oMjgnw.
6. Panagiotis Takis Metaxas and Joseph DeStefano, “Web Spam, Propaganda and Trust” (paper presented at the Adversarial Information Retrieval [AIRWeb] World Wide Web Conference, Chiba, Japan, May 10, 2005), http://airweb.cse.lehigh.edu/2005/metaxas.pdf.
7. Panagiotis Takis Metaxas and Eni Mustafaraj, “From Obscurity to Prominence in Minutes: Political Speech and Real-Time Search” (paper presented at Web Science Conference, Raleigh, NC, April 26–27, 2010), http://bit.ly/Twitter-Bomb; Eni Mustafaraj and Panagiotis Takis Metaxas, “The Fake News Spreading Plague: Was It Preventable?” (paper presented at Web Science Conference, Troy, NY, June 2017), http://bit.ly/2sehUCv.
8. The infamous #Pizzagate conspiracy is an example of such behavior. See Panagiotis Takis Metaxas and Samantha Finn, “The Infamous ‘Pizzagate’ Conspiracy Theory: Insights from a TwitterTrails Investigation” (paper presented at Computation and Journalism Symposium, Northwestern University, Evanston, IL, October 13–14, 2017), http://bit.ly/2xEfIKU.
9. Marc Lourdes, “Malaysia’s Anti-Fake News Law Raises Media Censorship Fears,” CNN, April 3, 2018, https://www.cnn.com/2018/03/30/asia/malaysia-anti-fake-news-bill-intl/index.html.
10. Kevin Roose, “Here Come the Fake Videos, Too,” New York Times, March 4, 2018, https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html.
11. Julia Angwin and Jeff Larson, “Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say,” ProPublica, December 30, 2016, https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say.
12. “Of course it’s true, I saw it in the newspaper!” was an expression that people in my home country used to use when they wanted to support a claim they believed as true. Graham and Metaxas, “ ‘Of Course It’s True.’”
13. Duncan Watts, Everything Is Obvious (Once You Know the Answer): How Common Sense Fails Us (New York: Crown Business, 2011).
14. Jason Silverstein, “Facebook Will Stop Labeling Fake News Because It Backfired, Made More Users Believe Hoaxes,” Newsweek, December 21, 2017, http://www.newsweek.com/facebook-label-fake-news-believe-hoaxes-756426.
15. D. J. Flynn, Brendan Nyhan, and Jason Reifler, “The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs about Politics,” Advances in Political Psychology 38, no. S1 (2017): 127–150.
16. Christopher Chabris and Daniel Simons, “The Invisible Gorilla,” 1999, http://www.theinvisiblegorilla.com/gorilla_experiment.html.
17. Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011). Kahneman’s “System 1” may have its headquarters in that part of the brain. See Lea Winerman, “A Machine for Jumping to Conclusions,” Monitor on Psychology (American Psychological Association) 43, no. 2 (2012): 24.
18. Ferris Jabr, “Does Thinking Really Hard Burn More Calories?,” Scientific American, July 18, 2012, https://www.scientificamerican.com/article/thinking-hard-calories/.
19. Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Penguin Books, 2016).
20. This is a belief that mathematicians held until the early twentieth century, when Kurt Gödel proved that axiomatic mathematical systems containing basic arithmetic are incomplete: they contain many mathematical statements that one can neither prove correct nor prove incorrect. In our educational systems, however, we are never given exercises for which we cannot have a definite answer.
21. Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (New York: HarperCollins, 2017).