In 2016 the issue of fake news exploded onto the political landscape during the US presidential election. Across the internet people were discussing and sharing stories of wildly varying degrees of improbability: ‘Clinton sold weapons to Isis’; ‘Trump endorsed by Pope’.
At one point during the election campaign, an analysis by BuzzFeed suggested that these fake news stories were getting more engagement on Facebook than real ones.
But fake news isn’t a new phenomenon.
Robert Parkinson, a professor at Binghamton University, pointed out in the Washington Post that while Founding Father Benjamin Franklin was the American ambassador to France, he created a fake issue of a real Boston newspaper – the Independent Chronicle – which included a story about the discovery of bags containing 700 human scalps, allegedly taken by Indians in league with King George III. Likewise, historian Tim Stanley noted that during the hotly contested 1828 US election (considered by many to be the dirtiest election ever), Andrew Jackson started a rumour that John Quincy Adams had procured an American girl to satisfy the Tsar of Russia, while Adams spread rumours that Jackson’s mother had slept with slaves and even had a child with one.
However, while there are historical examples of fake news stories, the rise of cable TV – especially twenty-four-hour rolling news with its increasingly fast-paced cycle – and the power of social networks to spread news farther and faster have made it easier and quicker than ever to reach a mass audience. Realistic-looking fake news sites have sprung up, spreading misinformation across the globe, whether for a joke, for money, to discredit rival agendas or to promote their own.
After the US election, many people claimed that fake news had had a real impact on the result, essentially winning it for Trump. Russia, in particular, has been accused of attempting to interfere in the election process, partly because much of the fake news content appears to have originated there.
But can fake news really influence us that easily? Some of the stories were pretty wild and outlandish – it seems implausible that they could have held enough sway to affect the way people voted.
It’s hard to say what the actual effect was; just because someone has reacted to a story on Facebook doesn’t mean they have actually engaged with its content. In fact, in the face of criticism that Facebook should have done more to control the spread of fake news stories, Mark Zuckerberg said it was ridiculous to suggest that fake news could have influenced the election.
Whether or not that’s true, there are certainly examples of people buying into these stories, no matter how bizarre they may sound to some.
One story that did the rounds was of a paedophile ring led by Hillary Clinton operating out of a pizza restaurant in Washington, a story that led to a ‘concerned citizen’ driving out to the pizzeria, armed with a gun, to investigate the situation for himself. During the incident a weapon was discharged, although no one was injured. That is of course an extreme example, but it nevertheless shows that some people are taken in by these claims.
A less serious, somewhat amusing example was a widely shared meme that claimed Trump, in a 1998 People magazine interview, had said, ‘If I were to run, I’d run as a Republican. They’re the dumbest group of voters in the country. They believe anything on Fox News. I could lie and they’d still eat it up. I bet my numbers would be terrific.’ The quote was a lie; Trump never said that. But then, having made so many shocking-but-true statements (‘I could stand in the middle of Fifth Avenue and shoot somebody, and I wouldn’t lose any voters’, for example, is a real one), it can be hard to judge.
There are also ways to make the fabrications sound more believable. Often they mix fact with fiction, opening with a genuine story that then leads into the fake claims. For example, one article quoted various real statements made on abortion by Trump’s running mate Mike Pence, but then inserted an entirely fictitious quote on the topic of allowing abortions for rape victims: ‘We’d then have an epidemic of women claiming to have been raped just so they could have an abortion.’
Readers who were familiar with Pence’s anti-abortion stance, and whose confirmation bias perhaps predisposed them to believe negative things about him, may have felt this had a ring of truth to it, even though it sounds, on its own, rather far-fetched.
Tactics like this are worrying, as is the way that these stories spread. As noted, the internet and social media mean that fake news can reach a wider audience than ever before. And evidence is also emerging that large organisations might be paying for false social media accounts to spread fake news. ‘Astroturfing’, the practice of hiding the sponsors of a political, advertising, religious or any other kind of message to make it appear as though it originates from ordinary folk (creating ‘fake grassroots’), and ‘sockpuppetry’, the deliberate creation of false online identities to promote opinions – including fake news – have been rampant for years.
Social media is a highly unregulated space, open to exploitation by those with the money to do so. That is one reason why many people are calling for stricter regulation, such as taking greater steps to ensure that all registered users are actually who they claim to be.
Given the ease with which fake news can now spread we are clearly going to have to do more to understand what makes people more susceptible, or resistant, to it.
A paper in Psychological Science by Daniel Fessler has shown that our receptivity depends both on our psychological profile and on the nature of the information we receive.1 It starts out with the following story: ‘In 2012, a liberal professor wrote that the Obama Administration was stockpiling ammunition, preparing for totalitarian rule. This idea was ignored by liberals. In 2015, conservative bloggers asserted that a military exercise aimed to occupy Texas and impose martial law. Conservatives became so concerned that the Texas Governor ordered the State Guard to monitor the exercise.’
There are obviously lots of reasons why these conspiracy theories might have been ignored by liberals and raised concerns among some conservatives (not least the differences in their opinions towards Obama), but Fessler and his colleagues suggested that conservatives might be more likely to believe stories that involve potential threats or hazards more generally. In order to test this, they had to invent stories which had no obvious political association, and then test whether conservatives were more likely to believe the statements that implied certain hazards.
That is exactly what they found.
Conservatives were much more likely to say they believed ‘Kale contains thallium, a toxic heavy metal, that the plant absorbs from soil’, for example, than ‘Eating carrots results in significantly improved vision’. This reaction fits with the personality traits we saw in Chapter 2, with those on the right being somewhat more sensitive to threats.
As far as I know, there aren’t any studies showing that liberals are more receptive to certain claims or other forms of fake news. Before liberals take this as a win for their side, however, it’s worth highlighting that most of the professors conducting this kind of research are themselves liberals. However objectively they try to approach their research, this will probably bias their focus. If there were more conservative professors in social psychology, I suspect there would be more research articles about the sorts of fake news to which liberals are more susceptible.
One thing is certain: fake news is big news for modern democracies.
There are plenty of dubious and misleading news stories out there, and the likelihood is that at some point we will accidentally get taken in by one; but whether we do or not, we are perhaps becoming increasingly distrustful of the stories and ‘facts’ we come across everywhere. And that poses a problem: in order to make informed and rational decisions, we have to have sources we can trust – including so-called ‘experts’.
In the run-up to the EU referendum in the UK, Vote Leave campaigner Michael Gove dismissed some of the financial predictions of the economic consequences of Brexit, stating boldly ‘people in this country have had enough of experts’. Is he right?
We all have to rely on experts of one form or another at different times in our lives. Almost by definition, because they have expertise that we don’t, our reliance on them demands a certain degree of trust. When it comes to something like fixing our boiler, in most cases we can feel confident that our trust has been well placed in a trained professional – or at least it will become obvious fairly quickly if it was not.
But when it comes to something as complex as deciding whether to leave the EU, on what basis can we decide which experts to rely on, when there are so many conflicting opinions and ‘facts’ being touted around?
Indeed, as we saw in Chapter 4, it’s apparent from Philip Tetlock’s work that many of these ‘experts’ are unable to make accurate predictions (though they still seemed to convince themselves that they could). That particular study included a lot of so-called TV pundits, but the problem is widespread and we are all susceptible to unwittingly relying on the wrong information.
Take for example the story of a young professional named George. Like many people nowadays, George studied for a degree that didn’t equip him with much of the expertise he needed in his desired line of work. Luckily, however, there was a wealth of published academic research that helped to guide him.
More specifically, George was working in the field of economics, and trying to understand how levels of public debt (how much a country has borrowed) relate to economic growth. Two Harvard professors, Reinhart and Rogoff, had published some very influential research showing that the higher a country’s debt, the lower its level of growth.2 Perfect. George used this to argue that in order for a country to grow, it had to take big steps to reduce national debt.
Unfortunately for George, however, not long after the study had been published some students at the nearby University of Massachusetts were set the task of replicating Reinhart and Rogoff’s results. They couldn’t. At first the students assumed they must be making a mistake; eventually they asked if they could have a copy of the data the professors had used, so that they could figure out what they were doing wrong.
It turned out that the error lay not with the students but in the professors’ workings. A few mistakes in an Excel spreadsheet had skewed the results. Once these were corrected, the strong relationship between national debt and growth became much less clear.3 Unfortunately, George had relied on expertise that proved unreliable.
In case it isn’t clear already, the George in question was former Chancellor of the Exchequer George Osborne, who cited Reinhart and Rogoff’s work as a motivation for his austerity-focused response to the economic crisis. So if politicians can’t rely on expert advice to run the country effectively, what hope do the rest of us have?
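The kind of slip involved is easy to picture. Below is a minimal, hypothetical sketch in Python – the countries and growth figures are invented, not the Reinhart and Rogoff data – showing how a formula range that stops a few rows short silently drops countries from an average and shifts the headline number.

```python
# Hypothetical illustration only: invented countries and growth rates,
# not the Reinhart-Rogoff dataset. The 'error' mimics a spreadsheet
# formula whose range stops short, so the last rows never enter the
# average.

growth_by_country = {
    "Country A": -0.1, "Country B": 0.3, "Country C": 0.8,
    "Country D": 1.0,  "Country E": 0.5,
    "Country F": 3.2,  "Country G": 2.9,  # rows the bad range missed
}

rates = list(growth_by_country.values())
partial = rates[:5]  # the formula's range mistakenly ended here

print(f"Average growth, all countries:   {sum(rates) / len(rates):.2f}%")
print(f"Average growth, truncated range: {sum(partial) / len(partial):.2f}%")
```

With these made-up figures, the truncated range understates average growth by more than half – exactly the flavour of distortion a range error can introduce without any warning from the spreadsheet.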
Experts are human too and everyone makes mistakes. But sometimes a false finding isn’t due to a mistake; rather it’s because the study’s methods or findings have been intentionally interfered with. Fake news might be grabbing the headlines, but fake science is big news too, and there have been a worrying number of cases in recent years.
In 2011, world-renowned Harvard professor Marc Hauser resigned after an investigation found him guilty of manipulating evidence in a number of studies relating to his work with humans and monkeys, exploring the roots of cognition and morality. Also in 2011, an investigation began into the famous Dutch psychologist Diederik Stapel (who had, for instance, claimed that meat eaters were more selfish); it concluded that he had manipulated data in as many as fifty-five studies, and it ended his scientific career. And there are instances across all areas of scientific research: a 2016 investigation by the China Food and Drug Administration (CFDA) into data from 1,622 clinical trials for new pharmaceutical drugs found that in 80 per cent of cases the data failed to meet analysis requirements, was incomplete or was entirely missing.
In 2015, two graduate students at the University of California, David Broockman and Joshua Kalla, produced evidence that a major paper in Science (one of the most prestigious journals in the world) had been faked. The paper claimed to have found evidence that just a few minutes of speaking to a gay political canvasser could influence the opinions of voters who were otherwise opposed to same-sex marriage.
This report was big news, particularly for political science, where it has often proved very difficult to demonstrate that you can ever change people’s views (let alone with just a few minutes of canvassing), especially on a highly emotive issue that could make all the difference to how people vote. But when the students tried to understand the methods used in the study, they found various inconsistencies in the paper, eventually leading Science to retract the article.
Although Broockman and Kalla have subsequently shown through their own work that the study’s conclusions had merit (as we’ll see in Chapter 7), any instance where a study has been shown to be faked or manipulated has a damaging effect, even if the results are later confirmed. It can discredit not just that paper, but potentially other work by the scientist in question, and undermine the trust that we place in the scientific process.
These explicit cases of fraud are probably only a minor issue for the scientific community, however. A much larger problem is what has become known as the ‘replication crisis’: other researchers are unable to reproduce a result for reasons other than manipulated data, because the original researchers may simply have found their result by chance. How scientists determine whether they have ‘found’ a meaningful result is complicated; but suffice to say that, like the old story about enough monkeys with enough typewriters eventually producing the works of Shakespeare, with enough scientists running enough experiments some are bound to stumble across apparently significant effects by accident.
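To put a rough number on that, here is a toy simulation in Python – my own illustration, not drawn from any study cited here. Thousands of ‘experiments’ compare two groups sampled from the very same distribution, so there is no real effect to find; even so, roughly one in twenty comes out ‘significant’ at the conventional threshold.

```python
# Toy simulation: many 'experiments' in which the true effect is zero.
# Each compares two groups drawn from the same normal distribution and
# declares 'significance' when |t| > 2.0, roughly p < 0.05 for n = 30.
import random
import statistics

def fake_experiment(n=30):
    """Two samples from the SAME distribution: any 'effect' is pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    t = (statistics.mean(a) - statistics.mean(b)) / (
        (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    )
    return abs(t) > 2.0

random.seed(1)
trials = 10_000
hits = sum(fake_experiment() for _ in range(trials))
print(f"'Significant' findings with no real effect: {hits / trials:.1%}")
# Expect roughly 5% -- one 'discovery' in twenty, by chance alone.
```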
In his 2011 book Thinking, Fast and Slow Daniel Kahneman reviewed a number of studies in which people were unconsciously primed to behave in particular ways. For example, exposing people to words related to ageing seemed to cause them to walk more slowly down the corridor. In the years since the book was published, there have been numerous failures to replicate some of these studies; Kahneman has acknowledged this, admitting that he probably gave them too much weight in his writing.
Social psychology in particular has faced a lot of controversy over claims that complex behaviour can be influenced by simple ‘priming’ effects. Another example, particularly relevant to political psychology, claimed to have found evidence that briefly exposing participants to the American flag could make both Republicans and Democrats more likely to shift their support towards the Republican party in a number of ways.4
Reflecting honestly, had I written this book a few years ago, I might well have included that study as an example of how our vote can be biased. In 2014, however, a systematic attempt to replicate several studies in psychology failed to find evidence for this effect.5 As an aside, it did demonstrate that a number of the key studies upon which Kahneman built his career replicated very robustly.
This ‘replication crisis’ has many causes. One of them, as we’ve seen, is that scientists can sometimes happen upon a finding by chance. This is then compounded by a systematic ‘publication bias’ in the incentives that shape a scientist’s career.
You might hope that science would operate on a purely objective basis when it comes to deciding which studies to publish, but in recent years a culture has developed in which the most interesting or exciting studies are prioritised. Anything not deemed newsworthy – including failures to replicate studies, or studies that don’t ‘find’ anything – runs the risk of gathering dust in the scientist’s ‘file drawer’. The focus on objectivity can become a little blurred in the face of demands on scientists to ‘publish or perish’.
Even when the science is robust, the media can sometimes present studies in ways the researchers never intended – oversimplifying a finding, perhaps, or taking a result out of context. Just because an interesting development has been noted in the brain of a rat does not necessarily mean some sort of miracle drug is about to transform our lives. And a slight correlation between people who eat a certain food and people who develop cancer doesn’t mean that the food causes the cancer.
That hasn’t stopped the Daily Mail running articles over the years linking cancer to bacon, beef, broccoli, chillies, chips, chocolate, cola, coffee, fruit juice, grapefruit, ham, lamb, milk, mouthwash, peanut butter, pastry, pickles, potatoes, rice, sausages and toast (but only if burnt) – to name but a few. In fact, many of the items it is often claimed can cure or prevent cancer have also been accused of causing cancer, at one time or another.
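The Daily Mail’s list makes the earlier point about chance findings concrete. In this invented sketch (Python again, made-up data throughout), one hundred random ‘foods’ are correlated with an equally random ‘cancer’ outcome; with that many comparisons, a handful of spurious ‘links’ are all but guaranteed.

```python
# Invented data throughout: 100 random 'foods' are tested against a
# random 'cancer' outcome. With |r| > 0.14 as the cutoff (roughly
# p < 0.05 for n = 200), about five spurious 'links' appear by chance.
import random

random.seed(7)
n = 200
cancer = [random.random() < 0.1 for _ in range(n)]  # 10% base rate

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists of 0/1 values."""
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

spurious = sum(
    abs(correlation([random.random() < 0.5 for _ in range(n)], cancer)) > 0.14
    for _ in range(100)
)
print(f"Foods 'linked' to cancer by chance alone: {spurious} out of 100")
```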
In 2016, a number of sources reported that a study had shown drinking two to three cups of coffee a day could prevent Alzheimer’s – but that greatly overstated the significance of the results. In fairness to the media, however, journalists often have to rely on a university-generated press release when reporting, and a recent study in the British Medical Journal found that exaggerated health stories in the news were often associated with exaggerated claims in the original press release.6
Thankfully there are major developments in progress to restore some of the principles of objectivity to the actual process of science. Ben Goldacre at the University of Oxford has been working to ensure that the results of all medical trials – and the details of who funds them – are made openly available, whatever the outcome. Chris Chambers at Cardiff University has been leading an effort for journals in psychology and neuroscience to include an option for ‘registered reports’, under which a study is accepted or rejected in advance based on the rigour of its design, not on the outcome of the data collected. Brian Nosek in the USA has also set up the Center for Open Science, which has been facilitating large-scale attempts to test the robustness and reliability of some areas of science.
Measures like these are important to restore faith in the scientific process. Because, of course, it isn’t all wrong! There is plenty we can still rely on. And in some cases it is crucial that the public can be convinced of the reliability of the science, as we’ll see with vaccinations and climate change.
Any instance of fake science is potentially damaging.
When a paper is proved fraudulent, you might hope that it will simply be consigned to history and that the scientific community will dismiss the research and move on. But once fake science gets into the mainstream media, it can prove a lot harder to change people’s minds and convince them that the data was wrong.
Perhaps one of the most tragic examples of this was the publication, in 1998, of a claimed link between vaccines and autism in the prestigious medical journal the Lancet. The sample size was just twelve, and the first author had an undeclared financial interest in the results of the study. After further scrutiny the journal concluded that the results did not hold up and retracted the paper; investigations of fraud followed, and the lead author was eventually struck off the medical register.
Unfortunately, the story had already been taken up by the popular press and, despite multiple studies on the topic since – not one of which has found any link between autism and vaccines – the misinformation has proved very hard to correct. There has been a worrying decline in participation in some vaccination programmes, which are vital to stop deadly diseases spreading. A measles outbreak in Swansea between November 2012 and July 2013 resulted in 664 people being infected, eighty-eight being hospitalised and one death. In 2015 two dozen individuals who visited Disneyland in California fell ill with measles, and then exported it to three other states: Utah, Colorado and Washington.
One article called the fraudulent report ‘the most damaging medical hoax of the last 100 years’.7 The topic continues to crop up frequently in the news and, according to a 2015 report by the Pew Research Center, about one in ten Americans thinks vaccines are not safe.
There are plenty of other examples of what Stephan Lewandowsky at the University of Bristol calls ‘sticky’ misinformation. In the run-up to the most recent war in Iraq, one of the most prominent arguments for the invasion was that Iraq possessed weapons of mass destruction. Not only have these claims now been refuted, they have been officially rejected by bipartisan groups in the USA.
Again, however, it has proved very hard to convince the wider public otherwise. Indeed, one survey suggested that the number of people in the USA who believed Iraq had had weapons of mass destruction was actually slightly higher after the war than it was before it.8 (It is possible there was some cognitive dissonance going on here; people didn’t want to admit their country had gone to war without a good reason, so became more likely to believe in the justifications – that is pure speculation, however.)
And despite Obama having released his birth certificate in 2011, some 19 per cent of Americans remain convinced he wasn’t born in the USA. According to a 2015 CNN/ORC poll, 29 per cent of Americans (including 43 per cent of Republicans) also said they thought he was a Muslim.
Psychologists like Lewandowsky have been turning their experimental skills in recent years to the question of how we can best counter this kind of misinformation. One thing is clear: an official correction (even from the highest officials in a country) isn’t enough.
It is possible, of course, that the media simply doesn’t give as much coverage to the corrections as it did to the original misinformation (which presumably would be a much more headline-grabbing story), and so the corrected version of the story doesn’t reach as wide an audience.
However, there is also evidence that providing people with counter information not only doesn’t work, but sometimes backfires; people can be resistant to attempts to change their beliefs. A study in the USA found that presenting people with stories about the potential negative impact of climate change could actually result in Republican supporters becoming even less likely to rate climate change policies as important.9
Climate change is a particularly interesting example. It is one of the areas of science where there is near consensus amongst the experts (97 per cent of climate scientists are in agreement), yet views among the public in certain countries remain highly polarised. Interestingly, an Ipsos MORI Global Trends analysis in 2014 found that some of the highest levels of scepticism were in the USA, the UK and Australia. Why climate scepticism is so high in these particular countries isn’t clear, but it probably relates to the ideas circulating in their mainstream media.
A lot of psychology research has recently been directed towards trying to understand why some people are sceptical of climate change, and how they might be persuaded to change their minds.
One straightforward reason for people’s scepticism is that they are presented with a lot of counter-claims and counter-evidence by companies and politicians with a vested interest in spreading doubt about climate change (connections to the oil industry, for example). These parties use a variety of tactics to muddy the waters: funding a wide range of groups, including scientific-sounding organisations, to promote misinformation; claiming that the scientific evidence is contradictory; or giving the impression that scientists are divided on the topic. People then get the impression that climate change is not a clear-cut issue.
Another important factor appears to be what we think others believe. One study in Australia showed that only 283 out of 5,000 people (6 per cent) thought climate change wasn’t happening, but those 283 people estimated that around 43 per cent of people shared their climate scepticism – a serious overestimation. On the other hand, nearly 50 per cent of people agreed that man-made climate change was happening, and that group also estimated that only around 40 per cent of the population shared their view.10
One of the potential reasons for this discrepancy is that, as we don’t generally have direct access to what other people think on a mass scale, we typically rely on the impression we get from various forms of media. This in turn means that the way climate change is reported, often allowing for coverage of both sides of the argument, can skew our view of the topic.
Some media have acknowledged in recent years that their attempts to provide balance on the issue of climate change may have inadvertently contributed to the persistence of climate scepticism. If, in the interests of balance, a media outlet invites a climate sceptic every time they interview a climate scientist, then really the media is giving a false impression that the scientists, and the scientific evidence, are evenly divided. Last Week Tonight with John Oliver in the USA illustrated this by showing what a truly balanced debate should look like, inviting three climate sceptics and ninety-seven climate scientists onto the show.
So how can people be persuaded to change their minds?
Cambridge psychologist Sander van der Linden has been looking at what influences our opinions.11 He and his colleagues have found that people are more likely to accept the evidence of climate science if they first understand the extent of consensus among the scientific community.
However, the effect of highlighting this consensus can be undermined if people are also exposed to misinformation on the topic, such as the Oregon Global Warming Petition Project. This project, often cited by journalists and still making the rounds on social media, claimed that ‘over 31,000 American scientists have signed a petition stating that there is no scientific evidence that the human release of carbon dioxide will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere’.
Van der Linden and his colleagues found that they could counteract the misinformation by informing people that ‘some politically motivated groups use misleading tactics to try to convince the public that there is a lot of disagreement among scientists’. What was much more effective, however, was providing a detailed critique of the misinformation, for example highlighting that ‘some of the signatories are fraudulent, including Charles Darwin and members of the Spice Girls, that fewer than 1% of the signatories have a background in atmospheric/climate science’. This was effective across the political spectrum in the USA, and had a similar influence on Democrats, Republicans and independents.
The researchers argued that misinformation on climate change is like a virus that can spread through contact – as more people share it and start to believe it. They suggest that people can be ‘inoculated’ against the virus, by providing detailed information and rebuttals to counter the claims spread by what they call the ‘merchants of doubt’.
This is a promising piece of research in some ways as it shows campaigners how to communicate with people in order to change their perceptions. It also, however, highlights the difficulties of doing so if campaigners have to specifically refute all the instances of misinformation that people might have encountered.
We need to be able to rely on experts in order to make meaningful decisions, but knowing who to trust and what to believe isn’t always clear. The ease with which fake news can spread on social media should be a major concern for us all – no one wants to be taken in by outright lies or find out they’ve cast their vote based on completely false information.
Unfortunately, the outbreak of fake news has happened at a time when the mainstream media is littered with ‘fake experts’, and even scientific literature has its share of ‘fake science’. In the current climate, we certainly need to approach information we come across with a degree of scepticism – but if that scepticism is applied to topics like vaccines or climate science, as we’ve seen, it can lead to very negative political and social consequences.
There are some signs of hope; the scientific community is stepping up to the challenge of producing, and disseminating, more reliable evidence, and large organisations like Facebook, Google and Wikipedia are starting to take their responsibilities more seriously.
Ultimately, however, societies that value free speech will never be able to completely prevent fake news from spreading. So we are probably going to have to take some action ourselves and ensure that we check the reliability of our sources, assess the plausibility of shocking stories, and be aware of the ways others might be trying to influence and mislead us.