2
Entangled arguments: The dangers of ‘dysrationalia’
It is 17 June 1922, and two middle-aged men – one short and squat, the other tall and lumbering with a walrus moustache – are sitting on the beach in Atlantic City, New Jersey. They are Harry Houdini and Arthur Conan Doyle1 – and by the end of the evening, their friendship will never be the same again.
It ended as it began – with a séance. Spiritualism was all the rage among London’s wealthy elite, and Conan Doyle was a firm believer, attending five or six gatherings a week. He even claimed that his wife Jean had some psychic talent, and that she had started to channel a spirit guide, Phineas, who dictated where they should live and when they should travel.
Houdini, in contrast, was a sceptic, but he still claimed to have an open mind, and on a visit to England two years previously, he had contacted Conan Doyle to discuss his recent book on the subject. Despite their differences, the two men had quickly struck up a fragile friendship and Houdini had even agreed to visit Conan Doyle’s favourite medium, who claimed to channel ectoplasm through her mouth and vagina; he quickly dismissed her powers as simple stage magic. (I’ll spare you the details.)
Now Conan Doyle was in the middle of an American book tour, and he invited Houdini to join him in Atlantic City.
The visit had begun amicably enough. Houdini had helped to teach Conan Doyle’s boys to dive, and the group were resting at the seafront when Conan Doyle decided to invite Houdini up to his hotel room for an impromptu séance, with Jean as the medium. He knew that Houdini had been mourning the loss of his mother, and he hoped that his wife might be able to make contact with the other side.
And so they returned to the Ambassador Hotel, closed the curtains, and waited for inspiration to strike. Jean sat in a kind of trance with a pencil in one hand as the men sat by and watched. She then began to strike the table violently with her hands – a sign that the spirit had descended.
‘Do you believe in God?’ she asked the spirit, who responded by moving her hand to knock again on the table. ‘Then I shall make the sign of the cross.’
She sat with her pen poised over the writing pad, before her hand began to fly wildly across the page.
‘Oh, my darling, thank God, at last I’m through,’ the spirit wrote. ‘I’ve tried oh so often – now I am happy. Why, of course, I want to talk to my boy – my own beloved boy. Friends, thank you, with all my heart for this – you have answered the cry of my heart – and of his – God bless him.’
By the end of the séance, Jean had written around twenty pages in ‘angular, erratic script’. Her husband was utterly bewitched. ‘It was a singular scene – my wife with her hand flying wildly, beating the table while she scribbled at a furious rate, I sitting opposite and tearing sheet after sheet from the block as it was filled up.’
Houdini, in contrast, cut through the charade with a number of questions. Why had his mother, a Jew, professed herself to be a Christian? How had this Hungarian immigrant written her messages in perfect English – ‘a language which she had never learnt!’? And why did she not bother to mention that it was her birthday?
Houdini later wrote about his scepticism in an article for the New York Sun. It was the start of an increasingly public dispute between the two men, and their friendship never recovered before the escapologist’s death four years later.2
Even then, Conan Doyle could not let the matter rest. Egged on, perhaps, by his ‘spirit guide’ Phineas, he attempted to address and dismiss all of Houdini’s doubts in an article for The Strand magazine. His reasoning was more fanciful than any of his fictional works, not least in claiming that Houdini himself was in command of a ‘dematerialising and reconstructing force’ that allowed him to slip in and out of chains.
‘Is it possible for a man to be a very powerful medium all his life, to use that power continually, and yet never to realise that the gifts he is using are those which the world calls mediumship?’ he wrote. ‘If that be indeed possible, then we have a solution of the Houdini enigma.’
Meeting these two men for the first time, you would have been forgiven for expecting Conan Doyle to be the more critical thinker. A doctor of medicine and a best-selling writer, he exemplified the abstract reasoning that Terman was just beginning to measure with his intelligence tests. Yet it was the professional illusionist, a Hungarian immigrant whose education had ended at the age of twelve, who could see through the fraud.
Some commentators have wondered whether Conan Doyle was suffering from a form of madness. But let’s not forget that many of his contemporaries believed in spiritualism – including scientists such as the physicist Oliver Lodge, whose work on electromagnetism brought us the radio, and the naturalist Alfred Russel Wallace, a contemporary of Charles Darwin who had independently conceived the theory of natural selection. Both were formidable intellectual figures, but they remained blind to any evidence debunking the paranormal.
We’ve already seen how our definition of intelligence could be expanded to include practical and creative reasoning. But those theories do not explicitly examine our rationality, defined as our capacity to make the optimal decisions needed to meet our goals, given the resources we have to hand, and to form beliefs based on evidence, logic and sound reasoning.*
* Cognitive scientists such as Keith Stanovich describe two classes of rationality. Instrumental rationality is defined as ‘the optimisation of someone’s goal fulfilment’, or, less technically, as ‘behaving so that you get exactly what you want, given the resources available to you’. Epistemic rationality, meanwhile, concerns ‘how well your beliefs map onto the actual structure of the world’. By falling for fraudulent mediums, Conan Doyle was clearly lacking in the latter.
While decades of psychological research have documented humanity’s more irrational tendencies, it is only relatively recently that scientists have started to measure how that irrationality varies between individuals, and whether that variance is related to measures of intelligence. They are finding that the two are far from perfectly correlated: it is possible to have a very high SAT score that demonstrates good abstract thinking, for instance, while still performing badly on these new tests of rationality – a mismatch known as ‘dysrationalia’.
Conan Doyle’s life story – and his friendship with Houdini, in particular – offers the perfect lens through which to view this cutting-edge research.3 I certainly wouldn’t claim that any kind of faith is inherently irrational, but I am interested in the fact that fraudsters were able to exploit Conan Doyle’s beliefs to fool him time after time. He was simply blind to the evidence, including Houdini’s testimonies. Whatever your views on paranormal belief in general, he did not need to be quite so gullible at such great personal cost.
Conan Doyle is particularly fascinating because we know, through his writing, that he was perfectly aware of the laws of logical deduction. Indeed, he started to dabble in spiritualism at the same time that he first created Sherlock Holmes:4 he was dreaming up literature’s greatest scientific mind during the day, but failed to apply those skills of deduction at night. If anything, his intelligence seems to have only allowed him to come up with increasingly creative arguments to dismiss the sceptics and justify his beliefs; he was bound more tightly than Houdini in his chains.
Besides Conan Doyle, many other influential thinkers of the last hundred years may also have been afflicted by this form of the intelligence trap. Even Einstein – whose theories are often taken to be the pinnacle of human intelligence – may have suffered from this blinkered reasoning, leading him to waste the last twenty-five years of his career on a string of embarrassing failures.
Whatever your specific situation and interests, this research will explain why so many of us make mistakes that are blindingly obvious to all those around us – and continue to make those errors long after the facts have become apparent.
Houdini himself seems to have intuitively understood the vulnerability of the intelligent mind. ‘As a rule, I have found that the greater brain a man has, and the better he is educated, the easier it has been to mystify him,’ he once told Conan Doyle.5
A true recognition of dysrationalia – and its potential for harm – has taken decades to blossom, but the roots of the idea can be found in the now legendary work of two Israeli researchers, Daniel Kahneman and Amos Tversky, who identified many cognitive biases and heuristics (quick-and-easy rules of thumb) that can skew our reasoning.
One of their most striking experiments asked participants to spin a ‘wheel of fortune’, which landed on a number between 1 and 100, before considering general knowledge questions – such as estimating the number of African countries that are represented in the UN. The wheel of fortune should, of course, have had no influence on their answers – but the effect was quite profound. The lower the quantity on the wheel, the smaller their estimate – the arbitrary value had planted a figure in their mind, ‘anchoring’ their judgement.6
You have probably fallen for anchoring yourself many times while shopping in the sales. Suppose you are looking for a new TV. You had expected to pay around £100, but then you find a real bargain: a £200 item reduced to £150. Seeing the original price anchors your perception of what is an acceptable price to pay, meaning that you will go above your initial budget. If, on the other hand, you had not seen the original price, you would have probably considered it too expensive, and moved on.
You may also have been prey to the availability heuristic, which causes us to over-estimate certain risks based on how easily the dangers come to mind, thanks to their vividness. It’s the reason that many people are more worried about flying than driving – because reports of plane crashes are often so much more emotive, despite the fact that it is actually far more dangerous to step into a car.
There is also framing: the fact that you may change your opinion based on the way information is phrased. Suppose you are considering a medical treatment for 600 people with a deadly illness and it has a 1 in 3 success rate. You can be told either that ‘200 people will be saved using this treatment’ (the gain framing) or that ‘400 people will die using this treatment’ (the loss framing). The statements mean exactly the same thing, but people are more likely to endorse the statement when it is presented in the gain framing; they passively accept the facts as they are given to them without thinking what they really mean. Advertisers have long known this: it’s the reason that we are told that foods are 95 per cent fat free (rather than being told they are ‘5 per cent fat’).
Other notable biases include the sunk cost fallacy (our reluctance to give up on a failing investment even if we will lose more trying to sustain it), and the gambler’s fallacy – the belief that if the roulette wheel has landed on black, it is more likely to land on red the next time. The probability, of course, stays exactly the same. An extreme case of the gambler’s fallacy is said to have been observed in Monte Carlo in 1913, when the roulette wheel fell twenty-six times on black – and the visitors lost millions as the bets on red escalated. But it is not just witnessed in casinos; it may also influence family planning. Many parents falsely believe that if they have already produced a line of sons, then a daughter is more likely to come next. With this logic, they may end up with a whole football team of boys.
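To check that the odds really are unchanged, here is a minimal Python sketch – my own illustration, not taken from any study cited here – that simulates independent spins of a European wheel (18 red, 18 black, one green zero) and measures the chance of red immediately after a run of blacks.

```python
import random

def prob_red_after_black_streak(streak_len=5, trials=1_000_000, seed=1):
    """Estimate P(red) on the spin immediately following `streak_len` blacks in a row."""
    random.seed(seed)
    pockets = ['red'] * 18 + ['black'] * 18 + ['green']
    streak = 0            # current run of consecutive blacks
    after_streak = 0      # spins that immediately follow such a run
    reds_after_streak = 0
    for _ in range(trials):
        spin = random.choice(pockets)
        if streak >= streak_len:
            after_streak += 1
            reds_after_streak += (spin == 'red')
        streak = streak + 1 if spin == 'black' else 0
    return reds_after_streak / after_streak

if __name__ == '__main__':
    print(f'P(red | five blacks in a row) ~ {prob_red_after_black_streak():.3f}')
    print(f'P(red) on any single spin     = {18 / 37:.3f}')
```

Run it and both figures come out at roughly 0.486: the streak makes no difference to the next spin.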
Given these findings, many cognitive scientists divide our thinking into two categories: ‘system 1’, intuitive, automatic, ‘fast thinking’ that may be prey to unconscious biases; and ‘system 2’, ‘slow’, more analytical, deliberative thinking. According to this view – called dual-process theory – many of our irrational decisions come when we rely too heavily on system 1, allowing those biases to muddy our judgement.
Yet none of the early studies by Kahneman and Tversky had tested whether our irrationality varies from person to person. Are some people more susceptible to these biases, while others are immune, for instance? And how do those tendencies relate to our general intelligence? Conan Doyle’s story is surprising because we intuitively expect more intelligent people, with their greater analytical minds, to act more rationally – but as Tversky and Kahneman had shown, our intuitions can be deceptive.
If we want to understand why smart people do stupid things, these are vital questions.
During a sabbatical at the University of Cambridge in 1991, a Canadian psychologist called Keith Stanovich decided to address these issues head on. With a wife specialising in learning difficulties, he had long been interested in the ways that some mental abilities may lag behind others, and he suspected that rationality would be no different. The result was an influential paper introducing the idea of dysrationalia as a direct parallel to other disorders like dyslexia and dyscalculia.
It was a provocative concept – aimed as a nudge in the ribs to all the researchers examining bias. ‘I wanted to jolt the field into realising that it had been ignoring individual differences,’ Stanovich told me.
Stanovich emphasises that dysrationalia is not just limited to system 1 thinking. Even if we are reflective enough to detect when our intuitions are wrong, and override them, we may fail to use the right ‘mindware’ – the knowledge and attitudes that should allow us to reason correctly.7 If you grow up among people who distrust scientists, for instance, you may develop a tendency to ignore empirical evidence, while putting your faith in unproven theories.8 Greater intelligence wouldn’t necessarily stop you forming those attitudes in the first place, and it is even possible that your greater capacity for learning might then cause you to accumulate more and more ‘facts’ to support your views.9
Circumstantial evidence would suggest that dysrationalia is common. One study of the high-IQ society Mensa, for example, showed that 44 per cent of its members believed in astrology, and 56 per cent believed that the Earth had been visited by extra-terrestrials.10 But rigorous experiments, specifically exploring the link between intelligence and rationality, were lacking.
Stanovich has now spent more than two decades building on those foundations with a series of carefully controlled experiments.
To understand his results, we need some basic statistical theory. In psychology and other sciences, the strength of the relationship between two variables is usually expressed as a correlation coefficient whose magnitude lies between 0 and 1 (the sign simply indicates whether one variable rises or falls with the other). A perfect correlation would have a value of 1 – the two parameters would essentially be measuring the same thing; this is unrealistic for most studies of human health and behaviour (which are determined by so many variables), but many scientists would consider a ‘moderate’ correlation to lie between 0.4 and 0.59.11
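As a rough illustration of what that coefficient captures, here is a minimal Python sketch – illustrative data of my own, not Stanovich’s – in which two simulated traits share only part of their underlying variation and so end up with a ‘moderate’ correlation of around 0.5.

```python
import random
import statistics as stats

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by both standard deviations."""
    mx, my = stats.mean(xs), stats.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * stats.pstdev(xs) * stats.pstdev(ys))

if __name__ == '__main__':
    random.seed(42)
    shared = [random.gauss(0, 1) for _ in range(1000)]    # variation common to both traits
    trait_a = [s + random.gauss(0, 1) for s in shared]    # plus independent noise
    trait_b = [s + random.gauss(0, 1) for s in shared]
    print(f'r ~ {pearson_r(trait_a, trait_b):.2f}')       # roughly 0.5 with these noise levels
```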
Using these measures, Stanovich found that the relationships between rationality and intelligence were generally very weak. SAT scores showed correlations of just 0.1 and 0.19 with measures of the framing bias and anchoring, for instance.12 Intelligence also appeared to play only a tiny role in whether we are willing to delay immediate gratification for a greater reward in the future – our susceptibility to ‘temporal discounting’. In one test, the correlation with SAT scores was as small as 0.02. That’s an extraordinarily modest correlation for a trait that many might assume comes hand in hand with a greater analytical mind. The sunk cost bias also showed almost no relationship to SAT scores in another study.13
Gui Xue and colleagues at Beijing Normal University, meanwhile, have followed Stanovich’s lead, finding that the gambler’s fallacy is actually a little more common among the more academically successful participants in their sample.14 That’s worth remembering: when playing roulette, don’t think you are smarter than the wheel.
Even trained philosophers are vulnerable. Participants with PhDs in philosophy are just as likely to suffer from framing effects, for example, as everyone else – despite the fact that they should have been schooled in logical reasoning.15
You might at least expect that more intelligent people could learn to recognise these flaws. In reality, most people assume that they are less vulnerable than other people, and this is equally true of the ‘smarter’ participants. Indeed, in one set of experiments studying some of the classic cognitive biases, Stanovich found that people with higher SAT scores actually had a slightly larger ‘bias blind spot’ than people who were less academically gifted.16 ‘Adults with more cognitive ability are aware of their intellectual status and expect to outperform others on most cognitive tasks,’ Stanovich told me. ‘Because these cognitive biases are presented to them as essentially cognitive tasks, they expect to outperform on them as well.’
From my interactions with Stanovich, I get the impression that he is extremely cautious about promoting his findings, meaning he has not achieved the same kind of fame as Daniel Kahneman, say – but colleagues within his field believe that these theories could be truly game-changing. ‘The work he has done is some of the most important research in cognitive psychology – but it’s sometimes underappreciated,’ agreed Gordon Pennycook, a professor at the University of Regina, Canada, who has also specialised in exploring human rationality.
Stanovich has now refined and combined many of these measures into a single test, informally called the ‘rationality quotient’, or RQ. He emphasises that he does not wish to devalue intelligence tests – they ‘work quite well for what they do’ – but to improve our understanding of these other cognitive skills that may also determine our decision making, and place them on an equal footing with the existing measures of cognitive ability.
‘Our goal has always been to give the concept of rationality a fair hearing – almost as if it had been proposed prior to intelligence’, he wrote in his scholarly book on the subject.17 It is, he says, a ‘great irony’ that the thinking skills explored in Kahneman’s Nobel Prize-winning work are still neglected in our most well-known assessment of cognitive ability.18
After years of careful development and verification of the various sub-tests, the first iteration of the ‘Comprehensive Assessment of Rational Thinking’ was published at the end of 2016. Besides measures of the common cognitive biases and heuristics, it also included probabilistic and statistical reasoning skills – such as the ability to assess risk – that could improve our rationality, and questionnaires concerning contaminated mindware such as anti-science attitudes.
For a taster, consider the following question, which aims to test the ‘belief bias’. Your task is to consider whether the conclusion follows, logically, based only on the opening two premises.
All living things need water.
Roses need water.
Therefore, roses are living things.
What did you answer? According to Stanovich’s work, 70 per cent of university students believe that this is a valid argument. But it isn’t, since the first premise only says that ‘all living things need water’ – not that ‘all things that need water are living’.
If you still struggle to see why the argument is invalid, compare it to the following statements:
All insects need oxygen.
Mice need oxygen.
Therefore mice are insects.
The logic of the two arguments is exactly the same – but it is far easier to notice the flaw in the reasoning when the conclusion clashes with your existing knowledge. In the first example, however, you have to put aside your preconceptions and think, carefully and critically, about the specific statements at hand – to avoid thinking that the argument is right just because the conclusion makes sense with what you already know.19 That’s an important skill whenever you need to appraise a new claim.
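The same point can be made mechanically. In the small Python sketch below – a toy example of my own, mirroring the insect analogy above – both premises hold, yet the conclusion still fails, which is exactly what it means for the argument form to be invalid.

```python
# Argument form: 'All A need X; B needs X; therefore B is an A.'
needs_oxygen = {'ant', 'bee', 'beetle', 'mouse'}
insects = {'ant', 'bee', 'beetle'}

assert insects <= needs_oxygen    # Premise 1: all insects need oxygen - holds
assert 'mouse' in needs_oxygen    # Premise 2: mice need oxygen - holds
print('mouse' in insects)         # Conclusion 'mice are insects' - prints False
```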
When combining all these sub-tests, Stanovich found that the overall correlation with measures of general intelligence, such as SAT scores, was modest: around 0.47 on one test. Some overlap was to be expected, especially given the fact that several of these measures, such as probabilistic reasoning, would be aided by mathematical ability and other aspects of cognition measured by IQ tests and SATs. ‘But that still leaves enough room for the discrepancies between rationality and intelligence that lead to smart people acting foolishly,’ Stanovich said.
With further development, the rationality quotient could be used in recruitment to assess the quality of a potential employee’s decision making; Stanovich told me that he has already had significant interest from law firms and financial institutions, and executive head-hunters.
Stanovich hopes his test may also be a useful tool to assess how students’ reasoning changes over a school or university course. ‘This, to me, would be one of the more exciting uses,’ Stanovich said. With that data, you could then investigate which interventions are most successful at cultivating more rational thinking styles.
While we wait to see that work in action, cynics may question whether RQ really does reflect our behaviour in real life. After all, the IQ test is sometimes accused of being too abstract. Is RQ – based on artificial, imagined scenarios – any different?
Some initial answers come from the work of Wändi Bruine de Bruin at Leeds University. Inspired by Stanovich’s research, her team first designed their own scale of ‘adult decision-making competence’, consisting of seven tasks measuring biases like framing, measures of risk perception, and the tendency to fall for the sunk cost fallacy (whether you are likely to continue with a bad investment or not). The team also examined over-confidence by asking the subjects some general knowledge questions, and then asking them to gauge how sure they were that each answer was correct.
Unlike many psychological studies, which tend to use university students as guinea pigs, Bruine de Bruin’s experiment examined a diverse sample of people, aged eighteen to eighty-eight, with a range of educational backgrounds – allowing her to be sure that any results reflected the population as a whole.
As Stanovich has found with his tests, the participants’ decision-making skills were only moderately linked to their intelligence; academic success did not necessarily make them more rational decision makers.
But Bruine de Bruin then decided to see how both measures were related to their behaviours in the real world. To do so, she asked participants to declare how often they had experienced various stressful life events, from the relatively trivial (such as getting sunburnt or missing a flight), to the serious (catching an STD or cheating on your partner) and the downright awful (being put in jail).20 Although the measures of general intelligence did seem to have a small effect on these outcomes, the participants’ rationality scores were about three times more important in determining their behaviour.
These tests clearly capture a more general tendency to be a careful, considered thinker that was not reflected in more standard measures of cognitive ability; you can be intelligent and irrational – as Stanovich had found – and this has serious consequences for your life.
Bruine de Bruin’s findings can offer us some insights into other peculiar habits of intelligent people. One study from the London School of Economics, published in 2010, found that people with higher IQs tend to consume more alcohol and may be more likely to smoke or take illegal drugs, for instance – supporting the idea that intelligence does not necessarily help us to weigh up short-term benefits against the long-term consequences.21
People with high IQs are also just as likely to face financial distress, such as missing mortgage payments, bankruptcy or credit card debt. Around 14 per cent of people with an IQ of 140 had reached their credit limit, compared to 8.3 per cent of people with an average IQ of 100. Nor were they any more likely to put money away in long-term investments or savings; their accumulated wealth each year was just a tiny fraction greater. These facts are particularly surprising, given that more intelligent (and better educated) people do tend to have more stable jobs with higher salaries, which suggests that their financial distress is a consequence of their decision making, rather than, say, a simple lack of earning power.22
The researchers suggested that more intelligent people veer close to the ‘financial precipice’ in the belief that they will be better able to deal with the consequences afterwards. Whatever the reason, the results suggest that smarter people are not investing their money in the more rational manner that economists might anticipate; it is another sign that intelligence does not necessarily lead to better decision making.
As one vivid example, consider the story of Paul Frampton, a brilliant physicist at the University of North Carolina whose work ranged from a new theory of dark matter (the mysterious, invisible mass holding our universe together) to the prediction of a subatomic particle called the ‘axigluon’ – a theory that is inspiring experiments at the Large Hadron Collider.
In 2011, however, he began online dating, and soon struck up a friendship with a former bikini model named Denise Milani. In January the next year, she invited him to visit her on a photoshoot in La Paz, Bolivia. When he arrived, however, he found a message – she’d had to leave for Argentina instead. But she’d left her bag. Could he pick it up and bring it to her?
Alas, he arrived in Argentina but there was still no sign of Milani. Losing patience, he decided to return to the USA, checking in her suitcase along with his own luggage. A few minutes later, an announcement called him to meet the airport staff at his gate. Unless you suffer from severe dysrationalia yourself, you can probably guess what happened next. He was subsequently charged with transporting two kilograms of cocaine.
Fraudsters, it turned out, had been posing as Milani – who really is a model, but knew nothing of the scheme and had never been in touch with Frampton. They would presumably have intercepted the bag once he had carried it over the border.
Frampton had been warned about the relationship. ‘I thought he was out of his mind, and I told him that,’ John Dixon, a fellow physicist and friend of Frampton’s, said in the New York Times. ‘But he really believed that he had a pretty young woman who wanted to marry him.’23
We can’t really know what was going through Frampton’s mind. Perhaps he suspected that ‘Milani’ was involved in some kind of drug smuggling operation but thought that this was a way of proving himself to her. His love for her seems to have been real, though; he even tried to message her in prison, after the scam had been uncovered. For some reason, however, he just hadn’t been able to weigh up the risks, and had allowed himself to be swayed by impulsive, wishful thinking.
If we return to that séance in Atlantic City, Arthur Conan Doyle’s behaviour would certainly seem to fit neatly with theories of dysrationalia, with compelling evidence that paranormal and superstitious beliefs are surprisingly common among the highly intelligent.
According to a survey of more than 1,200 participants, people with college degrees are just as likely to endorse the existence of UFOs as those with less education, and they are even more credulous of extrasensory perception and ‘psychic healing’.24 (Education level is an imperfect measure of intelligence, but it gives a general idea that the abstract thinking and knowledge required to enter university do not translate into more rational beliefs.)
Needless to say, all of the phenomena above have been repeatedly disproven by credible scientists – yet it seems that many smart people continue to hold on to them regardless. According to dual-process (fast/slow thinking) theories, this could just be down to cognitive miserliness. People who believe in the paranormal rely on their gut feelings and intuitions to think about the sources of their beliefs, rather than reasoning in an analytical, critical way.25
This may be true for many people with vaguer, less well-defined beliefs, but there are some particular elements of Conan Doyle’s biography that suggest his behaviour can’t be explained quite so simply. Often, it seemed as if he was using analytical reasoning from system 2 to rationalise his opinions and dismiss the evidence. Rather than thinking too little, he was thinking too much.
Consider how Conan Doyle was once infamously fooled by two schoolgirls. In 1917 – a few years before he met Houdini – sixteen-year-old Elsie Wright and nine-year-old Frances Griffiths claimed to have photographed a population of fairies frolicking around a stream in Cottingley, West Yorkshire. Through a contact at the local Theosophical Society, the pictures eventually landed in Conan Doyle’s hands.
Many of his acquaintances were highly sceptical, but he fell for the girls’ story hook, line and sinker.26 ‘It is hard for the mind to grasp what the ultimate results may be if we have actually proved the existence upon the surface of this planet of a population which may be as numerous as the human race,’ he wrote in The Coming of Fairies.27 In reality, they were cardboard cut-outs, taken from Princess Mary’s Giftbook28 – a volume that had also included some of Conan Doyle’s own writing.29
What’s fascinating is not so much the fact that he fell for the fairies in the first place, but the extraordinary lengths he went to in order to explain away any doubts. If you look at the photographs carefully, you can even see hatpins holding one of the cut-outs together. But where others saw pins, he saw the gnome’s belly button – proof that fairies are linked to their mothers in the womb with an umbilical cord. Conan Doyle even tried to draw on modern scientific discoveries to explain the fairies’ existence, turning to electromagnetic theory to claim that they were ‘constructed in material which threw out shorter or longer vibrations’, rendering them invisible to humans.
As Ray Hyman, a professor of psychology at the University of Oregon, puts it: ‘Conan Doyle used his intelligence and cleverness to dismiss all counter-arguments . . . [He] was able to use his smartness to outsmart himself.’30
The use of system 2 ‘slow thinking’ to rationalise our beliefs even when they are wrong leads us to uncover the most important and pervasive form of the intelligence trap, with many disastrous consequences; it can explain not only the foolish ideas of people such as Conan Doyle, but also the huge divides in political opinion about issues such as gun crime and climate change.
So what’s the scientific evidence?
The first clues came from a series of classic studies from the 1970s and 1980s, when David Perkins of Harvard University asked students to consider a series of topical questions, such as: ‘Would a nuclear disarmament treaty reduce the likelihood of world war?’ A truly rational thinker should consider both sides of the argument, but Perkins found that more intelligent students were no more likely to consider any alternative points of view. Someone in favour of nuclear disarmament, for instance, might not explore the issue of trust: whether we could be sure that all countries would honour the agreement. Instead, they had simply used their abstract reasoning skills and factual knowledge to offer more elaborate justifications of their own point of view.31
This tendency is sometimes called the confirmation bias, though several psychologists – including Perkins – prefer to use the more general term ‘myside bias’ to describe the many different kinds of tactics we may use to support our viewpoint and diminish alternative opinions. Even student lawyers, who are explicitly trained to consider the other side of a legal dispute, performed very poorly.
Perkins later considered this to be one of his most important discoveries.32 ‘Thinking about the other side of the case is a perfect example of a good reasoning practice,’ he said. ‘Why, then, do student lawyers with high IQs and training in reasoning that includes anticipating the arguments of the opposition prove to be as subject to confirmation bias or myside bias, as it has been called, as anyone else? To ask such a question is to raise fundamental issues about conceptions of intelligence.’33
Later studies only replicated this finding, and this one-sided way of thinking appears to be a particular problem for the issues that speak to our sense of identity. Scientists today use the term ‘motivated reasoning’ to describe this kind of emotionally charged, self-protective use of our minds. Besides the myside/confirmation bias that Perkins examined (where we preferentially seek and remember the information that confirms our view), motivated reasoning may also take the form of a disconfirmation bias – a kind of preferential scepticism that tears down alternative arguments. And, together, they can lead us to become more and more entrenched in our opinions.
Consider an experiment by Dan Kahan at Yale Law School, which examined attitudes to gun control. He told his participants that a local government was trying to decide whether to ban firearms in public – and it was unsure whether this would increase or decrease crime rates. So they had collected data on cities with and without these bans, recording how many in each group had seen crime rise or fall over one year.
Kahan also gave his participants a standard numeracy test, and questioned them on their political beliefs.
Try it for yourself. Given this data, do the bans work?
Kahan had deliberately engineered the numbers to be deceptive at first glance, suggesting a huge decrease in crime in the cities carrying the ban. To get to the correct answer, you need to consider the ratios, which show that around 25 per cent of the cities with the ban had witnessed an increase in crime, compared with around 16 per cent of those without. The ban did not work, in other words.
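Here is a minimal Python sketch of that calculation. The counts are illustrative only – chosen to match the roughly 25 per cent and 16 per cent figures above, not Kahan’s actual data – but they reproduce the trap: the raw totals flatter the ban, while the ratios tell the opposite story.

```python
def increase_rate(crime_up, crime_down):
    """Proportion of cities in a group where crime went up."""
    return crime_up / (crime_up + crime_down)

if __name__ == '__main__':
    # Illustrative counts only, consistent with the percentages quoted in the text.
    ban = {'crime_up': 75, 'crime_down': 225}      # cities that banned carrying guns
    no_ban = {'crime_up': 16, 'crime_down': 84}    # cities that did not
    # At a glance, 225 'crime fell' cities with the ban versus 84 without looks like success...
    print(f'Ban:    {increase_rate(**ban):.0%} of cities saw crime rise')
    print(f'No ban: {increase_rate(**no_ban):.0%} of cities saw crime rise')
```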
As you might hope, the more numerate participants were more likely to come to that conclusion – but only if they were more conservative, Republican voters who were already more likely to oppose gun control. If they were liberal, Democrat voters, the participants skipped the explicit calculation, and were more likely to go with their (incorrect) initial hunch that the ban had worked, no matter what their intelligence.
In the name of fairness, Kahan also conducted the same experiment, but with the data reversed, so that the data supported the ban. Now, it was the numerate liberals who came to the right answer – and the numerate conservatives who were more likely to be wrong. Overall, the most numerate participants were around 45 per cent more likely to read the data correctly if it conformed to their expectations.
The upshot, according to Kahan and other scientists studying motivated reasoning, is that smart people do not apply their superior intelligence fairly, but instead use it ‘opportunistically’ to promote their own interests and protect the beliefs that are most important to their identities. Intelligence can be a tool for propaganda rather than truth-seeking.34
It’s a powerful finding, capable of explaining the enormous polarisation on issues such as climate change.35 The scientific consensus is that carbon emissions from human sources are leading to global warming, and people with liberal politics are more likely to accept this message if they have better numeracy skills and basic scientific knowledge.36 That makes sense, since these people should also be more likely to understand the evidence. But among free-market capitalists, the opposite is true: the more scientifically literate and numerate they are, the more likely they are to reject the scientific consensus and to believe that claims of climate change have been exaggerated.
The same polarisation can be seen for people’s views on vaccination,37 fracking38 and evolution.39 In each case, greater education and intelligence simply helps people to justify the beliefs that match their political, social or religious identity. (To be absolutely clear, overwhelming evidence shows that vaccines are safe and effective, carbon emissions are changing the climate, and evolution is true.)
There is even some evidence that, thanks to motivated reasoning, exposure to the opposite point of view may actually backfire; not only do people reject the counter-arguments, but their own views become even more deeply entrenched as a result. In other words, an intelligent person with an inaccurate belief system may become more ignorant after having heard the actual facts. We could see this with Republicans’ opinions about Obamacare in 2009 and 2010: people with greater intelligence were more likely to believe claims that the new system would bring about Orwellian ‘death panels’ to decide who lived and died, and their views were only reinforced when they were presented with evidence that was meant to debunk the myths.40
Kahan’s research has primarily examined the role of motivated reasoning in political decision making – where there may be no right or wrong answer – but he says it may stretch to other forms of belief. He points to a study by Jonathan Koehler, then at the University of Texas at Austin, who presented parapsychologists and sceptical scientists with data on two (fictional) experiments into extrasensory perception.
The participants should have objectively measured the quality of the papers and the experimental design. But Koehler found that they often came to very different conclusions, depending on whether the results of the studies agreed or disagreed with their own beliefs in the paranormal.41
When we consider the power of motivated reasoning, Conan Doyle’s belief in fraudulent mediums seems less paradoxical. His very identity had come to rest on his experiments with the paranormal. Spiritualism was the foundation of his relationship with his wife, and many of his friendships; he had invested substantial sums of money in a spiritualist church42 and written more than twenty books and pamphlets on the subject. Approaching old age, his beliefs also provided him with the comforting certainty of the afterlife. ‘It absolutely removes all fear of death,’ he said, and the belief connected him with those he had already lost43 – surely two of the strongest motivations imaginable.
All of this would seem to chime with research showing that beliefs may first arise from emotional needs – and it is only afterwards that the intellect kicks in to rationalise the feelings, however bizarre they may be.
Conan Doyle certainly claimed to be objective. ‘In these 41 years, I never lost any opportunity of reading and studying and experimenting on this matter,’44 he boasted towards the end of his life. But he was only looking for the evidence that supported his point of view, while dismissing everything else.45
It did not matter that this was the mind that created Sherlock Holmes – the ‘perfect reasoning and observing machine’. Thanks to motivated reasoning, Conan Doyle could simply draw on that same creativity to explain away Houdini’s scepticism. And when he saw the photos of the Cottingley Fairies, he felt he had found the proof that would convince the world of other psychic phenomena. In his excitement, his mind engineered elaborate scientific explanations – without seriously questioning whether it was just a schoolgirl joke.
When they confessed decades after Conan Doyle’s death, the girls revealed that they simply hadn’t bargained for grown-ups’ desire to be fooled. ‘I never even thought of it as being a fraud,’ one of the girls, Frances Griffiths, revealed in a 1985 interview. ‘It was just Elsie and I having a bit of fun and I can’t understand to this day why they were taken in – they wanted to be taken in.’46
Following their increasingly public disagreement, Houdini lost all respect for Conan Doyle; he had started the friendship believing that the writer was an ‘intellectual giant’ and ended it by writing that ‘one must be half-witted to believe some of these things’. But given what we know about motivated reasoning, the very opposite may be true: only an intellectual giant could be capable of believing such things.*
* In his book The Rationality Quotient, Keith Stanovich points out that George Orwell famously came to much the same conclusion when describing various forms of nationalism, writing that: ‘There is no limit to the follies that can be swallowed if one is under the influence of feelings of this kind . . . One has to belong to the intelligentsia to believe things like that: no ordinary man could be such a fool.’
Many other great intellects may have lost their minds thanks to blinkered thinking. Their mistakes may not involve ghosts and fairies, but they still led to years of wasted effort and disappointment as they toiled to defend the indefensible.
Consider Albert Einstein, whose name has become a synonym for genius. While still working as a young patent clerk in 1905, he outlined the foundations for quantum mechanics, special relativity, and the equation for mass–energy equivalence (E = mc²) – the concept for which he is most famous.47 A decade later he would announce his theory of general relativity – tearing through Isaac Newton’s laws of gravity.
But his ambitions did not stop there. For the remainder of his life, he planned to build an even grander, all-encompassing understanding of the universe that melded the forces of electromagnetism and gravity into a single, unified theory. ‘I want to know how God created this world. I am not interested in this or that phenomenon, in the spectrum of this or that element, I want to know his thoughts’, he had written previously – and this was his attempt to capture those thoughts in their entirety.
After a period of illness in 1928, he thought he had done it. ‘I have laid a wonderful egg . . . Whether the bird emerging from this will be viable and long-lived lies in the lap of the gods’, he wrote. But the gods soon killed that bird, and many more dashed hopes would follow over the next twenty-five years, with further announcements of a new Unified Theory, only for them all to fall like a dead weight. Soon before his death, Einstein had to admit that ‘most of my offspring end up very young in the graveyard of disappointed hopes’.
Einstein’s failures were no surprise to those around him, however. As his biographer, the physicist Hans Ohanian, wrote in his book Einstein’s Mistakes: ‘Einstein’s entire program was an exercise in futility . . . It was obsolete from the start.’ The more he invested in the theory, however, the more reluctant he was to let it go. Freeman Dyson, a colleague at Princeton, was apparently so embarrassed by Einstein’s foggy thinking that he spent eight years deliberately avoiding him on campus.
The problem was that Einstein’s famous intuition – which had served him so well in 1905 – had led him seriously astray, and he had become deaf and blind to anything that might disprove his theories. He ignored evidence of nuclear forces that were incompatible with his grand idea, for instance, and came to despise the results of quantum theory – a field he had once helped to establish.48 At scientific meetings, he would spend all day trying to come up with increasingly intricate counter-examples to disprove his rivals, only to have been disproved by the evening.49 He simply ‘turned his back on experiments’ and tried to ‘rid himself of the facts’, according to his colleague at Princeton, Robert Oppenheimer.50
Einstein himself realised as much towards the end of his life. ‘I must seem like an ostrich who forever buries its head in the relativistic sand in order not to face the evil quanta’, he once wrote to his friend, the quantum physicist Louis de Broglie. But he continued on his fool’s errand, and even on his deathbed, he scribbled pages of equations to support his erroneous theories, as the last embers of his genius faded. All of which sounds a lot like the sunk cost fallacy exacerbated by motivated reasoning.
The same stubborn approach can be found in many of his other ideas. Having supported communism, he continually turned a blind eye to the failings of the USSR, for instance.51
Einstein, at least, had not left his domain of expertise. But this single-minded determination to prove oneself right may be particularly damaging when scientists stray outside their usual territory, a fact that was noted by the psychologist Hans Eysenck. ‘Scientists, especially when they leave the particular field in which they are specialized, are just as ordinary, pig-headed, and unreasonable as everybody else’, he wrote in the 1950s. ‘And their unusually high intelligence only makes their prejudices all the more dangerous.’52 The irony is that Eysenck himself came to believe theories of the paranormal, showing the blinkered analysis of evidence he claimed to deplore.
Some science writers have even coined a term – Nobel Disease – to describe the unfortunate tendency of Nobel Prize winners to embrace dubious positions on various issues. The most notable case is, of course, Kary Mullis, the famous biochemist with the strange conspiracy theories whom we met in the introduction. His autobiography, Dancing Naked in the Mind Field, is almost a textbook in the contorted explanations the intelligent mind can conjure to justify its preconceptions.53
Other examples include Linus Pauling, who discovered the nature of chemical bonds between atoms, yet spent decades falsely claiming that vitamin supplements could cure cancer;54 and Luc Montagnier, who helped discover the HIV virus, but who has since espoused some bizarre theories that even highly diluted DNA can cause structural changes to water, leading it to emit electromagnetic radiation. Montagnier believes that this phenomenon can be linked to autism, Alzheimer’s disease and various serious conditions, but many other scientists reject these claims, leading to a petition of 35 other Nobel laureates asking for him to be removed from his position in an AIDS research centre.55
Although we may not be working on a Grand Unified Theory, there is a lesson here for all of us. Whatever our profession, the toxic combination of motivated reasoning and the bias blind spot could still lead us to justify prejudiced opinions about those around us, pursue failing projects at work, or rationalise a hopeless love affair.
As two final examples, let’s look at two of history’s greatest innovators: Thomas Edison and Steve Jobs.
With more than a thousand patents to his name, Thomas Edison was clearly in possession of an extraordinarily fertile mind. But once he had conceived an idea, he struggled to change his mind – as shown in the ‘battle of the currents’.
In the late 1880s, having produced the first working electric lightbulb, Edison sought a way to power America’s homes. His idea was to set up a power grid using a steady ‘direct current’ (DC), but his rival George Westinghouse had found a cheaper means of transmitting electricity with the alternating current (AC) we use today. Whereas DC flows steadily at a single voltage, AC rapidly reverses direction, and it can easily be stepped up to very high voltages with transformers – which is what keeps it from losing so much energy over long distances.
Edison claimed that AC was simply too dangerous, since it more easily leads to death by electrocution. Although this concern was legitimate, the risk could be reduced with proper insulation and regulations, and the economic arguments were just too strong to ignore: it really was the only feasible way to provide electricity to the mass market.
The rational response would have been to try to capitalise on the new technology and improve its safety, rather than continuing to pursue DC. One of Edison’s own engineers, Nikola Tesla, had already told him as much. But rather than taking his advice, Edison dismissed Tesla’s ideas and even refused to pay him for his research into AC, leading Tesla to take his ideas to Westinghouse instead.56
Refusing to admit defeat, Edison engaged in an increasingly bitter PR war to try to turn public opinion against AC. It began with macabre public demonstrations, electrocuting stray dogs and horses. And when Edison heard that a New York court was investigating the possibility of using electricity for executions, he saw yet another opportunity to prove that point, as he advised the court on the development of the electric chair – in the hope that AC would be forever associated with death. It was a shocking moral sacrifice for someone who had once declared that he would ‘join heartily in an effort to totally abolish capital punishment’.57
You may consider these to be simply the actions of a ruthless businessman, but the battle really was futile. As one journal stated in 1889: ‘It is impossible now that any man, or body of men, should resist the course of alternating current development . . . Joshua may command the sun to stand still, but Mr Edison is not Joshua.’58 By the 1890s, he had to admit defeat, eventually turning his attention to other projects.
The historian of science Mark Essig writes that ‘the question is not so much why Edison’s campaign failed as why he thought it might succeed’.59 But an understanding of cognitive errors such as the sunk cost effect, the bias blind spot and motivated reasoning helps to explain why such a brilliant mind may persuade itself to continue down such a disastrous path.
The co-founder of Apple, Steve Jobs, was similarly a man of enormous intelligence and creativity, yet he too sometimes suffered from a dangerously skewed perception of the world. According to Walter Isaacson’s official biography, his acquaintances described a ‘reality distortion field’ – ‘a confounding mélange of charismatic rhetorical style, indomitable will, and eagerness to bend any fact to fit the purpose at hand’, in the words of his former colleague Andy Hertzfeld.
That single-minded determination helped Jobs to revolutionise technology, but it also backfired in his personal life, particularly after he was diagnosed with pancreatic cancer in 2003. Ignoring his doctor’s advice, he instead opted for quack cures such as herbal remedies, spiritual healing and a strict fruit juice diet. According to all those around him, Jobs had convinced himself that his cancer was something he could cure himself, and his amazing intelligence seems to have allowed him to dismiss any opinions to the contrary.60
By the time he finally underwent surgery, the cancer had progressed too far to be treatable, and some doctors believe Jobs might still be alive today if he had simply followed medical advice. In each case, we see that greater intellect is used for rationalisation and justification, rather than logic and reason.
We have now seen three broad reasons why an intelligent person may act stupidly. They may lack elements of creative or practical intelligence that are essential for dealing with life’s challenges; they may suffer from ‘dysrationalia’, using biased intuitive judgements to make decisions; and they may use their intelligence to dismiss any evidence that contradicts their views thanks to motivated reasoning.
Harvard University’s David Perkins described this latter form of the intelligence trap to me best when he said it was like ‘putting a moat around a castle’. The writer Michael Shermer, meanwhile, describes it as creating ‘logic-tight compartments’ in our thinking. But I personally prefer to think of it as a runaway car, without the right steering or navigation to correct its course. As Descartes had originally put it: ‘those who go forward but very slowly can get further, if they always follow the right road, than those who are in too much of a hurry and stray off it’.
Whatever metaphor you choose, the question of why we evolved this way is a serious puzzle for evolutionary psychologists. When they build their theories of human nature, they expect common behaviours to have had a clear benefit to our survival. But how could it ever be an advantage to be intelligent but irrational?
One compelling answer comes from the recent work of Hugo Mercier at the French National Centre for Scientific Research, and Dan Sperber at the Central European University in Budapest. ‘I think it’s now so obvious that we have the myside bias, that psychologists have forgotten how weird it is,’ Mercier told me in an interview. ‘But if you look at it from an evolutionary point of view, it’s really maladaptive.’
It is now widely accepted that human intelligence evolved, at least in part, to deal with the cognitive demands of managing more complex societies. Evidence comes from the archaeological record, which shows that our skull size did indeed grow as our ancestors started to live in bigger groups.61 We need brainpower to keep track of others’ feelings, to know who you can trust, who will take advantage and who you need to keep sweet. And once language evolved, we needed to be eloquent, to be able to build support within the group and bring others to our way of thinking. Those arguments didn’t need to be logical to bring us those benefits; they just had to be persuasive. And that subtle difference may explain why irrationality and intelligence often go hand in hand.62
Consider motivated reasoning and the myside bias. If human thought is primarily concerned with truth-seeking, we should weigh up both sides of an argument carefully. But if we just want to persuade others that we’re right, then we will seem more convincing if we pull together as much evidence for our view as possible. Conversely, to avoid being duped ourselves, we need to be especially sceptical of others’ arguments, and so we should pay extra attention to interrogating and challenging any evidence that disagrees with our own beliefs – just as Kahan had shown.
Biased reasoning isn’t just an unfortunate side effect of our increased brainpower, in other words – it may have been its raison d’être.
In the face-to-face encounters of our ancestors’ small gatherings, good arguments should have counteracted the bad, enhancing the overall problem solving to achieve a common goal; our biases could be tempered by others. But Mercier and Sperber say these mechanisms can backfire if we live in a technological and social bubble, and miss the regular argument and counterargument that could correct our biases. As a result, we simply accumulate more information to accommodate our views.
Before we learn how to protect ourselves from those errors, we must first explore one more form of the intelligence trap – ‘the curse of expertise’, which describes the ways that acquired knowledge and professional experience (as opposed to our largely innate general intelligence) can also backfire. As we shall see in one of the FBI’s most notorious mix-ups, you really can know too much.