3
The curse of knowledge: The beauty and fragility of the expert mind
One Friday evening in April 2004, the lawyer Brandon Mayfield made a panicked call to his mother. ‘If we were to somehow suddenly disappear . . . if agents of the government secretly sweep in and arrest us, I would like your assurance that you could come to Portland on the first flight and take the kids back to Kansas with you,’ he told her.1
An attorney and former officer in the US Army, Mayfield was not normally prone to paranoia, but America was still reeling from the fallout of 9/11. As a Muslim convert, married to an Egyptian wife, Mayfield sensed an atmosphere of ‘hysteria and islamophobia’, and a series of strange events now led him to suspect that he was the target of investigation.
One day his wife Mona had returned home from work to find that the front door was double-locked with a top bolt, a precaution the family never normally used. Another day, Mayfield walked into his office to find a dusty footprint on his desk, beneath a loose ceiling tile, even though no one should have entered the room overnight. On the road, meanwhile, a mysterious car, driven by a stocky fifty- or sixty-year-old, seemed to have followed him to and from the mosque.
Given the political climate, he feared he was under surveillance. ‘There was this realisation that it could be a secret government agency,’ he told me in an interview. By the time Mayfield made that impassioned phone call to his mother, he said, he had begun to feel an ‘impending doom’ about his fate, and he was scared about what that would mean for his three children.
At around 9:45 a.m. on 6 May, those fears were realised with three loud thumps on his office door. Two FBI agents had arrived to arrest Mayfield in connection with the horrendous Madrid bombings, which had killed 192 people and injured around two thousand on 11 March that year. His hands were cuffed behind his back, and he was bundled into a car and taken to the local courthouse.
He pleaded that he knew nothing of the attacks; when he first heard the news he had been shocked by the ‘senseless violence’, he said. But FBI agents claimed to have found his fingerprint on a blue shopping bag containing detonators, left in a van in Madrid. The FBI declared it was a ‘100% positive match’; there was no chance they were wrong.
As he describes in his book, Improbable Cause, Mayfield was held in a cell while the FBI put together a case to present to the Grand Jury. He was handcuffed and shackled in leg irons and belly chains, and subjected to frequent strip searches.
His lawyers painted a bleak picture: if the Grand Jury decided he was involved in the attacks, he could be shipped to Guantanamo Bay. As the judge stated in his first hearing, fingerprints are considered the gold standard of forensic evidence: people had previously been convicted for murder based on little more than a single print. The chances of two people sharing the same fingerprint were considered to be billions to one.2
Mayfield tried to conceive how his fingerprint could have appeared on a plastic carrier bag more than 5,400 miles away – across the entire American continent and Atlantic Ocean. But there was no way. His lawyers warned that the very act of denying such a strong line of evidence could mean that he was indicted for perjury. ‘I thought I was being framed by unnamed officials – that was the immediate thought,’ Mayfield told me.
His lawyers eventually persuaded the court to employ an independent examiner, Kenneth Moses, to re-analyse the prints. Like those of the FBI’s own experts, Moses’ credentials were impeccable. He had served with the San Francisco Police Department for twenty-seven years, and had garnered many awards and honours during his service.3 It was Mayfield’s last chance, and on 19 May – after nearly two weeks in prison – he returned to the tenth floor of the courthouse, to hear Moses give his testimony by video conference.
As Moses’ testimony unfolded, Mayfield’s worst fears were confirmed. ‘I compared the latent prints to the known prints that were submitted on Brandon Mayfield,’ Moses told the court. ‘And I concluded that the latent print is the left index finger of Mr Mayfield.’4
Little did he know that a remarkable turn of events taking place on the other side of the Atlantic Ocean would soon save him. That very morning, the Spanish National Police had identified an Algerian man, Ouhnane Daoud, connected with the bombings. Not only could they show that his finger better fitted the print previously matched to Mayfield – including some ambiguous areas dismissed by the FBI – but his thumb also matched an additional print found on the bag. He was definitely their man.
Mayfield was freed the next day, and by the end of the month the FBI had been forced to issue a humiliating public apology.
What went wrong? Of all the potential explanations, a simple lack of skill cannot be the answer: the FBI’s forensics teams are considered to be the best in the world.5 Indeed, a closer look at the FBI’s mistakes reveals that they did not occur despite its examiners’ knowledge – they may have occurred because of it.
The previous chapters have examined how general intelligence – the capacity for abstract reasoning measured by IQ or SATs – can backfire. The emphasis here should be on the word general, though, and you might hope we could mitigate those errors through more specialised knowledge and professional expertise, cultivated through years of experience. Unfortunately, the latest research shows that these can also lead us to err in unexpected ways.
These discoveries should not be confused with some of the vaguer criticisms that academics (such as Paul Frampton) live in an ‘ivory tower’ isolated from ‘real life’. Instead, the latest research highlights dangers in the exact situations where most people would hope that experience protects you from mistakes.
If you are undergoing heart surgery, flying across the globe or looking to invest an unexpected windfall, you want to be in the care of a doctor, pilot or accountant with a long and successful career behind them. If you want an independent witness to verify a fingerprint match in a high-profile case, you choose Moses. Yet there are now various social, psychological and neurological reasons that explain why expert judgement sometimes fails at the times when it is most needed – and the sources of these errors are intimately entwined with the very processes that normally allow experts to perform so well.
‘A lot of the cornerstones, the building blocks that make the expert an expert and allow them to do their job efficiently and quickly, also entail vulnerabilities: you can’t have one without the other,’ explains the cognitive neuroscientist Itiel Dror at University College London, who has been at the forefront of much of this research. ‘The more expert you are, the more vulnerable you are in many ways.’
Clearly, experts will still be right most of the time, but when they are wrong, the results can be disastrous, and a clear understanding of the overlooked potential for expert error is essential if we are to prevent those failings.
As we shall soon discover, those frailties blinded the FBI examiners’ judgement – bringing about the string of bad decisions that led to Mayfield’s arrest. In aviation they have led to the unnecessary deaths of pilots and civilians, and in finance they contributed to the 2008 financial crisis.
Before we examine that research, we first need to consider some core assumptions. One potential source of expert error could be a sense of over-confidence. Perhaps experts over-reach themselves, believing their powers are infallible? The idea would seem to fit with the descriptions of the bias blind spot that we explored in the last chapter.
Until recently, however, the bulk of the scientific research suggested the opposite was true: it’s the incompetents who have an inflated view of their abilities. Consider a classic study by David Dunning at the University of Michigan and Justin Kruger at New York University. Dunning and Kruger were apparently inspired by the unfortunate case of McArthur Wheeler, who attempted to rob two banks in Pittsburgh in 1995. He committed the crimes in broad daylight, and the police arrested him just hours later. Wheeler was genuinely perplexed. ‘But I wore the juice!’ he apparently exclaimed. Wheeler, it turned out, believed a coating of lemon juice (the basis of invisible ink) would make him imperceptible on the CCTV footage.6
From this story, Dunning and Kruger wondered if ignorance often comes hand in hand with over-confidence, and set about testing the idea in a series of experiments. They gave students tests on grammar and logical reasoning, and then asked them to rate how well they thought they had performed. Most people misjudged their own abilities, but this was particularly true for the people who performed the most poorly. In technical terms, their confidence was poorly calibrated – they simply had no idea just how bad they were. Crucially, Dunning and Kruger found that they could reduce that over-confidence by offering training in the relevant skills. Not only did the participants get better at what they did; their increased knowledge also helped them to understand their limitations.7
Since Dunning and Kruger first published their study in 1999, the finding has been replicated many times, across many different cultures.8 One survey of thirty-four countries – from Australia to Germany, and Brazil to South Korea – examined the maths skills of fifteen-year-old students; once again, the least able were often the most over-confident.9
Unsurprisingly, the press have been quick to embrace the ‘Dunning-Kruger Effect’, declaring that it is the reason why ‘losers have delusions of grandeur’ and ‘why incompetents think they are awesome’, and citing it as the cause of President Donald Trump’s more egotistical statements.10
The Dunning-Kruger Effect should have an upside, though. Although it may be alarming when someone who is highly incompetent but confident reaches a position of power, it does at least reassure us that education and training work as we would hope, improving not just our knowledge but our metacognition and self-awareness. This was, incidentally, Bertrand Russell’s thinking in an essay called ‘The Triumph of Stupidity’ in which he declared that ‘the fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt’.
Unfortunately, these discoveries do not paint the whole picture. In charting the shaky relationship between perceived and actual competence, these experiments had focused on general skills and knowledge, rather than the more formal and extensive study that comes with a university degree, for example.11 And when you do investigate people with an advanced education, a more unsettling vision of the expert brain begins to emerge.
In 2010, a group of mathematicians, historians and athletes were asked to identify the names of significant figures within their own and other disciplines. They had to say whether Johannes de Groot or Benoit Theron was a famous mathematician, for instance, and they could answer Yes, No, or Don’t Know. As you might hope, the experts were better at picking out the right people (such as Johannes de Groot, who really was a mathematician) when the names fell within their discipline. But they were also more likely to claim they recognised the made-up figures (in this case, Benoit Theron).12 When their self-perception of expertise was under question, they would rather take a guess and ‘over-claim’ the extent of their knowledge than admit their ignorance with a ‘don’t know’.
Matthew Fisher at Yale University, meanwhile, quizzed university graduates on their college major for a study published in 2016. He wanted to check their knowledge of the core topics of the degree, so he first asked them to estimate how well they understood some of the fundamental principles of their discipline: a physicist might have been asked to gauge their understanding of thermodynamics; a biologist, to describe the Krebs cycle.
Unbeknown to the participants, Fisher then sprang a surprise test: they now had to write a detailed description of the principles they claimed to know. Despite having declared a high level of knowledge, many stumbled and struggled to write a coherent explanation. Crucially, this was only true within the topic of their degree. When the graduates considered topics beyond their specialism, or more general, everyday subjects, their initial estimates tended to be far more realistic.13
One likely reason is that the participants simply had not realised how much they might have forgotten since their degree (a phenomenon that Fisher calls meta-forgetfulness). ‘People confuse their current level of understanding with their peak level of knowledge,’ Fisher told me. And that may suggest a serious problem with our education. ‘The most cynical reading of it is that we’re not giving students knowledge that stays with them,’ Fisher said. ‘We’re just giving them the sense they know things, when they actually don’t. And that seems to be counter-productive.’
The illusion of expertise may also make you more closed-minded. Victor Ottati at Loyola University Chicago has shown that priming people to feel knowledgeable makes them less likely to seek out or listen to the views of people who disagree with them.* Ottati notes that this makes sense when you consider the social norms surrounding expertise; we assume that an expert already has the credentials to stick to their opinions, a tendency he calls ‘earned dogmatism’.14
* The Japanese, incidentally, have encoded these ideas in the word shoshin, which encapsulates the fertility of the beginner’s mind and its readiness to accept new ideas. As the Zen monk Shunryu Suzuki put it in the 1970s: ‘In the beginner’s mind there are many possibilities; in the expert’s, there are few.’
In many cases, of course, experts really may have better justifications to think what they do. But if they over-estimate their own knowledge – as Fisher’s work might suggest – and then stubbornly refuse to seek or accept another’s opinion, they may quickly find themselves out of their depth.
Ottati speculates that this fact could explain why some politicians become more entrenched in their opinions and fail to update their knowledge or seek compromise – a state of mind he describes as ‘myopic over-self-confidence’.
Earned dogmatism might also further explain the bizarre claims of the scientists with ‘Nobel Disease’ such as Kary Mullis. Subrahmanyan Chandrasekhar, the Nobel Prize-winning Indian-American astrophysicist, observed this tendency in his colleagues. ‘These people have had great insights and made profound discoveries. They imagine afterwards that the fact that they succeeded so triumphantly in one area means they have a special way of looking at science that must be right. But science doesn’t permit that. Nature has shown over and over again that the kinds of truth which underlie nature transcend the most powerful minds.’15
Inflated self-confidence and earned dogmatism are just the start of the expert’s flaws, and to understand the FBI’s errors, we have to delve deeper into the neuroscience of expertise and the ways that extensive training can permanently change our brain’s perception – for good and bad.
The story begins with a Dutch psychologist named Adriaan de Groot, who is sometimes considered the pioneer of cognitive psychology. Beginning his career during the Second World War, de Groot had been something of a prodigious talent at school and university – showing promise in music, mathematics and psychology – but the tense political situation on the eve of the war offered few opportunities to pursue academia after graduation. Instead, de Groot found himself scraping together a living as a high-school teacher, and later, as an occupational psychologist for a railway company.16
De Groot’s real passion was chess, however. A player of considerable talent, he had represented his country at an international tournament in Buenos Aires,17 and he decided to interview other players about their strategies to see if they could reveal the secrets of exceptional performance.18 He began by showing them a sample chess board before asking them to talk through their mental strategies as they decided on the next move.
De Groot had initially suspected that their talents might arise from the brute force of their mental calculations: perhaps they were simply better at crunching the possible moves and simulating the consequences. This didn’t seem to be the case, however: the experts didn’t report having cycled through many positions, and they often made up their minds within a few seconds, which would not have given them enough time to consider the different strategies.
Follow-up experiments revealed that the players’ apparent intuition was in fact an astonishing feat of memory, achieved through a process that is now known as ‘chunking’. The expert player stops seeing the game in terms of individual pieces and instead breaks the board into bigger units – or ‘complexes’ – of pieces. In the same way that words can be combined into sentences, those complexes can then form templates or psychological scripts known as ‘schemas’, each of which represents a different situation and strategy. This transforms the board into something meaningful, and it is thought to be the reason that some chess grandmasters can play multiple games simultaneously – even while blindfolded. The use of schemas significantly reduces the processing workload for the player’s brain; rather than computing each potential move from scratch, experts search through a vast mental library of schemas to find the move that fits the board in front of them.
De Groot noted that over time the schemas can become deeply ‘engrained in the player’, meaning that the right solution may come to mind automatically with just a mere glance at the board, which neatly accounts for those phenomenal flashes of brilliance that we have come to associate with expert intuition. Automatic, engrained behaviours also free up more of the brain’s working memory, which might explain how experts operate in challenging environments. ‘If this were not the case,’ de Groot later wrote, ‘it would be completely impossible to explain why some chess players can still play brilliantly while under the influence of alcohol.’19
De Groot’s findings would eventually offer a way out of his tedious jobs at the high school and railway, earning him a doctorate from the University of Amsterdam. His work has since inspired countless other studies in many domains – explaining everything from the talents of Scrabble and poker champions to the astonishing performances of elite athletes like Serena Williams and the rapid coding of world-class computer programmers.20
Although the exact processes will differ depending on the particular skill, in each case the expert is benefiting from a vast library of schemas that allows them to extract the most important information, recognise the underlying patterns and dynamics, and react with an almost automatic response from a pre-learnt script.21
This theory of expertise may also help us to understand less celebrated talents, such as the extraordinary navigation of London taxi drivers through the city’s 25,000 streets. Rather than remembering the whole cityscape, they have built schemas of known routes, so that the sight of a landmark will immediately suggest the best path from A to B, depending on the traffic at hand – without them having to recall and process the entire map.22
Even burglars may operate using the same neural processes. Asking real convicts to take part in virtual reality simulations of their crimes, researchers have demonstrated that more experienced burglars have amassed a set of advanced schemas based on the familiar layouts of British homes, allowing them to automatically intuit the best route through a house and to alight on the most valuable possessions.23 As one prison inmate told the researchers: ‘The search becomes a natural instinct, like a military operation – it becomes routine.’24
There is no denying that the expert’s intuition is a highly efficient way of working in the vast majority of situations they face – and it is often celebrated as a form of almost superhuman genius.
Unfortunately, it can also come with costly sacrifices.
One is flexibility: the expert may lean so heavily on existing behavioural schemas that they struggle to cope with change.25 When tested on their memories, experienced London taxi drivers appeared to struggle with the rapid development of Canary Wharf at the end of the twentieth century, for instance; they just couldn’t integrate the new landmarks and update their old mental templates of the city.26 Similarly, an expert games champion will find it harder to learn a new set of rules, and an accountant will struggle to adapt to new tax laws. The same cognitive entrenchment can also limit creative problem solving, if the expert fails to look beyond their existing schemas for new ways to tackle a challenge and instead falls back on familiar ways of doing things.
The second sacrifice may be an eye for detail. As the expert brain chunks up the raw information into more meaningful components, and works at recognising broad underlying patterns, it loses sight of the smaller elements. This change has been recorded in real-time scans of expert radiologists’ brains: they tend to show greater activity in the areas of the temporal lobe associated with advanced pattern recognition and symbolic meaning, but less activity in regions of the visual cortex that are associated with combing over fine detail.27 The advantage will be the ability to filter out irrelevant information and reduce distraction, but this also means the expert is less likely to consider all the elements of a problem systematically, potentially causing them to miss important nuances that do not easily fit their mental maps.
It gets worse. Expert decisions, based on gist rather than careful analysis, are also more easily swayed by emotions and expectations and cognitive biases such as framing and anchoring.28 The upshot is that training may have actually reduced their rationality quotient. ‘The expert’s mindset – based on what they expect, what they hope, whether they are in a good mood or bad mood that day – affects how they look at the information,’ Itiel Dror told me. ‘And the brain mechanisms – the actual cognitive architecture – that give an expert their expertise are especially vulnerable to that.’
The expert could, of course, override their intuitions and return to a more detailed, systematic analysis. But often they are completely unaware of the danger – they have the bias blind spot that we observed in Chapter 2.29 The result is a kind of ceiling to their accuracy, as these errors become more common among experts than those arising from ignorance or inexperience. When that fallible, gist-based processing is combined with over-confidence and ‘earned dogmatism’, it gives us one final form of the intelligence trap – and the consequences can be truly devastating.
The FBI’s handling of the Madrid bombings offers the perfect example of these processes in action. Matching fingerprints is an extraordinarily complex job, with analyses based on three levels of increasingly intricate features, from broad patterns, such as whether your prints have a left- or right-facing loop, a whorl or an arch, to the finer details of the ridges in your skin – whether a particular line splits in two, breaks into fragments, forms a loop called an ‘eye’ or ends abruptly. Overall, examiners may aim to detect around ten identifying features.
Eye-tracking studies reveal that expert examiners often go through this process semi-automatically,30 chunking the picture in much the same way as de Groot’s chess grandmasters31 to identify the features that are considered the most useful for comparison. As a result, the points of identification may just jump out at the expert analyst, while a novice would have to systematically identify and check each one – making it exactly the kind of top-down decision making that can be swayed by bias.
Sure enough, Dror has found that expert examiners are prone to a range of cognitive errors that may arise from such automatic processing. They were more likely to find a positive match if they were told a suspect had already confessed to the crime.32 The same was true when they were presented with emotive material, such as a gory picture of a murder victim. Although it should have had no bearing on their objective judgement, the examiners were again more likely to link the fingerprints, perhaps because they felt more motivated and determined to catch the culprit.33 Dror points out that this is a particular problem when the available data is ambiguous and messy – and that was exactly the problem with the evidence from Madrid. The fingerprint had been left on a crumpled carrier bag; it was smeared and initially difficult to read.
The FBI had first run the fingerprint through a computer analysis to find potential suspects among their millions of recorded prints, and Mayfield’s name appeared as the fourth of twenty possible suspects. At this stage, the FBI analysts apparently had no idea of his background – his print was only on file from a teenage brush with the law. But it seems likely that they were hungry for a match, and once they settled on Mayfield they became more and more invested in their choice – despite serious signs that they had made the wrong decision.
While the examiners had indeed identified around fifteen points of similarity in the fingerprints, they had consistently ignored significant differences. Most spectacularly, a whole section of the latent print – the upper left-hand portion – failed to match Mayfield’s index finger. The examiners argued that this area might have come from the finger of someone else who had touched the bag at another time; or maybe it came from Mayfield himself, leaving a second print superimposed on the first to create a confusing pattern. Either way, they decided they could exclude that anomalous section and simply focus on the part that looked most like Mayfield’s.
If the anomalous section had come from another finger, however, you would expect to see tell-tale signs. The two fingers would have been at different angles, for instance, meaning that the ridges would be overlapping and criss-crossing. You might also expect the two fingers to have touched the bag with varying pressure, affecting the appearance of the impressions left behind; one might have seemed fainter than the other. Neither sign was present in this case.
For the FBI’s story to make sense, the two people would have gripped the bag with exactly the same force, and their prints would have had to miraculously align. The chances of that happening were tiny. The much likelier explanation was that the print came from a single finger – and that finger was not Mayfield’s.
These were not small subtleties but glaring holes in the argument. A subsequent report by the Office of the Inspector General (OIG) found that dismissing this possibility was wholly unwarranted: ‘The explanation required the examiners to accept an extraordinary set of coincidences’, the OIG concluded.34 Given those discrepancies, some independent fingerprint examiners reviewing the case concluded that Mayfield should have been ruled out right away.35
Nor was this the only example of such circular reasoning in the FBI’s case: the OIG found that, across the whole of their analysis, the examiners appeared far more likely to dismiss or ignore points of interest that disagreed with their initial hunch, while applying far less scrutiny to details that appeared to suggest a match.
The two marked-up prints above, taken from the freely available OIG report, show just how many errors the examiners made. The Madrid print is on the left; Mayfield’s is on the right. Admittedly, the discrepancies are hard to see for a complete novice, but if you look very carefully you can make out some notable features that are present in one print but not the other.
The OIG concluded that this was a clear case of the confirmation bias, but given what we have learnt from the research on top-down processing and the selective attention that comes with expertise, it is possible that the examiners weren’t even seeing those details in the first place. They were almost literally blinded by their expectations.
These failings could have been uncovered with a truly independent analysis. But although the prints moved through multiple examiners, each one knew their colleague’s conclusions, swaying their judgement. (Dror calls this a ‘bias cascade’.36) The bias also spread to the officers performing the covert surveillance of Mayfield and his family, who even mistook his daughter’s Spanish homework for travel documents placing him in Madrid at the time of the attack.
Those biases will only have been strengthened once the FBI looked into Mayfield’s past and discovered that he was a practising Muslim, and that he had once represented one of the Portland Seven terrorists in a child custody case. In reality, none of this had any bearing on whether he was guilty.37
The FBI’s confidence was so great that they ignored additional evidence from Spain’s National Police (the SNP). By mid-April the SNP had tried and failed to verify the match, yet the FBI lab quickly disregarded their concerns. ‘They had a justification for everything,’ Pedro Luis Mélida Lledó, head of the fingerprint unit for the SNP, told the New York Times shortly after Mayfield was exonerated.38 ‘But I just couldn’t see it.’
Records of the FBI’s internal emails confirm that the examiners were unshaken by the disagreement. ‘I spoke with the lab this morning and they are absolutely confident that they have the match to the print – No doubt about it!!!!!’ one FBI agent wrote. ‘They will testify in any court you swear them into.’39
That complete conviction might have landed Mayfield in Guantanamo Bay – or on death row – had the SNP not succeeded in finding their own evidence that he was innocent. A few weeks after the original bombings, they had raided a house in suburban Madrid. The suspects detonated a suicide bomb rather than submit to arrest, but the police managed to uncover documents bearing the name of Ouhnane Daoud: an Algerian national whose prints were on record from an immigration matter. Mayfield was released, and within a week he was completely exonerated of any connection to the attack. Challenging the lawfulness of his arrest, he eventually received $2 million in compensation.
The lesson here is not just psychological, but social. Mayfield’s case perfectly illustrates the ways that the over-confidence of experts themselves, combined with our blind faith in their talents, can amplify their biases – with potentially devastating effect. The chain of failures within the FBI and the courtroom should not have been able to escalate so rapidly, given the lack of evidence that Mayfield had even left the country.
With this knowledge in mind, we can begin to understand why some existing safety procedures – although often highly effective – nevertheless fail to protect us from expert error.
Consider aviation, commonly regarded as one of the most reliable industries on Earth. Airports and pilots already make use of numerous safety nets to catch any momentary lapses of judgement. The use of checklists as reminders of critical procedures – now common in many other sectors – originated in the cockpit to ensure, for instance, safer take-offs and landings.
Yet these strategies do not account for the blind spots that specifically arise from expertise. With experience, the safety procedures are simply integrated into the pilot’s automatic scripts and shrink from conscious awareness. The result, according to one study of nineteen serious accidents, is ‘an insidious move towards less conservative judgement’ and it has led to people dying when the pilot’s knowledge should have protected them from error.40
This was evident at Blue Grass Airport in Lexington, Kentucky, early in the morning of 27 August 2006. Comair Flight 5191 had been due to take off from runway 22 shortly after 6 a.m., but the pilot lined up on a shorter runway. Thanks to the biases that came with their extensive experience, both the pilot and co-pilot missed all the warning signs that they were in the wrong place. The plane smashed through the perimeter fence, before ricocheting off an embankment, crashing into a pair of trees, and bursting into flames. Forty-seven passengers – and the pilot – died as a result.41
The curse of expertise in aviation doesn’t end there. As we saw with the FBI’s forensic scientists, experimental studies have shown that a pilot’s expertise may even influence their visual perception – causing them to under-estimate the depth of cloud in a storm, for instance, based on their prior expectations.42
The intelligence trap shows us that it’s not enough for procedures to be foolproof; they need to be expert-proof too. The nuclear power industry is one of the few sectors to account for the automatisation that comes with experience, with some plants routinely switching the order of procedures in their safety checks to prevent inspectors from working on autopilot. Many other industries, including aviation, could learn the same lesson.43
A greater appreciation of the curse of expertise – and the virtues of ignorance – can also explain how some organisations weather chaos and uncertainty, while others crumble in the changing wind.
Consider a study by Rohan Williamson of Georgetown University, who recently examined the fortunes of banks during financial crises. He was interested in the role of ‘independent directors’ – people recruited from outside the organisation to advise the management. The independent director is meant to offer a form of self-regulation, which should require a certain level of expertise, and many do indeed come from other financial institutions. Given the difficulty of recruiting a qualified expert without any conflicting interests, however, some independent directors are drawn from other areas of business, meaning they may lack the more technical knowledge of the processes involved in the bank’s complex transactions.
Bodies such as the Organisation for Economic Cooperation and Development (OECD) had previously argued that this lack of financial expertise may have contributed to the 2008 financial crisis.44
But what if they’d got it the wrong way around, and this ignorance was actually a virtue? To find out, Williamson examined data from 100 banks before and after the crisis. Until 2006, the results were exactly what you might expect if you assume that greater knowledge always aids decision making: banks with an expert board performed slightly better than those with fewer (or no) independent directors from a finance background, since they were more likely to endorse risky strategies that paid off.
Their fortunes took a dramatic turn after the financial crash, however; now it was the banks with the least expertise that performed better. The ‘expert’ board members, so deeply embedded in their already risky decision making, didn’t pull back and adapt their strategy, while the less knowledgeable independent directors were less entrenched and biased, allowing them to reduce the banks’ losses as they guided them through the crash.45
Although this evidence comes from finance – an area not always respected for its rationality – the lessons could be equally valuable for any area of business. When the going gets tough, the less experienced members of your team may well be the best equipped to guide you out of the mess.
In forensic science, at least, there has been some movement to mitigate the kind of expert error that lay behind the FBI’s investigation of Brandon Mayfield.
‘Before Brandon Mayfield, the fingerprint community really liked to explain any errors in the language of incompetence,’ says the UCLA law professor Jennifer Mnookin. ‘Brandon Mayfield opened up a space for talking about the possibility that really good analysts, using their methods correctly, could make a mistake.’46
Itiel Dror has been at the forefront of the work detailing these potential errors in forensic judgements, and recommending possible measures that could mitigate the effects. For example, he advocates more advanced training that includes a cognitively informed discussion of bias, so that every forensic scientist is aware of the ways their judgement may be swayed, and practical ways to minimise these influences. ‘Like an alcoholic at an AA meeting, acknowledging the problem is the first step in the solution,’ he told me.
Another requirement is that forensic analysts make their judgements ‘blind’, without any information beyond the direct evidence at hand, so that they are not influenced by expectation but see the evidence as objectively as possible. This is especially crucial when seeking a second opinion: the second examiner should have no knowledge of the first judgement.
The evidence itself must be presented in the right way and in the right sequence, using a process that Itiel Dror calls ‘Linear Sequential Unmasking’ to avoid the circular reasoning that had afflicted the examiners’ judgement after the Madrid bombings.47 For instance, the examiners should first mark up the latent print left on the scene before even seeing the suspect’s fingerprint, giving them predetermined points of comparison. And they should not receive any information about the context of a case before making their forensic judgement of the evidence. This system is now used by the FBI and other agencies and police departments across the United States and other countries.
Dror’s message was not initially welcomed by the experts he had studied; during our conversation at London’s Wellcome Collection, he showed me an angry letter, published in a forensics journal by the Chair of the Fingerprint Society, which showed how incensed many examiners were at the very idea that they might be influenced by their expectations and emotions. ‘Any fingerprint examiner who comes to a decision on identification and is swayed either way in that decision-making process under the influence of stories and gory images is either totally incapable of performing the noble tasks expected of him/her or is so immature he/she should seek employment at Disneyland’, the Chair wrote.
Recently, however, Dror has found that more and more forces are taking his suggestions on board. ‘Things are changing . . . but it’s slow. You will still find that if you talk to certain examiners, they will say “Oh no, we’re objective.” ’
Mayfield retains some doubts about whether these were genuine unconscious errors, or the result of a deliberate set-up, but he supports any work that helps to highlight the frailties of fingerprint analysis. ‘In court, each piece of evidence is like a brick in a wall,’ he told me. ‘The problem is that they treat the fingerprint analysis as if it is the whole wall – but it’s not even a strong brick, let alone a wall.’
Mayfield continues to work as a lawyer. He is also an active campaigner, and has co-written his account of the ordeal, called Improbable Cause, with his daughter Sharia, in a bid to raise awareness of the erosion of US civil liberties in the face of more stringent government surveillance. During our conversation, he appeared to be remarkably stoic about his ordeal. ‘I’m talking to you – I’m not locked in Guantanamo, in some Kafkaesque situation . . . So in that sense, the justice system must have worked,’ he told me. ‘But there may be many more people who are not in such an enviable position.’
With this knowledge in mind, we are now ready to start Part 2. Through the stories of the Termites, Arthur Conan Doyle, and the FBI’s forensic examiners, we have seen four potential forms of the intelligence trap.
If we return to the analogy of the brain as a car, this research confirms the idea that intelligence is the engine, and education and expertise are its fuel; by equipping us with basic abstract reasoning skills and specialist knowledge, they put our thinking in motion, but simply adding more power won’t always help you drive that vehicle safely. Without counter-factual thinking and tacit knowledge, you may find yourself down a dead end; and if you suffer from motivated reasoning, earned dogmatism and entrenchment, you risk simply driving in circles, or worse, off a cliff.
Clearly, we’ve identified the problem, but we are still in need of some lessons to teach us how to navigate these potential pitfalls more carefully. Correcting these omissions is now the purpose of a whole new scientific discipline – evidence-based wisdom – which we shall explore in Part 2.