10
Stupidity spreading like wildfire: Why disasters occur – and how to stop them
We are on an oil rig in the middle of the ocean. It is a quiet evening with a light breeze.
The team of engineers has finished drilling, and they are now trying to seal their well with cement. They have checked the pressure at the seal, and all seems to be going well. Soon extraction can begin, and the dollars will start rolling in. It should be time to celebrate.
But the pressure tests were wrong; the cement has not set and the seal at the bottom of the well is not secure. As the engineers happily sign off their job, oil and gas have started to build up within the pipe – and they are rising fast. In the middle of the engineers’ celebrations, mud and oil start spewing onto the rig floor; the crew can taste the gas on their tongues. If they don’t act quickly, they will soon face a full-on ‘blowout’.
If you have even a passing knowledge of the world news in 2010, you may think you know what happens next: an almighty explosion and the largest oil spill in history.
But in this case, it doesn’t happen. Maybe the leak is far enough away from the engine room, or the wind is blowing, creating a movement of air that prevents the escaping gas from catching light. Or maybe the team on the ground simply notice the build-up of pressure and are able to deploy the ‘blowout preventer’ in time. Whatever the specific reason, a disaster is averted. The company loses a few days of extraction – and a few million dollars of profits – but no one dies.
This is not a hypothetical scenario or a wishful reimagining of the past. There had been literally dozens of minor blowouts in the Gulf of Mexico alone in the twenty years before the Deepwater Horizon spill at the Macondo well in April 2010 – but thanks to random circumstances such as the direction and speed of the wind, full-blown disasters never took place, and the oil companies could contain the damage.1
Transocean, the owner and operator of the Deepwater Horizon rig, had even experienced a remarkably similar incident in the North Sea just four months previously, when its engineers had also misinterpreted a series of ‘negative pressure tests’ – missing signs that the seal of the well was broken. But they had been able to contain the damage before an explosion occurred, resulting in a few days’ lost work rather than an environmental catastrophe.2
On 20 April 2010, however, there was no wind to dissipate the oil and gas, and thanks to faulty equipment, all the team’s attempts to contain the blowout failed. As the escaping gas built up in the engine rooms, it eventually ignited, unleashing a series of fireballs that ripped through the rig.
The rest is history. Eleven workers lost their lives, and over the next few months, more than 200 million gallons of oil were released into the Gulf of Mexico, making it the worst environmental catastrophe in American history. BP had to pay more than $65 billion in compensation.3
Why did so many people miss so many warning signs? From previous near misses to a failed reading of the internal pressure on the day of the explosion, employees seem to have been oblivious to the potential for disaster.
As Sean Grimsley, a lawyer for a US Presidential Commission investigating the disaster, concluded: ‘The well was flowing. Hydrocarbons were leaking, but for whatever reason the crew after three hours that night decided it was a good negative pressure test . . . The question is why these experienced men out on that rig talked themselves into believing that this was a good test . . . None of these men wanted to die.’4
Disasters like the Deepwater Horizon explosion require us to expand our focus, beyond groups and teams, to the surprising ways that certain corporate cultures can exacerbate individual thinking errors and subtly inhibit wiser reasoning. It is almost as if the organisation as a whole is suffering from a collective bias blind spot.
The same dynamics underlie many of the worst manmade catastrophes in recent history, from NASA’s Columbia disaster to the Concorde crash in 2000.
You don’t need to lead a multinational organisation to benefit from this research; it includes eye-opening findings for anyone in employment. If you’ve ever worried that your own work environment is dulling your mind, these discoveries will help explain your experiences, and offer tips for the best ways to protect yourself from mindlessly imitating the mistakes of those around you.
Before we examine large-scale catastrophes, let’s begin with a study of ‘functional stupidity’ in the general workplace. The concept is the brainchild of Mats Alvesson at Lund University in Sweden, and André Spicer at the Cass Business School in London, who coined the term to describe the counter-intuitive reasons that some companies may actively discourage their employees from thinking.
Spicer told me that his interest stems from his PhD at the University of Melbourne, during which time he studied decision making at the Australian Broadcasting Corporation (ABC).5 ‘They would introduce these crazy change management programmes, which would often result in nothing changing except creating a huge amount of uncertainty.’
Many employees acknowledged the flaws in the corporation’s decision making. ‘You found a lot of very smart people thrown together in an organisation and many of them would spend a lot of time complaining how stupid the organisation was,’ Spicer told me. What really surprised him, however, was the number of people who failed to acknowledge the futility of what they were doing. ‘These extremely high-skilled and knowledgeable professionals were getting sucked into these crazy things, saying “this is intelligent, this is rational”, then wasting an incredible amount of time.’*
* The same culture can also be seen in the BBC’s offices – a fact the broadcaster itself lampoons in its mockumentary TV series, W1A. Having worked at the BBC while researching this book, it occurs to me that deciding to create a three-series sitcom about your own organisational failings – rather than fixing them – is perhaps the definition of functional stupidity.
Years later he discussed such organisational failings with Alvesson at a formal academic dinner. In their resulting studies, the pair examined dozens of other examples of organisational stupidity – from the armed forces to IT analysts, newspaper publishers and their own respective universities – asking whether these institutions really do make the most of their staff’s brains.
Their conclusions were deeply depressing. As Alvesson and Spicer wrote in their book, The Stupidity Paradox: ‘Our governments spend billions on trying to create knowledge economies, our firms brag about their superior intelligence, and individuals spend decades of their lives building up fine CVs. Yet all this collective intellect does not seem to be reflected in the many organisations we studied . . . Far from being “knowledge-intensive”, many of our most well-known organisations have become engines of stupidity.’6
In parallel with the kinds of biases and errors behind the intelligence trap, Spicer and Alvesson define ‘stupidity’ as a form of narrow thinking lacking three important qualities: reflection about basic underlying assumptions, curiosity about the purpose of your actions, and a consideration of the wider, long-term consequences of your behaviours.7 For many varied reasons, employees simply aren’t being encouraged to think.
This stupidity is often functional, they say, because it can come with some benefits. We may prefer to go with the flow in the workplace to save effort and anxiety, particularly if we know there will be incentives – or even a promotion – in it for us later. Such ‘strategic ignorance’ is now well studied in psychological experiments where participants must compete for money: often participants choose not to know how their decisions affect the other players.8 By remaining in the dark, the player gains some ‘moral wiggle room’ (the scientific term) that allows them to act in a more selfish way.
We might also be persuaded by social pressure: no one, after all, likes a trouble-maker who delays meetings with endless questions. Unless we are actively encouraged to share our views, staying quiet and nodding along with the people around us can improve our individual prospects – even if that means temporarily turning off our critical capacities.
Besides helping the individual, this kind of narrow-minded, unquestioning approach can also bring some immediate benefits for the organisation, increasing productivity and efficiency in the short term without the employees wasting time questioning the wisdom of their behaviours. The result is that some companies may – either accidentally or deliberately – actually encourage functional stupidity within their offices.
Spicer and Alvesson argue that many work practices and structures contribute to an organisation’s functional stupidity, including excessive specialisation and division of responsibilities. A human resources manager may now have the very particular, single task of organising personality tests, for instance. As the psychological research shows us, our decision making and creativity benefit from hearing outside perspectives and drawing parallels between different areas of interest; if we mine the same vein day after day, we may begin to pay less attention to the nuances and details. The German language, incidentally, has a word for this: the Fachidiot, a one-track specialist who takes a single-minded, inflexible approach to a multifaceted problem.
But perhaps the most pervasive – and potent – source of functional stupidity is the demand for complete corporate loyalty and an excessive focus on positivity, where the very idea of criticism may be seen as a betrayal, and admitting disappointment or anxiety is considered a weakness. This is a particular bugbear for Spicer, who told me that relentless optimism is now deeply embedded in many business cultures, stretching from start-ups to huge multinationals.
He described research on entrepreneurs, for instance, who often cling to the motto that they will ‘fail forward’ or ‘fail early, fail often’. Although these mottos sound like examples of the ‘growth mindset’ – which should improve your chances of success in the future – Spicer says that entrepreneurs often explain their failings with external factors (‘my idea was before its time’) rather than considering the errors in their own performance, and how it might be adapted in the future. They aren’t really considering their own personal growth.
The numbers are huge: between 75 and 90 per cent of entrepreneurs lose their first businesses – but by striving to stay relentlessly upbeat and positive, they remain oblivious to their mistakes.9 ‘Instead of getting better – which this “fail forward” idea would suggest – they actually get worse over time,’ Spicer said. ‘Because of these self-serving biases, they just go and start a new venture and make exactly the same mistakes over and over again . . . and they actually see this as a virtue.’
The same attitude is prevalent among much larger and more established corporations, where bosses tell their employees to ‘only bring me the good news’. Or you may attend a brainstorming session, where you are told that ‘no idea is a bad idea’. Spicer argues that this is counter-productive; we are actually more creative when we take on board a criticism at an early stage of a discussion. ‘You’ve tested the assumptions and then you are able to enact upon them, instead of trying to push together ideas to cover up any differences.’
I hope you will now understand the intelligence trap well enough to see immediately some of the dangers of this myopic approach.
The lack of curiosity and insight is particularly damaging during times of uncertainty. Based on his observations in editorial meetings, for instance, Alvesson has argued that overly rigid, unquestioning thinking of this kind prevented newspapers from exploring how factors such as the economic climate and rising taxes were influencing their sales; editors were so fixated on the specific headlines on their front pages that they never even considered exploring broader strategies or new outlets for their stories.
But Nokia’s implosion in the early 2010s offers the most vivid illustration of the ways that functional stupidity can drive an outwardly successful organisation to failure.
If you owned a cellphone in the early 2000s, chances are that it was made by the Finnish company. In 2007, they held around half the global market share. Six years later, however, most of their customers had turned away from the clunky Nokia interface to more sophisticated smartphones, notably Apple’s iPhone.
Commentators at the time suggested that Nokia was simply an inferior company with less talent and innovation than Apple, that the corporation had been unable to see the iPhone coming, or that they had been complacent, assuming that their own products would trump any others.
But when the Finnish and Singaporean researchers Timo Vuori and Quy Huy investigated the company’s demise, they found that none of this was true.10 Nokia’s engineers were among the best in the world, and they were fully aware of the risks ahead. Even the CEO himself had admitted, during an interview, that he was ‘paranoid about all the competition’. Yet the company nevertheless failed to rise to the occasion.
One of the biggest challenges was Nokia’s operating system, Symbian, which was inferior to Apple’s iOS and unsuitable for sophisticated touchscreen apps. Overhauling the existing software would take years of development, however, and the management wanted to present new products quickly – leading them to rush through projects that needed greater forward planning.
Unfortunately, employees were not allowed to express any doubts about the way the company was proceeding. Senior managers would regularly shout ‘at the tops of their lungs’ if you told them something they did not want to hear. Raise a doubt, and you risked losing your job. ‘If you were too negative, it would be your head on the block,’ one middle manager told the researchers. ‘The mindset was that if you criticise what’s being done, then you’re not genuinely committed to it,’ said another.
As a consequence, employees began to feign expertise rather than admitting their ignorance about the problems they were facing, and accepted deadlines that they knew would be impossible to maintain. They would even massage the data showing their results so as to give a better impression. And when the company lost employees, it deliberately hired replacements with a ‘can do’ attitude – people who would nod along with new demands rather than disagreeing with the status quo. The company even ignored advice from external consultants, one of whom claimed that ‘Nokia has always been the most arrogant company ever towards my colleagues.’ They lost any chance of an outside perspective.
The very measures that were designed to focus employees’ attention and encourage a more creative outlook were making it harder and harder for Nokia to step up to the competition.
As a result, the company consistently failed to upgrade its operating system to a suitable standard – and the quality of Nokia’s products slowly deteriorated. By the time the company launched the N8 – their final attempt at an ‘iPhone Killer’ – in 2010, most employees had secretly lost faith. It flopped, and after further losses Nokia’s mobile phone business was acquired by Microsoft in 2013.
The concept of functional stupidity is inspired by extensive observational studies, including an analysis of Nokia’s downfall, rather than psychological experiments, but this kind of corporate behaviour shows clear parallels with psychologists’ work on dysrationalia, wise reasoning and critical thinking.
You might remember, for instance, that feelings of threat trigger the so-called ‘hot’, self-serving cognition that leads us to justify our own positions rather than seeking evidence that challenges our point of view – and this reduces scores of wise reasoning. (It is the reason we are wiser when advising a friend about a relationship problem, even if we struggle to see the solution to our own troubles.)
Led by its unyielding top management, Nokia as an organisation was therefore beginning to act like an individual, faced with uncertain circumstances, whose ego has been threatened. Nokia’s previous successes, meanwhile, may have given it a sense of ‘earned dogmatism’, meaning that managers were less open to suggestions from experts outside the company.
Various experiments from social psychology suggest that this is a common pattern: groups under threat tend to become more conformist, single-minded and inward looking. More and more members begin to adopt the same views, and they start to favour simple messages over complex, nuanced ideas. This is even evident at the level of entire nations: newspaper editorials within a country tend to become more simplified and repetitive when it faces international conflict, for instance.11
No organisation can control its external environment: some threats will be inevitable. But organisations can alter the way they translate those perceived dangers to employees, by encouraging alternative points of view and actively seeking disconfirming information. It’s not enough to assume that employing the smartest people possible will automatically translate to better performance; you need to create the environment that allows them to use their skills.
Even the companies that appear to buck these trends may still incorporate some elements of evidence-based wisdom – although it may not be immediately obvious from their external reputation. The media company Netflix, for instance, famously has the motto that ‘adequate performance earns a generous severance’ – a seemingly cut-throat attitude that might promote myopia and short-term gains over long-term resilience.
Yet they seem to balance this with other measures that are in line with the broader psychological research. A widely circulated presentation outlining Netflix’s corporate vision, for example, emphasises many of the elements of good reasoning that we have discussed so far, including the need to recognise ambiguity and uncertainty and to challenge prevailing opinions – exactly the kind of culture that should encourage wise decision making.12
We can’t, of course, know how Netflix will fare in the future. But its success to date would suggest that you can avoid functional stupidity while also running an efficient – some would say ruthless – operation.
The dangers of functional stupidity do not end with these instances of corporate failure. Besides impairing creativity and problem solving, a failure to encourage reflection and internal feedback can also lead to human tragedy, as NASA’s disasters show.
‘Often it leads to a number of small mistakes being made, or the [company] focuses on the wrong problems and overlooks a problem where there should have been some sort of post mortem,’ notes Spicer. As a consequence, an organisation may appear outwardly successful while slowly sliding towards disaster.
Consider the Space Shuttle Columbia disaster in 2003, when foam insulation broke off an external tank during launch and struck the left wing of the orbiter. The resulting hole caused the shuttle to disintegrate upon re-entry into the Earth’s atmosphere, leading to the death of all seven crew members.
The disaster would have been tragic enough had it been a fluke, one-off occurrence without any potential warning signs. But NASA engineers had long known the insulation could break away like this; it had happened in every previous launch. For various reasons, however, the damage had never occurred in the right place to cause a crash, meaning that the NASA staff began to ignore the danger it posed.
‘It went from being a troublesome event for engineers and managers to being classified as a housekeeping matter,’ Catherine Tinsley, a professor of management at Georgetown University in Washington DC who has specialised in studying corporate catastrophes, told me.
Amazingly, similar processes had also caused the Challenger disaster in 1986, when the shuttle exploded owing to a faulty seal that had deteriorated in the cold Florida winter. Subsequent reports showed that the seals had cracked on many previous missions, but rather than seeing this as a warning, the staff had come to assume that it would always be safe. As Richard Feynman – a member of the Presidential Commission investigating the disaster – noted, ‘when playing Russian roulette, the fact that the first shot got off safely is little comfort for the next’.13 Yet NASA did not seem to have learnt from those lessons.
Tinsley emphasises that this isn’t a criticism of those particular engineers and managers. ‘These are really smart people, working with data, and trying really hard to do a good job.’ But NASA’s errors demonstrate just how radically your perception of risk can shift without you even recognising that a change has occurred. The organisation was blind to the possibility of disaster.
The reason appears to be a form of cognitive miserliness known as the outcome bias, which leads us to focus on the actual consequences of a decision without even considering the alternative possible results. Like many of the other cognitive flaws that afflict otherwise intelligent people, it’s really a lack of imagination: we passively accept the most salient detail from an event (what actually happened) and don’t stop to think about what might have been, had the initial circumstances been slightly different.
Tinsley has since performed many experiments confirming that the outcome bias is a common tendency among a wide range of professionals. One study asked business students, NASA employees and space-industry contractors to evaluate the mission controller ‘Chris’, who took charge of an unmanned spacecraft under three different scenarios. In the first, the spacecraft launches perfectly, just as planned. In the second, it has a serious design flaw, but thanks to a turn of luck (its alignment to the sun) it is still able to take its readings. And in the third, there is no such stroke of fortune, and it completely fails.
Unsurprisingly, the complete failure is judged most harshly, but most of the participants were happy to ignore the design flaw in the ‘near-miss’ scenario, and instead praised Chris’s leadership skills. Importantly – and in line with Tinsley’s theory that the outcome bias can explain disasters like the Columbia catastrophe – the perception of future dangers also diminished after the participants had read about the near miss, explaining how some organisations may slowly become immune to failure.14
Tinsley has found that this tendency to overlook errors was the common factor in dozens of other catastrophes. ‘Multiple near-misses preceded and foreshadowed every disaster and business crisis we studied,’ her team concluded in an article for the Harvard Business Review in 2011.15
Take one of Toyota’s biggest disasters. In August 2009, a Californian family of four died when the accelerator pedal of their Lexus jammed, leading the driver to lose control on the motorway and plough into an embankment at 120 miles per hour, where the car burst into flames. Toyota had to recall more than six million cars – a disaster that could have been avoided if the company had paid serious attention to more than two thousand reports of accelerator malfunction over the previous decades, around five times the number of complaints a car manufacturer might normally expect to receive for this kind of issue.16
Tellingly, Toyota had set up a high-level task force in 2005 to deal with quality control, but the company disbanded the group in early 2009, claiming that quality ‘was part of the company’s DNA and therefore they didn’t need a special committee to enforce it’. Senior management also turned a deaf ear to specific warnings from more junior executives, while focusing on rapid corporate growth.17 This was apparently a symptom of a generally insular way of operating that did not welcome outside input, in which important decisions were made only by those at the very top of the hierarchy. Like Nokia’s management, it seems they simply didn’t want to hear bad news that might sidetrack them from their broader goals.
The ultimate cost to Toyota’s brand was greater than any of the savings they imagined they would make by not heeding these warnings. By 2010, 31 per cent of Americans believed that Toyota cars were unsafe18 – a dramatic fall from grace for a company that was once renowned for its products’ quality and customer satisfaction.
Or consider Air France Flight 4590 from Paris to New York City. As it prepared for take-off on 25 July 2000, the Concorde airliner ran over sharp debris left on the runway, sending a 4.5 kg chunk of tyre into the underside of the aircraft’s wing. The resulting shockwave ruptured a fuel tank, and the leaking fuel caught light during take-off. The plane crashed into a nearby hotel, killing 113 people in total. Subsequent analyses revealed 57 previous instances in which a Concorde tyre had burst on the runway, and in one case the damage was very nearly the same as for Flight 4590 – except that, through sheer good luck, the leaking fuel had failed to ignite. Yet these near misses were not taken as serious warning signs requiring urgent action.19
These crises are dramatic case studies in high-risk industries, but Tinsley argues that the same thinking processes will present latent dangers for many other organisations. She points to research on workplace safety, for instance, showing that for every thousand near misses, there will be one serious injury or fatality and at least ten smaller injuries.20
Tinsley does not frame her work as an example of ‘functional stupidity’, but the outcome bias appears to arise from the same lack of reflection and curiosity that Spicer and Alvesson have outlined.
And even small changes to a company’s environment can increase the chances that near misses are spotted. In both lab experiments and data gathered during real NASA projects, Tinsley has found that people are far more likely to note and report near misses when safety is emphasised as part of the overall culture and in the organisation’s mission statements – sometimes with as much as a five-fold increase in reporting.21
As an example, consider again the scenario involving the mission controller in charge of the unmanned spacecraft. Participants told that ‘NASA, which pushes the frontiers of knowledge, must operate in a high-risk, risk-tolerant environment’ were much less likely to notice the near miss. Those told that ‘NASA, as a highly visible organization, must operate in a high-safety, safety-first environment’, in contrast, successfully identified the latent danger. The same was also true when the participants were told that they would need to justify their judgement to the board. ‘Then the near miss also looks more like the failure condition.’
Remember that we are talking about unconscious biases here: no participant had weighed up the evidence and decided that the near miss was worth ignoring; unless they were prompted, they simply didn’t think about it at all. Some companies may expect that the value of safety is already implicitly understood, but Tinsley’s work demonstrates that it needs to be highly salient. It is telling that NASA’s motto had been ‘Faster, Better, Cheaper’ for most of the decade leading up to the Columbia disaster.
Before we end our conversation, Tinsley emphasises that some risks will be inevitable; the danger is when we are not even aware they exist. She recalls a seminar during which a NASA engineer raised his hand in frustration. ‘Do you not want us to take any risks?’ he asked. ‘Space missions are inherently risky.’
‘And my response was that I’m not here to tell you what your risk tolerance should be. I’m here to say that when you experience a near miss, your risk tolerance will increase and you won’t be aware of it.’ As the fate of the Challenger and Columbia missions shows, no organisation can afford that blind spot.
In hindsight, it is all too easy to see how Deepwater Horizon became a hotbed of irrationality before the spill. By the time of the explosion, the project was six weeks behind schedule, with the delay costing $1 million a day, and some staff were unhappy with the pressure they were under. In one email, written six days before the blowout, the engineer Brian Morel labelled it ‘a nightmare well that has everyone all over the place’.
These are exactly the high-pressure conditions that are now known to reduce reflection and analytical thinking. The result was a collective blind spot that prevented many of Deepwater Horizon’s employees (from BP and its partners, Halliburton and Transocean) from seeing the disaster looming, and contributed to a series of striking errors.
To try to reduce the accumulating costs, for instance, they chose a cheaper mix of cement to secure the well, without investigating whether it would be stable enough for the job at hand. They also reduced the total volume of cement used – violating their own guidelines – and scrimped on the equipment required to hold the well in place.
On the day of the accident itself, the team did not complete the full suite of tests to ensure the seal was secure, while also ignoring anomalous results that might have predicted the build-up of pressure inside the well.22 Worse still, the equipment needed to contain a blowout, once it occurred, was in a poor state of repair.
Each of these risk factors could have been identified long before disaster struck; as we have seen, there had been many minor blowouts that should have served as serious warnings of the underlying dangers and prompted new, updated safety procedures. Thanks to lucky circumstances, however – even the random direction of the wind – none had been fatal, and so the underlying factors, including severe corner-cutting and inadequate safety training, had not been examined.23 And the more they tempted fate, the more they were lulled into complacency and the less concerned they became about cutting corners.24 It was a classic case of the outcome bias that Tinsley has documented – and the error seemed to have been prevalent across the whole of the oil industry.
Eight months previously, another oil and gas company, PTT, had even witnessed a blowout and spill in the Timor Sea, off Australia. Halliburton, which also worked on the Macondo well, was the company behind the cement job there, too, and although a subsequent report concluded that Halliburton itself bore little responsibility, the incident might still have been taken as a vivid reminder of the dangers involved. A lack of communication between operators and experts, however, meant the lessons were largely ignored by the Deepwater Horizon team.25
In this way, we can see that the disaster wasn’t down to the behaviour of any one employee, but to an endemic lack of reflection, engagement and critical thinking that meant decision makers across the project had failed to consider the true consequences of their actions.
‘It is the underlying “unconscious mind” that governs the actions of an organization and its personnel’, a report from the Center for Catastrophic Risk Management (CCRM) at the University of California, Berkeley, concluded.26 ‘These failures . . . appear to be deeply rooted in a multi-decade history of organizational malfunction and short-sightedness.’ In particular, the management had become so obsessed with pursuing further success, they had forgotten their own fallibilities and the vulnerabilities of the technology they were using. They had ‘forgotten to be afraid’.
Or as Karlene Roberts, the director of the CCRM, told me in an interview, ‘Often, when organisations look for the errors that caused something catastrophic to happen, they look for someone to name, blame and then train or get rid of . . . But it’s rarely what happened on the spot that caused the accident. It’s often what happened years before.’
If this ‘unconscious mind’ represents an organisational intelligence trap, how can an institution wake up to latent risks?
In addition to studying disasters, Roberts’ team has also examined the common structures and behaviours of ‘high-reliability organisations’ such as nuclear power plants, aircraft carriers, and air traffic control systems that operate with enormous uncertainty and potential for hazard, yet somehow achieve extremely low failure rates.
Much like the theories of functional stupidity, their findings emphasise the need for reflection, questioning, and the consideration of long-term consequences – including, for example, policies that give employees the ‘licence to think’.
Refining these findings to a set of core characteristics, Karl Weick and Kathleen Sutcliffe have shown that high-reliability organisations all demonstrate a preoccupation with failure, a reluctance to simplify interpretations, a sensitivity to frontline operations, a commitment to resilience, and a deference to expertise over rank.27
The commitment to resilience may be evident in small gestures that allow workers to know that their commitment to safety is valued. On one aircraft carrier, the USS Carl Vinson, a crewmember reported that he had lost a tool on deck that could have been sucked into a jet engine. All aircraft were redirected to land – at significant cost – but rather than punishing the team member for his carelessness, he was commended for his honesty in a formal ceremony the next day. The message was clear – errors would be tolerated if they were reported, meaning that the team as a whole were less likely to overlook much smaller mistakes.
The US Navy, meanwhile, has employed the SUBSAFE system to reduce accidents on its nuclear submarines. The system was first implemented following the loss of the USS Thresher in 1963, which flooded due to a poor joint in its pumping system, resulting in the deaths of 112 Navy personnel and 17 civilians.29 SUBSAFE specifically instructs officers to maintain a ‘chronic uneasiness’, summarised in the saying ‘trust, but verify’, and in the more than five decades since, the Navy hasn’t lost a single submarine using the system.30
Inspired by Ellen Langer’s work, Weick refers to these combined characteristics as ‘collective mindfulness’. The underlying principle is that the organisation should implement any measures that encourage its employees to remain attentive, proactive, open to new ideas, questioning of every possibility, and devoted to discovering and learning from mistakes, rather than simply repeating the same behaviours over and over.
There is good evidence that adopting this framework can result in dramatic improvements. Some of the most notable successes of applying collective mindfulness have come from healthcare. (We’ve already seen how doctors are changing the way individuals think – but this specifically concerns the overall culture and group reasoning.) The measures involve empowering junior staff to question assumptions and to be more critical of the evidence presented to them, while encouraging senior staff to actively seek the opinions of those beneath them, so that everyone is accountable to everyone else. Staff also hold regular ‘safety huddles’, proactively report errors and perform detailed ‘root-cause analyses’ to examine the underlying processes that may have contributed to any mistake or near miss.
Using such techniques, one Canadian hospital, St Joseph’s Healthcare in London, Ontario, reduced medication errors (the wrong drugs given to the wrong person) to just two mistakes in more than 800,000 medications dispensed in the second quarter of 2016. Golden Valley Memorial in Missouri, meanwhile, has used the same principles to reduce drug-resistant Staphylococcus aureus infections to zero, while patient falls – a serious cause of unnecessary injury in hospitals – have dropped by 41 per cent.31
Despite the additional responsibilities, staff in mindful organisations often thrive on the extra workload, with a lower turnover rate than institutions that do not impose these measures.32 Contrary to expectations, it is more rewarding to feel like you are fully engaging your mind for the greater good, rather than simply going through the motions.
In these ways, the research on functional stupidity and the work on mindful organisations complement each other perfectly, revealing how our environment can either engage the group brain in reflection and deep thinking, or dangerously narrow its focus so that it loses the benefits of its combined intelligence and expertise. Together they offer us a framework for understanding the intelligence trap and evidence-based wisdom on a grand scale.
Beyond these general principles, the research also reveals specific practical steps for any organisation hoping to reduce error. Given that our biases are often amplified by feelings of time pressure, Tinsley suggests that organisations should encourage employees to examine their actions and ask: ‘If I had more time and resources, would I make the same decisions?’ She also believes that people working on high-stakes projects should take regular breaks to ‘pause and learn’, where they may specifically look for near misses and examine the factors underlying them – a strategy, she says, that NASA has now applied. They should institute near-miss reporting systems; ‘and if you don’t report a near miss, you are then held accountable’.
Spicer, meanwhile, proposes adding regular reflective routines to team meetings, including pre-mortems and post-mortems, and appointing a devil’s advocate whose role is to question decisions and look for flaws in their logic. ‘There’s lots of social psychology that says it leads to slightly dissatisfied people but better-quality decisions.’ He also recommends taking advantage of the outside perspective, by either inviting secondments from other companies, or encouraging staff to shadow employees from other organisations and other industries, a strategy that can help puncture the bias blind spot.
The aim is to do whatever you can to embrace that ‘chronic uneasiness’ – the sense that there might always be a better way of doing things.
Looking to research from further afield, organisations may also benefit from tests such as Keith Stanovich’s rationality quotient, which would allow them to screen employees working on high-risk projects, checking whether they are particularly susceptible to bias and in need of further training. They might also think of establishing critical thinking programmes within the company.
They may also analyse the mindset embedded in their culture: whether it encourages the growth of talent or leads employees to believe that their abilities are set in stone. Carol Dweck’s team of researchers asked employees at seven Fortune 1000 companies to rate their level of agreement with a series of statements, such as: ‘When it comes to being successful, this company seems to believe that people have a certain amount of talent, and they really can’t do much to change it’ (reflecting a collective fixed mindset) or ‘This company genuinely values the development and growth of its employees’ (reflecting a collective growth mindset).
As you might hope, companies cultivating a collective growth mindset enjoyed greater innovation and productivity, more collaboration within teams and higher employee commitment. Importantly, employees were also less likely to cut corners or cheat to get ahead. They knew their development would be encouraged and were therefore less likely to cover up their perceived failings.33
During their corporate training, organisations could also make use of productive struggle and desirable difficulties to ensure that their employees process the information more deeply. As we saw in Chapter 8, this not only means that the material is recalled more readily; it also increases overall engagement with the underlying concepts and means that the lessons are more readily transferable to new situations.
Ultimately, the secrets of wise decision making for the organisation are very similar to the secrets of wise decision making for the intelligent individual. Whether you are a forensic scientist, doctor, student, teacher, financier or aeronautical engineer, it pays to humbly recognise your limits and the possibility of failure, take account of ambiguity and uncertainty, remain curious and open to new information, recognise the potential to grow from errors, and actively question everything.
In the Presidential Commission’s damning report on the Deepwater Horizon explosion, one particular recommendation catches the attention: it points to a revolutionary change in US nuclear power plants as a model for how an industry can deal with risk more mindfully.34
As you might have come to expect, the trigger was a real crisis. (‘Everyone waits to be punished before they act,’ Roberts said.) In this case it was the partial meltdown of a radioactive core at the Three Mile Island Nuclear Generating Station in 1979. The disaster led to the foundation of a new industry watchdog, the Institute of Nuclear Power Operations (INPO), which incorporates a number of important characteristics.
Each generator is visited by a team of inspectors every two years, each visit lasting five to six weeks. Although one-third of INPO’s inspectors are permanent staff, the majority are seconded from other power plants, leading to a greater sharing of knowledge between organisations, and the regular input of an outside perspective in each company. INPO also actively facilitates discussions between lower-level employees and senior management with regular review groups. This ensures that the fine details and challenges of day-to-day operations are acknowledged and understood at every level of the hierarchy.
To increase accountability, the results of the inspections are announced at an annual dinner – meaning that ‘You get the whole top level of the utility industry focused on the poor performer’, according to one CEO quoted in the Presidential Commission’s report. Often, CEOs in the room will offer to loan their expertise to bring other generators up to scratch. The result is that every company is constantly learning from each other’s mistakes. Since INPO began operating, US generators have seen a tenfold reduction in the number of worker accidents.35
You need not be a fan of nuclear power to see how these structures maximise the collective intelligence of employees across the industry and greatly increase each individual’s awareness of potential risks, while reducing the build-up of those small, unacknowledged errors that can lead to catastrophe. INPO shows the way that regulatory bodies can help mindful cultures to spread across organisations, uniting thousands of employees in their reflection and critical thinking.
The oil industry has not (yet) implemented a comparably intricate system, but energy companies have banded together to revise industry standards, improve worker training and education, and upgrade their technology to better contain a spill, should it occur. BP has also funded a huge research programme to deal with the environmental devastation in the Gulf of Mexico. Some lessons have been learnt – but at what cost?36
The intelligence trap often emerges from an inability to think beyond our expectations – to imagine an alternative vision of the world, in which our decision is wrong rather than right. This must have been the case on 20 April 2010; no one could possibly have imagined the true scale of the catastrophe they were letting loose.
Over the subsequent months, the oil slick would cover more than 112,000 km² of the ocean’s surface – an area that is roughly 85 per cent the size of England.37 According to the Center for Biological Diversity, the disaster killed at least 80,000 birds, 6,000 sea turtles and 26,000 marine mammals – an ecosystem destroyed by preventable errors. Five years later, baby dolphins were still being born with under-developed lungs, due to the toxic effects of the oil leaked into the water and the poor health of their parents. Only 20 per cent of dolphin pregnancies resulted in a live birth.38
That’s not to mention the enormous human cost. Besides the eleven lives lost on the rig itself and the unimaginable trauma inflicted on those who escaped, the spill devastated the livelihoods of fishing communities in the Gulf. Two years after the spill, Darla Rooks, a lifelong fisherperson from Port Sulfur, Louisiana, described finding crabs ‘with holes in their shells, shells with all the points burned off so all the spikes on their shells and claws are gone, misshapen shells, and crabs that are dying from within . . . they are still alive, but you open them up and they smell like they’ve been dead for a week’.
The level of depression in the area rose by 25 per cent over the following months, and many communities struggled to recover from their losses. ‘Think about losing everything that makes you happy, because that is exactly what happens when someone spills oil and sprays dispersants on it,’ Rooks told Al Jazeera in 2012.39 ‘People who live here know better than to swim in or eat what comes out of our waters.’
This disaster was entirely preventable – if only BP and its partners had recognised the fallibility of the human brain and its capacity for error. No one is immune, and the dark stain in the Gulf of Mexico should be a constant reminder of the truly catastrophic potential of the intelligence trap.