18

UNKNOWN UNKNOWNS

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.

Donald Rumsfeld, US Department of Defense news briefing, 12 February 2002

RUMSFELD MADE HIS FAMOUS REMARK to advocate invading Iraq even though there was no evidence linking it to al-Qaeda’s terrorist attack on Manhattan’s World Trade Center towers in 2001. His ‘unknown unknowns’ were whatever else Iraq might be up to; he had no idea what that might be, yet it was being presented as a reason for military action.102 But in a less militaristic context, the distinction he was making might well have been the most sensible thing he said in his entire career. When we’re aware of our ignorance, we can try to improve our knowledge. When we’re ignorant but unaware of it, we may be living in a fool’s paradise.

Most of this book has been about how humanity took unknown unknowns and turned them into known unknowns. Instead of attributing natural disasters to the gods, we recorded them, pored over the measurements, and extracted useful patterns. We didn’t obtain an infallible oracle, but we did get a statistical oracle that forecast the future better than random guesswork. In a few cases, we turned unknown unknowns into known knowns. We knew how the planets were going to move, and we also knew why we knew that. But when our new oracle of natural law proved inadequate, we used a mixture of experiment and clear thinking to quantify uncertainty: although we were still uncertain, we knew how uncertain we were. Thus was probability theory born.

My six Ages of Uncertainty run through the most important advances in understanding why we’re uncertain, and what we can do about it. Many disparate human activities played a part. Gamblers joined forces with mathematicians to unearth the basic concepts of probability theory. One of the mathematicians was a gambler, and at first he used his mathematical knowledge to great effect, but eventually he lost the family fortune. The same story goes for the world’s bankers early this century, who so firmly believed that mathematics freed their gambles from risk that they also lost the family fortune. Their family was a bit bigger, though: the entire population of the planet. Games of chance presented mathematicians with fascinating questions, and the toy examples these games provided were simple enough to analyse in detail. Ironically, it turns out that neither dice nor coins are as random as we imagine; much of the randomness comes from the human who throws the dice or tosses the coin.

As our mathematical understanding grew, we discovered how to apply the same insights to the natural world, and then to ourselves. Astronomers, trying to obtain accurate results from imperfect observations, developed the method of least squares to fit data to a model with the smallest error. Toy models of coin tossing explained how errors can average out to give smaller ones, leading to the normal distribution as a practical approximation to the binomial. The central limit theorem then proved that a normal distribution is to be expected whenever large numbers of small errors combine, whatever the probability distribution of the individual errors might be.
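The astronomers’ method can be sketched in a few lines. This is a minimal illustration, not any historical calculation: the ‘true law’ y = 3x + 1 and the noise level are invented for the example, and the closed-form slope and intercept are the standard least-squares formulas using means, covariance, and variance.

```python
import random

random.seed(2)

# Hypothetical data: a 'true' law y = 3x + 1, observed with small random
# errors -- a stand-in for astronomers fitting imperfect observations.
xs = [i / 10 for i in range(100)]
ys = [3 * x + 1 + random.gauss(0, 0.5) for x in xs]

# Least squares: choose the slope and intercept that minimise the sum of
# squared residuals. The closed-form solution uses means and covariance.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
var_x = sum((x - mean_x) ** 2 for x in xs)
slope = cov_xy / var_x
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))
```

Despite every individual observation being wrong, the fitted slope and intercept land very close to 3 and 1: the errors largely cancel, just as the toy coin-tossing models predicted.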

Meanwhile, Quetelet and his successors were adapting the astronomers’ ideas to model human behaviour. Soon, the normal distribution became the statistical model par excellence. An entire new subject, statistics, emerged, making it possible not just to fit models to data, but to assess how good the fit was, and to quantify the significance of experiments and observations. Statistics could be applied to anything that could be measured numerically. The reliability and significance of the results were open to question, but statisticians found ways to estimate those features too. The philosophical issue ‘what is probability?’ caused a deep split between frequentists, who calculated probabilities from data, and Bayesians, who considered them to be degrees of belief. Not that Bayes himself would necessarily sign up to the viewpoint now named after him, but I think he’d be willing to take credit for recognising the importance of conditional probability, and for giving us a tool to calculate it. Toy examples show what a slippery notion conditional probability can be, and how poorly human intuition can perform in such circumstances. Its real-world applications, in medicine and the law, sometimes reinforce that concern.
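One classic toy example of that slipperiness, with hypothetical numbers chosen purely for illustration: a disease affects 1% of a population, and a test detects it 99% of the time but also gives false positives 5% of the time. Intuition says a positive test means you almost certainly have the disease; Bayes’ theorem says otherwise.

```python
# Conditional probability via Bayes' theorem (hypothetical numbers).
prior = 0.01           # P(disease)
sensitivity = 0.99     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

# Total probability of testing positive, with or without the disease.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes: P(disease | positive) = P(positive | disease) P(disease) / P(positive)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # → 0.167
```

A positive result gives only a one-in-six chance of actually having the disease, because the rare true positives are swamped by false positives from the healthy majority. This is exactly the base-rate trap that catches out doctors and courtrooms alike.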

Effective statistical methods, used on data from well-designed clinical trials, have greatly increased doctors’ understanding of diseases, and made new drugs and treatments possible by providing reliable assessments of their safety. Those methods go well beyond classical statistics, and some are feasible only because we now have fast computers that can handle huge amounts of data. The financial world continues to pose problems for all forecasting methods, but we’re learning not to place too much reliance on classical economics and normal distributions. New ideas from areas as disparate as complex systems and ecology are shedding new light and suggesting sensible policies to ward off the next financial crash. Psychologists and neuroscientists are starting to think that our brains run along Bayesian tracks, embodying beliefs as strengths of connections between nerve cells. We’ve also come to realise that sometimes uncertainty is our friend. It can be exploited to perform useful tasks, often very important ones. It has applications to space missions and heart pacemakers.

It’s also why we’re able to breathe. The physics of gases turned out to be the macroscopic consequences of microscopic mechanics. The statistics of molecules explains why the atmosphere doesn’t all pile up in one place. Thermodynamics arose from the quest for more efficient steam engines, leading to a new and somewhat elusive concept: entropy. This in turn appeared to explain the arrow of time, because entropy increases as time passes. However, the entropy explanation on a macroscopic scale conflicts with a basic principle on the microscopic scale: mechanical systems are time-reversible. The paradox remains puzzling; I’ve argued that it comes from a focus on simple initial conditions, which destroy time-reversal symmetry.
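A toy model makes the statistical point concrete. The sketch below is a version of the Ehrenfest urn (the numbers and details are illustrative, not from the text): all the ‘molecules’ start in one half of a box, and at each step one randomly chosen molecule switches sides. The deterministic-looking drift towards an even split is pure statistics.

```python
import random

random.seed(3)

# Ehrenfest urn: n molecules, all starting on the left side of a box.
# Each step, pick a molecule uniformly at random and move it to the
# other side. A molecule on the left is picked with probability left/n.
n = 100
left = n
history = []
for _ in range(5000):
    if random.randrange(n) < left:
        left -= 1   # a left-side molecule hops right
    else:
        left += 1   # a right-side molecule hops left
    history.append(left)

# After equilibration the occupancy hovers near n/2: the gas spreads out
# and stays spread out, even though every individual step is reversible.
late_average = sum(history[-1000:]) / 1000
print(round(late_average))
```

Nothing forbids all the molecules drifting back to one side; it’s just fantastically improbable once n is large, which is the statistical heart of why the atmosphere doesn’t pile up in a corner of the room.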

Round about the time when we decided that uncertainty isn’t the whim of the gods, but a sign of human ignorance, new discoveries at the frontiers of physics shattered that explanation. Physicists became convinced that in the quantum world, nature is irreducibly random, and often plain weird. Light is both a particle and a wave. Entangled particles somehow communicate by ‘spooky action at a distance’. Bell inequalities appear to guarantee that no deterministic theory of hidden variables can explain the quantum world: only probabilistic ones can.

About sixty years ago, mathematicians threw a spanner in the works, discovering that ‘random’ and ‘unpredictable’ aren’t the same. Chaos shows that deterministic laws can produce unpredictable behaviour. There can be a prediction horizon beyond which forecasts cease to be accurate. Weather-forecasting methods have changed completely as a result, running an ensemble of forecasts to deduce the most probable one. To complicate everything, some aspects of chaotic systems may be predictable on much longer timescales. Weather (a trajectory on an attractor) is unpredictable after a few days; climate (the attractor itself) is predictable over decades. A proper understanding of global warming and the associated climate change rests on understanding the difference.
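The prediction horizon can be seen in one of chaos theory’s simplest toy systems, the logistic map (the starting values and threshold below are illustrative choices, not from the text). Two trajectories that begin one part in a billion apart obey exactly the same deterministic rule, yet after a few dozen steps they bear no resemblance to each other.

```python
# Logistic map x -> r x (1 - x): deterministic, yet unpredictable.
def logistic(x, r=4.0):
    return r * x * (1 - x)

# Two starting values differing by one part in a billion.
x, y = 0.3, 0.3 + 1e-9

horizon = None
for step in range(1, 101):
    x, y = logistic(x), logistic(y)
    # The tiny initial difference roughly doubles each step; record the
    # first step at which the two 'forecasts' visibly disagree.
    if horizon is None and abs(x - y) > 0.1:
        horizon = step

print(horizon)
```

Because the error grows roughly exponentially, shrinking the initial uncertainty a thousandfold buys only about ten extra steps of useful forecast, which is why weather agencies run ensembles of slightly perturbed forecasts instead of chasing ever more precise initial measurements.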

Nonlinear dynamics, of which chaos is a part, is casting doubt on some aspects of Bell inequalities, which are under assault through a number of logical loopholes. Phenomena once thought to be characteristic of quantum particles are turning up in good old classical Newtonian physics. Maybe quantum uncertainty isn’t uncertain at all. Maybe Chaos preceded Cosmos, as the ancient Greeks thought. Maybe Einstein’s dictum that God doesn’t play dice needs to be revised: He does play dice, but they’re hidden away, and they’re not truly random. Just like real dice.

I’m fascinated by how the Ages of Uncertainty are still striking sparks off each other. You often find methods from different ages being used in conjunction, such as probability, chaos, and quantum, all rolled into one. One outcome of our long quest to predict the unpredictable is that we now know there are unknown unknowns. Nassim Nicholas Taleb wrote a book about them, The Black Swan, calling them black swan events. The 2nd-century Roman poet Juvenal wrote (in Latin) ‘a rare bird in the lands and very much like a black swan’, his metaphor for ‘non-existent’. Every European knew swans were always white, right up to the moment in 1697 when Dutch explorers found black ones, in quantity, in Australia. Only then did it become apparent that what Juvenal thought he knew wasn’t a known known at all. The same error dogged the bankers in the 2008 crisis: their ‘five-sigma’ potential disasters, too rare even to consider, turned out to be commonplace, but in circumstances they hadn’t previously encountered.
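The bankers’ mistake can be put in numbers. As a sketch (the Cauchy distribution here is a deliberately extreme stand-in for fat-tailed market returns, not a claim about any actual bank’s model), compare the chance of a ‘five-sigma’ event under the normal distribution with its chance under a heavy-tailed one.

```python
import math

# P(Z > 5) for a standard normal variable, via the complementary
# error function: an event this rare 'should' almost never happen.
normal_tail = 0.5 * math.erfc(5 / math.sqrt(2))

# P(X > 5) for a standard Cauchy variable -- an extreme heavy-tailed
# stand-in, where big deviations are entirely ordinary.
cauchy_tail = 0.5 - math.atan(5) / math.pi

print(f"normal: {normal_tail:.1e}, cauchy: {cauchy_tail:.3f}")
```

Under the normal model a five-sigma day is a once-in-several-million-days event; under the heavy-tailed model it happens several times a year. If the real world is fat-tailed and your model is normal, ‘impossibly rare’ disasters become black swans waiting to land.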

All six Ages of Uncertainty have had lasting effects on the human condition, and they’re still with us today. If there’s a drought, some of us pray for rain. Some try to understand what caused it. Some try to stop everyone making the same mistake again. Some look for new sources of water. Some wonder if we could create rain to order. And some seek better methods to predict droughts by computer, using quantum effects in electronic circuits.

Unknown unknowns still trip us up (witness the belated realisation that plastic rubbish is choking the oceans), but we’re beginning to recognise that the world is much more complex than we like to imagine, and everything is interconnected. Every day brings new discoveries about uncertainty, in its many different forms and meanings. The future is uncertain, but the science of uncertainty is the science of the future.