RULE TWO

Ponder your personal experience

In a bird’s eye view you tend to survey
everything . . . In a worm’s eye view you don’t have
that advantage of looking at everything. You
just see whatever is close to you.

MUHAMMAD YUNUS1

As I settled into presenting More or Less, I felt I had a dream gig. Debunking numerical nonsense in the news was fun, and by looking through the statistical telescope I was constantly seeing new and interesting things. There was, however, a snag: every time I travelled to the BBC studios to record the programme, I felt that my personal experience was contradicting some credible-seeming statistics.

Let me explain. The commute wasn’t the world’s most glamorous journey. To get to White City in west London from Hackney in east London, I’d scurry across a busy road, hop on to a busy double-decker bus, and watch the traffic as we moved slowly towards Bethnal Green, the underground station. If the bus had been busy, the tube train was busier. It made a can of sardines look roomy. I’d join a crowd of hopeful passengers on the platform, waiting for a Central Line train to arrive with enough space to squeeze on. That was by no means a certainty. We’d often have to wait for the second or third train before being able to wriggle between the less-than-delighted passengers who’d ridden in from further east. Getting a seat was out of the question.

It was this experience that challenged my view that numbers make the world add up, because when I looked at the statistics about how busy London’s public transport actually was, they flatly contradicted the evidence of my own eyes – and on warmer, sweatier days, my own nose. Those statistics showed that the average occupancy of a London bus was around twelve people, a tiny number compared with the sixty-two seats available on the double-decker bus I rode every morning.2 That felt completely wrong. Some days I felt there were more than twelve people within arm’s reach, let alone on the bus.

The tube occupancy rates made even less sense. According to Transport for London, the ‘crush capacity’ of one of those tube trains is more than a thousand people.3 But the average occupancy? Fewer than 130 people.4 What? You could lose 130 people on a Central Line tube train. You could squeeze them on to a single carriage and leave the other seven completely empty. And that’s not the occupancy at quiet moments – it’s the average. Was I really supposed to believe that these statistics – twelve people on a bus, 130 people on a train – reflected reality? Surely not, not when every single time I took a trip to work I could not only barely get on to the train, I would sometimes struggle to get on to the platform. The trains must be busier than the statistics showed.

In the studio, I was singing the praises of statistical thinking. But on the way to the studio, my everyday experience told me that these particular statistics must be wrong.

The contradiction between what we see with our own eyes and what the statistics claim can be very real. In the previous chapter we discovered that it is important not to be fooled by our personal feelings. As I’m a self-confessed data detective, you might expect me to say the same about our personal experiences, too. After all, who are you going to believe? A trusty spreadsheet, or your own lying eyes?

The truth is more complicated. Our personal experiences should not be dismissed along with our feelings, at least not without further thought. Sometimes the statistics give us a vastly better way to understand the world; sometimes they mislead us. We need to be wise enough to figure out when the statistics are in conflict with everyday experience – and in those cases, which to believe.

So what should we do when the numbers tell one story, and day-to-day life tells us something different? That’s what this chapter is about.

We might start by being curious about where the statistics come from. In the case of my commute, the numbers are published by Transport for London (TfL), the government organisation which oversees London’s roads and public transport. But how do the fine folk of TfL know for sure how many people are on a bus or a tube train? It’s a good question, and the answer is: they don’t. They can, however, make a good guess. Years ago, estimates were based on paper surveys, carried out by researchers standing at bus stops or in tube stations with a clipboard, or handing out questionnaires. Clearly this was a ponderous method, although it is unlikely that it introduced enough errors to explain the huge disparity between my experience and the official occupancy figures.

In any case, in the age of contactless payments it’s much easier to estimate passenger numbers. The vast majority of bus journeys are made by people tapping an identifiable contactless chip on a bank card, a TfL Oyster card or a smartphone. The data scientists at TfL can see where and when these devices are being used. They still have to make an educated guess as to when you get off the bus, but this is often possible – for example, they might see you make the return journey from the same area later. Or they might see that you had used your card on a connecting service: whenever I tapped into the tube network at Bethnal Green, one minute after the bus I’d been riding on arrived in the area, TfL could conclude with confidence that I’d been on the bus until the stop at Bethnal Green, but no further.

On the London Underground, people tap in and out, but TfL still does not know what route commuters took across the network, which often offers several plausible alternatives. TfL thus still doesn’t know how busy particular trains are. Again, they can make an educated guess, using occasional paper-based surveys to supplement their judgement as to how passengers are choosing to get around.

The estimates will soon be more accurate yet. On 8 July 2019, TfL switched on a system that uses wi-fi networks to measure how crowded different parts of the London Underground are. The more phones trying to connect to wi-fi at a particular pinch point, the busier that part of the station. This system promises to let TfL spot overcrowding and other problems in real time. (I spoke to the data team at TfL the day after this system was switched on. They were adorably excited.)5

The statistics, then, are at least plausible. We can’t simply dismiss them as mistaken.

The next step is to look for reasons why our personal experience might be so different. In the case of my commute, the obvious starting point is that I was travelling at a busy time of day, on one of the busiest sections of the tube network. No wonder it was crowded.

But this particular rabbit-hole goes a little deeper. It’s perfectly possible that most trains aren’t crowded, and yet most people travel on crowded trains. For an extreme illustration, imagine a hypothetical train line with ten trains a day. One rush-hour train has a thousand people crammed on to it. All the other trains carry no passengers at all. What’s the average occupancy of these trains? A hundred people – not far off the true figure on the London Underground. But what is the experience of the typical passenger in this scenario? Every single person rode on a crowded train.

The real situation on the London Underground isn’t as extreme. There aren’t many completely empty trains, but trains do sometimes run with very few passengers on them, particularly when they’re running counter to the flow of commuters. Whenever they do, very few passengers will be around to witness it. Those statistics are telling the truth – but not the whole truth.
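The gap between the two averages in the ten-train thought experiment is easy to verify for yourself. Here is a minimal sketch using the hypothetical numbers above (not real TfL data): the occupancy of the average train, versus the occupancy experienced by the average passenger.

```python
# Hypothetical illustration: ten trains, one carrying 1,000 passengers
# in the rush hour, the other nine running completely empty.
occupancies = [1000] + [0] * 9

# The bird's eye view: occupancy of the average TRAIN.
train_average = sum(occupancies) / len(occupancies)

# The worm's eye view: occupancy of the train the average PASSENGER is
# riding. Each passenger reports the crowding of their own train, so a
# train with n passengers contributes n reports of the value n.
total_passengers = sum(occupancies)
passenger_average = sum(n * n for n in occupancies) / total_passengers

print(train_average)      # 100.0  -- the figure an official report might show
print(passenger_average)  # 1000.0 -- what every actual passenger experiences
```

The same passenger-weighted calculation is, in effect, what TfL moved towards when it began reporting the situation of passengers rather than of trains.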

Of course, there are alternative ways to gauge the problem of overcrowding. Rather than measure the occupancy of the average train, you could measure the situation faced by the average passenger: out of a hundred passenger journeys, how many are on overcrowded trains? That would be a better way to measure the passenger experience – and indeed TfL are now refocusing their data collection and reporting to produce statistics that reflect the situation not of the trains, but of the passengers.

Yet there’s no single objective measure of how busy the public transport network is. As a passenger, it seems to me that all the buses I’m on are well used. But TfL’s statistics show, truthfully, that many buses are driving around largely empty. This is because buses don’t just appear in the busiest areas by magic; when they reach the end of the route they have to turn round and go back again. TfL care about the low average occupancy of buses because those buses cost money, take up space on the roads, and emit pollution. The average occupancy is therefore an important metric for them.

In short, my own eyes told me something important and true about London’s transport network. But the statistics told me something else, something equally important and equally true – and something I couldn’t have known in any other way. Sometimes personal experience tells us one thing, the statistics tell us something quite different, and both are true.

That’s not always the case, of course. Think back to the discovery that heavy cigarette smoking increased the risk of lung cancer by a factor of sixteen. Many people would have found reason from their personal experience to be sceptical of this finding. Perhaps your chain-smoking nonagenarian grandma is as fit as a fiddle, whereas the only person you know who died from lung cancer is your next-door neighbour’s uncle and he never smoked a cigarette in his life.

On the face of it, this seems no different to the experience of my daily commute appearing to contradict TfL’s statistics. But on closer inspection, in this case we do find reason to discard our personal experience and trust the statistical view. Though a factor of sixteen is hardly a small effect, lung cancer is itself scarce enough to confuse our intuitions. The world is full of patterns that are too subtle or too rare to detect by eyeballing them, and a pattern doesn’t need to be very subtle or rare to be hard to spot without a statistical lens.

This is true of many medical conditions and treatments. When we feel bad – anything from a headache to depression, a sore knee to an unsightly spot – we seek solutions. My wife recently suffered from a sharp pain in her shoulder whenever she raised her arm; it was bad enough to make it hard for her to get dressed or reach something on a high shelf. After a while, she went to a physiotherapist, who diagnosed the problem and prescribed some uncomfortable exercises, which my wife diligently performed every day. After a few weeks, she told me, ‘I think my shoulder is getting better.’

‘Wow – looks like the physiotherapy worked!’ I said.

‘Maybe,’ said my wife, who can spot me setting a statistical trap a mile off. ‘Or maybe it would have got better by itself anyway.’

Indeed. From my wife’s point of view, it didn’t really matter. What she wanted was for her shoulder to heal, and the evidence of her own senses was the only relevant yardstick. But for the question of whether the exercises had caused the recovery, her personal experience wasn’t much use – and from the point of view not of my wife but of future shoulder-pain sufferers, it’s the question of causation that matters. We need to know whether those exercises tend to help, or whether there might be a better approach.

The same is true of any other treatment for any other problem, whether it’s diet, therapy, exercise, antibiotics or painkillers: it’s nice if we feel better, but future generations need to know whether we feel better because of the steps we’ve taken, or whether they were empty rituals that did no good, cost money, wasted time and produced unwelcome side effects. For this reason, we rely on randomised trials of any treatment, ideally compared against the best available treatment, or against a fake treatment called a placebo. It’s not that our personal experience is irrelevant, it’s that it can’t give us the information we need to help those who come after us.

When personal experience and statistics seem to be in conflict, a closer look at the situation may reveal particular reasons why personal experience is likely to be an unreliable guide. Consider the idea that the vaccination against measles, mumps and rubella (MMR) increases a child’s risk of autism. It doesn’t, but fewer than half of us are convinced of that.6

We can say with confidence that there is no such link thanks to the statistical perspective. Since autism is not common, we need to compare the experiences of many thousands of children who have received the vaccination, and those who have not. One major study, in Denmark, did exactly that. It followed 650,000 children. Most of them received an MMR vaccine at the age of fifteen months, and a follow-up at four years, but about 30,000 did not. About 1 per cent of children were then diagnosed with autism, and that was true both of the vaccinated and the unvaccinated children. (The unvaccinated children, of course, were at higher risk of contracting these dangerous diseases.)7

So why do many people remain sceptical? Part of the answer is a sad history of reckless publishing around the issue. But in part the doubts persist because many people have heard of children whose autism was diagnosed soon after an MMR vaccination, and whose parents think the MMR was to blame. Imagine taking your child for the vaccination, and soon afterwards receiving a diagnosis of autism. Would you connect the two? It would be hard not to wonder.

In fact, the prevalence of such anecdotes is not surprising because autism tends to be diagnosed at one of two ages: early signs of the condition are observable by paediatric nurses at around the age of fifteen months; if not picked up then, diagnosis often follows a child starting school.8 And the two doses of the MMR vaccine are routinely given close to these ages. When we find a convincing explanation for why our personal experience sits uneasily with the statistical view, that should reassure us enough to set aside our doubts and trust the numbers.

A less fraught example is our relationship with television and other media. Many people on television are richer than you and me. Almost by definition, they are more famous than you and me. It’s very likely that they are better-looking than you and me; they are certainly better-looking than me (I am on the radio for a reason). When we reflect on how attractive, famous and rich the typical person is, we can’t help but have our assessment skewed by the fact that many of the people we know, we know through the media; they are attractive, famous and rich. Even if, on reflection, we realise that TV personalities aren’t a random sample of the global population, it’s hard to shake the feeling that they are.

Psychologists have a name for our tendency to confuse our own perspective with something more universal: it’s called ‘naive realism’, the sense that we are seeing reality as it truly is, without filters or errors.9 Naive realism can lead us badly astray when we confuse our personal perspective on the world with some universal truth. We are surprised when an election goes against us: everyone in our social circle agreed with us, so why did the nation vote otherwise? Opinion polls don’t always get it right, but I can assure you they have a better track record of predicting elections than simply talking to your friends.

Naive realism is a powerful illusion. Consider the findings of a survey from the opinion pollster Ipsos MORI. MORI asked nearly 30,000 people across thirty-eight countries about a range of social issues, finding them – and, presumably, most of the rest of us – badly out of step with what credible statistics showed:10

(a) We’re wrong about the murder rate. We think it’s been rising since the year 2000. In most of the countries surveyed, it’s been falling.

(b) We think deaths from terrorism have been higher in the past fifteen years than in the fifteen years before; they’re down.

(c) We think that 28 per cent of prisoners are immigrants. Ipsos MORI reckons the true rate across the countries surveyed was 15 per cent.

(d) We think that 20 per cent of teenage girls give birth each year. This number strains biological credibility when you think about it. An eighteen-year-old has been a teenager for six years, so if each year she has a 20 per cent chance of having a baby, most eighteen-year-olds are mothers. (Those who aren’t are balanced by the eighteen-year-olds who are mothers several times over.) Look around; is that really true? The correct figure, says Ipsos MORI, is that 2 per cent of teenage girls give birth each year.*

(e) We think that 34 per cent of people have diabetes; the true figure is 8 per cent.

(f) We think that 75 per cent of people have a Facebook account. The correct figure at the time of asking, 2017, was 46 per cent.
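The arithmetic behind point (d) takes nothing more than a quick back-of-the-envelope calculation. A minimal sketch, treating the six teenage years as six independent chances (a simplification, but enough to show the believed figure cannot match everyday experience):

```python
# Sanity check on the survey belief that 20% of teenage girls give
# birth each year. An eighteen-year-old has had six teenage years of
# exposure to that supposed 20% annual chance.
p_birth_per_year = 0.20
years_as_teenager = 6

# Chance that an eighteen-year-old would have had at least one child.
p_at_least_one = 1 - (1 - p_birth_per_year) ** years_as_teenager
print(round(p_at_least_one, 2))  # 0.74 -- most eighteen-year-olds would be mothers

# At the 50% annual rate some respondents believed, the expected number
# of births by eighteen would be three children per young woman.
expected_births_at_50pct = 0.5 * years_as_teenager
print(expected_births_at_50pct)  # 3.0
```

Look around: neither implication is remotely true, which is how we know the believed figure must be wildly wrong even before checking the statistics.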

Why are our perceptions of the world so mistaken? It’s hard to be sure, but a plausible first guess is that we’re getting our impressions from the media. It’s not that a reputable newspaper or TV channel would actually give us the wrong data – although it has been known. The problem is that the news carries tales of lottery wins and fairy-tale romances, terrorist atrocities or gruesome assaults by strangers, and of course the latest trends, which are often not nearly as popular as they seem. None of these stories reflects everyday life; all of them are viscerally memorable and seem to take place in our living rooms. We form our impressions accordingly.

As the great psychologist Daniel Kahneman explained in Thinking, Fast and Slow: ‘When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.’ Rather than asking ‘Are terrorists likely to kill me?’ we ask ourselves, ‘Have I recently seen a news report about terrorism?’ Instead of saying, ‘Out of all the teenage girls I know, how many are already mothers?’ we say, ‘Can I think of a recent example of a news story about teenage pregnancy?’

These news reports are data, in a way. They’re just not representative data. But they certainly influence our views of the world. To adapt Kahneman’s terminology, they’re ‘fast statistics’ – immediate, intuitive, visceral and powerful. ‘Slow statistics’, those based on a thoughtful gathering of unbiased information, aren’t the ones that tend to leap into our minds. But as we shall see, there are ways to consume more of the slow stuff and have a more balanced diet of information as a result.

So far we’ve seen cases in which the ponderous-and-careful slow statistics are more trustworthy than the quick-and-dirty fast statistics, and situations in which both give us a useful angle on the world. But are there also cases where we should trust our personal impressions more than the data?

Yes. There are certain things that we cannot learn from a spreadsheet.

Consider Jerry Z. Muller’s book, The Tyranny of Metrics. It’s 220 pages long. The average chapter is 10.18 pages long and contains 17.76 endnotes. There are four cover endorsements and the book weighs 421 grams. But of course none of these numbers tells us what we want to know – which is what does the book say, and should we take it seriously? To understand the book you will need to read it, or trust the opinion of someone who has.

Jerry Muller takes aim at the problem with a certain kind of ‘slow statistics’ – those used as management metrics or performance targets. Statistical metrics can show us facts and trends that would be impossible to see in any other way, but often they’re used as a substitute for relevant experience, by managers or politicians without specific expertise or a close-up view. For example, if a group of doctors collect and analyse data on clinical outcomes, they are likely to learn something together that helps them to do their jobs. But if the doctors’ bosses then decide to tie bonuses or professional advancement to improving these numbers, unintended consequences will predictably occur. For example, several studies have found evidence of cardiac surgeons refusing to operate on the sickest patients for fear of lowering their reported success rates.11

In my book Messy, I spent a chapter discussing similar examples. There was the time the UK government collected data on how many days people had to wait for an appointment when they called their doctor, which is a useful thing to know. But then the government set a target to reduce the average waiting time. Doctors logically responded by refusing to take any advance bookings at all; patients had to phone up every morning and hope they happened to be among the first to get through. Waiting times became, by definition, always less than a day.

What happened when a widely consulted ranking of US colleges, published by US News and World Report, rewarded more selective institutions? Over-subscribed universities scrambled to attract fresh applicants that they could then reject, and thereby appear to be more selective.

Then there is the notorious obsession with the ‘body count’ metric, which was embraced by US Defense Secretary Robert McNamara during the Vietnam War. The more of the enemy you kill, reasoned McNamara, the closer you are to winning. This was always a dubious idea, but the body count quickly became an informal metric for ranking units and handing out promotions, and was therefore often exaggerated. And since it was sometimes easier to count enemies who were already dead than to kill anyone new, counting bodies became a military objective in itself. It was risky, and it was useless, but it responded to the skewed incentive McNamara had set.

This episode shows that statistics aren’t always worth gathering – but you can appreciate why McNamara wanted them. He was trying to understand and control a distant situation, one he had no experience of as a soldier. A few years ago I interviewed General H. R. McMaster, an expert on the mistakes made in Vietnam. He told me that the army used to believe that ‘situational understanding could be delivered on a computer screen’.

It could not. Sometimes you have to be there to understand – especially when a situation is fast-moving or contains soft, hard-to-quantify details, as is typically the case on the battlefield. The Nobel laureate economist Friedrich Hayek had a phrase for the kind of awareness it’s hard to capture in metrics and maps: the ‘knowledge of the particular circumstances of time and place’.

Social scientists have long understood that statistical metrics are at their most pernicious when they are being used to control the world, rather than try to understand it. Economists tend to cite their colleague Charles Goodhart, who wrote in 1975: ‘Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.’12 (Or, more pithily: ‘When a measure becomes a target, it ceases to be a good measure.’) Psychologists turn to Donald T. Campbell, who around the same time explained: ‘The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.’13

Goodhart and Campbell were on to the same basic problem: a statistical metric may be a pretty decent proxy for something that really matters, but it is almost always a proxy rather than the real thing. Once you start using that proxy as a target to be improved, or a metric to control others at a distance, it will be distorted, faked or undermined. The value of the measure will evaporate.

In 2018, I visited China with my family. The trip taught me that I shouldn’t need to favour either fast or slow statistics; the deepest understanding comes from melding them together.

The slow statistics tell a familiar story – familiar, at least, to economics geeks like me. Real income per person in China has increased ten-fold since 1990. Since the early 1980s, the number of extremely poor people there has fallen by more than three quarters of a billion – well over half the entire population of the country. China consumed more cement in a recent three-year period than the United States used in the entire twentieth century. On paper, it is the most dramatic explosion of economic activity in human history.

Yet seeing it with your own eyes is another experience entirely. Nothing in the statistics truly prepared me for a journey across Guangdong, the southern province of China that has been at the forefront of this growth. We started in Hong Kong – the ultimate high-rise city – and walked into its mainland twin, Shenzhen. Then in the shadow of the Ping An skyscraper, which dwarfs the Empire State Building, we caught a bullet train across the province.

Where London’s tower blocks often stand alone or in groups of two or three, Shenzhen will have a cluster of a dozen identical monoliths, crammed with apartments, shoulder to shoulder. Next to that cluster, another dozen of a different design. Then another, and another. Here and there, in the distance across the haze, would be a Manhattan-esque cluster of larger skyscrapers. The towers marched on and on, all the way (or so it seemed to me) to the city of Guangzhou – forty-five minutes or so of high-speed travel through an infinite vista of concrete.

We ended the day much deeper into China, in the picture-postcard landscape of Yangshuo. But despite the idyllic surroundings, I couldn’t sleep. The endless tower blocks scrolled through my mind. What if we had lost our six-year-old son in the middle of Guangdong? And my sleepless anxieties flitted back and forth between my family and the world. So many people. So much concrete. How could the planet possibly survive this?

Of course, there was nothing in this experience to contradict the economic data; the two perspectives on China’s growth were perfectly complementary. But they felt very different. The ‘slow statistics’ required me to reflect and calculate, taking some effort to process the numbers and follow the logic of what they implied for modern China. The train journey delivered ‘fast statistics’ instead. It tapped into a different and more intuitive way of thinking, as I swiftly and automatically formed my impressions, compared Guangzhou to the cities I knew back home, and anxiously sensed the danger to those I love.*

Both ways of understanding the world have their own advantages, and their own traps. Muhammad Yunus, an economist, microfinance pioneer and winner of the Nobel Peace Prize, has contrasted the ‘worm’s eye view’ of personal experience with the ‘bird’s eye view’ that statistics can provide. The worm and the bird see the world very differently, and Professor Yunus is right to emphasise the advantage of seeing it up close.

But birds see a lot, too. Professor Yunus, paying close attention to the lives of poor women around him in Bangladesh, saw an opportunity to improve their lives by giving them access to less expensive loans, unleashing a generation of microentrepreneurs. But that up-close intuition needs to be cross-checked with some statistical rigour. The microcredit schemes that Yunus did so much to popularise have now been examined more thoroughly, using randomised trials in which otherwise similar people applying for small loans are either approved or rejected at random. (This is like a clinical trial in which some patients get a new drug while others get a placebo.) These experiments tend to find that the benefits of receiving a small loan are quite modest, and temporary. Apply the same rigorous test to other approaches – for example, giving microentrepreneurs small cash payments along with advice from a mentor – and you find that the cash-and-mentor scheme is more likely to boost the income from these tiny businesses than providing loans.14

Statistical evidence can feel dry and thin. It doesn’t touch us in the same memorable and instinctive way as our personal experience. Yet our personal experience is limited. My trip to China took in tourist spots, airports and high-speed rail links. It would be a serious mistake to believe I saw everything that mattered.

There is no easy answer to the balance between the bird’s eye view and the worm’s eye view, between the broad and rigorous but dry insight we get from the numbers and the rich but parochial lessons we learn from experience. We must simply keep reminding ourselves what we’re learning and what we might be missing. In statistics, as elsewhere, hard logic and personal impressions work best when they reinforce and correct each other. Ideally we’ll find a way to combine the best of both.

One effort to do that has been developed by Anna Rosling Rönnlund of Gapminder, a Swedish foundation that fights misconceptions about global development. She aims to close the gap between fast and slow statistics – between the worm’s eye view and the bird’s eye view – using an ingenious website, ‘Dollar Street’.

On Dollar Street you can compare the life of the Butoyi family in Makamba, Burundi, with the Bi family from Yunnan, China. Imelda Butoyi is a farmer. She and her four children get by on $27 a month. Bi Hua and Yue Hen are both entrepreneurs. Their family enjoys an income of $10,000 a month. It’s no surprise that life on $27 a month is very different from life on $10,000 a month. But the numbers alone don’t convey the difference in a way that we can intuitively feel, or compare to our own lives.

Dollar Street attempts to fix that, as far as is possible through the medium of a computer screen, by presenting short films and thousands of photographs of different rooms and everyday objects – a cooking stove; a source of light; a toy; somewhere to store salt; a phone; a bed. In each home about 150 photographs are taken of these everyday places and things – if they exist – and they’re portrayed in the same way as far as is possible. The images speak with great clarity.

The photographs of Imelda Butoyi’s home give a much more vivid impression than the precise-yet-thin statistic that she makes $27 a month. The house has mud walls, and a roof made of straw and mud. Light comes from an open fire. The toilet is a plank over a hole in the ground outside. The floor is packed earth. The children’s toys? There are just a couple of picture books.

The Bi family home, in contrast, boasts a modern shower, a flush lavatory, a fancy hi-fi and a flat-screen TV. Their car is out front. The photographs show everything clearly, including the fact that the kitchen is surprisingly cramped, with just a couple of electric hobs for cooking.

‘We can use photos as data,’ says Rosling Rönnlund.15 What makes them useful data rather than random and potentially misleading is that they’re sortable, comparable, and connected to the numbers. The site allows you to filter so that you see only photographs of low-, middle- or high-income households. Or only photographs from a particular country. Or only photographs of a particular item – such as toothpaste or toys.

It’s easy, for example, to look at all the images of cooking from very poor households and see that the standard method around the world is an iron pot hanging over an open fire. Wealthier households all use push-button appliances delivering controllable electricity or gas. Regardless of where you live, if you’re poor you’re likely to sleep on the floor in the same room as other family members. If you’re rich you’ll have privacy and a comfortable bed. Much of what we think of as cultural differences turn out to be differences in income.

‘Numbers will never tell the full story of what life on Earth is all about,’ wrote Hans Rosling, despite being the world’s most famous statistical guru. (Hans was Anna Rosling Rönnlund’s father-in-law.) Hans was right, of course. Numbers will never tell the full story – which is why, as a doctor and academic, he travelled so widely, and why he so expertly wove stories to go alongside his statistical evidence. But the stories the numbers do tell matter.

What I love about Dollar Street is that it successfully combines statistics, fast and slow – the worm’s eye view and the bird’s eye view. It shows us everyday images that we instinctively understand and remember. We empathise with people all round the world. But we do so in a clear statistical context – one that can show us life at $27 a month, or $500 a month, or $10,000 a month, and can make it clear how many people live in each situation.

If we don’t understand the statistics, we’re likely to be badly mistaken about the way the world is. It is all too easy to convince ourselves that whatever we’ve seen with our own eyes is the whole truth; it isn’t. Understanding causation is tough even with good statistics, but hopeless without them.

And yet, if we understand only the statistics, we understand little. We need to be curious about the world that we see, hear, touch and smell as well as the world we can examine through a spreadsheet.

My second piece of advice, then, is to try to take both perspectives – the worm’s eye view as well as the bird’s eye view. They will usually show you something different, and they will sometimes pose a puzzle: how could both views be true? That should be the beginning of an investigation. Sometimes the statistics will be misleading, sometimes it will be our own eyes that deceive us, and sometimes the apparent contradiction can be resolved once we get a handle on what is happening. Often that will require us to ask a few smart questions – including the question I’ll introduce in the next chapter.

___________

* This is a reminder of how useful it is to stop and think. There is no advanced mathematics required to realise that the 20 per cent figure simply cannot be squared with our everyday experience. In some countries, people say they believe that 50 per cent of teenage girls give birth each year, which would imply young women typically enter adulthood with three children of their own.

* Admirers of Daniel Kahneman and his book Thinking, Fast and Slow may recognise what he calls ‘system 1’ and ‘system 2’ here.