5

The Truth About Intelligence

In 1905 seventeen-year-old Indian mathematician Srinivasa Ramanujan ran away from home. Failing almost every subject other than maths, he lost his scholarship at the prestigious Government Arts College in Kumbakonam, a town in Tamil Nadu, India. His interest in mathematics had become an obsession, and he spent nearly all his free time working through theorems and discovering new ones intuitively. Two years later, he gave up on formal education entirely, pursuing maths in private while living on the verge of starvation.

In 1909, hoping to gain some kind of employment, Ramanujan visited the Indian Mathematical Society. Its president, a man named Dewan Bahadur R. Ramachandra Rao, described the encounter:

A short uncouth figure, stout, unshaved, not overclean, with one conspicuous feature – shining eyes – walked in with a frayed notebook under his arm… [He] showed me some of his simpler results. These transcended existing books… he led me to elliptic integrals and hypergeometric series and at last his theory of divergent series not yet announced to the world converted me. I asked him what he wanted. He said he wanted a pittance to live on so that he might pursue his researches.1

Ramanujan dazzled and baffled his peers. Exhilarated, they suggested that he send his work to Professor Godfrey Harold Hardy of Trinity College, Cambridge, himself an eccentric maths prodigy who, as a child, would write numbers up to millions and amuse himself in church by factorising the hymn numbers. When Hardy opened Ramanujan’s letter, one morning in early 1913, the theorems he saw appeared wild and unthinkable, surely the musings of a crank. But it didn’t take long for Hardy to realise the truth: Ramanujan was a man of untrammelled genius. His formulas ‘defeated me completely’, Hardy later admitted. ‘I had never seen anything in the least like this before… they could only be written down by a mathematician of the highest class.’2

Hardy and Ramanujan formed a legendary partnership, transforming twentieth-century mathematics and contributing to fields virtually unheard of in their lifetime, including quantum computing and black hole research. More than anything, their story demonstrates that of all the changes the human brain has experienced, its increased intelligence is perhaps the most spellbinding.

When we think about how humans became intelligent, we often picture small bands of hunter-gatherers no brighter than Homer Simpson gradually transitioning to a lifestyle of farming, settlement and industry. But the agricultural and industrial revolutions occurred long after the brain changes that made them possible.

In order truly to understand intelligence we must go back to its origins. Specifically, we must study intelligence in its proper context: that of a shifting African climate filled with uncertainty and hardship. When the African climate see-sawed between wet and dry periods, as it did 4 million years ago, ancestors such as Australopithecus needed the wits to adapt to their immediate surroundings. When the African landscape shifted towards open savannah, as it did 2 million years ago, ancestors such as Homo ergaster needed the shrewdness to make sophisticated stone tools and plan long-distance hunts. And when the African climate became too dry for human habitation, as it did 200,000 years ago, Homo sapiens needed the foresight temporarily to leave Africa and spread across the world.

The earliest evidence of human intelligence dates back roughly 2.5 million years, when Homo habilis used what are now considered the oldest human inventions: a set of stone tools discovered at Olduvai Gorge in Tanzania. Known simply as the Oldowan, they are rough, all-purpose tools, pointed at one end and fashioned by smashing two rocks together to release sharp flakes of stone. Archaeologists call them choppers. Oldowan tools were probably used by Homo habilis to split nuts and fruit and to butcher small animals. The fact that they were all-purpose tools suggests that Homo habilis were not particularly intelligent. A sophisticated stone tool is one built with a particular task in mind, such as a hammer or scraper.

In all likelihood Homo habilis had an intellect that cognitive scientists call pre-representational, meaning they were clever enough to invent tools based on their sensory experience of the world but not clever enough to engage in derivative or abstract thinking. Add to that a 600 cm³ brain (less than half the size of our own) and an underdeveloped memory, and things start to look rather bleak for Homo habilis. As evolutionary psychologists Liane Gabora and Anne Russon write,

Human intelligence had its first real growth spurt when Homo erectus appeared, some 1.8 million years ago. The brains of Homo erectus were 25 per cent larger than those of Homo habilis and 75 per cent the size of our own. In consequence, a variety of distinctly human behaviours emerged. They made sophisticated stone hand-axes, including the Acheulean hand-axe, a symmetrical blade that required several stages of production. They were the first to use fire and possibly the first to cook. They may even have been the first to use rafts or other seagoing vessels to leave Africa for Europe and Asia. All of which is far more significant than the cognitive achievements of earlier hominins (or indeed of other primates), for they involve the ability to reason, plan and grasp deep truths about the world.

It’s also thought that Homo erectus possessed brain regions for language. While such language was probably very simple (just a few words with no linguistic structure), it does indicate a mode of thinking unprecedented in the hominin lineage. Scholars call this mode mimetic cognition (from the Greek mimesis, ‘imitation’). The brain’s capacity for mimetic cognition meant that Homo erectus were no longer trapped by their immediate sensory experience, as Homo habilis had been. Instead, they could act out events that happened in the past, rehearsing and learning and planning for the future in a truly predictive fashion. Gabora and Russon explain:

[Mimetic cognition] enabled hominins to engage in a stream of thought. One thought or idea evokes another, revised version of it, which evokes yet another, and so forth recursively. In this way, attention is directed away from the external world towards one’s internal model of it.

Then something remarkable happened. Somewhere between 200,000 and 30,000 years ago, intelligence skyrocketed with the ascendancy of Homo sapiens. Described by anthropologists as the Human Revolution, this period saw more innovation than the previous 7 million years of human evolution. Our species developed language, music, religion, art and trade. We made the seemingly impossible journey to Australia, and combined our collective intelligence constantly to improve our technology in a process that scholars call the ratchet effect: the snowballing of progress that occurs when good ideas jump from one mind to another, until the whole population is ratcheted up. What’s more, we invented perhaps the defining feature of human existence: metacognition – the power to think about thinking.

How do we explain the surge of intelligence that allowed Homo sapiens to conquer the world? Neuroimaging offers a clue. The cerebral cortex is so large that it had to wrinkle like a crumpled ball of paper to fit inside the skull. Compared to other animals, humans have more neurons in the cortex and a greater number of fibres connecting them to different brain regions. This interconnected network – which neuroscientists call the connectome – may be the key to our intelligence because it allows the rapid distribution of information around the brain. By linking regions involved in tool-making, imitation, social cognition, memory, causal reasoning and the senses, the brain could flexibly respond to changes in the environment. In other words, it could learn.

Although learning exists in the simplest animals, human learning is thought to have driven our brains’ evolution forward by invoking something called the Baldwin effect. Named after the psychologist James Mark Baldwin, who described it in 1896, the theory says that what individuals learn during their lifetimes can steer the genetic evolution of the whole species.

If, for example, an individual discovers a clever new strategy to feed the tribe, that discovery will offer any individual nearby a direct advantage. As more and more individuals thrive using the new strategy, they will reproduce more often and thus increase the chances of future generations making similar or even greater conceptual leaps. In this way, learning leaves its mark on our genome not because knowledge acquired during life is inherited directly, but because individuals who learn well survive and reproduce, passing on the predispositions that made their learning possible. As the philosopher Daniel Dennett notes,

To prove that the Baldwin Effect was more than just an attractive theory, Geoffrey Hinton and Steven Nowlan, then computer scientists at Carnegie Mellon University in Pittsburgh, ran a computer simulation to show it in action. Using a genetic algorithm in which individuals could also learn by trial and error, they showed that over the generations learning guides a population towards the right genetic make-up far faster than evolution manages on its own.5 They concluded that the Baldwin Effect is an important process that allows humans to change the environment in which they evolve.
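The logic of their simulation is simple enough to sketch in a few dozen lines of code. The following Python toy is modelled loosely on Hinton and Nowlan’s published setup – each individual carries twenty genetic ‘switches’ that are innately correct, innately wrong or left plastic, and plastic switches can be set by lifetime trial-and-error learning – but the parameter values and implementation details here are illustrative assumptions, not a reconstruction of their original program.

```python
import random

L = 20          # genetic 'switches' per individual (Hinton & Nowlan used 20)
POP = 1000      # population size
TRIALS = 1000   # learning trials per lifetime
GENS = 50       # generations to simulate

def random_genome():
    # Allele 1 = innately correct, 0 = innately wrong, None = plastic ('?').
    # Initial proportions 0.25 / 0.25 / 0.5, as in the original model.
    return [random.choice([1, 0, None, None]) for _ in range(L)]

def fitness(genome):
    # Any innately wrong switch means the target can never be learned.
    if 0 in genome:
        return 1.0
    plastic = genome.count(None)
    p = 0.5 ** plastic          # chance of guessing all plastic switches right
    for trial in range(TRIALS):
        if random.random() < p:
            # Finding the target earlier in life earns a bigger fitness bonus.
            return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
    return 1.0

def crossover(a, b):
    cut = random.randrange(1, L)
    return a[:cut] + b[cut:]

pop = [random_genome() for _ in range(POP)]
for gen in range(GENS):
    scores = [fitness(g) for g in pop]
    total = sum(scores)

    def pick():                  # fitness-proportional (roulette-wheel) selection
        r = random.uniform(0, total)
        for g, s in zip(pop, scores):
            r -= s
            if r <= 0:
                return g
        return pop[-1]

    pop = [crossover(pick(), pick()) for _ in range(POP)]
    correct = sum(g.count(1) for g in pop) / (POP * L)
    plastic = sum(g.count(None) for g in pop) / (POP * L)
    print(f"generation {gen:2d}: correct {correct:.2f}, plastic {plastic:.2f}")
```

In a typical run of this sketch, the share of innately correct switches climbs generation by generation – learning, in effect, showing evolution where to go.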

Putting it another way, intelligence didn’t just encourage the proliferation of our species. It also encouraged the proliferation of culture. As humans became increasingly reliant on the cultural changes that enabled them to cooperate effectively – changing social patterns, changing social technologies, changing social goals – they became increasingly able to replace genetic evolution with cultural evolution. The cultural skills that humans acquired are why we’ve advanced more in the last 200,000 years than in the previous 7 million. They set us on a trajectory of rapid technological innovation – the fruits of which, if tech experts are to be believed, will either save or destroy humankind.

It’s also possible that other influences, working in tandem with cultural evolution, played a role in spurring our cognitive abilities to great heights. Among the most interesting is the idea that early humans may have eaten psychedelic mushrooms. Dubbed the ‘Stoned Ape’ theory and developed by the ethnobotanist Terence McKenna in 1992, the idea carries more intellectual weight than one might think. Around 50,000 years ago, natural climate change forced early humans such as Homo erectus to abandon life in the forest canopy and take their chances out in the open. Had they remained in the canopy, humans today would be fruit-eating, insectivorous primates, and our evolutionary destiny would have been a lifestyle similar to that of the orangutans of Borneo and the lemurs of Madagascar. But as the African continent dried up, our ancestors found themselves flung from a bountiful supply of fruit and insects to an unforgiving desert with scant food and deadly predators. To survive, they did what they had always done: foraged for nutritious plants and juicy insects wherever they could find them. These insects were often found in the dung of wild cattle, and growing alongside it was a variety of mushroom which, little did they know, contained the psychedelic drug psilocybin – the ‘magic mushroom’.

It’s possible that early humans, starving and desperate in the arid grasslands, ate these mushrooms and gradually changed the chemistry of their brains. Psilocybin improves something called ‘edge detection’: the brain’s ability to determine the shape and profile of surrounding objects. More amazingly, it enlarges brain regions linked to learning, attention and creativity, and even increases the number of connections between them. The effect would have made our ancestors better hunters and survivors, swifter to ambush an antelope and quicker to spot a lion in the bush.

Powerful stuff. But is it true? We know that psychoactive drugs were used in ancient cultures, including opium extracted from poppies in Neolithic Italy, hallucinogenic cacti harvested in around 8600 BC in Peru and marijuana found at Bronze Age ceremonial sites in the Karakum desert of Turkmenistan.6 We also know that the remains of a prehistoric woman nicknamed the Red Lady (found in a grave in the El Mirón cave in Cantabria, Spain, and thought to belong to a thirty-five-year-old woman buried some 20,000 years ago) contained spores of several mushroom species lodged in her teeth.7 This is tenuous evidence, to be sure, but the fact that psychedelics have such a potent effect on the brain means we should certainly be taking this idea seriously.

Wise Apes

No history of human intelligence would be complete without asking how we stack up against our closest relatives: chimpanzees and bonobos. It’s a question that’s divided anthropologists for centuries. Some maintain that humans are infinitely more intelligent than our ape cousins; others that the human brain is just a scaled-up primate brain – so cleverer, but not by much.

Let’s start with the ape-flattering claim. The research on chimpanzees and bonobos suggests they are among the most intelligent beings on earth. They use simple tools and hunt in groups. They are social creatures, aware of status and capable of deception. They have been taught to use sign language and can do basic arithmetic with numbers and symbols. They even have culture, passing down customs from one generation to another. In 2016, scientists working in the Republic of Guinea observed chimpanzees engaging in ritualistic behaviour, piling rocks in the hollow of a seemingly venerated tree.8

Now for the unflattering news. Over the past two decades scientists have deployed numerous tests to measure the gap between human and chimpanzee cognition. What they find is that while both groups score equally well on tests of physical intelligence (tracking hidden objects, locating noise sources, using a stick as a reaching tool), they score differently on tests of social intelligence (solving problems through imitation, comprehending non-verbal cues, gauging someone’s mental state by their behaviour). Time and time again, humans surpass chimpanzees on these tests by the age of four. The natural conclusion is that chimpanzees reach the levels of intelligence of a four-year-old child, but then hit their intellectual ceiling. Worse, if these tests are reliable they place the great apes’ intelligence in the mid-Miocene epoch, 16 to 12 million years ago – ancient by anyone’s standards.

But if our record of intelligence tests has taught us anything, it is to be deeply sceptical of them. According to the cognitive scientists David Leavens, Kim Bard and William Hopkins, there are problems with the way ape cognition has been measured over the years. First, in nearly every study the apes involved were institutionalised, born in captivity and isolated from their natural habitat. Captive chimpanzees display a range of abnormal behaviour that suggests mental illness. Second, the studies almost never age-matched the apes with the humans: infant humans were compared with adult chimpanzees. This makes it impossible to know if the test results are due to differences in intelligence or life history. ‘The only firm conclusion that can be made,’ Leavens and his colleagues wryly suggest, is that ‘apes not raised in western, post-industrial households do not act very much like human children who were raised in those specific ecological circumstances, a result that should surprise no one.’9

Understanding the difference between human and other ape cognition is all about perspective. Does a chimpanzee need to understand human social life? No. Does a chimpanzee need to understand science, mathematics and philosophy? Of course not. Their particular environment does not require that particular adaptation. Similarly we do not understand the richness and complexity of chimpanzee life. We can use our science and observation skills to build an impression of their mental world, but that is all. The depths of their minds will never be fully understood from our perspective alone, and it is a colossal feat of human arrogance to think otherwise. Wittgenstein famously said that if a lion could talk, we could not understand her. Even if she spoke perfect English we wouldn’t understand what she said because we share no common frame of reference, no common mental scheme by which to understand her world. He was right. Who knows what the world is like to a lion? Who knows what we are like to a chimpanzee?

So to answer the ‘who’s smarter’ question: chimpanzees are staggeringly intelligent creatures that may be smarter than us in ways we have yet to discover. We just don’t know. At least not until we develop more sophisticated ways of measuring intelligence. The primatologist Frans de Waal sums up our attitude to date:

Misunderstanding Intelligence

One of the first to attempt to study human intelligence (from the Latin intelligere, ‘to understand’, used to translate the Greek nous, meaning good judgement) was Francis Galton. In the late nineteenth century, the statistician and polymath maintained that intelligence was an inherited trait, akin to eye colour and height, which could be measured scientifically. He believed it could be tested using the grip strength and reaction times of English noblemen, and so instead of trying to measure things like knowledge, reasoning and creativity, he got thousands of volunteers to punch targets, distinguish between different colours and squeeze various objects. As they did, he measured their head size and recorded their academic record, determined to find what he regarded as the ‘very best people’. He coined the word eugenics (meaning ‘good birth’) and encouraged people of the ‘genius-producing classes’ to go forth and multiply. It was a pernicious fallacy that formed the blueprint of Nazi ideology.

In 1904 the British psychologist and statistician Charles Spearman took another misguided step. He noted that people who perform well in one subject tended to perform well in another, seemingly unrelated subject. Children who scored highly on tests of vocabulary, for instance, were also likely to score highly on tests of arithmetic and vice versa. Spearman believed that this pointed to a mysterious underlying factor that he called General Intelligence, which he labelled g. In 1916 the American psychologist and eugenicist Lewis Terman used g to develop the intelligence quotient, or IQ test.

The problem with IQ, however, is that while it can help predict certain life outcomes – such as health and income – it’s completely unreliable on the individual level. Spearman had merely taken a statistical correlation obtained from a small sample of British schoolchildren and given it the grand title of General Intelligence. But as modern researchers have pointed out, a more apt description would have been ‘general test-taking ability’. Indeed, one of the greatest lessons I have learned as a neuroscientist is that intelligence is extraordinarily complex and subtle; it utilises countless neural circuits throughout the brain and often manifests in surprising ways. No single measure such as IQ can capture the diversity of cognitive ability that we see in our species: it’s too simplistic.

Despite attempts to highlight Spearman’s missteps – notably by the eminent psychologist Howard Gardner in the 1980s who pointed out that g totally ignores specialist abilities – g and IQ are still used as measures of intelligence today. Yet over the past century prominent scientists have repeatedly come to differing opinions on what intelligence actually is:

The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.

David Wechsler (1958)13

[The] process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills.

Lloyd Humphreys (1979)14

The ability to deal with cognitive complexity.

Linda Gottfredson (1998)15

This isn’t to say IQ is meaningless. Some of the world’s greatest geniuses, including Paul Allen, Philip Emeagwali, Judit Polgar and Fabiola Mann, have very high IQs. If you have a high IQ you’re certainly more likely to have high intelligence, but consider the Nobel Prize-winning physicist Richard Feynman, whose IQ was only slightly above average. Or Scott Aaronson, a distinguished American computer scientist whose IQ is bang on average. For those who worry about having a low IQ, Aaronson writes, ‘[I]f you want to know, let’s say, whether you can succeed as a physicist, then surely the best way to find out is to start studying physics and see how well you do.’16

As with many other conundrums in biology, scientists have tried to understand intelligence by categorising it. There are now various prefixes for intelligence – emotional, social, fluid, crystallised, practical, analytic, interactional, experiential and perceptual. Emotional and social intelligence are the most well-known and are fairly self-explanatory, the former loosely describing the ability to understand and perceive emotions (both one’s own and others’), the latter broadly describing how well you interpret and respond to social interactions. Fluid versus crystallised intelligence essentially describes logic versus knowledge, respectively. Practical intelligence is probably the closest thing to common sense, namely the ability to apply ideas and demonstrate good judgement. An analytically intelligent person ‘generally does well at school and on standardized tests, but is not necessarily creative or high in common sense,’ explains Robert Sternberg, the psychologist who proposed it.17 Interactional intelligence describes our relationship with tools and technology, how good we are at home DIY or mending a fuse, say. Experiential intelligence (another Sternberg invention) is how well we adapt to new situations and expand on new ideas, a form of intelligence closely linked to creativity. And perceptual intelligence, brilliantly explored by the physician Brian S. Boxer Wachler in his 2017 book, is our brain’s way of distinguishing between reality and illusion, our ability to see past self-deception and observe the world for what it really is.18 These distinctions have helped, but as we will see, we still have a long way to go in order to understand intelligence.

The Power of Imagination

Intelligence, it would seem, is hard to define, but is even harder to pinpoint in the brain. For years scientists thought that human intelligence came from the frontal cortex (the outer layer covering the front of the brain) because humans appeared to have a larger frontal cortex than other apes. And bigger was assumed to mean better; when Albert Einstein’s brain was examined post-mortem, pathologists noted a larger than usual frontal cortex. But it turned out that scientists had based their interspecies comparisons on unscaled measurements, comparing total brain volumes rather than volumes relative to body size. By this measure, llamas and sea lions were smarter than primates. Thanks to the work of Robert Barton and Chris Venditti, of Durham University and the University of Reading, who correctly scaled the size of the human frontal cortex in 2013, we now know that our frontal cortex is actually nothing special relative to other primates.19 So where does the staggering intelligence of people like Einstein come from?

The answer lies in our evolutionary roots. Just as our brains have much to grapple with in today’s world of advanced technology, political unrest, and environmental catastrophe, our ancestors had to navigate a complex and ever-changing world, fraught with disease, climate change, predators, and tribal warfare. Gradually, through the process of natural selection, the brain developed increasingly sophisticated circuitry. And as that circuitry grew, it spawned something that Einstein said was more important than knowledge: imagination.

Since deep prehistory, our ancestors have been imaginative: wearing stone necklaces 100,000 years ago, carving on ostrich eggshells 85,000 years ago, adorning caves with animal paintings 15,000 years ago. Imagining that things exist can be highly advantageous. Large groups of people often collaborate best by attributing meaning to abstract ideas – such as gods, nations and money – that only truly exist in people’s collective imagination. By thinking abstractly, human minds have taken things that exist only in our imagination and given them a form in the real world.

The ability to form images in the mind without direct input from our senses is typically viewed negatively: people with overactive imaginations are seen as daydreamers living in an imaginary world, overindulgent idlers constantly in need of being snapped back into reality. However, new research has found that a strong imagination and intelligence are linked. For the past thirty years researchers have been mapping something called the default network, a brain system that participates in daydreaming, mind wandering, reflective thinking and imagining the future. It turns out that while you daydream, your thoughts are free to wander into various domains of cognition, such as memory, experience, knowledge and visual imagery. People who engage in these cognitive practices therefore have greater access to the states of mind necessary to solve complex problems.

In the brain, the default network is a web of interacting circuits spanning the frontal, parietal and temporal lobes. It contains a number of ‘hubs’ and ‘subsections’ that are important for processing thoughts associated with daydreaming: thoughts about oneself and others, thoughts about experiences and goals, thoughts about purpose and determining the motivation of others – the inner dialogue with one’s self that we often disregard as superfluous to intelligence and creativity.

Importantly, the default network is only active when a person is not focused on a task, when the brain is cycling through thoughts not associated with the immediate environment. This is in contrast with a system called the executive control network, a brain network responsible for controlling attention and awareness of the external environment. Although neuroscientists don’t entirely agree about what the default network is doing when the mind wanders, some believe it is consolidating our experiences in order to make sense of our individual autobiographies, the narrative that defines who we are. Curiously, when people watch the same film, brain scans show that their default networks are in near-perfect sync with each other; the film’s narrative has literally replaced their own.

Scott Barry Kaufman, director of the Imagination Institute at the University of Pennsylvania, likes to call the default network the imagination network.

In March 2016 researchers caught a glimpse of imagination in action for the first time: a bundle of specialised neurons, called grid cells, lighting up in the entorhinal cortex after volunteers were asked to imagine moving through a mountainous landscape.21 Tellingly, this brain area is known to act as a hub connecting different parts of the brain.

We still don’t know how these networks evolved, but many neuroscientists think they have something to do with an ancient brain function called repetition suppression. This phenomenon occurs when your brain becomes familiar with something and displays less of a response each time it sees it. When the first iPhone was released in 2007, for instance, people’s creative brain networks would have undoubtedly shown a large response; our minds were assimilating a novel, life-changing object for the first time. But having seen so many generations of iPhone, our brain’s response is now much smaller. This mechanism probably evolved as a way to conserve energy, directing our attention to novel, potentially valuable new sources of information. For early humans, repetition suppression may have been vital to improve their tools. Perhaps the diminishing novelty of sharpened stone tools 2.5 million years ago helped lead to stone hand-axes 1.6 million years ago, which subsequently led to stone knapping 400,000 years ago and cutting blades 80,000 years ago. Every improvement was the brain’s way of keeping its creative juices flowing.

This need to keep the brain active was starkly demonstrated by a team of psychologists at the University of Virginia and Harvard University, who recently found that people resist being left alone with nothing but their thoughts, even for as little as six minutes – irrespective of age, education or income. Given the option, people actually prefer to receive a mild electric shock to being made to sit and do nothing.22 Why our brains make this choice is a bit of a mystery, but we know that the brain needs stimulation in order to function well. A gentle electric shock can trigger the release of neurotransmitters at the synapse, activating the neural circuitry associated with reward. From an evolutionary perspective, this isn’t surprising: our brains evolved to be active, to engage with their surroundings and seek out new information wherever they can find it, even if the cost involves momentary discomfort.

Although imagination starts in childhood, some believe it is educated out of us at school, where we are taught literacy, numeracy and ultimately conformity. Indeed, in the UK, US and elsewhere the school system is still based on the nineteenth-century Prussian model, in which children wear uniforms and are told not to challenge authority. Such strict obedience, neuroscientists say, can quash creativity. It does this by inhibiting our brain’s dopaminergic system: a network of neurons that synthesise the neurotransmitter dopamine, which drives our motivation to explore the world and be creative. Too much dopamine can lead to fantastical thoughts and hamper the critical thinking necessary for creativity. But too little dopamine leads to unmotivated, poorly focused children who are then more likely to conform, generating a vicious cycle of dwindling imagination.

Kaufman is now investigating how teachers can promote imagination in the classroom. Some believe the key is to give schoolchildren time to reflect on what they have learned. In a 2012 report entitled ‘Rest Is Not Idleness’, the neuroscientist Mary Helen Immordino-Yang and her colleagues argue that introspection, quiet reflection and mindfulness improve academic performance.23 For example, secondary-school pupils who write in a diary about their anxiety about upcoming exams get better marks. African-American pupils who take time to imagine themselves as successful adults do better at school. Academic accomplishments are even bolstered in ten- to twelve-year-olds taught to take what psychologists call a meta-moment: pausing in an activity to reflect and think about themselves at their very best. This isn’t to say that pupils should indulge in unhealthy distractions – smartphone overuse in particular reduces default network activity, actually impairing the daydreaming state of mind. Rather, Kaufman and Immordino-Yang’s research suggests that to start improving education, we must from time to time let our minds wander.

When scientists look to see what else triggers the default network, they discover something extraordinary. In one study, the pseudonymous university student John was told a true story designed to generate compassion. The tale was about a young boy growing up in China during a depression. The boy, fatherless and impoverished, watches his mother work as a labourer to survive. One day she finds a coin by the road and uses it to buy the boy warm cakes, which he offers to share with her despite being half-starved himself, but she declines. Asked how the story made him feel, John replied:

The pauses in John’s answer are manifestations of default network activity. What’s interesting is how closely his pauses align with thoughts not pertaining to the immediate story. There are abstract thoughts – ‘a balloon… under my sternum’ – and reflective thoughts – ‘my parents… I don’t thank them enough.’ The astonishing conclusion is that compassion stimulates the default network as well. How this happens has a lot to do with the brain’s anatomy: the brain regions underlying compassion – the medial orbitofrontal cortex and the ventral striatum, which produce feelings of warmth, concern and tenderness – are hugely interconnected with the default network. Therefore anything that activates these brain regions has the knock-on effect of activating default network activity too. Put another way, kindness makes you cleverer.

If this sounds unlikely, consider for a moment the evolutionary benefits of compassion: altruism, cooperation, generosity, reciprocity, forgiveness, self-sacrifice, love. Given that the human brain evolved by building on pre-existing circuits, it’s no surprise that intelligence also springs from compassion. It’s why we have emotional and social intelligence as well as the more familiar kinds. It’s long been known that compassion requires imagination. ‘Climb into his skin and walk around in it,’ Atticus tells Scout in To Kill a Mockingbird. But the converse is also true: imagination requires compassion.

Why did evolution select such a surprising mechanism for intelligence? Why not just opt for a data-processing intelligence like that used in artificial intelligence (AI) software? The answer is simple: because imagination is far more powerful. Unlike AI, the human brain goes beyond analysing the environment and breaking down tasks into data. Instead, we constantly adapt to an environment that we have also shaped, gaining new knowledge while simultaneously learning from the past and imagining the future. We go beyond ‘what is’ to ‘what could be’. We ceaselessly create new ideas; even the cells in our body ceaselessly create themselves.

In fact, the more we learn about human intelligence, the more we are realising it is nothing like AI. Compared to human brains, AI is clumsy and narrow-minded. An AI can quickly identify an object like a banana, for instance, but add a competing signal and the AI will suddenly think it’s a toaster. If you came across a mountain that looked like an elephant, the AI would think the mountain was an elephant. AI also needs to learn a task thousands of times before achieving proficiency. And so, while AI outperforms humans in some ways – it recently outperformed doctors in diagnosing breast cancer, for instance25 – it still doesn’t re-create human intelligence. For all the thrill and prestige of AI, no machine is a match for the breathtaking ingenuity of human intelligence at its best. Seven million years of brain evolution have seen to that.

A Cleverer World

Since the beginning of the twentieth century humans have been getting cleverer. However we measure it, intelligence scores are going up around the world. If we could send a typical teenager back in time to 1900, he or she would be among the cleverest 2 per cent of people on the planet. Equally, if we brought a typical person from 1900 into the present, he or she would have significantly below-average intelligence. We call this phenomenon the Flynn Effect, after the New Zealand political scientist James Flynn who discovered it in the 1980s. And remarkably, it is almost certainly a product of the environment. Explanations include better health, better nutrition, better education and improved standards of living. But most significantly, the Flynn Effect proves that intelligence is changeable. As Flynn notes:

There are many reasons why intelligence is primarily a consequence of the environment, why it is not fixed at birth and why we are more in control of our intelligence than we think. The first reason is the effect of experience on the brain. Experience affects the formation and elimination of neuronal synapses: strengthening the brain in some areas, weakening it in others. We know this because lab animals housed in an enriched environment – one filled with toys, tunnels and opportunities for social interaction – consistently score higher on tests of cognitive performance than animals housed in barren enclosures. Experience in the form of practice also leaves its mark on the brain: London taxi drivers have a larger hippocampus than other people to help them navigate the capital’s streets; violinists have larger brain regions controlling the fingers of the left hand. This enhancement and rewiring of the brain is most active during adolescence and early adulthood, but continues throughout adult life.

But to what degree does the environment alter the brain? In 2015 one group of researchers offered an answer, demonstrating that complex environments actually push brain evolution forward. Larissa Albantakis, a computational neuroscientist at the University of Wisconsin-Madison, created a set of artificially intelligent computer characters – animated creatures she calls ‘animats’ – each possessing a simple neural network.27 She let her animats play a video game in which they must catch falling blocks. The best catchers were then selected for a more advanced game in a more complex virtual world. After 60,000 generations, Albantakis found, the animats not only became better video-game players but also generated more complex wiring in their neural networks – a kind of digital neuroplasticity.
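To make the idea concrete, here is a heavily simplified sketch of that evolutionary loop in Python. Albantakis’s real animats are built from small networks of logic gates (‘Markov brains’) analysed with integrated information theory; the toy below swaps in a bare-bones weighted controller and a tiny block-catching game purely to illustrate the cycle she describes – evaluate each animat, keep the best catchers, and breed mutated copies for the next generation. Every name and parameter here is an illustrative assumption rather than her actual code.

```python
import random

WIDTH, DROP_HEIGHT, N_TRIALS = 16, 12, 30   # toy world and evaluation settings
POP, GENS, MUT = 60, 40, 0.2                # population size, generations, mutation size

def random_brain():
    # Three sensor inputs (block to the left / aligned / to the right) -> one movement drive.
    return [random.uniform(-1, 1) for _ in range(3)]

def act(brain, block_x, agent_x):
    left, aligned, right = block_x < agent_x, block_x == agent_x, block_x > agent_x
    drive = brain[0] * left + brain[1] * aligned + brain[2] * right
    return -1 if drive < -0.1 else (1 if drive > 0.1 else 0)

def fitness(brain):
    # Score = how many falling blocks the animat ends up directly underneath.
    caught = 0
    for _ in range(N_TRIALS):
        block_x, agent_x = random.randrange(WIDTH), WIDTH // 2
        for _ in range(DROP_HEIGHT):   # the block falls one row per time step
            agent_x = max(0, min(WIDTH - 1, agent_x + act(brain, block_x, agent_x)))
        caught += (agent_x == block_x)
    return caught

def mutate(brain):
    return [w + random.gauss(0, MUT) for w in brain]

pop = [random_brain() for _ in range(POP)]
for gen in range(GENS):
    ranked = sorted(pop, key=fitness, reverse=True)
    elite = ranked[:POP // 5]          # the best catchers survive unchanged
    pop = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]
    if gen % 10 == 0:
        print(f"generation {gen:2d}: best catch rate {fitness(ranked[0]) / N_TRIALS:.2f}")
```

Over successive generations the catch rate climbs; the interesting question Albantakis asked – and that this toy cannot answer – is how the animats’ internal wiring itself grows more complex as the environment does.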

If the environment can change the mind of a computer in this way, imagine what it can do for the human brain, a computer that is still thirty times more powerful than our best supercomputers. Perhaps the most striking example of its impact on the brain comes from an unsettling set of experiments performed by the neuroscientists David Hubel and Torsten Wiesel in the 1950s. Working in a tiny basement laboratory at Johns Hopkins University, they sutured closed one eye of a kitten and then let the animal mature to adulthood. When the sutures were removed, they found that the cat was blind in that eye because its brain’s visual cortex had not received enough visual experience from the outside world. Signals from the environment, not the brain, thus determine how the visual cortex is wired. Clearly, neuroplasticity makes the human brain acutely sensitive to its environment. This is good news: it means we possess far more control over things like emotion, intelligence and social behaviour than was previously thought.

A person’s intelligence can also change across their lifespan. When children from deprived backgrounds are adopted and placed in financially stable families, they get cleverer. As with lab animals, being raised in an enriched environment – one that emphasises love, learning, achievement and personal growth – has a huge impact on cognitive development. Adults too can increase their intelligence by changing everything from diet to education to work environment. And while the jury is still out on whether brain-training games affect intelligence, we do know that human cognition is responsive to training.

Perhaps the most striking study showing that intelligence can change over time was conducted by Cathy Price and her colleagues at University College London. They found that the intellectual ability of teenagers in particular is far more malleable than once thought.28 They picked nineteen boys and fourteen girls, aged between twelve and sixteen, and subjected them to brain scans and a range of verbal and non-verbal tests. They then repeated the tests four years later to see if their scores had shifted in any meaningful way. Remarkably, 39 per cent of the teenagers had an improved verbal score and 21 per cent showed improvements in spatial reasoning. When Price then looked at their brain scans, she saw that both results corresponded with growth in the left motor cortex (a region important for speech) and the anterior cerebellum (important for spatial cognition). Price notes:

Our beliefs also affect our intelligence. In 1968 the psychologist Robert Rosenthal and headmistress Lenore Jacobson showed that when teachers were told that some of their pupils were ‘late bloomers’, rather than underachievers, those pupils were transformed by their teachers’ positive expectations and went on to perform significantly better than their peers. Simply believing in themselves was enough to change their lives forever. Rosenthal and Jacobson called this self-fulfilling prophecy the Pygmalion Effect, after the mythical Greek sculptor whose love for a female statue brought her to life.30 The Pygmalion Effect has been reported in sports coaching, business management and medical training. People who believe intelligence can be improved can even perform better in maths tests than those who believe intelligence is fixed – the so-called entity theory. By adopting the idea that effort and determination are all it takes, they have the perfect attitude to succeed.

Sadly it is also true that when teachers have low expectations of their students, poor results are precisely what they get. Psychologists call it the Golem Effect and it is devastating. Children end up believing intelligence is fixed and perform poorly at school for no good reason. They become highly anxious about how clever they are and so aim low academically. They deliberately sabotage their chances of success (called self-handicapping) by not studying for tests, opting to play video games or watch television instead. In this way, when they fail a test their self-esteem is protected. The failure was their choice, not their fault.

Writing this resonates deeply with me because I experienced this very phenomenon. Growing up, I was taught that clever people were just born that way. You were either bright or a bit dim. There was no room for intellectual mobility; you were put in your lane and given a teacher set to the right speed setting. When I looked at the kids in the top class and those in the bottom class, I didn’t see the outcome of nurture. I saw a predetermined hierarchy and it petrified me. I became so anxious I would retreat into myself, playing video games and messing about instead of studying for my exams. I went to a middle-ranking university for my undergraduate degree. But then something miraculous happened. One of my university tutors taught me that I could do so much better: that I could achieve whatever I wanted with perseverance and – most importantly – self-belief.

I listened. I stopped playing video games and studied hard. I got a first-class degree and, wanting to continue my studies of the brain, was accepted on a master’s programme at University College London, one of the world’s leading neuroscience institutes. From there it was a snowball effect. Surrounded by Oxbridge graduates, I suddenly believed in myself like never before. I worked even harder and outperformed most of them in the exams, earning a distinction and then a scholarship for a PhD – my ticket to neuroscience heaven and an interesting life that I could be proud of. None of which would have been possible without my tutor’s kind and gentle Pygmalion influence.

And then there’s the effect of motivation. It’s easy to accept that motivation plays a role in intellectual achievement. Yet few people realise how far some cultures have taken it. Students from East Asian countries such as China, Japan and South Korea have a much stronger work ethic than white Western students. The academic motivation of Asian students arose from the teachings of the ancient Chinese philosopher Confucius (551–479 BC). Unlike Plato and other Western philosophers, who believed that intelligence is something one is born with, Confucius taught that intelligence is something one can earn. While Confucius acknowledged the existence of natural-born geniuses, he knew that such individuals are extremely rare. His focus on hard work has inspired generations of Asian students consistently to outperform their white counterparts in educational achievement.

Successes such as these remind us that for human intelligence to be advanced it must also be understood in a cultural context. The inescapable fact is that intelligence doesn’t mean the same thing in every culture. In addition to motivation, Asians emphasise modesty, open-mindedness, curiosity and the sheer enjoyment of thinking. It’s an almost child-like conception, bringing to mind Einstein’s famous words in a letter to one of his colleagues: ‘[We] never cease to stand like curious children before the great mystery into which we are born.’

Other cultures, such as those in Latin America, emphasise the creative attributes of intelligence or, as in Ugandan culture, associate intelligence with slow and deliberate thinking. In Indian culture intelligence is linked to obedience, good conduct and following social norms, whereas in the West we typically associate intelligence with analytic thinking and placing objects into categories. This explains why many Westerners have had – and still do have – a one-dimensional view of intelligence. And I believe that, thanks to Spearman’s g and our particular history of studying intelligence, our culture is segregated according to intellectual attainment. We tend to judge our friends and colleagues to be roughly as intelligent as we are, and so our personal experience fosters a myopic attitude towards intellectual diversity.

The emerging picture of intelligence is one many of us already know. Intelligence is a muscle. All of us are born with the potential to use it, and it changes and gets stronger when we do. Very few of us – with the exception of rare geniuses like Srinivasa Ramanujan – are born either clever, of average intelligence or stupid. Like everything in the brain, intelligence evolved under ancestral conditions purely as an adaptation to change.

The effects of a good environment are not permanent, however. As soon as the nurturing influence is taken away from children, their achievements steadily diminish in what psychologists call the fadeout effect. In 2016, the University of California Santa Barbara psychologist John Protzko set out to discover how environmental interventions raise intelligence and exactly how long they last.31 Protzko had the perfect test subjects: a group of 985 children who had been born with a low birth weight, and thus were exposed to an intense programme of cognitive training to make up for it. The training lasted until the children were three, at which point they were given a range of intelligence tests. Then, at ages five and eight, they were tested again.

Protzko found that the intervention had raised the children’s intelligence at age three, but by age five the increases had completely faded. A mere two years of not exercising the mind to the same extent was all it took. Importantly, Protzko’s study provided further evidence that a person’s intelligence at one age has no bearing on their intelligence at another age. There is absolutely no causal connection, because intelligence doesn’t work that way. It can be gained and lost as quickly as getting in or out of shape.

A crucial lesson can be learned from Venezuela in the 1980s, when a lawyer named Luis Alberto Machado decided to launch a bold three-year state programme known as Project Intelligence. An idealist and stalwart supporter of universal education, Machado believed – or rather knew – that intelligence is largely determined by experience and the environment. The project’s goal was to teach thinking skills in seventh-grade classrooms (equivalent to Year 8 in England) in Venezuela. Teaching material included a range of tools designed to encourage problem solving, imaginative thinking, decision-making and self-confidence.

The results were astounding. Compared with children at six other state schools, pupils in the school carrying out the experiment scored nearly twice as high in a range of cognitive tests. Echoing the mantra that every child has a right to an education, Machado argued that every child also has the right to develop his or her intelligence to the fullest. He published his views in a 1980 bestselling book, The Right to Be Intelligent.

The fact is, we all have the same neural hardware. What separates us from each other is the environment that we happen to have been born into. To say that a surgeon is cleverer than a bus driver would therefore be a misconception: the surgeon is simply using a different cognitive skill set, one that anyone could employ given the opportunity. This isn’t an idealistic fantasy. The rise in educational achievement in the West has occurred over just three generations, much too fast to be the result of genetic change.

What we can see over the course of our history is a gradual awakening of the mind, which in living memory has been propelled forward at an exponential rate. In our evolutionary journey from tool-maker to atom splitter, in the elevating force of nurture over nature, in the power of beliefs and societal change, and in the enriching diversity of thinking types and cultural perspectives, we can see a new understanding of intelligence blossoming.
