In 1892, the popular US poet Walt Whitman died and left his brain to science. It’s a good job he wasn’t around to see what happened next. Science dropped it. Whitman’s brain, the source of some of America’s favourite verse, hit the floor and broke into pieces. This probably wasn’t what Whitman had in mind when he wrote the famous line, ‘If you want me again, look for me under your boot soles.’
No matter, there were still plenty of good brains to go around. This was the age of the gentleman scientist, and in the late nineteenth century there was nothing that marked a scientist as a gentleman quite as much as his willingness to allow friends to have a good rummage around in his head when he passed away.
Whitman had hoped for greater things. He had asked for his brain to be removed as part of a worldwide scientific effort to locate the anatomical basis for intelligence. These early neuroscientists were looking for markers of intelligence in the brain, and to do so they made a simple assumption: bigger is better.
It makes sense that a larger brain would indicate more intelligence. Your brain accounts for about 2 per cent of your body weight yet demands 20 per cent of the oxygen you breathe. A fifth of your food goes into powering the brain and its billions of cells. The more brain cells, the more they can do, and the increasing size of the brain during human evolution is linked to the development of more complex, intelligent behaviour. We scoff at the dinosaurs because we hear their brains were the size of walnuts. (In fact dinosaur brains were a decent size.)
As with the early ways to analyse IQ, the inspiration for Walt Whitman and his friends to measure and compare brain size in search of the source of high intelligence originated in France, where a group of scholars, academics and committed secularists in Paris formed the brilliantly named Mutual Autopsy Society. Each member pledged that on his death, the others could hack open his skull, retrieve his fresh brain and place it on display for the public.
In death, these members of the Mutual Autopsy Society hoped to make a point they failed to prove in life: there was no soul and so, contrary to religious teaching, humans did not deserve to be placed on a higher spiritual plane than any other animal. New members to the society would pledge their allegiance with a solemn oath. ‘Free thinker, loyal to scientific materialism and the radical Republic, I intend to die without the interference of any priest or church’.
Similar brain donor clubs cropped up in Russia, Germany and Sweden. But it was in the United States that the idea really took hold. Unlike those across the Atlantic in France, the God-fearing men of America did not want to prove the non-existence of a higher power. They wanted to demonstrate they – and their esteemed colleagues – were themselves a higher power. They wanted to use the size and shape of their dead brains to prove their kind were more intelligent than the rest.
More than a century on, we have scanners today to watch a brain at work. But much of what we know about brain function still comes from the kinds of natural experiments carried out in Whitman’s day, which observed the impact of brain injury and disease. The most influential was work back in the 1860s, when the neuroscientist Paul Broca, with the help of stroke patients who had lost the ability to talk, pinned down the generation and control of speech to the part of the frontal lobe where their damage was concentrated – now called Broca’s area.
Could the control of cognitive ability – the seat of intelligence – be found also? Some scientists thought so, and a school of research known as phrenology identified intellect as a key human trait divined by assessing the physical bumps and lumps on the surface of the skull. Greater intellect, the phrenologists argued, would swell that region of the brain on the inside and the increase would show up on the outside. Some popular terms we still use to describe intelligence come from this period. Highbrow, for instance, was originally a physical description, because the phrenologists associated a high forehead with cleverness (lowbrow was the opposite). Telling someone to get their head examined was first an invitation to visit not a psychiatrist as we would say today, but a phrenologist.
As the phrenologists fell from fashion, the search for intelligence switched from the outside of the skull to the inside. A new generation of researchers worked with dead bodies. At first they threw away the brains. They boiled empty skulls clean and plugged eye sockets with rags and cloth. To measure the size of the discarded brain, and by inference the cleverness of its owner, they stuffed the space it had occupied with water, mustard seeds or lead shot, then tipped the contents out and measured them. Skulls were easy to collect and keep. Collections built up, sometimes hundreds strong.
Although these skulls were measured in the name of science, using them to search for intelligence was a cover for darker motives. In most cases, skull collections were used to support claims of difference between races. More accurately, they were used by white men to supposedly show how other races were inferior.
Among the keenest of these collectors was the anthropologist Samuel George Morton, who gathered and measured more than a thousand skulls from across the world, including bones from South Africa and Australia. Morton claimed white people had consistently bigger spaces in their heads for brains than black people. This fed into the belief at the time that whites and blacks were different species, and the whites, because of their extra brains, were superior.
The heads and skulls used in these comparisons were anonymous, often scavenged from battlefields, and this limited how they could be used. Beyond ethnicity, the remains said nothing about what the dead person had been like in life – what they had done and how intelligent they had been.
To prove larger brains produced more intelligence, these early scientists needed to go a step further. They had to connect the larger heads and brains they measured to the great abilities and achievements of their former owners. This is where Walt Whitman and his friends saw an opportunity.
Inspired by France and the Mutual Autopsy Society, a group of self-regarding men of the northeast United States formed what came to be known as the Brain Club. They preferred the grander title the American Anthropometric Society. Similar to the French, each man pledged the others could remove his brain after death and examine it for clues to his greatness.
Up to 300 men are believed to have joined the society, but few admitted it publicly, and even fewer went to the trouble of writing it into their will. Walt Whitman never did. Historians think the poet – long fascinated by the brain and friends with many of those who studied it – probably agreed to donate his brain but didn’t tell his family. Certainly his brother, George Whitman, was horrified at the idea and, when Walt died, objected to any autopsy.
The Brain Club members’ idea was simple: it all came down to size. Bigger, heavier brains, they reasoned, held more potential and more ability, and so should confer upon their owner more status. As they removed and weighed each other’s brains, they convinced themselves the idea was correct. They published league tables of brain weight, with the heaviest brains of their professional friends and colleagues – physicists, lawyers, composers, humorists, mathematicians, politicians, economists, editors, writers, geologists and judges – grouped towards the top. The undisputed brain heavyweight champion was the Russian poet and novelist Ivan Turgenev, who left behind a brain of 2,012g, the first and only to break the 2kg barrier.
The reverse was also true, the Brain Club believed. People of lower status – bricklayers, blacksmiths and labourers – had smaller and less powerful brains, they said, and appeared in mid-table.
At the bottom of the league were those given the smallest and most stunted brains, as incapable of producing intelligence as their owners were of moral and intellectual achievement. These people were the criminals, and there were plenty of their brains to go around.
One of the most high-profile criminal brains from the period was taken from the anarchist Leon Czolgosz, who shot US President William McKinley near Niagara Falls as the two men went to shake hands. It took President McKinley more than a week to die from his wounds. By the end of the following month, Czolgosz was dead also, after being quickly tried, convicted and then executed in the electric chair.
Within an hour of his death, Czolgosz was in pieces on the post-mortem slab. His brain was the prize, and, surprisingly, given the high profile of the case, it was removed and described by a fourth-year medical student. The brain was normal, disappointingly so for those who believed criminal tendencies would show up not just in small size but as physical features. ‘It is a probable fact that certain oft-mentioned aberrations from the normal standard of brain structure are commonly encountered in some criminal or degraded classes of society,’ the young student wrote in his autopsy report. ‘But these structural abnormalities, so far as they have been described in the brains of criminals, are too few and too insufficiently corroborated to warrant us drawing conclusions from them’.
The medical student concluded the assassin was socially diseased and perverted, but not mentally diseased. ‘The wild beast slumbers in us all. It is not always necessary to invoke insanity to explain its awakening’.
The student’s name was Edward Anthony Spitzka, and his career was nearly over before it began, when an unscrupulous stenographer who transcribed Spitzka’s words during Czolgosz’s post mortem tried to sell them to the press. The student was forced to write to medical journals to warn them that if they published a ‘garbled rendition’ he would disclaim it.
Spitzka perhaps got the job of cutting open a presidential assassin, despite his inexperience, because someone knew his father. Edward Anthony Spitzka’s father was called Edward Charles Spitzka, and he forged a career similar to the one his son would pursue, in neurology and its implications for society.
Most notoriously, Spitzka Senior had testified in 1881 that another Presidential assassin, Charles Guiteau, the killer of James Garfield, was insane. In a bad-tempered appearance in court, Spitzka Senior was forced to deny prosecution charges that an academic post at Columbia Veterinary College meant he was not a psychiatrist.
‘You are a veterinary surgeon, are you not?’ he was asked.
‘In the sense that I treat asses who ask me stupid questions I am,’ he snapped back. Despite the testimony, Guiteau was found guilty and hanged.
Spitzka Senior was present at the execution of William Kemmler in the electric chair – the story that begins this book. And he was an original member of the US Brain Club, passing control of the society to his son in 1902. When he did so, Spitzka Junior found his dad’s prized assets in a dreadful state. Three of the founder members of the society had died by then, but two of their stored brains had flattened and become distorted. Of the donated brains of other members, at least two were in pieces, one because it had been left to float in hardening fluid for ten years. Walt Whitman’s, of course, was missing.
Spitzka Junior took studies of brains for signs of intelligence and esteem out of the shadows. Buoyed by his well-received analysis of the executed Czolgosz, he investigated more brains, both of criminals and the great and good. (When his father died it was Spitzka Junior who removed and measured Spitzka Senior’s brain.)
As he chopped skulls and analysed brains, the younger man was explicit in his scientific goals. ‘It is not enough merely to admire the genius of an Archimedes or a Homer, a Michelangelo or a Newton; we wish to know how such men of brains were capable of these great efforts of the intellect’. Given so many great men were willing to leave their brains to science, he added: ‘It is our business to endeavour to ascertain why and how some are more, some less, gifted than others’.
These investigations of brains were crude, messy and unreliable. Done in a proper scientific way, the measurements would have been blind: the researchers should not have known whether the brain being examined had belonged to an esteemed colleague or a common criminal. These early scientists did it the other way around. They knew whose brain they were measuring, and given their conviction that successful men had larger brains, it’s not surprising their measurements agreed: they found what they wanted to find.
Anomalous results were discarded or explained away. An unusually light brain of a great man was excused, and said to be down to the degradation of ageing, or because bits must have been left behind when a clumsy technician scooped it from his great head. The too-heavy brain of a lesser individual was blamed on disease or the chemicals used to preserve it. The data were massaged to fit the pattern and the world order the scientists believed in. This form of cognitive bias is a common trap for scientists, and the early anthropologists were far from the first, or last, to fall into it as they pursued the mystery of intelligence.
Spitzka Junior’s own studies of executed criminals helped convince him an unusually heavy brain in the less gifted could be explained by disease or abnormality. It was unfair, he said, to include the weights of these brains in any true scientific analysis. ‘Those great water-logged pulpy masses in the balloon-like heads of hydrocephalic idiots did not discover and never could have discovered the laws of gravity, invent the ophthalmoscope, create Hamlet, or found modern natural history’.
He added: ‘The brains with which we here concern ourselves are those of men with healthy minds who, in their life time, attained high distinction in some branch of the professions, arts, or sciences, or who have been noted for their energetic and successful participation in human affairs’.
Their efforts sound crude, but the more rigorous studies of modern neuroscience do confirm these early intelligence researchers were on to something. Large brain size and greater IQ are linked. It’s not a massive effect, but it is significant. The same goes for head size, presumably because large brains need large heads to hold them. As crude as it sounds, the simplest way to gauge someone’s mental prowess is a tape measure around their head.
In 2007, scientists in Edinburgh used head measurements to estimate the intelligence of Scottish national hero Robert the Bruce (victor over the English at the 1314 Battle of Bannockburn). They analysed a cast of his skull prepared when Bruce’s body was exhumed in 1819.* Bruce, the scientists said, had an IQ of 128 and maybe higher. That’s about right, they claim, for a man behind the 1320 Declaration of Arbroath, which proclaimed Scotland as free from English rule and is credited by some historians as the inspiration for the US’s own Declaration of Independence.
The confirmed link between skull size and intelligence would no doubt please the members of the Brain Club and the Mutual Autopsy Society, and it shows they were on the right track. But the link doesn’t help when it comes to cognitive enhancement. We don’t have a way to make our heads and brains bigger and nor are we likely to in the future.
To find ways to boost the workings of the brain, we need to be more sophisticated and look inside. Could the shape and structure of the brain perhaps offer an insight to the source of intelligence? If so, then it should show up in the brain of a man whose name has become shorthand for genius.
The strange story of what happened to Albert Einstein’s brain after his death has been told many times. But it’s still worth recording here some highlights, if only to demonstrate the continuing allure the secrets of intelligence have for modern scientists; secrets that Albert has been reluctant to reveal.
Einstein knew his brain would be targeted. And unlike the members of the Mutual Autopsy Society, he had no wish for it to become a laboratory exhibit. Before he died he seems to have given clear instructions: his remains were to be cremated and scattered in secret.
Yet during the 1955 autopsy into the cause of Einstein’s death (a burst aorta), his brain was secretly removed by a pathologist who believed he could use it to make his name. The pathologist, Thomas Harvey, chopped it into more than two hundred pieces and prepared over a thousand tissue slides, each of which contained a thin slice. He posted these out across America, to seek the opinions of the leaders in the field. The rest of the brain he kept in jars in a cupboard of his Princeton University office, resisting for decades enquiries and requests to examine it, including from the US Army. If studies of the posted slides were ever carried out, the results showed nothing out of the ordinary, and the scattered pieces of Einstein’s brain were left to gather dust in drawers and attics. Most are still out there.
After a journalist wrote about Harvey’s work with the brain in the late 1970s, requests from scientists for new pieces to study came pouring in. Again, the enterprising pathologist popped slides in the post.
Together with detailed photographs Harvey had taken, those samples produced a wave of new studies, most of which claimed to have found something unusual. Results based on them still appear from time to time.
Einstein, according to those who have examined his brain, had an unusually high number of glial cells, which nourish neurons and keep them in place. The brain cells in his prefrontal cortex were especially tightly packed, while his inferior parietal lobule, associated with spatial and mathematical tasks, was unusually wide. As recently as 2012, new research claimed Einstein’s brain had an extra ridge on its mid-frontal lobe, a region linked to planning and memory.
But in many ways, Einstein’s brain was unremarkable. It weighed a pretty paltry 1,230g – towards the lower end of the normal range for a man in his seventies.
When it comes to intelligence, only so much can be gleaned from dead brains, however big and famous their former owners, which is why scans of the living insides of people’s heads prove so alluring to modern neuroscience. Usually taken with magnetic resonance imaging (MRI) machines, these scans offer an eyewitness account of how parts of the brain demand more blood when their owners perform mental tasks. That’s usually taken as a proxy for increased activity, and neuroscientists then try to deduce which parts of the brain are involved in, and perhaps responsible for, mental traits from cognitive skills and emotions to decision making and memory.
Charles Spearman’s general intelligence, ‘g’, can’t be found in the brain, or at least it can’t be located on a scan in a specific part of the brain. It’s real, but it doesn’t exist in a structure that can be pointed to. It’s more a measure of what the brain does, just as athletic ability is a genuine measure of how physical prowess differs between individuals, yet couldn’t be traced in a scan of the muscles.
If we break intelligence down into some of its constituent parts – memory, maths, language, reasoning and so on – then it becomes a little easier to place each of them inside the brain. Regions in the parietal lobe are known to help us identify and process visual imagery, and the hippocampus is strongly associated with memory. But while brain scan studies continue to ascribe functions to an increasing number of specialist parts, they don’t explain why one person’s brain works better than someone else’s.
The brain has two types of tissue. Grey matter does the bulk of the work. White matter holds the grey in place and passes signals between different brain areas. Both seem relevant to intelligence. Just like brain volume, a larger overall amount of grey tissue seems to relate to higher intelligence, particularly so in areas including the prefrontal cortex. The same seems to be true for white, connecting matter, though the conclusion is not so clear cut. What does seem crucial is integrity of the white tissue, which makes sense given its job. Damaged connections will clearly interfere with how well the brain can work. (The progressive loss of white matter connections could explain why many cognitive abilities decrease with age.)
Some studies find people who are skilled in a specific mental ability show a measurable difference in brain structure. Most famously, neuroscientists in London reported in 2000 that London taxi drivers, who must show an encyclopaedic knowledge of the city streets to get their licence, have more grey matter than usual in the hippocampus.
While that might demonstrate that repeated use and practice of a set of mental skills can grow a specific brain region, the conclusion doesn’t really work the other way around: finding an enlarged hippocampus in a plumber from Aberdeen wouldn’t guarantee she could tell you the quickest route to drive from London Bridge to King’s Cross Station.
These structural characteristics of more intelligent brains can help pin down the neural basis of cognitive ability, but they are no more use than brain size when it comes to cognitive enhancement. We can’t go in and add grey tissue. If we want to improve the way a brain works, then we must look beyond structure and try to improve its function. So how does a more intelligent brain function?
Rather than being a product of a specific brain region, general intelligence seems to come from how effectively various brain regions can work together. To solve a problem, parts of the temporal and occipital lobes, at the base and back of the brain, first take the raw signals that flood in from the eyes and ears and process them. This information is fed into the parietal cortex, a broad arch of brain tissue just under the crown, where it is annotated and labelled with meaning. It then goes forwards to regions of the prefrontal cortex, sitting behind the forehead, which manipulate it, package it into possible ideas or solutions, and test them. As one solution emerges as preferred, another part of this prefrontal cortex, the anterior cingulate, is recruited to block the other, incorrect, responses.
Because most of the intellectual heavy lifting in that series of brain functions takes place after the processed sensory information gets shunted from the back towards the front of the brain, this model of intelligence is called the Parieto-Frontal Integration Theory (P-FIT). The better this P-FIT circuit works, the more general intelligence a brain, and so a person, will have.
So, and we are nearing more promising cognitive enhancement opportunities here, how does one person’s P-FIT circuitry work better than another’s? And can it be artificially improved?
As with a computer, raw processing speed seems to be important. One way to study functional differences between brains is to monitor their neuronal activity. When each neuron fires, as it is recruited to help solve a problem or to transmit a signal, it produces a little burst of electrical current. Add millions, maybe billions, of these tiny bursts together, as happens when the brain does something, and the overall electrical buzz can be measured. The technique – EEG, for electroencephalogram – is pretty common, and involves sensitive electrodes placed against the scalp to listen for changes in voltage.
EEG can track these voltage changes to investigate everything from sleep to epilepsy and it works on a simple principle: when the brain is active, its electrical activity increases. EEGs, for example, reveal more spikes when the brain works on a mathematical puzzle than when it is asleep.
Of particular interest to intelligence scientists is the way the EEG can record the brain’s response to a stimulus, such as a sound. Within one-tenth of a second, the EEG trace of brain activity shows a tell-tale response. There’s a small dip and then, about another tenth of a second later, it recovers. The most significant action comes after another tenth of a second – three-tenths of a second after the stimulus in all – when it shows a sharp spike. That is called the brain’s P300 response.
The P300 response is a hot area of research in neuroscience. Some scientists think it could offer a reliable way to spot when someone is lying. And psychologists have linked it to intelligence. Specifically, they have found the P300 response comes slightly earlier in people with higher mental ability (the difference is perhaps a few thousandths of a second). Clever people seem to have a faster electrical response. And some studies link better performance on tests of intelligence to a higher P300 peak.
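For readers who like to see the mechanics, here is a minimal sketch of how a P300 latency might be read off a sampled EEG trace: find the time of the largest positive deflection in a window around 300 milliseconds after the stimulus. This is not real ERP analysis software; the trace, the sampling rate and the search window are all invented for illustration.

```python
def p300_latency_ms(trace, sample_rate_hz, window=(250, 500)):
    """Return the latency, in ms after the stimulus, of the largest
    positive peak inside the search window. `trace` is a list of
    voltages sampled from stimulus onset at `sample_rate_hz`."""
    ms_per_sample = 1000.0 / sample_rate_hz
    start = int(window[0] / ms_per_sample)
    end = int(window[1] / ms_per_sample)
    segment = trace[start:end]
    # Index of the maximum voltage within the window.
    peak_index = max(range(len(segment)), key=lambda i: segment[i])
    return (start + peak_index) * ms_per_sample

# A toy trace sampled at 250 Hz (4 ms per sample): a dip near 100 ms,
# a recovery near 200 ms, and a sharp positive spike near 300 ms.
trace = [0.0] * 250
trace[25] = -5.0   # ~100 ms dip
trace[50] = 3.0    # ~200 ms recovery
trace[75] = 12.0   # ~300 ms P300 spike
print(p300_latency_ms(trace, 250))  # 300.0
```

On this toy trace the function reports a latency of 300 ms; a brain whose spike landed a few samples earlier would, on this picture, belong to a slightly faster responder.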
The shape of the three responses on the EEG chart might differ according to intelligence as well. Some studies suggest lower cognitive performance is associated with a less defined, less complex, response. The three bumps are not so obvious. Because the more complex traces associated with higher intelligence would, if straightened out, form a longer line, some psychologists call this the piece-of-string test. How long is a piece of string? It could depend on how bright you are.
A more significant difference is visible in the way clever people fuel their brain activity. Brain scans of the way glucose is used to release energy, another proxy of mental activity, show, as would be expected, that energy demand increases when the brain is put to work. In people who score well on intelligence tests, the required increase is smaller. High intelligence is linked to efficiency. Those with less effective brains need to burn more glucose to fire more neurons to solve the same problem. This could indicate more intelligent people need to recruit fewer neurons and set into action a smaller number of brain circuits.
We don’t yet have all the answers about how intelligence shows itself in brain activity – analysis of brain circuitry is a new focus for neuroscience. But we do know that intelligence circuits, like all those in the brain, rely on two types of communication: chemical and electrical. And, as we’ll see, neuroscience now has tools that can alter both.
In 2015, neuroscientists showed the way these brain circuits activate is highly personal. Although we all use the brain’s P-FIT system to reason and problem-solve, we each do it in a slightly different way, recruiting a different number of neurons and in a different order. In fact, the neuroscientists, from Yale University in the US, found patterns of brain activity so personal they served as a kind of neuronal fingerprint. The scientists could pick out and identify people from a large group of volunteers by mapping and then looking for their tell-tale patterns of brain connections as they performed cognitive exercises.
What’s more, the neuroscientists found these brain fingerprints also indicated a person’s intelligence. A computer could compare scans of people of known intelligence and pick out brain connections and patterns they had in common. Then it could use that information to accurately estimate the intelligence of people it had never seen before, based on a scan of the way their brain was wired. Who needs IQ tests? In future all it might take to find the brightest in society is a scan of their brain circuitry in action.
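The matching idea is simple enough to sketch. In this toy version – not the Yale group’s actual pipeline – each person’s scan is reduced to a vector of connection strengths, and a new scan is identified by finding the stored fingerprint it correlates with most strongly. Every number here is invented.

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length vectors."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    da = [x - mean_a for x in a]
    db = [x - mean_b for x in b]
    cov = sum(x * y for x, y in zip(da, db))
    norm = sqrt(sum(x * x for x in da)) * sqrt(sum(y * y for y in db))
    return cov / norm

def identify(new_scan, fingerprints):
    """Return the name whose stored fingerprint best matches new_scan."""
    return max(fingerprints,
               key=lambda name: pearson(new_scan, fingerprints[name]))

# Invented connection-strength vectors for three volunteers.
fingerprints = {
    "alice": [0.9, 0.1, 0.4, 0.7],
    "bob":   [0.2, 0.8, 0.6, 0.1],
    "carol": [0.5, 0.5, 0.2, 0.9],
}

# A second, noisy scan of the same brain still matches its owner.
print(identify([0.85, 0.15, 0.45, 0.65], fingerprints))  # alice
```

The real study worked with hundreds of brain regions rather than four numbers, but the principle is the same: your pattern of connections stays similar enough across scans, and different enough from everyone else’s, to pick you out of the crowd.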
What determines the layout and workings of these brain circuits, the equipment and infrastructure of our P-FIT thinking and reasoning system? To a large extent, like much of our physical architecture – from the shape of our nose to the colour of our eyes – our brain wiring is genetically determined, influenced by those who went before. Your brain is like the brains of your parents, and like the one you will pass on to your own children. You don’t truly own a brain. You look after it for the next generation. It’s a simple principle, but also a dangerous one that appeals to the worst of human nature.
In my day job, I write editorials for the science journal Nature. The articles tend to be aimed at a specialist audience, who work in research or are involved with the funding and support for such research. Sometimes we tackle the big issues in broader society – the refugee crisis in Europe in the late summer of 2015 was one I was quite proud of. The best editorials to write are those when the narrow interests of science and the broader issues of society overlap, on topics like climate change, new biological techniques that could breed designer babies, and so on.
Nature has been published since 1869, and it’s as much a journal of record as a weekly magazine to inform and entertain. Most decent libraries have bound volumes that go back decades, and articles from back issues are still referred to and discussed. The refugee editorial we ran in 2015, for instance, leaned heavily on a similar piece, addressing a similar crisis, which Nature published in 1939. The position that Nature takes on the big questions of the day tends to be in line with the attitudes of most professional scientists: humanitarian and evidence-based. Sometimes though, I look at editorials from past issues and wonder just what in God’s name we were thinking.
In February 1926, a predecessor who held the same job I do now wrote an editorial on the subject of intelligence in Nature. It was titled ‘Racial Purification’. And yes, it was as bad as it sounds. Trigger warning: this is where the story of intelligence takes a very distressing turn.
* Bruce’s body never really rested in peace. His heart was removed on his death and taken on the Crusades against the Moors in Spain. Returned to Scotland, it has been dug up and reburied at least twice more.