9

The iBrain

With all the great achievements of humanity, it is tempting to think that the human brain may be at the pinnacle of its evolution and development. From our vantage point, our ancestors seem like children scrambling up a hill, while we, the ‘grown-ups’, smugly gaze down from the mountaintop.

But in fact our brains are still evolving – perhaps faster than ever. Predicting where this evolution will take us next is a question for scholars in every discipline, for the very nature of brain evolution is also changing. New technologies such as brain–computer interfaces, artificial intelligence, reprogrammable stem cells, and genetic tools such as CRISPR – which allows scientists to make precision edits to DNA – may forever change what it means to be human.

First the bad news. It is only a matter of time before Homo sapiens goes extinct. All species go extinct sooner or later. Over 99.9 per cent of all species that have ever lived are extinct, and there is no reason to think that our species will somehow be exempt from this sobering reality. Even if humans avoid a mass extinction event – a lethal pandemic or a massive asteroid strike – they will still branch into a new species of human, meaning the original Homo sapiens will have gone extinct. In short, evolution and extinction are not mutually exclusive: when speciation occurs, it means that selective pressures have driven the older species to extinction. Though physicists and others often dream about colonising Mars and other planets, I personally think this is the only planet we have. I might be wrong. I hope I am, given how destructive Homo sapiens can be.

The good news is that our descendants will live on. Just as earlier species of humans (including Homo habilis, Homo heidelbergensis, Homo neanderthalensis and Homo floresiensis) and other great apes (including chimps, gorillas, bonobos and orangutans) branched into different species, Homo sapiens will split into several new species of its own. In his novel The Time Machine, H. G. Wells foretold that by the year AD 802,701 humanity would evolve into two separate species, the blissfully naive Eloi and the grisly cannibalistic Morlocks, locked in zero-sum competition with each other. While some experts predict a fate similar to Wells’s dystopia, with humanity splitting into a genetic upper class and underclass, I am not so pessimistic.

A new movement called transhumanism is taking root in our world, one that aims to upgrade humans with gene enhancement and tech implants to such an extent that we become different beings entirely. Posthuman beings. When old age creeps up on us, we’ll be able to turn back the clock using an app that reprogrammes our DNA. When we want to go hiking or deep-sea diving, we’ll simply tweak the genes that make our muscles stronger and our lungs hold air for longer. And if these upgrades aren’t good enough, technology will step in. Body augmentation devices such as contact lenses that take pictures or videos, language translators that allow us to speak to anyone in the world, and mind-controlled prosthetics that let us pay for our shopping and share our holiday experiences with a nod of the head will become commonplace.

These are not fantasies of science fiction. In 2017 scientists established the first academic journal dedicated to pursuing transhumanism, the Journal of Posthuman Studies. ‘As the boundaries between human and “the other”, technological, biological and environmental are eroded,’ the journal proclaims, ‘and perceptions of normalcy are challenged, they have generated a range of ethical, philosophical, cultural, and artistic questions that this journal seeks to address.’ As our brains evolve into those of a new type of human, answers to such questions will no doubt raise disturbing questions of their own. Would immortal brains keep regressive thinking alive forever? (Now there’s a terrifying thought for you.) Would posthuman brains lead us into a tech-obsessed dystopia, where all creativity and diversity of thought is obliterated?

Of course, the fear of science and advanced technology will probably always be with us. Steven Pinker calls it ‘progressophobia’. When physicists built the Large Hadron Collider (LHC), the world’s largest and most powerful particle accelerator, many feared it would generate a black hole so large it would devour Earth and all life like a cosmic Cookie Monster. Only after years of public engagement have physicists allayed the public’s concerns. Brain advancements are perhaps more unnerving because they strike at the core of what makes us human. Even I – a brain-obsessed technophile – think there’s something quite chilling about a computer knowing what I’m thinking. Nevertheless, the greatest advances have already begun.

Theseus’s Paradox

In ancient Greek mythology, Theseus, the Athenian king who slew the Minotaur, was so revered that his ship was preserved for hundreds of years. When the ship’s wooden planks started rotting away, they’d simply be replaced with new planks made from the same material. When the entire ship’s timber had been replaced, however, philosophers asked the logical question: can it still be considered Theseus’s ship? This paradox is now a major talking point in neuroscience labs, where it leads to heated debates about identity and the self.

Consider the following: it is the distant future, mortality is a thing of the past, and stem cell technology has reached its zenith. The molecular mechanisms of memory are completely understood, and when our brains age and our memories fade, both are simply replaced. Imagine you did this for hundreds of years. Would there come a point when your past self is no longer considered you? Would you become like the ship in Theseus’s paradox?

Like it or not, our species will eventually have to confront this question. Jürgen Knoblich, director of the Institute of Molecular Biotechnology of the Austrian Academy of Sciences in Vienna, is at the forefront of experiments with this concept. In 2013 Knoblich began growing large parts of the human brain in a laboratory dish.1 These ‘organoids’ are still in their early development – and are not, Knoblich insists, ‘humanoids in a jar’. But the notion of using organoids to generate everlasting spare parts for the brain is not far off; indeed, monkeys suffering from neurological disease have had parts of their brain replaced with laboratory-grown human brain cells.2 ‘The idea of growing a human brain in a jar has always fascinated people,’ Knoblich said in an interview following his breakthrough:

There are tons of science fiction movies that deal with that. The real use of our system, though, is that we can grow brain tissue essentially from any human, whether healthy or diseased. And that raises the hope that some of the major neurological disorders eventually can be modelled in this system.3

I’ve long been fascinated with stem cells. In my book In Pursuit of Memory: The Fight Against Alzheimer’s, I devoted a chapter to exploring their potential to treat and perhaps even reverse the symptoms of Alzheimer’s, a condition I became intimately familiar with when I witnessed my grandfather succumb to the disease. In 2012, when he was nearing the end, his treatment options were still as poor as they had been in the 1980s, so the idea of growing brain tissue in a dish to create an endless supply of neurons was tantalisingly promising. For a time, stem cells were my biggest hope, my deepest obsession.

But I now realise I wasn’t thinking big enough. As we saw in our discussion on memory (chapter 4), the human mind hasn’t evolved to construct libraries of past selves tucked away in some basement of the brain. We are our memories as much as a painting is its brushstrokes. In the future, then, the miracle of neural stem cells won’t be their ability to preserve what we were, but to maintain what we are.

And this will rely on our brain’s ability to self-repair. In 2019, scientists at the University of Plymouth made a breakthrough in this area. They found that one of the skills of neural stem cells is their ability to ‘wake up’ neighbouring brain cells by releasing a package of molecules called STRIPAK (striatin-interacting phosphatase and kinase).4 STRIPAK is conserved in countless organisms from fungi to mammals and has many jobs: it helps move things around inside neurons, it helps other cells in the body divide, it even helps our hearts function. If scientists can pin down what it is about STRIPAK that rouses our brain’s stem cells into action, there may be no need for Knoblich’s organoids after all. Unleashing the brain’s healing powers not only protects us from disease; it makes us smarter as well. Stem cells in the hippocampus have a remarkable effect on learning and memory, both of which are linked to intelligence. And while the adage ‘use it or lose it’ is true, because your brain really is like a muscle, it’s also true to say that if you don’t lose it, you can always use it. In this way, rejuvenating rather than replacing our brains may be the true future of brain repair.

Organoids and self-repairing brains are not the only brains of the future, of course. Could artificial intelligence (AI) brains evolve in a way similar to our own? Many neuroscientists think they could. That’s because the algorithms that underpin machine learning are slowly being replaced by what we call genetic, or evolutionary, algorithms. Inspired by Darwin’s theory of natural selection, evolutionary algorithms work by mimicking the features of a genome, introducing artificial mutation and recombination events into a computer’s software until the ‘fittest’ program is selected.
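To make that mutate–recombine–select loop concrete, here is a minimal sketch of an evolutionary algorithm in Python. The ‘genome’ (a short list of numbers), the toy fitness function and all the parameter values are illustrative assumptions of mine, not details of any real AI system:

```python
import random

# A minimal evolutionary (genetic) algorithm: mutate, recombine, select.
GENOME_LENGTH = 8
POPULATION_SIZE = 30
GENERATIONS = 100
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Toy objective: genomes score higher the closer their values are to 1.
    return -sum((gene - 1.0) ** 2 for gene in genome)

def crossover(parent_a, parent_b):
    # Recombination: splice the two parent genomes at a random point.
    point = random.randrange(1, GENOME_LENGTH)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Mutation: randomly nudge some genes.
    return [gene + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else gene
            for gene in genome]

def evolve():
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: keep the fittest half as parents for the next generation.
        population.sort(key=fitness, reverse=True)
        parents = population[: POPULATION_SIZE // 2]
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("Best genome:", [round(g, 2) for g in best], "fitness:", round(fitness(best), 3))
```

Real systems use far richer genomes and fitness measures, but the underlying logic is the same: generate variation, score it, and keep the fittest.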

Genetic algorithms are already being used in AI systems such as those built by Google’s DeepMind, including AlphaGo, which recently beat the Go world number one, Ke Jie, from China. After his defeat, Ke called the program the new ‘Go god’, vowing never again to subject himself to the ‘horrible experience’ of challenging the AI system.5 Genetic algorithms are the first step towards an AI brain matching that of a human. In October 2017 the neuroscientist Stanislas Dehaene of the Collège de France in Paris proposed two additional steps, which he calls consciousness 1 (C1) and consciousness 2 (C2).6 C1 is our ability to maintain a diverse range of thoughts simultaneously, making faculties such as forward planning possible. C2 is our capacity for introspection, allowing us to reflect on mistakes and improve over time. If these steps are realised, Dehaene writes, AI ‘would behave as if it were conscious… it would know that it is seeing something, would express confidence in it, would report it to others… and may even experience the same perceptual illusions as humans.’

Conscious AI terrifies most people. The great worry is that it would escape our control and destroy the human race. Legendary scientists including Alan Turing, Stephen Hawking, Bill Gates and Tim Berners-Lee have all voiced grave concerns about its potential. And yet their fears remain abstract for good reason: we’re still not sure what we mean by ‘conscious’ AI. As we saw in chapter 7, all forms of consciousness are really just illusions that position organisms in their ecological niche. But what’s a machine’s ecological niche? An air-conditioned server room in the Pentagon? Your office desk? Your phone pocket? The fact is, computers do not operate in the territory of what it means to be human. A machine has to want something beyond self-replicating programs and self-preservation to be in the same league as us. Human brains create meaning, not computation: they reflect the lives they have lived and the world they have experienced. They construct rather than represent information. So to reduce their wondrous complexity to computer code is, in my view, a mistake. Perhaps the more realistic future lies in a combination of mind and machine.

Homo Cyber Sapiens

‘Any sufficiently advanced technology,’ said the futurist and writer Arthur C. Clarke, ‘is indistinguishable from magic.’ This is especially true of a brain–computer interface (BCI) called the BrainGate system: a series of sensors implanted in the motor cortex, which allow a person to control muscles in their arm using the power of thought. John Donoghue, the founder of BrainGate, believes that such technology can help people who have had a stroke, lost limbs or been paralysed by conditions such as amyotrophic lateral sclerosis (ALS) and locked-in syndrome (LIS). LIS received worldwide attention when the French journalist Jean-Dominique Bauby dictated his experience of the condition by blinking one eye to write his famous book The Diving Bell and the Butterfly, in which he likens his paralysed body to a diving bell, with his mind, a butterfly, trapped inside. Ever hopeful of future therapies, he longed for the day science would free his mind: ‘Does the cosmos contain keys for opening my diving bell? A subway line with no terminus? A currency strong enough to buy my freedom back? We must keep looking.’7 Donoghue is convinced his BrainGate system would have helped Bauby: ‘I would have every expectation that if we had put BrainGate in his brain, it would have immediately started giving us signals.’8

Many people are now using BCIs. In 2017, researchers at the University of California, Irvine, used BCI technology in a paraplegic man to restore his walking after a spinal cord injury.9 Other researchers are using BCIs to stimulate the visual cortex to treat the blind,10 restore lost connections in stroke victims,11 and scan the brain for signs of depression. Treating the mind with machines dubbed electroceuticals has helped hundreds of thousands of people with hearing loss – using cochlear implants – and thousands of people with Parkinson’s disease and epilepsy – using deep brain stimulation (DBS). DBS is particularly impressive because it uses tiny electrodes implanted in the brain which send electrical impulses to regulate brain activity, all powered by batteries sewn into the patient’s chest. The concept itself isn’t new. The ancient Egyptians used electric catfish to treat arthritis, and the Romans used electric rays to treat headaches. But the technology is thrilling because it represents the first step towards creating bionic brains.

It is no exaggeration to say that such technology is allowing humans to shape their own evolution. The first to do exactly that is a man named Neil Harbisson, the world’s first officially recognised cyborg (short for cybernetic organism): a being that is part human, part machine. Born in Belfast and brought up in Spain, the thirty-seven-year-old has a rare genetic condition called achromatopsia, or complete colour-blindness: he sees the world in shades of grey. If you were to bump into Harbisson, one of the first things you would notice is the long black antenna protruding from his head, which translates wavelengths of light into musical notes. This gives Harbisson extrasensory perception, and he can ‘feel’ colours inside his head.

As an artist, Harbisson uses his extra sense organ to create dazzling technicolor performances and exhibits. One of his specialities is sound portraits, an art form in which he stands in front of a subject and uses his antenna to record the different ‘notes’ emitted from their face. He then converts this into a sound file, producing a facial portrait you can hear. His work has become so popular that A-list celebrities including Robert De Niro, Leonardo DiCaprio and Woody Allen have had sound portraits made. Describing himself as the first ‘trans-species’ person, Harbisson feels intimately connected with his neural prosthetic, telling National Geographic, ‘There is no difference between the software and my brain, or my antenna and any other body part. Being united to cybernetics makes me feel that I am technology.’12

Such progress will take time to be accepted. In 2004, the UK Passport Office rejected Harbisson’s passport photograph, saying that it should contain ‘no other people or objects. No hats, no infant dummies, no tinted glasses.’ After weeks of Harbisson explaining that his antenna was simply ‘an extension of his brain’, the baffled officials issued his passport. Seven years later, during a demonstration in Barcelona, Harbisson was attacked by three police officers who thought he was filming them. When he explained that he was a cyborg and was simply listening to colours, they laughed and tried to pull his antenna off his head.

The concept of the cyborg dates back to at least 1839, when the writer Edgar Allan Poe published his short story ‘The Man That Was Used Up’. The plot follows an unnamed narrator who meets the formidable General John A. B. C. Smith. Six feet tall and powerfully built, Smith is also a skilled raconteur and possesses a host of other enviable characteristics. But our shrewd narrator is convinced the general has a secret and, when he visits his home, he learns what it is. Piled on the floor is an assortment of strange objects which speak in the voice of the general. A servant then enters the room and begins piecing the objects together – an eye socket here, a leg there – slowly revealing the general to be no more than an assembly of artificial prostheses.

Poe’s tale certainly captured the public’s imagination, but it was the American scientists Manfred Clynes and Nathan Kline who coined the term cyborg in 1960. Writing in the September issue of the journal Astronautics, they declared (in wildly ornate language), ‘For the exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously, we propose the term “Cyborg”.’13 Their dream was to create human–machine hybrids to enable humans to explore outer space. Only then would we transcend nature and finally control our own evolution. ‘Space travel challenges mankind not only technologically,’ they wrote, ‘but spiritually, in that it invites man to take an active part in his own biological evolution.’

Another great advance is the Hybrid Assistive Limb (HAL). This is a wearable robotic exoskeleton suit, designed in 2012 by Japan’s University of Tsukuba and the robotics company Cyberdyne, to help those who cannot walk due to spinal injury. Using sensors attached to the skin, the HAL suit boosts the electrical signals sent down the spinal cord from the brain, bypassing the injury and allowing the person to activate their muscles. The HAL suit then feeds back to the brain, teaching it what signals are necessary to walk: the first step towards walking without being assisted by the HAL. Scientists are now exploring the potential of HAL to treat stroke, paralysis and missing limbs.

The aspirations of BCIs do not end there: Silicon Valley companies such as Facebook and Google are now investigating the possibility of thought-to-text typing, neural processors for enhanced concentration, decision-making and fitness, and even ways to record and download our dreams. Imagine it: all those whirling thoughts, feelings and sensations instantly captured and played back for analysis. Most psychologists have given up trying to interpret dreams, but with that much data I suspect we could readily interpret them ourselves. We’re not there yet, but neural interfaces are currently able to detect how we respond emotionally in different situations. This so-called ‘mood reading’ has piqued the interest of those in the ‘neuromarketing’ world, ever watchful for technology that can detect how people respond to products and advertisements, as well as monitor their employees’ moods.

One of life’s greatest bugbears is our struggle to put thoughts into words. More often than not, we have to see our thoughts written down by a gifted journalist or hear them articulated by a skilled orator. We know what we think; we just need someone else to say it. As the famous technologist Mary Lou Jepsen explains, ‘The [brain’s] input’s pretty good, but the output is constrained by our tongues and jaws moving and us typing… If we could communicate at the speed of thought, we could augment our creativity and intelligence.’14 In 2019, researchers at the University of California, San Francisco, took us a step closer to this reality, creating a device that converts brain activity into speech by decoding the signals the brain sends to the tongue, lips, jaw and throat. The device can even decode speech when a person silently mimes sentences.

Researchers at Kyoto University, moreover, have successfully decoded the brain activity of someone looking at an owl, and when they converted this signal to a computer screen, the image looked very similar (if a little blurry) to the owl the person saw.15 In Charlie Brooker’s science fiction show Black Mirror, in the episode titled ‘The Entire History of You’, people possess ‘grains’: neural implants that record everything they do, see or hear. They can then rewind to any memory they choose, and even project the image onto a screen. The episode takes a dark twist when the main character, Liam, uses the technology to discover that his wife had an affair. Like other episodes in the series, it questions how technology will change the rules of society. It is no exaggeration to say that the Japanese researchers’ owl experiment could be the first step towards that future.

At the University of Washington, Seattle, the neuroscientist Andrea Stocco has taken things even further, showing that two brains can be directly linked across the Internet.16 This mind-to-mind communication allows each participant to know what the other is thinking. ‘The Internet was a way to connect computers, and now it can be a way to connect brains,’ says Stocco. In Stocco’s experiment, two people in rooms a mile apart each wear an electrode-studded cap connected to an electroencephalography (EEG) machine to pick up signals from the brain. They then play a guessing game in which one person thinks of an object and the other person has to guess what it is. Remarkably, Stocco’s subjects arrive at the correct answer 72 per cent of the time. ‘We knew in theory it could work,’ he said. ‘Now we want to discover how well it can work.’

Like the ‘mind-melds’ of Star Trek, mind-to-mind communication could allow humans to become telepathic – transmitting thoughts between two brains without ever having to utter a word. Though the experiments performed thus far have involved transmitting very simple information, such as object guesses and binary choices (left or right, one or zero), it’s only a matter of time before more complex messages are sent telepathically. The days of emailing and texting would be over. Instead, humans would simply beam their thoughts to each other.

One of the biggest contributors to neural interface research is the military. The Defense Advanced Research Projects Agency (DARPA), America’s nerve centre for new military technology, is now developing brain–computer interfaces that can sharpen soldiers’ mental skills, help them see more effectively in the dark, and allow them to control swarms of drones at the speed of thought. Some experts say this technology is a cleaner, safer alternative to the prescription drugs used by soldiers to boost performance, such as modafinil and Ritalin. Others, though, are worried about the long-term effects on a soldier’s psyche, and how such advances would fundamentally change the theatre of war: killing the enemy might suddenly feel as fictitious as shooting the bad guys in a video game. Then again, perhaps the precision of thought-operated weaponry would act as a better, less destructive deterrent than nuclear weapons.

While these advances may seem to be primarily technological, they actually represent the next step in our brain’s evolution. Many neuroscientists think that our brain is already operating at maximum capacity, and that any improvement in intelligence and information-processing is constrained by the number of neurons we possess. (Among the most popular of brain myths is that we only use 10 per cent of our brain’s capacity; in reality, the entire brain is constantly active, even during sleep.) So unless we find a way to create neurons externally (something we’re not even sure is possible), the next step for our brains may not be the organic evolution we have witnessed thus far. Indeed, the futurologist Ian Pearson envisages a future in which humans have effectively merged with machines to become a new species, called Homo cyberneticus. As this species develops increasingly sophisticated brain–computer interfaces, it evolves into Homo hybridus, until the organic parts of our brain are completely replaced by machines, leading to the rise of Homo machinus: a brain made entirely out of silicon, allowing our minds to achieve digital immortality.

The question on everyone’s mind, of course, is when? What timeline can we realistically expect for such radical brain advances? To answer that we must look to history. It took little more than a century to go from the horse-drawn carriage to NASA’s Mars Exploration Rover, and, if we follow Moore’s law – the observation that the number of transistors on a chip, and with it computing power, roughly doubles every two years – brain–computer interfaces may be at the stage computers were at in the 1970s, which means that many of the advances discussed so far could become a reality in the twenty-first century. I suspect thought-to-text typing will come out just around the time I shuffle off this mortal coil. Damn.
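For a rough sense of the arithmetic behind that extrapolation, here is a back-of-the-envelope sketch in Python; the clean two-year doubling period and the fifty-year horizon are illustrative assumptions, not forecasts:

```python
# Back-of-the-envelope Moore's-law arithmetic (illustrative only): if capability
# doubles every two years, how much does it multiply over a given span?

def doublings(years, doubling_period=2):
    return years / doubling_period

def capability_multiplier(years, doubling_period=2):
    return 2 ** doublings(years, doubling_period)

# Example: a technology at a "1970s computer" stage today, projected fifty years ahead.
years_ahead = 50
print(f"{doublings(years_ahead):.0f} doublings -> roughly "
      f"{capability_multiplier(years_ahead):,.0f}x the capability")
```

Twenty-five doublings amount to a roughly 33-million-fold increase, which is why even a modest-looking exponential trend can carry a technology from curiosity to ubiquity within a single lifetime.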

Sceptics among you might point out that we surely need to understand the brain for such technologies to work, and we are certainly a long way from that understanding. The brain is endlessly, dizzyingly, stupefyingly complex; if I had to quantify it, I would say that we comprehend less than 1 per cent of its functionality. Yet a fuller understanding isn’t always necessary. The entire computer industry is built on quantum mechanics, a field just as complex and bewildering as neuroscience. So too are lasers, telecommunications, atomic clocks, GPS and magnetic resonance imaging (MRI). Whatever the scientific discipline, there will always be levels of understanding beyond our current grasp.

Some are unsettled by this evolutionary future and the ethical issues that neural interfaces raise. If brains can be reached on the Internet, they can also be hacked. In 2019, scientists at the UK’s Royal Society scrutinised the risks of BCIs, questioning how governments and tech giants would – or should – control access to neural interfaces, and whether there should be limitations on their use. ‘If neural interfaces can be voluntarily used to influence behaviour by individuals, then should these be prescribed by states?’ they ask.

No one, for now, has any answers.

Another concern for the Royal Society report authors is that if we start using machines to perform brain functions, ‘is it us as humans doing it? Or is it the technology?’ Human agency, our capacity to make choices and act on those choices in the world, is among the most important features of our existence. We enshrine it in law and state institutions. It is the backbone of human freedom. One could of course argue that neural interfaces enhance human freedom by making life easier; after all, spending hour after hour concentrating, solving problems, and struggling to express oneself is hardly liberating. (Just think what you could do with all that extra time.) And yet, there is something distinctly human about overcoming life’s challenges. A fortitude that gives meaning to our lives and one that we might be unwise to forsake.

Whatever happens in the years ahead, the fact is that brain technology will change everything forever. As I write, all manner of neurotechnology is being developed, with sci-fi names such as neural lace (a mesh of tiny electrodes implanted in the skull to monitor brain function), neural dust (a wireless nerve sensor, powered by ultrasound, to monitor and control nerves) and neuropixels (probes that can record the activity of hundreds of neurons). As neuroscientists and others continue to think of ways to make machine-enhanced minds a part of our world, such advances appear, to many, to be a step too far. Nowhere does this debate seem more urgent than on the subject of human violence.

Rewiring for Peace

In the early afternoon of 24 February 1999, Donta Page, a career burglar, broke into the home of a twenty-four-year-old charity worker named Peyton Tuthill in Denver, Colorado. Terrified, Peyton ran upstairs – but Page caught her. After punching her several times in the face, he dragged her into a bedroom and raped her. He then slit her throat and stabbed her six times in the chest. According to his taped confession, Page murdered her because it was ‘just the first thing that came to mind’.

This type of senseless violence is not unique to our species. Chimpanzees display psychopathic behaviour. Dolphins rape and murder each other for fun. Nature, wrote Tennyson, is ‘red in tooth and claw’. No one knows what the evolutionary roots of crime are. In the 1870s Cesare Lombroso, an Italian criminologist and physician, became convinced that criminals are a devolved, undeveloped form of our species. He made curious biological observations such as this: ‘A criminal’s ears are often of a large size. The nose is frequently upturned or of a flattened character in thieves. In murderers it is often aquiline like the beak of a bird of prey.’18 These observations were erroneous, of course, but the latest research suggests that certain brain attributes may incline some people to violence.

A new field known as neurocriminology is revolutionising our understanding of what makes humans commit crimes. Its core argument is that while the environment undoubtedly contributes to crime and violence, neurobiological factors also play a fundamental role. A term coined by the Canadian psychologist James Hilborn, neurocriminology is changing everything from prison sentences to the way we view rehabilitation. It also raises profound questions regarding the nature of free will and personal responsibility. If criminal behaviour is physically ingrained somewhere in the inner recesses of the mind, can we really say that criminals are truly responsible for their actions? Perhaps it is like obesity, with some people’s biology making them more likely to overeat and others less likely. In that case, perhaps a graded system of punishment, in which some criminals receive shorter sentences despite having committed the same crime, would be more appropriate than our current system.

Page was sentenced to life in prison without parole. In the years following Page’s arrest, a British neuroscientist called Adrian Raine became interested in the science of criminology and decided to study Page’s brain using functional brain imaging. He found reduced activity in the prefrontal cortex (which regulates impulse control) and increased activity in the amygdala (which regulates emotion). Raine can now use this distinctive brain pattern to predict with 70 per cent accuracy which criminals will reoffend following their release from prison.19

The discovery has already been put to use. In one study, Kent Kiehl, a neuroscientist at the Mind Research Network in Albuquerque, New Mexico, studied ninety-six male prisoners just before their release. He found that men who had lower activity in the anterior cingulate cortex, a small region at the front of the brain, had a 2.6-fold higher rate of rearrest for all crimes, and a 4.3-fold higher rate for nonviolent crimes.20 In another study, researchers showed that if a released prisoner has a small amygdala, he or she is three times more likely to reoffend.21 Remarkably, we may be able to use this information to make adjustments to our brains. Scientists at the University of Colorado, Boulder have created a brain implant as small as a human hair that can detect brain waves linked to violence, and alter a subject’s brain chemistry accordingly.22

Discovering which brain regions are responsible for violence is a priority for neurocriminologists. In 2016, a ground-breaking study by Annegret Falkner and her colleagues at Princeton University showed that an area of the hypothalamus known as the ventromedial hypothalamus (VMH) becomes active during behaviour such as stalking, bullying and sexual aggression.23 The hypothalamus is an ancient brain region that has been conserved throughout mammalian evolution. Located at the base of the brain, it controls sleep, hunger, thirst, sex, anger, hormone release and body temperature. With so many functions, pinning down its role in violence has been tricky. But strikingly, Falkner’s study showed that the VMH becomes active in the moments leading up to an act of aggression, in those menacing seconds when you feel your blood beginning to boil. Now, researchers are investigating ways to turn the VMH on and off like a switch, arresting our violent impulses before they ever have a chance to harm others.

Science fiction enthusiasts will have already spotted where this is heading. In Philip K. Dick’s 1956 dystopian novella Minority Report (adapted into a 2002 film starring Tom Cruise), a specialised police department dubbed ‘PreCrime’ apprehends criminals before they commit their crimes using a group of psychics called ‘precogs’. Swap psychics for a neural implant that monitors the VMH, and the novella may just have predicted the future. How scientists propose to switch the VMH on and off is a fascinating advance in its own right. The technique, known as optogenetics, uses light to artificially activate or deactivate groups of neurons. First thought up by Francis Crick in 1999 (discovering the structure of DNA clearly wasn’t enough for him) and later perfected by Zhuo-Hua Pan at Wayne State University and Edward Boyden and his colleagues at Stanford University, optogenetics relies on delivering the gene for a light-sensitive protein (originally found in algae) into a specific brain region. Before long, the neurons start producing the protein themselves, as if it had been there all along, allowing parts of the brain to be switched on or off with a beam of light.

Think about what happens when people get into a fight. First there’s the verbal dispute and rising anger, which then quickly erupts into fisticuffs. Often, though certainly not always, a friend or passer-by sees what’s happening and tries to break up the fight. Our social brains and evolved moral sense intervene for the benefit of our species. But there’s a heavy cost: people get hurt, sometimes killed. Imagine if that friend or passer-by was a chip implanted in our heads; not there all the time, of course, just whenever we need. Ah, but what about self-defence, you might think. Yet even self-defence would be unnecessary because, if every brain had such a chip, violence towards another human would be literally unthinkable.

But there doesn’t have to be a chip imposed from outside. We also know that contemplative practices like mindfulness and meditation can, quite literally, rewire the brain. ‘You can use your mind to change the function and structure of your brain,’ explains Dan Siegel, a psychiatrist and mindfulness expert at the University of California, Los Angeles, whose research has shown that mindfulness meditation stimulates the growth of integrative fibres in the brain. Siegel believes that through mindfulness-based empathy and compassion, ‘we can evolve better brains.’24 Perhaps, then, world peace also lies in a greater understanding of how our brain’s neurophysiology affects our mood and sense of well-being.

Mindfulness meditation relies on paying close attention to the present moment. All thoughts of the past and future must be silenced, and the person must simply focus on the thoughts, feelings and bodily sensations she is experiencing right now. In addition to its burgeoning popularity among Westerners in recent years, mindfulness has gripped the medical world, with studies showing that it ameliorates stress, anxiety, depression and even drug addiction. In 2017, moreover, scientists discovered its true biological impact: it lowers inflammatory molecules and stress hormones by around 15 per cent. Now, mindfulness is recommended by the UK’s NHS and other leading healthcare providers around the globe.

Thankfully, while human history appears to suggest that the brain is inherently predisposed to violence, new research indicates that humans are actually evolving more peaceful and cooperative minds. In 2016, José María Gómez at the University of Granada conducted the first thorough survey of violence in the mammal world, collecting data on more than a thousand species.25 Believe it or not, in addition to starring in adverts for car insurance, the mammal most likely to be murdered by its own kind is the meerkat. Meerkats roam the Kalahari Desert in gangs of fifty, and 19.4 per cent of them meet their end at the claws and teeth of another meerkat. When Homo sapiens first evolved, our murder rate was about 2 per cent, meaning one in fifty of us would have been murdered by other people. This is six times more lethally violent than the average mammal, and a pretty high figure when you consider that the average for all the species Gómez and his team studied is 0.3 per cent.

The rate of lethal violence in human societies has risen and fallen throughout history. During the Palaeolithic period (the Old Stone Age, stretching back some 2.4 million years), our rate is thought to have been about 3.9 per cent. During the medieval period, that rate rose to around 12 per cent, only to fall precipitously in recent centuries to a rate lower than when we first evolved.

It’s not entirely clear how the brain is becoming more peaceable as it evolves. In his famous treatise Leviathan (1651), Thomas Hobbes argued that peace and social harmony are only achieved through a social contract established by a strong government. Left to their own devices, Hobbes believed, humans are selfish, violent creatures – ‘solitary, poore, nasty, brutish’ – who will exist in a state of nature and constantly seek to destroy one another. By organising themselves into states, humans create what he called an ‘artificial person’, a ‘common power’ or ‘political community’ that enforces and protects civil society. In one of his many proclamations, he asserts: ‘a man [must] be willing when others are so too… to lay down his right to all things; and be contented with so much liberty against other men, as he would allow other men against himself.’26 After witnessing the British Civil Wars, Hobbes was starkly aware of the consequences of state failure, and of how easily humans will embrace brutality.

Hobbes was definitely onto something. Human minds adapt and develop in response to the external environment, and in our time that environment is heavily defined by large states and the rule of secular law.

Your mind is changing. Or rather, your mind is changing still. The human brain is an engine of change. Its biological powers have been bestowed upon us by millions of years of evolution. But from its beginnings in chance and mishap, our brain’s history has evolved into a tale shaped by us – by our societies, our discoveries and our uncompromising quest for knowledge. Nothing in our brain is fixed or permanent. In its endless cycles of change, in its susceptibility to nurture, and in its power to shape our world, our minds offer us limitless possibilities.
