You get mutant powers from outsider genes

Genes from other species, and cells from your relatives, live inside your body, writes Sean O’Neill – and they hint at how we can improve ourselves.

Let’s begin with the obvious. You are the product of billions of years of evolution, the accumulation of trillions of gene-copying errors. That’s what led single cells to evolve into jellyfish, ferns, warthogs and humans. Without mutations, life would never have evolved into Darwin’s ‘endless forms most beautiful’, and you would never have seen the light of day.

Today, while most of our genes are undeniably Homo sapiens, many of us also carry DNA from other species. We have known for a decade that people of non-African descent inherit between 2 and 4 per cent of their DNA from Neanderthals. And we now know that DNA from several other extinct human species is also still in circulation, on every continent including Africa.

Not only do you carry DNA from other species, you probably also play host to other people’s cells. Before you were born, your mother’s cells crossed the placenta into your bloodstream. Decades later, some of these migrants are still there in your blood, heart, skin and other tissues. This ‘microchimeric’ exchange was mutual: if you are a mother, your children may still be inside you in the form of foetal cells that crossed back the other way.

You may even be carrying cells from your grandmother and any older siblings. Because microchimeric cells persist for a long time, there is a chance that during pregnancy your mother was still carrying cells from any previous children she had, as well as cells from her own mother – and she may have shared some with you.

Maternal microchimerism is extensive, says Lee Nelson at the University of Washington in Seattle, and probably useful too. ‘There are so many examples in biology where organisms thrive as a result of exchange – why wouldn’t it also be useful for humans to exchange cellular material?’ Foetal cells may help to repair a mother’s damaged heart tissue and lower her risk of cancer. Other research shows that mothers can end up with their child’s DNA in their brains, something that may even be linked to a reduced risk of the mother developing Alzheimer’s.

In future, we could become mutants by design. Gene-editing tools like CRISPR should allow genetic diseases to be treated by rewriting genes inside the body. For example, a small number of people with a mutation in the CCR5 gene, which encodes a protein found on the surface of white blood cells, are resistant to HIV. CRISPR opens up the possibility of introducing that mutation into other people’s DNA, giving them a kind of genetic vaccine against the virus.

From there, it’s only a baby-step to genetic superpowers. Ethical questions notwithstanding, future generations could be enhanced with genes for extra-strong bones, lean muscles and a lower risk of cardiovascular disease and cancer. A mutation in the ABCC11 gene currently found in about 1 in 50 Europeans even renders underarms odourless. Think of the savings on deodorant. Be warned, however: this mutation also makes your ear wax dry up. Swings and roundabouts.

Your body is a nation of trillions

Think you’re only human? Legions of creatures inhabit the cracks, contours and crevices of your body – and they all contribute to who you are, says Daniel Cossins.

Last night, while you were sleeping, legions of eight-legged creatures had an orgy between your eyebrows. No, you haven’t suddenly been invaded by sex tourists. Demodex mites, close relatives of ticks and spiders, are permanent and mostly harmless residents of the human face.

‘Every person we’ve looked at, we’ve found evidence of face mites,’ says Megan Thoemmes at North Carolina State University in Raleigh. ‘You can have thousands living on you and never even know they’re there.’

Growing up to 0.4 millimetres long, these beasts spend their days buried head-down in hair follicles gorging on who-knows-what and crawling out under cover of darkness to copulate. They have no anus, so on death disgorge a lifetime of faeces into your pores.

Before you lunge for the exfoliating brush: Demodex mites are far from your only microscopic residents. You host astonishing biodiversity, from anus-less arthropods to pubic lice to all manner of bacteria and fungi, and without it you wouldn’t be who you are. ‘Each of us is really a complex consortium of different organisms, one of which is human,’ says Justin Sonnenburg at Stanford University in California.

Our resident aliens aren’t all benign. There are big beasts like parasitic worms: roundworm, hookworm and whipworm are prevalent in the developing world, and pinworm still infects kids in the West. Then there are hidden viruses such as Herpes simplex, which lies dormant inside the nerve cells of two-thirds of people until it mistakes your sniffles for a deadly fever and attempts to save itself by rushing outwards, causing cold sores.

By far the dominant group, however, are bacteria. You have at least as many bacterial cells as human cells – older estimates put the ratio at 10 to one, though it is now thought to be closer to even. Only recently have we begun to grasp the extent of their diversity, and there’s plenty left to discover. We’ve even found bacteria that survive by parasitising other bacteria. They live in your spit.

Similar battles play out across your many habitats, from the caves of your nostrils and your anal-genital badlands to the crevices between your toes where the fungus Trichophyton rubrum can flare up as athlete’s foot. All of these critters are constantly shedding from your skin and lungs, forming your own unique cloud of airborne bacteria that follows you everywhere.

But the densest microbial gathering is in our gut, a community that affects aspects of health from digestion and immune defences to possibly even mood and behaviour. In mice, seeding the gut with Lactobacillus rhamnosus bacteria has been shown to alleviate anxiety, perhaps by producing molecules that alter brain chemistry.

The balance of gut microbiota can shift rapidly in response to diet and lifestyle. To tend it, you need to feed it right. Your best bet isn’t much-hyped probiotics – supplements of live bacteria – but simply eating more fibre, the preferred food of a group of gut bacteria with potent anti-inflammatory powers. ‘It has been known for a long time that plant-based fibre is associated with good health,’ says Sonnenburg. ‘Now we know why.’

There is a physics genius inside your brain

Without even realising, you perform fiendishly complex real-time calculations and predict the future like no other species can, says Richard Webb.

The washing-up pile wobbles precariously as you balance another saucepan at its summit. For a second, it looks like the whole stack will come down. But it doesn’t. Swiftly, instinctively, you save it.

Congratulations – not just on another domestic disaster averted, but also on showing a peculiarly human genius. Octopuses rival our dexterity, New Caledonian crows have a frighteningly clever way with tools and chimps beat us in tests of short-term memory. But no other species can perform complex, real-time calculations of their physical environment and generate specific, actionable predictions quite like the ones that rescued your crockery. ‘It’s kind of amazing,’ says artificial intelligence researcher Peter Battaglia from Google DeepMind in London. ‘To me it defies my ability to understand.’

In 2013, Battaglia and two colleagues showed that our inbuilt ‘physics engine’ works in a similar way to a graphics engine, software used in video games to generate a realistic playing environment. It is programmed with rules about objects’ physical behaviour, and uses limited real-time inputs (from a player in a game, from our senses in reality) plus probabilistic inference to generate a picture of what comes next. ‘What you have in your head is some means for running a simulation,’ says Battaglia. ‘You make a 3D model of what’s around you and press the run button, it tells you what will happen. It’s a way to predict the future.’
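To make that concrete, here is a minimal sketch – in Python, with an invented stability rule and invented noise levels, not Battaglia’s actual model – of how a noisy forward simulation can turn uncertain sensory input into a probabilistic prediction about whether a stack will topple.

```python
import random

def tower_falls(block_offsets, half_width=0.5):
    """Crude stability rule: the stack topples if, at any level, the centre of
    mass of the blocks from that level upwards overhangs the edge of the block
    (or the table, at offset 0) directly beneath it."""
    for level in range(len(block_offsets)):
        above = block_offsets[level:]
        centre_of_mass = sum(above) / len(above)
        support = block_offsets[level - 1] if level > 0 else 0.0
        if abs(centre_of_mass - support) > half_width:
            return True
    return False

def probability_of_collapse(observed_offsets, sensory_noise=0.05, runs=2000):
    """Monte Carlo 'mental simulation': jitter the observed scene to mimic
    uncertain perception, roll the physics forward, count how often it falls."""
    falls = sum(
        tower_falls([x + random.gauss(0.0, sensory_noise) for x in observed_offsets])
        for _ in range(runs)
    )
    return falls / runs

# Three plates stacked with increasing sideways offsets (in plate widths);
# the middle of the stack sits right at the edge of what the rule allows.
print(probability_of_collapse([0.0, 0.3, 0.7]))
```

Real intuitive-physics models work in three dimensions with proper rigid-body dynamics, but the ingredients are the same: a rough simulator, noisy inputs and many rollouts.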

In 2016, Jason Fischer at Johns Hopkins University in Baltimore, Maryland, and his colleagues scanned the brains of people doing a task involving physics intuition – predicting how a tower of stacked wooden blocks would fall – and showed that the physics engine sits in specific brain regions. Areas of the motor cortex associated with the initiation of bodily movement consistently lit up during the physics task, but not during a second, purely mathematical task: estimating the number of different coloured blocks in the tower.

That was surprising at first, says Fischer. ‘But on the other hand it makes perfect sense: you don’t execute any action without mental models.’ So our inbuilt genius won’t necessarily help us with physics as an academic discipline, which relies on different brain circuits. That much is clear in experiments where researchers get people to draw the predicted path of a falling object, says Fischer: their intuitions are completely off. But have them catch the same falling object, forcing them to engage their motor system, and they’re spot on.

There’s still a lot to learn about how we generate our simulations – not least given that our device’s power consumption, at around 20 watts, is less than a tenth that of a medium-range graphics card. ‘The type of processing we use is clearly vastly more efficient,’ says Battaglia.

But we should be aware of our limitations, too. Our physics engine is programmed with the equations of classical mechanics, which describe the visible world around us – things like falling plates. It does not work so well on less obvious layers of reality. ‘Understanding electromagnetism and quantum mechanics, our instincts are not going to be so useful,’ says Fischer. There, things don’t stack up so easily.

You are frighteningly easy to manipulate

Your behaviour is heavily influenced by your environment and the people around you – but Julia Brown finds there are easy ways you can take back control.

You wouldn’t stand facing the back of the lift or sit out front in the garden, would you? Well there you are. Proof positive that you’re not in control of your actions – the people around you are. And not just them. Your environment controls you, as do habits you don’t even know you have. But realise what’s really pulling your strings, and you can work out how to manipulate yourself for the better.

US social scientist Roger Barker was the first to notice this sort of environmental control. Back in the 1950s, he observed the population of a small US town, and realised that the best predictor of a person’s behaviour was not personality or individual preferences, but their surroundings. People in a shop behaved as people in shops do. Ditto for libraries, churches, bars, music classes, everything.

More recently, Wendy Wood of the University of Southern California and her colleagues have shown that almost half of the behaviours we adopt in any given situation are habitual – automated actions learned by repetition until we do them without thinking. ‘These were a wide range of behaviours,’ says Wood, ‘including eating, napping, watching TV, exercising and talking with others.’

Social control bubbles up from beneath, too. ‘Reputations are so important in the social world,’ says Val Curtis, who studies behaviour change at the London School of Hygiene and Tropical Medicine. Society is founded on cooperation, and we can’t benefit from it unless we gain acceptance by adhering to the unwritten rules. So we face front in the lift.

We work this way because neurons are expensive to run. If we had to do everything consciously, we would have no energy for anything else – automation frees up processing power. We notice that when we lose the comfort blanket of subconscious control. ‘If you have ever walked into a restaurant in a foreign country, you are almost paralysed until you work out what everyone else is doing and then copy them,’ says Curtis.

Identifying your unconscious workings provides you with ways to fine-tune your behaviour. For a start, if you want to change bad habits, have a look at where and how you enact them, and then try to disrupt that pattern. If you want to stop smoking, avoid the places where you are likely to spark up, or move your cigarettes out of sight. If you want to start eating more healthily, stop meeting friends for lunch at a burger restaurant. ‘Yes, you think now that you’ll order the salad, but when you get there, the cues and smells will be hard to resist,’ says Wood.

Curtis has used such insights to develop ways to encourage handwashing with soap in India and to modify the tendency for mothers in Indonesia to feed their children unhealthy snacks. She suggests we can all prime ourselves in similar ways. If you think you ought to do some exercise but don’t really feel like it, just put your running gear on anyway, and wait and see what happens, she says. ‘The kit takes you for a run. You let it control your behaviour.’

You are a fantasist

Think you’re saner, smarter and better-looking than the average? Well so does everyone else. Recognising our delusions is the first step to doing better, writes Tiffany O’Callaghan.

Ever had the sense that everyone else is an idiot? Maybe that’s a tad overblown, but when it comes to smarts, looks, charisma and general psychological adjustment, there’s no denying you are a cut above the average person in the street. Or on the road: have you seen how those jerks drive?

Well, here’s the bad news. Pretty much everyone else is thinking the same thing.

The phenomenon of self-enhancement – viewing ourselves as above average – applies across human ages, professions and cultures, and to capabilities from driving to playing chess. It does have advantages. People who are more impressed with themselves tend to make better first impressions, be generally happier and may even be more resilient in the face of trauma. High self-estimation might also let you get ahead by deceiving others: anthropologist Robert Trivers at Rutgers University in New Brunswick, New Jersey, argues that when we’ve tricked ourselves, we don’t have to work so hard to trick others, too.

Confidence also helps in finding a romantic partner, and so in reproduction. When it comes to overestimating our looks, we’re all at it – although men are on average worse offenders than women. According to a 2016 study by Marcel Yoder of the University of Illinois in Springfield and his colleagues, men seem to suffer from a ‘frog prince’ delusion: they accurately assess other people’s lesser perception of them, while persisting in a more positive perception of themselves.

The real downsides come when you’re less aware of how others perceive you. If you are self-confident without being self-aware, you are likely to be seen as a jerk. ‘It’s hard to come off as humble or modest when you’re clueless about how other people see you,’ says Yoder. Plus we may make bad decisions on the basis of an inflated sense of expertise or understanding.

Particularly in the political arena, our ‘bias blind spot’ – a belief that our world view is based on objective truth, while everyone else is a deluded fool – can become problematic, especially as the echo chamber of social media exposes us to fewer contrary views. ‘It can make opposing parties feel that the other side is too irrational to be reasoned with,’ says Wenjie Yan, who studies communication at Washington State University in Pullman.

So how can we preserve the good while avoiding the downsides? Different strategies and training programmes do exist for overcoming our inbuilt biases. Most begin by simply making people aware of them and how they can affect our decision-making.

At home, we can use an exercise that psychologists call ‘perspective-taking’. This amounts to trying to see a dispute from the other person’s point of view, says Irene Scopelliti, who studies decision-making at City University of London. She also points out that acting when you’re all riled up – in a state of high emotion – only entrenches your bias. ‘We know how to make unbiased decisions, but often emotion pushes us, or we aren’t willing to put in the effort,’ she says. But then comes the good news: ‘practice can make us better.’

Evolution made you a scaredy-cat

An inbuilt fear factory makes us err on the side of caution. But by engaging a different way of thinking we can stop panicking and weigh up the real risks, says Sally Adee.

In the aftermath of 11 September 2001, most people in the US believed that they or their families were highly likely to become victims of terrorist attacks. ‘Which is just off the charts crazy when you think about it for even a minute,’ says Dan Gardner, an author and risk consultant based in Canada. Instead of boarding planes, people in the US got in their cars. Over the following five years, the annual death toll on the road was on average 1100 higher than it had been in the five preceding years.

We are, in general, appalling at assessing risk: driving is inherently riskier than flying, terrorists or no terrorists. We also underestimate our chance of divorce, and spend more than is rational on lottery tickets and less than is rational on climate change. We fear our kids being abducted, so drive them to school, ignoring the greater risks that poses to their health and well-being.

How to do better? First, switch off your gut. Psychologists characterise our risk problem as a clash between system 1 and system 2 thinking. System 1 is the product of evolved biases shaped over thousands of years. ‘If you saw a shadow in the grass and it was a lion and you lived to tell the tale, you’d make sure to run the next time you saw a shadow in the grass,’ says Gardner.

This inbuilt fear factory is highly susceptible to immediate experience, vivid images and personal stories. Security companies, political campaigns, tabloid newspapers and ad agencies prey on it. System 1 is good at catastrophic risk, but less good at risks that build up slowly over time – hence our lassitude in the face of climate change or our expanding waistlines.

So when your risk judgement is motivated by fear, stop and think: what other, less obvious risks might I be missing? This amounts to engaging the more rigorous, analytical system 2. People who deal with probability and risk professionally have been found to use system 2 more, among them bookies, professional card players – and weather forecasters. ‘Meteorologists get a bad rap,’ says Gardner, ‘but they tend to be highly calibrated, unlike most of us.’

These people receive precise, near-immediate feedback about their predictions – abuse for a false weather forecast, or a crucial card trick lost – which helps them constantly recalibrate their risk thermometer. That’s something we can all do. ‘Choose something specific you want to improve your risk intelligence for,’ says Dylan Evans, a risk researcher. ‘What time will your spouse be home tonight? Make bets with yourself. Were you right? Keep track.’
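If you want to try Evans’s suggestion, a notebook will do, but here is one minimal way to keep score in Python; the Brier score is a standard measure of calibration, and the example predictions are invented.

```python
# A bare-bones ledger for the 'make bets with yourself' exercise: record a
# probability for each prediction and whether it came true, then track the
# Brier score (0 is perfect; always guessing 50% scores 0.25).
predictions = []  # list of (probability_assigned, outcome_happened)

def log_prediction(probability, happened):
    predictions.append((probability, happened))

def brier_score():
    return sum((p - float(o)) ** 2 for p, o in predictions) / len(predictions)

log_prediction(0.8, True)    # "80% sure my spouse is home by 7pm" -- they were
log_prediction(0.6, False)   # "60% sure the 8am train runs on time" -- it didn't
print(f"Brier score so far: {brier_score():.2f}")
```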

That sounds trivial in the home, but it’s crucial in business. Part of the problem in the run-up to the financial collapse of 2008 was that individuals were no longer accountable for their own actions, says Andre Spicer, who studies organisational behaviour at City University of London. ‘At banks, there was no direct relationship between what you did and the outcome,’ says Spicer. ‘That produced irrational decisions.’

There’s one feature you see over and over in people with good risk intelligence, says Gardner. ‘I think it wouldn’t be too grandiose to call it the universal trait of risk intelligence – humility.’ The world is complex – be humble about what you know, and you’ll come out better.

Mindfulness can defeat your inner bigot

We are wired to be prejudiced and a bit racist, says Caroline Williams, but our capacity for collaboration can trump our worst instincts.

From Brexit to President Trump, recent political events have let some nasty cats out of the bag. Racists and xenophobes are on the march. But perhaps that shouldn’t be so surprising: after all, that is what we are.

Here’s the unpalatable truth: we are biased, prejudiced and quite possibly a little bit racist. Psychologists have long known that we put people into little mental boxes marked ‘us’ and ‘them’. We implicitly like, respect and trust people who are the most similar to us, and feel uncomfortable around everybody else. And before you deny it, this tendency towards in-group favouritism is so ingrained we often don’t realise we are doing it. It is an evolutionary hangover affecting how the human brain responds to people it perceives as different.

In one study from 2000, just showing participants brief flashes of faces of people of a different race was enough to activate the amygdala, part of the brain’s fear circuitry, even though the participants felt no conscious fear. According to more recent research, however, the amygdala doesn’t just control fear; it responds to many things, calling on other brain areas to pay attention. So although we’re not automatically scared of people different from us, we are hardwired to flag them. Evolutionarily, that makes sense: it paid to notice when someone from another tribe dropped by.

We’re also prone to dehumanisation. When Susan Fiske at Princeton University scanned volunteers’ brains as they looked at pictures of homeless people, she found that the medial prefrontal cortex, which is activated when we think about other people, stayed quiet. Volunteers seemed to be processing the homeless people as subhuman.

‘The bad news is how fast this automated “us” and “them” response is, and how wired-in it is,’ says Fiske. ‘The good news is that it can be overcome depending on context.’ In both the homeless study and a rerun of the amygdala study, Fiske found that fear or indifference quickly disappeared when participants were asked questions about what kind of food the other person might enjoy. ‘As soon as you have a basis for dealing with a person as an individual, the effect is not there,’ says Fiske.

What’s more, what we put in the ‘them’ and ‘us’ boxes is remarkably flexible. When Jay Van Bavel at New York University created in-groups including people from various races, volunteers still preferred people in their own group, regardless of race. All you have to do to head off prejudice, it seems, is to convince people they are on the same team.

We are also instinctively cooperative, at least when we don’t have time to think about it. Yale University psychologist David Rand asked volunteers to play gambling games in which they could choose to be selfish, or cooperate with other players for a slightly lower, but shared, payoff. When pressed to make a snap decision, people were much more likely to cooperate than when given time to mull it over.

So perhaps you’re not an arsehole after all – if you know when to stop to think about it and when to go with your gut. Maybe, just maybe, there is hope for the world.

We’re all reading each other’s minds all the time

Your power to predict what other people think is the secret sauce of culture and social connections. And there’s scope for us all to improve, writes Gilead Amit.

Meet Sally and her flatmate Andy. Sally has made a birthday cake for Andy, and leaves it in the fridge while she pops out to buy some candles. While she’s gone, Andy sneaks into the kitchen, takes the cake and hides it on a shelf to consume at leisure. When Sally comes back, where does she think the cake will be?

If you answered ‘the fridge’ then congratulations: you understand that, based on what they know, people can have different views from you. You possess a ‘theory of mind’ – something that informs your every waking moment, says Josep Call, a psychologist at the University of St Andrews. ‘When we get dressed in the morning, we’re constantly thinking about what other people think about us.’ No other animal can match our ability, making it the essential lubricant for the social interactions that set humans apart.

Take the arts. Artists must be able to imagine what their audiences will think of their characters. Without a theory of mind, there would be no compelling TV soaps, sculptures or books. Some think William Shakespeare must have had a particularly well-developed theory of mind to create such rich, complex characters.

Mind reading is also crucial for societal norms. ‘People not only respond to what you do, but to what you intend to do,’ says Call. If you hit someone with your car, the difference between a verdict of murder or manslaughter depends on your intent.

Yet we can’t all read minds equally well, says Rory Devine, a psychologist at the University of Cambridge. Most of us come a cropper when attempting nested levels of mind reading. Think of Sally hunting for her cake again, but imagine where she might look if we take into account what she thinks about how Andy’s mind works. The more recursive steps we add, the more we stumble. ‘When you go beyond five levels, people get really, really bad,’ says Call.

Being a good mind reader pays. Children who are relatively proficient at it later report being less lonely, and their teachers rate them as more sociable.

We may be able to improve our skills. We know our mind reading apparatus mostly develops before the age of 5, and the principal factor that determines its development is whether our families and friends talk much about the emotions and motivations of others. ‘The ability to read minds is something we might learn gradually from the guidance of others,’ says Devine.

This suggests that it could help to just think about what it’s like to be in other people’s shoes. In 2014, Devine and his colleagues showed that this learning can continue far beyond early childhood. When they asked 9- and 10-year-old children to read and discuss short vignettes about social situations, the team found they developed better mind-reading skills than children in a control group. Similar improvements have also been seen in people over the age of 60. You’re never too old to be a better mind reader.

You are the greatest runner on Earth

Other species might beat us for sheer speed or distance, but none can run as far and as fast across as many conditions as humans can. And yes, says Catherine de Lange, that includes you.

In October 2016, Daniel Lieberman set out on the race of a lifetime. A 25-mile slog in the Arizona heat, climbing a mountain more than 2,000 metres tall. To top it all, 53 of his competitors had four legs. This was the 33rd annual Man Against Horse Race. Lieberman, by his own admission not a great runner, outran all but 13 horses – and so could you.

Lieberman studies human evolutionary biology at Harvard University, and part of his work over the past 15 years has focused on a unique set of adaptations that suggest modern humans evolved not just to walk, but to run long distances.

One is our cooling equipment. ‘The fact we have sweat glands all over our body and we’ve lost our fur enables us to dump heat extremely effectively,’ says Lieberman. This is crucial when running for long periods. It helps to explain why animals struggle to beat us in the heat, even though sled dogs can run more than 100 kilometres a day pulling humans in cold climates. Hence also Lieberman’s success in Arizona. ‘The hotter it is, the better humans are able to run compared with horses,’ he says.

Then there are adaptations that offset our clumsy, inefficient bipedal frames. Short toes and large gluteal muscles assist with balance and stability. The Achilles tendon and other springs in the feet and legs help us to store and release energy. We tend to have a high proportion of slow-twitch muscle fibres, which produce less power but take longer to tire than the short-burst, fast-twitch fibres needed for sprinting.

The nuchal ligament at the base of the skull also helps to keep our heads, and therefore our gaze, steady when we run. Other decent runners such as dogs and horses have one too, but it is absent in poor runners such as pigs, non-human primates and early hominins like Australopithecus. Many of these adaptations are specific to running, suggesting we’re not just good at it because we are good walkers.

One theory is that we began running as scavengers, where an ability to outrun other carnivores to reach fresh meat was to our advantage. As we improved, we became better hunters, able to track and outrun our prey over large distances before we had spears and arrows. This all helped to provide us with the extra protein we needed to acquire our greatest advantage: a bigger brain. ‘The features that we see in the fossil record that are involved in running appear about when we start to see evidence for hunting. And soon thereafter their brains start to get bigger,’ says Lieberman.

So can you unleash your inner marathon runner? In a word, yes. Genetics is important but training is key, says sports scientist Chris Easton at the University of the West of Scotland. You’ll need stronger leg and bum muscles, to be sure, but you can get these simply by starting to run. You will find it hard to increase the proportion of slow-twitch muscle fibres you have, but if you find yourself flagging, take your time and take comfort in the fact we evolved to jog, rather than sprint, over the finish line. ‘Millions of people run marathons and people tell us we are crazy,’ says Lieberman. ‘Actually, it’s part of who we are.’

Think you’re an atheist? Heaven forfend!

Graham Lawton reveals that your default is to believe in the supernatural, and there is no manual override.

Fingers crossed, touch wood. By the time you finish this, you’ll believe you believe in the supernatural.

For most of us, that is a given. The vast majority of people are religious, which generally entails belief in a supernatural entity or three. And yet amid the oceans of religiosity are archipelagos of non-belief. Accurate numbers are hard to come by, but even conservative estimates suggest that half a billion people around the world (and counting) are non-religious.

But are they, really? Among the scientists who study the cognitive foundations of religious belief, there is a widespread consensus that atheism is only skin-deep. Scratch the surface of a non-believer and you’ll find a writhing nest of superstition and quasi-religion.

That’s because evolution has endowed us with cognitive tendencies that, while useful for survival, also make us very receptive to religious concepts. ‘There are some core intuitions that make supernatural belief easy for our brains,’ says psychologist Ara Norenzayan at the University of British Columbia.

One is the suite of cognitive abilities known as theory of mind, which enables us to think about and intuit other people’s thoughts. That’s damn useful for a social species like us, but also tricks us into believing in disembodied minds with mental states of their own. The idea that mind and body are distinct entities also seems to come instinctively to us. Throw in teleology – the tendency to seek cause and effect everywhere, and see purpose where there is none – and you can see why the human brain is a sitting duck.

The same thought processes probably underlie belief in other supernatural phenomena such as ghosts, spiritual healing, reincarnation, telepathy, astrology, lucky numbers and Ouija boards. These are almost as common as official religious beliefs; three-quarters of Americans admit to holding at least one of 10 common supernatural beliefs.

With all this supernatural equipment filling our heads, atheism and scientific materialism are hard work. Overriding inbuilt thought patterns requires deliberate and constant effort, plus a learned reference guide to what is factually correct and what is right and wrong. Just like a dieter tempted by a doughnut, willpower often fails us.

Many experiments have shown that supernatural thoughts are easy to invoke even in people who consider themselves sceptics. Asked if a man who dies instantly in a car crash is aware of his own death, large numbers instinctively answer ‘yes’. Similarly, people who experience setbacks in their lives routinely invoke fate, and uncanny experiences are widely attributed to paranormal phenomena.

Obviously, it is impossible to prove that everyone falls prey to supernatural instincts. ‘There is no more evidence than a few studies, and even they do not provide enough support for the argument,’ says Marjaana Lindeman, who studies belief in the supernatural at the University of Helsinki. Nonetheless, the supernatural exerts a pull on us that is hard to resist. If you’re still under the illusion that you are a rational creature, that really is wishful thinking.

You might be a hologram

You, I and the entire universe may be a hologram – and a major new experiment is dedicated to finding out, says Marcus Chown.

Take a look around you. The walls, the chair you’re sitting in, your own body – they all seem real and solid. Yet there is a possibility that everything we see in the universe – including you and me – may be nothing more than a hologram.

It sounds preposterous, yet there is already some evidence that it may be true. If it does turn out to be the case, it would turn our common-sense conception of reality inside out.

The idea has a long history, stemming from an apparent paradox posed by Stephen Hawking’s work in the 1970s. He discovered that black holes slowly radiate their mass away. This Hawking radiation appears to carry no information, however, raising the question of what happens to the information that described the original star once the black hole evaporates. It is a cornerstone of physics that information cannot be destroyed.

In 1972, Jacob Bekenstein at the Hebrew University of Jerusalem showed that the information content of a black hole is proportional to the two-dimensional surface area of its event horizon – the point of no return for in-falling light or matter. Later, string theorists managed to show how the original star’s information could be encoded in tiny lumps and bumps on the event horizon, which would then imprint it on the Hawking radiation departing the black hole.
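For the record, the relationship Bekenstein and Hawking arrived at can be written in one line: a black hole’s entropy – a measure of its information content – grows with the area A of its event horizon, counted in squares one Planck length on a side, rather than with its volume.

```latex
S_{\mathrm{BH}} = \frac{k_{\mathrm{B}} c^{3}}{4 G \hbar}\, A
               = \frac{k_{\mathrm{B}} A}{4 \ell_{\mathrm{P}}^{2}},
\qquad
\ell_{\mathrm{P}} = \sqrt{\frac{G\hbar}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m}.
```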

This solved the paradox, but theoretical physicists Leonard Susskind and Gerard ’t Hooft decided to take the idea a step further: if a three-dimensional star could be encoded on a black hole’s 2D event horizon, maybe the same could be true of the whole universe. The universe does, after all, have a horizon 42 billion light years away, beyond which point light would not have had time to reach us since the big bang. Susskind and ’t Hooft suggested that this 2D ‘surface’ may encode the entire 3D universe that we experience – much like the 3D hologram that is projected from your credit card.

It sounds crazy, but we have already seen a sign that it may be true. Theoretical physicists have long suspected that space–time is pixelated, or grainy. Since a 2D surface cannot store enough information to render a 3D world perfectly, these pixels would be far bigger in a holographic universe than in a genuinely three-dimensional one. ‘Being in the [holographic] universe is like being in a 3D movie,’ says Craig Hogan of Fermilab in Batavia, Illinois. ‘On a large scale, it looks smooth and three-dimensional, but if you get close to the screen, you can tell that it is flat and pixelated.’

Quantum fluctuation

Hogan recently looked at readings from an exquisitely sensitive motion-detector in Hanover, Germany, which was built to detect gravitational waves – ripples in the fabric of space–time. The GEO600 experiment has yet to find one, but in 2008 an unexpected jitter left the team scratching their heads, until Hogan suggested that it might arise from ‘quantum fluctuations’ due to the graininess of space–time. By rights, these should be far too small to detect, so the fact that they are big enough to show up on GEO600’s readings is tentative supporting evidence that the universe really is a hologram, he says.

Bekenstein is cautious: ‘The holographic idea is only a hypothesis, supported by some special cases.’ A dedicated instrument built at Fermilab in 2014, the Holometer, hoped to find more evidence. It didn’t, but that doesn’t rule the idea out.

Solid evidence for the holographic universe would challenge every assumption we have about the world we live in. It would show that everything is a projection of something occurring on a flat surface billions of light years away from where we perceive ourselves to be. As yet we have no idea what that ‘something’ might be, or how it could manifest itself as a world in which we can do the school run or catch a movie at the cinema. Maybe it would make no difference to the way we live our lives, but somehow I doubt it.

You might be older – or younger – than your years

Biological age can diverge from the number of years we celebrate on our birthdays, says Helen Thomson – and it sheds light on the time we have left.

Age is a peculiar concept. We tend to think of it as the number of birthdays we have celebrated – our chronological age. But this is just one indicator of the passage of time. We also have a biological age, a measure of how quickly the cells in our body are deteriorating compared with the general population. And these two figures don’t always match up.

Just take a look around: we all know people who look young for their age, or folks who seem prematurely wizened. Even in an individual, different parts of the body can age at different speeds. By examining how chronological age lines up with biological age across the population, researchers are starting to pin down how these two measures should sync up – and what it means for how long we have left when they don’t.

Studies have shown that our biological age is often a more reliable indicator of future health than our actual age. It could help us identify or even prevent disease by tracking the pace at which we’re getting older. It may even allow us to slow – or reverse – the ageing process.

I became interested in my biological age after discovering in my 20s that my ovaries were ageing prematurely. Yet now, at 33, I am still often asked for identification when buying alcohol, suggesting my face is holding up pretty well. It made me wonder about other aspects of my biological age, and whether knowing more might help me to live a longer, healthier life. So, I set out to answer the question: How old am I really?

Ageing is the progressive loss of function accompanied by decreasing fertility and increasing mortality, according to Thomas Kirkwood from the Institute for Ageing at the University of Newcastle. Surprisingly, it’s not universal across species. Turritopsis dohrnii, the ‘immortal jellyfish’, can revert to a larval state and grow back into an adult again and again, for instance. We don’t have that luxury. According to the UK Office for National Statistics, I can expect to live to 83.

The most widely cited theory of ageing is that telomeres, genetic caps on the ends of chromosomes, grow shorter each time a cell divides – like a wick burning on a candle. Once these are used up, the cell withers and dies. But a new idea gaining ground suggests ageing is instead a byproduct of how energy intensive it is for our bodies to continuously repair faults that occur in our DNA as cells divide. ‘It doesn’t make evolutionary sense to maintain that process for ever,’ says Kirkwood. Indeed, several animal studies have shown that genes that affect lifespan do so by altering cells’ repair mechanisms. Little by little, faults build up in cells and tissues and cause us to deteriorate.

This is where biological age comes in – it attempts to identify how far along we are in this process. It’s not a simple task, because no one measure of cellular ageing gives a clear picture. As Kirkwood says, ‘Attempts to measure biological age have been bedevilled by the difficulty of taking into account the many different biological processes at work.’

Still, a growing number of researchers have taken up the challenge. Before seeking them out, however, I began to wonder whether I could be in for a nasty surprise. When Daniel Belsky and his team at Duke University in North Carolina studied 18 different markers of cellular ageing – including blood pressure and cardiovascular function – in almost 1000 adults, they found that some were ageing far faster or slower than their birth certificates would suggest. One 38-year-old had a biological age of 28; another’s was 61.

So if I have an accelerated biological age, does it mean I’m less likely to make it to 83? Studying humans until they die takes a long time, so the causal relationship is tricky to pin down. But an increasing number of studies suggest this is a fair assumption. Belsky’s team found that 38-year-olds with an older biological age fared worse on physical and mental tests, for instance. And when James Timmons and colleagues at King’s College London examined expression of 150 genes associated with ageing in 2015, they found that biological age was more closely tied to risk of diseases such as Alzheimer’s and osteoporosis than chronological age.

Braced for a rocky ride, I started the hunt for my real age by looking in the mirror. In 2015, Jing-Dong Jackie Han and colleagues at the Chinese Academy of Sciences in Shanghai analysed 3D images of more than 300 faces of people between 17 and 77 years old, and created an algorithm to predict age. When they used it on a new group of faces, they found that people born the same year differ by six years in facial age on average, and that these differences increase after 40.

‘Some molecular changes in the body can be reflected on the face,’ says Han. High levels of low-density cholesterol (the ‘bad’ kind) are associated with puffier cheeks and pouches under the eyes, for instance. Dark circles under the eyes can result from poor kidney function or blood circulation. The message is that if we look older than we should, it could be a sign of underlying disease.

The algorithm was developed using a population of Han Chinese people and so far has only been tested in four people of white European descent. So, as a white woman, I had my face analysed by a similar algorithm designed by anti-ageing company Youth Laboratories in Russia. The result was a win for me: I apparently have the face of a 25-year-old.

Next it was time to draw some blood. Using 32 different parameters that reflect disease risk, a team at the company Insilico Medicine developed a deep-learning algorithm to predict age. After training it on more than 60,000 blood samples of known chronological age, they used it to accurately predict age from new samples to within 6.2 years. The team found that people whose blood age was higher than their actual number of years were more likely to have health problems. The algorithm is free to use, so after I had my blood taken by Medichecks in London, I plugged in my details at www.aging.ai. Reassuringly, it shaved off a couple of years, estimating my real age to be 31.

Another method for measuring biological age is to look at how complex carbohydrates called glycans are attached to molecules in the body, a process called glycosylation. Gordan Lauc and colleagues at the University of Zagreb recently discovered that glycosylation of an antibody called immunoglobulin G changes as we get older, and that this can be used to predict chronological age. When Lauc’s team compared 5117 people’s ‘glycan age’ with known markers for health deterioration, such as insulin, glucose, BMI and cholesterol, they found that those who scored poorly on these markers also had an older glycan age.

‘Your glycan age seems to reflect how much inflammation is occurring in the body,’ says Lauc. Prolonged inflammation can make cells deteriorate faster, so having an accelerated glycan age could be used as an early warning signal that your health is at risk, he says.

Lauc and Tim Spector, a genetic epidemiologist at King’s College London, founded GlycanAge – a company that tests people’s glycan levels – and kindly tested mine for free. It turns out my glycan age is just 20, a whopping 13 years younger than I am.

With a new spring in my step, I moved on to what is now considered the most accurate way to measure human ageing: an intrinsic ‘epigenetic’ clock present in all our cells. Epigenetics refers to the process by which chemical tags called methyl groups are added to or removed from DNA, which in turn influences which genes are switched on or off. Some changes in methylation patterns over time can be used to estimate age.

The father of this technique is Steve Horvath at the University of California, Los Angeles. In 2011, looking at methylation patterns in blood samples, Horvath and colleagues were able to predict chronological age to within five years. He has since analysed data from more than 13,000 samples and identified methylation patterns to estimate a healthy person’s age to within 2.9 years. ‘The age estimate is so accurate it continues to amaze me,’ says Horvath. (Unfortunately, for the purposes of my investigation, at $900 a pop, I decided to give this test a miss.)
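Horvath’s clock is, at heart, a penalised linear regression over a few hundred methylation sites. The toy sketch below shows the general shape of such a model rather than the published one: the data are synthetic, the numbers invented, and scikit-learn’s ElasticNetCV stands in for the real training pipeline.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Toy illustration of a methylation 'clock': learn a sparse linear map from
# CpG methylation fractions to chronological age. Everything here is made up;
# real clocks are trained on thousands of samples and hundreds of CpG sites.
rng = np.random.default_rng(0)
n_samples, n_cpgs = 500, 200
ages = rng.uniform(20, 80, n_samples)

# Pretend 20 of the 200 sites drift linearly with age; the rest are pure noise.
weights = np.zeros(n_cpgs)
weights[:20] = rng.normal(0.0, 0.004, 20)
methylation = 0.5 + np.outer(ages, weights) + rng.normal(0.0, 0.02, (n_samples, n_cpgs))

clock = ElasticNetCV(cv=5, random_state=0).fit(methylation, ages)
predicted = clock.predict(methylation)  # in-sample, just to show the fit
print(f"median absolute error: {np.median(np.abs(predicted - ages)):.1f} years")
```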

Horvath is also interested in discrepancies between our chronological age and epigenetic clock, which diverge most drastically in cancer tissue. Trey Ideker, a medical researcher at the University of California, San Diego, and his colleagues discovered that the epigenetic age of kidney, breast, skin and lung cancer tissue can be almost 40 per cent older than the person it came from.

A recent study by Horvath and his team suggests that breast tissue from healthy women aged 21 appears 17 years older than their blood, which tends to correlate closely with their chronological age. This difference decreases as we get older; for women aged 55 years, breast tissue appears around eight years older than blood. By identifying what the normal differences are, researchers hope to flag outliers. ‘Ultimately, we want to be able to collect data from a particular organ, or from a surrogate tissue and say, “Wow, this woman has breast tissue that is 20 years older than it should be, so she needs to be monitored more closely for breast cancer”,’ says Horvath.

Beyond monitoring and aiding diagnoses for diseases, can any of these measures give us a better idea of how much life we have left? There is an association between our epigenetic clock and our time to death, but it’s not very accurate – yet.

In his analyses, Horvath found an association between accelerated epigenetic ageing – an older epigenetic age compared with your real age – and time to death. Around 5 per cent of the people he studied had an accelerated epigenetic age. Their risk of death in the next decade was about 50 per cent higher than those whose epigenetic age lined up with their actual years.

If our epigenetic clock is ticking down to our death, is there anything we can do to intervene? Horvath has started studying the epigenetic age of induced pluripotent stem cells (iPSCs), which are adult cells that can be pushed to revert to an embryonic-like state, from which they are capable of turning into most types of cells in the body.

The epigenetic age of iPSCs is zero. Transforming normal body cells into stem cells would be an ‘extreme rejuvenation procedure’, Horvath says. You wouldn’t want to do it to all of your cells, but perhaps it’s a strategy that could be modified to intervene with the ageing process. ‘It sounds like science fiction, but conceptually it’s possible,’ he says. ‘All epigenetic marks are reversible, so in theory it’s possible to reset the clock.’

Turn back time

Another promising, if speculative, plan might be to freeze blood stem cells when you are young so that you can use them to reconstitute your immune system when you are old.

Short of miraculous anti-ageing treatments, understanding more about biological age can still improve our health. People told their heart age – measured using parameters such as blood pressure and cholesterol – are better able to lower their risk of cardiovascular problems compared with people given standard information about heart health, for instance. (My heart, I learned, is 28 years old.)

There are not yet any placebo-controlled trials to determine whether certain lifestyle interventions can reduce biological age, and so risk of early death. But Horvath did find that the epigenetic clock is accelerated in the livers of obese people, and ticks more slowly for those who regularly consume fish and vegetables, and only drink in moderation.

Unsurprisingly, exercise also seems to help. In a trial of more than 57,000 middle-aged people, those whose fitness levels resembled a younger person’s were less likely to die in the following decade or so. Fitness-associated biological age was a stronger predictor of survival than chronological age.

There is still a long way to go before we can pinpoint the exact ways to reverse ageing. But for now, I’m relieved to know that most of my body is younger than my years would suggest and, in the not too distant future, knowing my biological age could hold the key to preventing disease or even postponing death. I’ll happily celebrate turning 34 in the knowledge that my age really is just a number.

How the story of human origins is being rewritten

The past 15 years have called into question every assumption about who we are and where we came from. It turns out our evolution is more baffling than we thought. Colin Barras tries to unpick a very tangled family tree.

Who do you think you are? A modern human, descended from a long line of Homo sapiens? A distant relative of those great adventure-seekers who marched out of the cradle of humanity, in Africa, 60,000 years ago? Do you believe that human brains have been getting steadily bigger for millions of years, culminating in the extraordinary machine between your ears?

Think again, because over the past 15 years, almost every part of our story, every assumption about who our ancestors were and where we came from, has been called into question. The new insights have some unsettling implications for how long we have walked the earth, and even who we really are.

Once upon a time, the human story seemed relatively straightforward. It began roughly 5.5 to 6.5 million years ago, somewhere in an East-African forest, with a chimpanzee-like ape. Some of its descendants would eventually evolve into modern chimps and bonobos. Others left the forest for the savannah. They learned to walk on two legs and, in doing so, launched our own hominin lineage.

By about 4 million years ago, the bipedal apes had given rise to a successful but still primitive group called the australopiths, thought to be our direct ancestors. The most famous of them, dubbed Lucy, was discovered in the mid-1970s and given arch-grandmother status. By 2 million years ago, some of her descendants had grown larger brains and longer legs to become the earliest ‘true’ human species. Homo erectus used its long legs to march out of Africa. Other humans continued to evolve larger brains in an apparently inexorable fashion, with new waves of bigger-brained species migrating out of Africa over the next million years or so, eventually giving rise to the Neanderthals of Eurasia.

Ultimately, however, those early migrant lines were all dead ends. The biggest brains of all evolved in those hominins who stayed in Africa, and they were the ones who gave rise to Homo sapiens.

Until recently, the consensus was that our great march out of Africa began 60,000 years ago and that by 30,000 years ago, for whatever reason, every other contender was extinguished. Only H. sapiens remained – a species with a linear history stretching some 6 million years back into the African jungle.

Or so we thought.

Starting in the early 2000s, a tide of new discoveries began, adding layer upon layer of complexity and confusion. In 2001 and 2002 alone, researchers revealed three newly discovered ancient species, all dating back to a virtually unknown period of human prehistory between 5.8 and 7 million years ago.

Very quickly, Orrorin tugenensis, Ardipithecus kadabba and Sahelanthropus tchadensis pushed a long-held assumption about our evolution to breaking point. Rough genetic calculations had led us to believe our line split from the chimp lineage between 6.5 and 5.5 million years ago. But Orrorin, Ardipithecus and Sahelanthropus looked more like us than modern chimps do, despite predating the presumed split – suggesting our lineage might be at least half a million years older than we thought.

At first, geneticists made grumpy noises claiming the bone studies were wrong, but a decade later, even they began questioning their assumptions. In 2012, revised ideas about how quickly genetic differences accumulate in our DNA forced a reassessment. Its conclusion: the human–chimp split could have occurred between 7 and 13 million years ago.
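The arithmetic behind that reassessment is straightforward: under a simple molecular clock, the time since two lineages split is roughly the observed genetic divergence divided by twice the per-year mutation rate, so halving the assumed rate doubles the estimated date. With illustrative numbers (not the published values):

```latex
T \approx \frac{D}{2\mu}, \qquad
\frac{0.012}{2 \times (1.0\times10^{-9}\ \mathrm{yr^{-1}})} = 6\ \text{million years},
\quad
\frac{0.012}{2 \times (0.5\times10^{-9}\ \mathrm{yr^{-1}})} = 12\ \text{million years},
```

where D is the fraction of DNA letters that differ between the two species and μ is the mutation rate per site per year.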

Not so chimp-like

Today, there is no longer a clear consensus on how long hominins have walked the earth. Many are sticking with the old assumption, but others are willing to consider the possibility that our lineage is almost twice as old, implying there are plenty of missing chapters to our story still waiting to be uncovered.

The struggles don’t end there. The idea that our four-legged ancestors abandoned the forests, perhaps because of a change in climate conditions, and then adapted to walk on two legs is one of the oldest in human evolution textbooks. Known as the savannah hypothesis, it was first proposed by Jean-Baptiste Lamarck in 1809. Exactly 200 years later, an exquisite, exceptionally preserved 4.4-million-year-old skeleton was unveiled to the world, challenging that hypothesis.

‘Ardi’, a member of the species Ardipithecus ramidus, is a jewel in the hominin fossil record. She is all the more important because of the number of key assumptions she casts doubt on. Ardi didn’t have a chimp’s adaptations for swinging below branches or knuckle-walking, suggesting chimps gained these features relatively recently. In other words, the ape that gave rise to chimps and humans may not have been chimp-like after all.

And contrary to Lamarck’s hypothesis, her feet, legs and spine clearly belonged to a creature that was reasonably comfortable walking upright. Yet, according to her discoverers, Ardi lived in a wooded environment. This suggests that hominins began walking on two legs before they left the forests, not after – directly contradicting the savannah hypothesis.

Although not everyone is convinced that Ardi was a forest-dweller, other lines of evidence also suggest we have had the upright-walking story back to front all these years. Susannah Thorpe at the University of Birmingham studies orangutans in their natural environment and has found that they stand on two legs to walk along branches, which gives them better access to fruit. In fact, all living species of great ape will occasionally walk on two legs as they move around the forest canopy. It would almost be odd if our own ancestors had not.

Whether before or after standing on two legs, at some stage our ancestors must have come down from the trees. We can depend on that, at least. Entering the twenty-first century, we knew of just one group that fitted the transition stage: the australopiths, a group of ape-like bipedal hominins, known from fossils found largely in east and south Africa and dating to between 4.2 and 1.2 million years ago. They lived in the right place at the right time to have evolved into humans just before 2 million years ago. Lucy would have shown up in the middle of that period, 3.2 million years ago. Since her discovery, she has served as a reassuring foundation stone on which to build the rest of our hominin family tree, a direct ancestor who lived in East Africa’s Rift Valley.

Then, in 2001, researchers unveiled a 3.5-million-year-old skull discovered in Kenya. The skull should have belonged to Lucy’s species, Australopithecus afarensis, the only hominin species thought to be living in East Africa at the time. But its face didn’t fit. It was so flat that it could barely be considered an australopith, says Fred Spoor at University College London, who analysed the skull. He and his colleagues, including Meave Leakey at Stony Brook University in New York, gave it a new name: Kenyanthropus platyops.

On the face of it, the suggestion that Lucy’s species shared East Africa with a completely different type of hominin seemed only of marginal interest. But within a few years, the potential significance of Kenyanthropus was beginning to grow. After comparing the skull’s features with those of other hominin species, some researchers dared suggest that K. platyops was more closely related to us than any Australopithecus species. The conclusion pushed Lucy on to a completely different branch of the family tree, robbing her of her arch-grandmother position.

If that wasn’t confusing enough, other researchers were making a similar attack from a different direction. The discoverers of O. tugenensis, the 6-million-year-old hominin announced in 2001, also concluded that its anatomy was more human-like than that of australopiths, making it more likely to be our direct ancestor than Lucy or any of her kin.

Most of the research community remains unconvinced by these ideas, says Spoor, and the recent announcement that a 2.8-million-year-old human-like jawbone had been discovered in Ethiopia once more shored up Lucy’s position. ‘In many respects it’s an ideal transitional fossil between A. afarensis and earliest Homo,’ says Spoor.

Even so, Lucy’s status as our direct ancestor has been formally challenged, twice, and Spoor says it’s not inconceivable that the strength of these or other challenges will grow. ‘We have to work with what we have and be prepared to change our minds if necessary.’

Tiny brains and alien hobbits

Intriguingly, in 2015, a team announced the discovery of the oldest known stone tools. The 3.3-million-year-old artefacts were found in essentially the same deposits as Kenyanthropus. ‘By all reasonable logic Kenyanthropus would be the tool-maker,’ says Spoor. Perhaps that hints at a tool-making connection between Kenyanthropus and early humans – although there is circumstantial evidence that some australopiths used stone tools too. In any event, determining which hominins evolved into humans is no longer as clear-cut as it once was.

Other important parts of the human evolution narrative were untouched by these discoveries, in particular the ‘out of Africa’ story. This idea assumes that the only hominins to leave Africa were big-brained humans with long legs ideally suited for long-distance travel.

But discoveries further afield have begun to chip away even at this core idea. First came news, in 2002, of a 1.75-million-year-old human skull that would have housed a brain of no more than 600 cubic centimetres, about half the size of modern human brains. Such a fossil wouldn’t be an unusual find in East Africa, but this one turned up at Dmanisi in Georgia, in the Caucasus region. Clearly, small-brained hominins had left Africa.

In other respects, the Dmanisi skull and several others found at the site did not threaten the standard narrative. The Dmanisi hominins do seem to be early humans – perhaps unusually small-brained versions of H. erectus, conventionally regarded as the first hominin to leave Africa.

A discovery in 2003 would ultimately prove far more problematic. That year, researchers working on the Indonesian island of Flores found yet another bizarre skeleton. It had the small brain and small body of an early African hominin from around 2 to 3 million years ago. To make matters worse, it seemed to have lived just a few tens of thousands of years ago, in a region thought to be home only to ‘true’ long-limbed and large-brained humans. The team named the peculiar species Homo floresiensis, better known by its nickname: the hobbit.

‘I said in 2004 that I would have been less surprised if they had found an alien spacecraft on Flores than H. floresiensis,’ says Peter Brown at the University of New England in Australia, who led the analysis of the remains. The primitive-looking skeleton was, and still is, ‘out of place and out of time’.

There’s still no agreement on the hobbit’s significance, but one leading idea is that it is evidence of a very early migration out of Africa involving prehuman australopith-like hominins. In fact, the entire out-of-Africa narrative is in flux, with genetic and fossil evidence suggesting that even the once widely held opinion that our species left Africa 60,000 years ago is hopelessly wrong. Some lines of evidence suggest H. sapiens may have reached China as early as 100,000 years ago.

The hobbit was just one bizarre hominin, and could reasonably be discounted as a simple anomaly. But within little more than a decade of its discovery, two more weird misfits had come to light, both in South Africa.

Australopithecus sediba and Homo naledi are quite unlike any hominin discovered before, says Lee Berger at the University of the Witwatersrand in South Africa, who led the analysis of both. Their skeletons seem almost cobbled together from different parts of unrelated hominins. Significantly, the mishmash of features in the A. sediba skeleton, unveiled in 2010, is very different from that in the H. naledi skeleton, unveiled in 2015.

A. sediba’s teeth, jaws and hands were human-like while its feet were ape-like. H. naledi, meanwhile, combined australopith-like hips with the skull of an early ‘true’ human and feet that were almost indistinguishable from our own.

No other ancient species seems quite as strange – but, as Berger points out, very few other ancient hominins are preserved in so much detail. Perhaps that’s just an interesting coincidence. Or perhaps, he says, it’s a sign that we have oversimplified our understanding of hominin evolution.

We tend to assume that ape-like species gradually morphed into human-like ones over millions of years. In reality, Berger thinks, there may have been a variety of evolutionary branches, each developing unique suites of advanced human-like features and retaining a distinct array of primitive ape-like ones. ‘We were trying to tell the story too early, on too little evidence,’ says Berger. ‘It made great sense right up until the moment it didn’t.’

In 2017, Berger announced the age of the H. naledi remains. They are just 236,000 to 335,000 years old. Weeks later, news broke that 300,000-year-old fossils from Morocco might belong to early members of H. sapiens. If correct, the fossils extend our species’ history by a whopping 100,000 years.

H. naledi’s relatively young age is also a striking example of how complex and confusing the human evolutionary tree might really be. Human brains didn’t simply grow and grow over the millennia, with smaller-brained species falling by the wayside of a gradual evolutionary road. Instead, our species occupied an African landscape that was also home to humans with brains half the size of ours.

We can only speculate on how (or whether) the small-brained H. naledi interacted with the earliest H. sapiens. Tantalising but controversial evidence from Berger’s team suggests that H. naledi intentionally disposed of its dead – perhaps a sign that even ‘primitive’ hominins could behave in an apparently sophisticated way.

Another independent line of evidence suggests that different behaviour was not necessarily a barrier to interspecies interactions.

In the late 1990s, geneticists began to show an interest in archaeological remains. Advances in technology allowed them to sequence a small chunk of mitochondrial DNA (mtDNA) from an ancient Neanderthal bone. The sequence was clearly distinct from that of H. sapiens, suggesting that Neanderthals had gone extinct without interbreeding (‘admixing’) much with our species.

But mtDNA is unusual. Unlike the nuclear DNA responsible for the bulk of human genetics, it passes intact from a mother to her children and doesn’t mix with the father’s genes. ‘Mitochondrial DNA is the worst DNA you can choose to look at admixture,’ says Johannes Krause at the University of Tübingen in Germany.

By 2010, a very different picture was emerging. Further advances in technology meant geneticists such as Krause could piece together a full nuclear genome from Neanderthal bones. It showed subtle but distinct evidence that Neanderthals had interbred with our species after all. The behavioural differences between humans and Neanderthals were evidently not enough to preclude the occasional tryst.

Arguably, this wasn’t the biggest genetics announcement of the year. In the course of their search, Krause and his colleagues had examined genetic material extracted from a supposed Neanderthal bone fragment unearthed in a Siberian cave in 2008. To everyone’s surprise, the DNA in the bone wasn’t Neanderthal. It came from a related but distinct and entirely new hominin group, now dubbed the Denisovans.

To this day, the Denisovans remain enigmatic. All that we have of them are one finger bone and three teeth from a single cave. We don’t know what they looked like, although H. sapiens considered them human enough to interbreed with them: a Denisovan nuclear genome sequence published in 2010 showed clear evidence of sex with our species. The DNA work also shows that they once lived all across East Asia. So where are their remains?

Slap and tickle

Fast-forward to 2017, and the interbreeding story has become more complex than anyone could have imagined in 2000. Krause reels off the list. ‘Neanderthals interbred with H. sapiens. Neanderthals interbred with Denisovans. Denisovans interbred with H. sapiens. Something else that we don’t even have a name for interbred with Denisovans – that could be some sort of H. erectus-like group …’

Although weird bones have done their bit to call our history into question, it’s the DNA inside them that may have done the most to shake up our evolutionary tree. With evidence of so much ancient interbreeding, it becomes far more complicated to decide where to draw lines between the different groups, or even whether any lines are justified.

‘How do you even define the human species now?’ says Krause. ‘It’s not an easy discussion.’ Most of us alive today carry inside our cells at least some DNA from a species that last saw the light of day tens of thousands of years ago. And we all carry different bits – to the extent that if you could add them all up, Krause says you could reconstitute something like one-third of the Neanderthal genome and 90 per cent of the Denisovan genome. With this knowledge, can we even say that these species are truly extinct? Pushing the idea one step further, if most living humans are a mishmash of H. sapiens DNA with a smattering from other species, is there such a thing as a ‘true’ H. sapiens?

Having dug ourselves into this philosophically troubling hole, there’s probably only one way to find our way out again: keep digging for fossils and probe them for more DNA.