The riddle of the nocturnal sun

What could be more certain than night following day? Yet before artificial lights washed out our view of the sky, reports of nights as bright as day were common, discovers Rebecca Boyle. What lay behind the phenomenon was a mystery – until now.

In the millennia before street lights and smartphones, humans could, on rare occasions, walk around on a moonless night and see clearly. Looking up, they could see broad luminous patches of light stretching across the sky, which brightened the heavens in all directions as though it were daylight. People could read without candlelight, view small details in their surroundings, and make out landscapes in the distance. It was as if the world were illuminated by a hidden night-time sun.

The existence of bright nights is well accepted, but their cause remains a mystery. Frustratingly, sightings have almost entirely faded away in the past few decades, making it seem that any hope of solving the riddle was dimming. Now, though, one man says he has seen the solution.

The earliest account of a bright night comes from Pliny the Elder, a Roman army commander who studied nature in his spare time. In his encyclopaedic Natural History of around AD 77, he wrote that the ‘phenomenon commonly called a nocturnal sun … a light emanating from the sky at night’ has been seen many times. In 1988, a French atmospheric scientist named Michel Hersé produced the definitive collection of accounts of bright nights, which documented similar stories from the past millennium and all over the world. In French, they were nuits claires, and in German helle Nächte.

But sightings have become rarer. The most recent may be from 22 and 23 August 2001 at the Leoncito Astronomical Complex high in the Argentinian Andes. During that event, Steven Smith of Boston University in Massachusetts and his colleagues reported a night sky that was 10 times brighter than normal.

There is an obvious reason why the frequency of reported bright nights might have fallen: it has to be dark for us to notice them, and these days, 99 per cent of people in Europe and North America sleep under an artificially lit sky.

Hersé’s book suggests that about one bright night used to be observed every year, but aside from that reveals no obvious temporal or geographical pattern. ‘I think you would have to have been in the right place at the right time, and in the right situation, to see one,’ says John Barentine, an astronomer at the International Dark-Sky Association, which works to combat light pollution.

Luminous smog

However, Barentine points to an interesting clue buried in nineteenth-century accounts. These frequently include a description of a ‘luminous smog’ in the air. Astronomers and maritime observers said the effect was distinct from auroras or the faint nocturnal glow known as the zodiacal light, a pyramid-shaped brightness produced when space dust reflects sunshine coming from below the horizon. This suggests there might be some sort of reflective haze hanging in the upper atmosphere.

Perhaps that could have been dust from volcanoes or meteors, says Barentine. Take the account of a diarist we know only as M. Toucher, writing near Paris on 30 June 1908. It is possibly no coincidence that this was the day of the Tunguska event, when a huge space rock exploded in the upper atmosphere over Siberia. People around the world reported a haze in the atmosphere for months afterwards, and light reflecting from the haze might explain why Toucher could write: ‘At 22.30 … Very clear sky, full of stars which shine to the horizon. No moonlight. All the details of the landscape are visible.’

Despite Toucher’s observation during a night when there was no moonlight, some have wondered whether bright nights could simply be cloudless nights with a full moon and bright stars. However, in 1909 L. Yntema, a doctoral student at the University of Groningen in the Netherlands, settled that question. After measuring the total amount of light from all the stars reaching Earth’s surface, he found a discrepancy in the light on bright nights. That seemed to point to some sort of atmospheric phenomenon as their cause. Yntema called it ‘Earthlight’.

Rolling on the waves

So, are we sure that this isn’t just a rare, mid-latitude aurora? That possibility was ruled out by Robert Strutt, son and heir to Lord Rayleigh, a physicist who, among other things, had discovered that the way gas molecules scatter light explains why the sky looks blue. The younger Rayleigh witnessed a bright night on 8 November 1929, and demonstrated that the light came from all directions. In an aurora, it typically comes in streaks.

Today, bright nights may have all but vanished, but we do have certain advantages over Rayleigh – satellites, for example. In the late 1980s, Gordon Shepherd of York University in Toronto, Canada, built a satellite instrument called WINDII, which could monitor waves of air as they rolled through the atmosphere. He soon found that these waves could pile up on top of one another to produce towers of pressurised air.

Along with the waves, Shepherd also studies how the chemical make-up of the atmosphere changes through the day. During daylight hours, ultraviolet radiation from the sun splits molecular oxygen into individual atoms. When the sun goes down, the atoms rejoin. This produces a small amount of light, called airglow.

Airglow is usually barely visible with the naked eye from Earth’s surface, but looking at WINDII readings, which spanned from 1991 to 2004, Shepherd noticed the airglow emissions varied wildly from night to night, and from place to place.

In 2017, it occurred to him that air waves and airglow could be connected. The waves might force the oxygen into a higher concentration, he thought, creating a more intense glow that could explain bright nights. ‘I don’t know why it came to me, but I said, “Ah, that’s the explanation”,’ he says.

To verify his suspicion, Shepherd first had to account for the sun’s activity, which can affect the brightness of airglow too. He and his colleague Young-Min Cho went back through WINDII data for 1992 and 1996, when the sun’s activity was at different levels. Cho wrote an algorithm that could search the data, discard any nights when there was an aurora and find times when waves might have piled up enough to produce a bright night. For both years, that analysis showed that the waves could have produced a bright night about 7 per cent of the time at any given spot on Earth.

That convinced them that the action of the waves was a greater influence on the airglow than increased solar activity. But it also indicated that you would get about 25 bright nights a year, which doesn’t tally with Hersé’s collected accounts. However, further analysis showed that stacked waves and a cloud-free night are not very likely to coincide at any given spot, reducing the expected frequency. ‘I think that’s pretty consistent with the historical record,’ says Shepherd.
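
The arithmetic behind those estimates is simple enough to lay out. In the sketch below, only the 7 per cent figure comes from Shepherd and Cho’s analysis; the chance of a clear, moonless sky is an assumed value for illustration:

```python
p_pileup = 0.07       # fraction of nights with sufficiently stacked waves (WINDII analysis)
nights_per_year = 365

print(p_pileup * nights_per_year)   # ~25 potential bright nights a year at one spot

# An observer also needs a cloud-free, moonless, truly dark sky that night.
p_clear_dark = 0.05                 # assumed value, not from the article
print(p_pileup * p_clear_dark * nights_per_year)  # ~1.3 a year, close to Herse's one-a-year record
```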

Shepherd, now in his eighties and retired, hasn’t been able to link any of the suitably stacked waves with eyewitness reports of bright nights, partly because modern reports are so rare. To do so would be a neat confirmation, though, so he is looking into crowdsourcing a bright night. ‘You could get together 500 to 1000 people via the web and cover different longitudes and potentially all nights of the year,’ he says. Enough to brighten someone’s night, anyway.

Lightning shouldn’t be possible

You might have been told that lightning is created by highly charged particles in thunderclouds rubbing together, but the picture is far more mysterious, finds Stephen Battersby.

One mystery is how thunderclouds become so highly charged. The best explanation is that collisions between small ice particles and heavier gobs of slush called graupel tend to transfer electrical charge, but the role of this process in real clouds is not proven.

An even bigger puzzle is how the huge current of a lightning bolt ever begins to flow when air is an electrical insulator. It is possible to make air break down to form a conducting plasma, but this requires a fearsomely intense electric field of more than a million volts per metre. Although meteorologists have sent hundreds of instrument-laden balloons and rockets into thunderclouds to test local conditions, the strongest fields they have seen are only about a tenth of that critical value.

Perhaps lightning needs some kind of catalyst to let fly? One theory is that cosmic rays are involved. These charged particles are probably generated by supernova explosions far away in the galaxy. A cosmic-ray proton can have enough energy to generate a cascade of relativistic particles when it collides with a molecule in the atmosphere. This cascade ionises the air, producing a conical shower of free electrons where a current might begin to flow.

Lightning often produces flashes of X-rays and gamma rays, and even beams of antimatter. These phenomena imply that some relativistic process is involved, but they don’t prove that cosmic rays are the trigger.

To find out whether supernovae really are implicated, meteorologist William Beasley at the University of Oklahoma is collaborating with a team of physicists to develop a ground-based monitoring system. ‘We are putting out cosmic-ray detectors along with our lightning mapping array to see if they coincide,’ he says.

Even if they do, the puzzle will not be solved. The electrons liberated by a cosmic-ray shower are free for just a few microseconds – not long enough to maintain a large current. That is long enough, though, to boost the electric field in a cloud to a few hundred kilovolts per metre, says Beasley’s colleague Danyal Petersen at the University of Nevada, Reno. A strong field might allow another process to kick in, as it can stretch raindrops inside a cloud into pointed, needle-like shapes. Like the point of a lightning conductor, these rain needles could enhance the local electric field, forming coronas of charged air. These could spread and merge, eventually forming an ionised path called a leader that can carry the full fury of a lightning bolt. Measuring the sequence of radio frequencies emitted by the whole process could test this theory.
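
To see the gap these mechanisms must bridge, here is a back-of-envelope sketch using the field strengths quoted above. The needle enhancement factor is an invented placeholder, since the article gives no measured value:

```python
BREAKDOWN = 1.0e6   # V/m needed to turn air into a conducting plasma
MEASURED = 1.0e5    # strongest in-cloud field actually recorded (~a tenth of that)

# Step 1: a cosmic-ray shower briefly boosts the field by a few hundred kV/m.
after_shower = MEASURED + 3.0e5

# Step 2: pointed 'rain needles' concentrate the field at their tips.
# The enhancement factor here is purely illustrative, not a measured value.
NEEDLE_ENHANCEMENT = 4.0
at_tip = after_shower * NEEDLE_ENHANCEMENT

print(at_tip >= BREAKDOWN)   # True: a corona could form and seed a leader
```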

The metals made with forbidden chemistry

From bronze to steel, alloys are the backbone of the modern world. James Mitchell Crow discovers how a recipe that shouldn’t work is creating metal mixtures with totally unexpected abilities.

One of the oldest shipwrecks ever discovered lies off the southern coast of Turkey. First glimpsed by a young sponge diver in 1982, it was carrying a curious cargo: 9 tonnes of copper and 1 tonne of tin. A curious cargo, that is, unless you know the recipe for bronze. The Uluburun wreck, named after a nearby town, dates from 1300 BC, smack in the middle of the Bronze Age.

Today, arguably, we live in the steel age, but the principle underlying our defining material remains the same. You take a pure metal and enhance it by adding a pinch of another element. One part tin to nine parts copper gives bronze; a smattering of carbon added to iron yields steel. This is the recipe for making alloys – materials whose strength, durability and workability make them the basis of everything in the modern world from cutlery to lamp posts to bridges.

But are traditional alloys the best we can do? Increasingly, metallurgists are questioning this received wisdom. By ripping up the millennia-old rulebook, they are making wild metallic mixtures where no single element dominates, and in the process producing materials the likes of which we have never seen. With applications from nuclear fusion reactors to jet engines to basic chemistry and more, it’s a rich new material seam to mine – and we’ve only just begun to scratch the surface.

There are good reasons why we have made alloys in the same way for so long. Because their layers of identical atoms can slip past each other easily, pure metals are soft. That’s why gold panners can tell a pure nugget just by biting it. But introduce interloper atoms and you can disrupt that slipping, producing a tougher material. This idea has furnished us with many of the materials that underpin modern technology.

The heretical idea to throw out the established recipe came to researcher Jien-Wei Yeh in 1995. Driving across the Taiwan countryside to a meeting in Taipei, Yeh found himself pondering a problem ancient alloy-makers were already familiar with: there’s invariably a point beyond which adding more of the alloying elements negates their benefits. At the atomic scale, the additive atoms start to form little clusters of metal-within-metal that make the material brittle.

Yeh’s sudden thought was that the concept of entropy might provide a workaround. Entropy is a way of quantifying disorder in a system, and the rules of thermodynamics say that when something is more disordered it is more stable. So rather than make orderly alloys from one main element spiked with pinches of others, why not mix five, six or more elements? Swirl together enough elements in equal proportions, Yeh reasoned, and the resulting cocktail would be so disordered that there would be no chance for those crumble-inducing clusters to form.
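
Yeh’s intuition can be made concrete with a textbook formula: the ideal configurational entropy of mixing for a blend of n elements in equal proportions works out to R ln n, where R is the gas constant. A minimal sketch (the roughly 1.5R threshold for ‘high entropy’ comes from the wider literature, not this article):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(fractions):
    """Ideal configurational entropy of mixing: -R * sum(x * ln x)."""
    return -R * sum(x * math.log(x) for x in fractions if x > 0)

for n in (1, 2, 3, 5, 6):
    s = mixing_entropy([1.0 / n] * n)
    print(f"{n} element(s): {s / R:.2f} R")
# 1 element: 0.00 R; 5 elements: 1.61 R -- past the ~1.5 R mark often used
# in the literature to define a 'high-entropy' alloy.
```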

Yeh was convinced he was on to something – so much so that he didn’t go home after his meeting. Instead, he drove 80 kilometres back to his lab at National Tsing Hua University in Hsinchu City and immediately assigned a research student to work on the idea. The student produced the first high-entropy alloy within a week or two, Yeh says. Within a year, they had made at least 40 different ones. From the very start, their properties looked promising: they were hard, tough and corrosion resistant.

But there was a problem. Yeh and colleagues couldn’t be sure what they had made. Examining a new alloy with electron microscopes and firing beams of X-rays at it can usually tell us its structure, but Yeh’s high-entropy creations were so unlike any other material that he had no reference data to make sense of what he was seeing. He spent the next eight years systematically making new materials, changing the composition bit by bit and comparing X-ray and microscopy scans until he had learned how to interpret them. In 2004, he was finally ready to unveil his idea.

Treasure maps

It came at an opportune moment. ‘In the past ten years, we’ve run out of ideas for new alloy bases for high-temperature structural metals,’ says Dan Miracle, who works on jet turbines at the US Air Force Research Laboratory in Ohio. Once the initial scepticism had subsided, other researchers began to try their hand at making the new high-entropy alloys.

It was a daunting new research space. Imagine mixing five or more of the 60 or so elements commonly used in commercial materials in equal proportions. That gives you about 10⁴⁰ possible combinations, says Dierk Raabe at the Max Planck Institute for Iron Research in Düsseldorf. Relax the rules to allow 5 per cent variation in the proportions of the elements and you jump to 10¹²⁰ combinations. ‘That’s a vast untapped reservoir of possible compositions and properties we have not discovered yet,’ says Raabe.

The trouble is, we could never even in principle make and test all these possible alloys, and it is not obvious where to begin looking for the ones with desirable properties. ‘I think we’re going to need treasure maps, theory-guided treasure maps, to show us where we can expect to find something really interesting,’ Raabe says.

The internal structures of high-entropy alloys are so different, however, that we can’t use existing theories to predict their behaviour. But even without a map, researchers have already made a few expeditions into uncharted territory – and come back with exotic new materials. ‘We’ve seen some properties that are a little bit expected, and others that are totally unexpected,’ says metallurgist Cem Tasan at the Massachusetts Institute of Technology.

One of those unexpected findings has to do with brittleness. All known alloys become more shatter-prone when cooled, but in 2014, Easo George at Ruhr University in Bochum, Germany, and his team found a high-entropy alloy of iron, manganese, nickel, cobalt and chromium that became less brittle the colder it got, right down to -200 °C.

Why the material bucks the trend is not yet clear, but it has already caught the eye of researchers hoping to make nuclear fusion a reality, says George. Taming fusion, the process that powers the sun, involves containing a cloud of charged particles at temperatures in excess of 100 million degrees. This in turn requires superconducting electromagnets that must be kept very cold – without breaking. ‘You need high strength at cryogenic temperatures, and if it fails you need graceful rather than catastrophic failure,’ says George. His alloy isn’t magnetic, but it is made from elements with interesting magnetic properties, so similar alloys might combine magnetism with strength at low temperatures.

Hard to crack

This is not the only cast-iron principle of materials science that looks a little more pliable with the advent of high-entropy alloys. There’s also the idea that the harder something is, the more prone it is to crack when struck: think of a soft clay compared with a hard china teacup. In 2016, Raabe, Tasan and their colleagues stumbled on a high-entropy alloy containing iron, manganese, cobalt and chromium that ignores this rule. They think this behaviour must stem from multiple crack-preventing mechanisms at play, corresponding to various atomic rearrangements the material can adopt to absorb the force of an impact.

The chemical properties of some of these alloys are confounding researchers, too, making it all the more important to work out how to make the useful ones. One brute-force approach, adopted by Matthew Kramer and his team at the US Department of Energy’s Ames Lab in Iowa, is simply to try out as many combinations as possible. They have developed a machine that can 3D print up to 30 rod-shaped alloy samples in an hour and test their physical properties automatically.

But what are we looking for? With more ingredients needed to make these new materials, some of them exotic and expensive, it’s hard to see a high-entropy alloy replacing cheap, plentiful steel in buildings or bridges, for instance. But besides nuclear fusion reactors, there are plenty of niche environments where we would wish for better materials than we currently have.

High on the wish list are alloys that would turn waste heat – from a car exhaust, say, or laptop chip – into electrical energy. Materials that can perform this trick, known as thermoelectrics, spontaneously generate a current when one side of the material is hot and the other cold. That requires good electrical conductivity, but poor thermal conductivity to maintain the temperature difference. In regular materials, electrical conductivity goes hand in hand with thermal conductivity, but early research hints that high-entropy alloys might get around this logjam: their structural complexity could suppress heat flow through the material while allowing electrons to whizz through unimpeded.
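
The trade-off has a standard yardstick that the article doesn’t name: the thermoelectric figure of merit ZT, which rewards a strong voltage response and high electrical conductivity while penalising thermal conductivity. A minimal sketch with rough, illustrative numbers:

```python
def zt(seebeck, sigma, kappa, temperature):
    """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
    Higher is better: more voltage per degree, easy current flow,
    and a temperature difference that doesn't leak away as heat."""
    return seebeck**2 * sigma * temperature / kappa

# Rough, bismuth-telluride-like numbers, for illustration only:
S = 200e-6    # Seebeck coefficient, V/K
sigma = 1e5   # electrical conductivity, S/m
kappa = 1.5   # thermal conductivity, W/(m*K)

print(zt(S, sigma, kappa, 300))   # ~0.8; halving kappa alone would double it
```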

Miracle’s employer, the US Air Force, also has some special material requirements. He is hunting for an alloy that can raise the operating temperature of a jet engine: the hotter you run a turbine, the more efficiently it performs. ‘Getting better gas mileage out of our jet engines is way more important than lightening the load a little bit by making a lighter wing material,’ he says.

A random search for this sort of material is unlikely to pay dividends – but we can narrow the field using what we already know. Miracle is concentrating on a cluster of elements in the centre of the periodic table known as the refractory metals. These elements, including molybdenum and niobium, have unusually high melting points of around 2500 °C, compared with 1455 °C for nickel, on which jet engine alloys are now based.

Pure niobium and molybdenum corrode too quickly to be useful, says Miracle. Mix them with other materials that offer corrosion resistance, however, and new possibilities might emerge. So far he has shown that high-entropy refractory metal alloys do indeed remain strong at temperatures beyond the wilting point of existing nickel superalloys. Corrosion resistance is one of the next properties to test.

And it’s not necessarily the case that the alloy-making revolution will bypass all but the highest of high tech. Not, at least, if Kevin Laws has his way. At his lab at the University of New South Wales in Sydney, he has made high-entropy versions of those most ancient of alloys – bronze and brass. Both are so useful that they are still found in all sorts of everyday items, from keys and coins to bathroom taps. The main metal in conventional brasses and bronzes is copper, which is costly. Laws substitutes most of this with nickel, manganese, zinc and aluminium, a mixture that’s less expensive overall and gives better performance. ‘We’ve made them stronger, more corrosion resistant and cheaper,’ says Laws. It looks set to be one of the first commercially produced high-entropy metals. Laws’s team has already signed an agreement with Swiss alloy-makers Ampco Metal, which will probably use it first for making wear-resistant brass parts for industrial machines and potentially car engines.

Back when the Uluburun ship was sailing, alloys were used to make art, utensils and tools, as well as a new weapon that would help shape the jostling empires of the time: the sword. Who knows what technologies the new era of mixed-up metals will bring? Perhaps they will be just as world-changing.

Four impossible things the laws of physics might allow

Einstein’s general theory of relativity is famous for its prediction of wormholes – shortcuts that might allow time travel by connecting different areas of space and time. Nobody’s ever seen one, though, and debate rages over whether you could travel down one even if they did exist. While we wait for a visitor from the future to let us know, Gilead Amit explores some other physical impossibilities that might already have been proved possible.

Perpetual motion machines

The idea of devices that can move and do other sorts of useful work with no external power has seduced some famous names over the centuries. Leonardo da Vinci worked on several designs involving spinning weights. Robert Boyle imagined a funnel that feeds itself. Blaise Pascal wisely abandoned the search and invented the roulette wheel instead.

Large-scale perpetual motion machines would break all sorts of physical laws, not least the cast-iron laws of thermodynamics. But Nobel Prize-winning theoretician Frank Wilczek’s ‘time crystals’ – materials that eternally repeat in time with no external power source – seem to come close. Examples made in the lab in 2016 don’t do any useful work, however, and so the quest continues.

Teleporters

Ever wish that the ground would swallow you up and spit you back out somewhere far away? Strangely enough, there is nothing in the laws of physics to stop that happening. In his 2008 book Physics of the Impossible, physicist Michio Kaku calls teleportation a ‘Class I Impossibility’, meaning that the technology is theoretically feasible, and could even exist within our lifetimes.

In fact, teleporters already exist: not for whole human beings, but for subatomic particles. Quantum entanglement, the phenomenon that Albert Einstein called ‘spooky action at a distance’, allows information and quantum states to be transmitted apparently instantaneously across space. The first quantum teleportation experiments, carried out in 1997, involved one photon’s quantum state being reconstructed in another photon tens of centimetres away. Today, the world quantum teleportation record stands at over 100 kilometres.

Invisibility cloaks

Harry Potter’s invisibility cloak is just one fictional example of magical garb that makes you disappear. But so-called metamaterials suggest a similar possibility in real life too.

The principle behind metamaterial cloaks is simple: waves of light bend around an object in your field of vision, much as water flows around a boulder in a stream. In practice, though, whole new nanostructured materials must be developed that can bend light in unfamiliar ways.

The first metamaterials were made in the lab in 2000, and basic cloaking devices soon followed. Cloaking has been ruled impossible for human-sized objects, but that’s no great loss – even if it were possible, you would only be able to reroute specific wavelengths of light, making the cloaked object weirdly coloured and more conspicuous. Instead, similar cloaking principles might be used to divert seismic waves and shield entire cities from earthquakes.

Matter married with antimatter

Normally, when matter comes into contact with its opposite, antimatter, both ‘annihilate’ in a sudden burst of energy. It’s just lucky we live in a universe with a lot of matter and mysteriously little antimatter.

But then again, bizarrely, some matter might also be antimatter. So-called Majorana fermions would be their own antiparticles, capable of self-annihilating under the right conditions. Physicists have long suspected that neutrinos could fall in this category, although proving it means catching some of the rarest processes in the universe in action – ones that happen perhaps once in 100 trillion trillion years.

Meanwhile, there are persistent reports we’ve made something similar in the lab. When an electron is torn out of a superconductor, a hole is left behind that acts like a positively charged particle with exactly the same mass. If the two are manipulated in just the right way, they can be made to act like Majorana particles.

The clinic offering young blood to cure ageing

In 2016, a California start-up began offering $8000 blood transfusions to people who hope they can turn back the clock. Is it safe, and will it work? Sally Adee rolls up her sleeve for a dose of the elixir of youth.

I discover the French bistro tucked into a strip mall that wraps around a parking lot it shares with a hardware shop and a barber. The man I am meeting has asked to come here instead of the clinic because he doesn’t want to do an interview while being transfused.

I am led to a booth where I find him drinking a glass of wine, wearing the blazer-and-T-shirt uniform common among venture capitalists. His youthful looks have an enhanced, slightly uncanny cast, but I am still shocked when he tells me he is 65. To protect his privacy, he chooses to be identified for this article as JR.

JR is a minor celebrity in these parts. It is the fifth time this year that he has flown in from Atlanta to have the treatment. Monterey doesn’t get a lot of traffic from people like him. You might think of the Californian coast as a homogeneous stretch of sun-drenched sand and rich people. But things change around the middle of the state. Driving there from San Francisco, you quickly lose the sun behind a permanent blanket of fog. The central coast is long and flat, with glimpses of iron grey choppy waves behind squat buildings.

So it’s a bit odd that this is the epicentre of a phenomenon rocking Silicon Valley: young blood treatments. JR is one of about 100 people who have each paid $8000 to join a controversial trial, offering them infusions of blood plasma from donors aged between 16 and 25 in a bid to turn back the clock. Participants have come from much further afield, including Russia and Australia.

It’s not hard to see why. After a spate of trials showed astonishing rejuvenation in old mice, the notion of filling your veins with the blood of the young has gone from creaky vampire myth to the latest tool in Silicon Valley’s quest to ‘disrupt death’.

Now start-ups, universities and pharmaceutical companies are clamouring to commercialise the potential of young blood. Venture capitalists and high-level hospital executives are rumoured to be partaking behind the scenes. The idea’s popularity is sparking fears of red markets and a dystopian future in which the old steal youth from the young, and no longer just metaphorically.

Scratch beneath the hype, however, and we may have been looking at young blood the wrong way round. Within a few years, new insights could usher in a safer, more effective way for blood to stop the inevitable declines of ageing.

Mystery ingredients

Vampire tales aside, we have suspected since the mid-nineteenth century that young blood has rejuvenating powers, thanks to a grim surgical technique known as parabiosis. Scientists would stitch together two animals, usually rats – like twins conjoined only at the skin – and wait a week for capillaries to form and fuse their blood supplies. The new plumbing seemed to change the old rats, making them physically and cognitively resemble their younger partners. By 1972, research began to suggest that after being conjoined, old rats even lived longer.

In the early 2000s, researchers at Stanford University in California revisited the technique, this time with a view to reversing specific ailments of ageing. They damaged the livers and muscles of old mice before connecting each one up to an undamaged mouse. Those with young partners healed well. Those with old partners did not. Similar results emerged with regard to heart health, then age-related cognitive declines.

What was it in blood that was having these rejuvenating effects? The prime suspect seems to be plasma, the yellow liquid that gets separated out after donation. Components like red blood cells are used for medical transfusions, but the plasma often goes spare.

Plasma is rich in all sorts of proteins and other compounds, which could hold the key to what makes young people young and old people old. Not that we know what all these components are. But we do know that their amounts and ratios change as we age. For example, old blood has higher levels of inflammatory compounds that damage tissues they reach. Inflammation has been linked to cancer, heart disease and depression. Younger blood, by contrast, is characterised by a higher concentration of stimulating and restorative factors.

An amazing discovery, but for it to be medically relevant, young blood must be delivered without stitching pensioners to 20-year-olds. So in 2014, a team led by Tony Wyss-Coray, a neuroscientist at Stanford University, injected middle-aged mice with plasma from young mice. Sure enough, after three weeks they had anatomical improvements in the brain and a cognitive boost, compared with mice given a placebo. Every other system they tested fared similarly.

The plasma didn’t even need to come from the same species – old mice became just as sprightly when the injection came from young humans. ‘We saw these astounding effects,’ Wyss-Coray told New Scientist in 2014. ‘The human blood had beneficial effects on every organ we’ve studied so far.’

Wyss-Coray had the proof he needed to start a human trial. In October 2014, his start-up, Alkahest, began recruiting participants for a trial at Stanford School of Medicine, using young blood in people with late-stage Alzheimer’s disease. The following year, Bundang CHA General Hospital in South Korea launched a gold-standard trial to compare the anti-ageing effects of cord blood, young plasma and placebo on markers of frailty in ageing. Both trials were met with enthusiasm. Wyss-Coray was invited to give lectures, including at the World Economic Forum and a TED talk.

Then there’s the Ambrosia trial, which JR is taking part in. Ambrosia is a start-up headquartered in Washington DC. The trial didn’t need regulatory approval because plasma is already a standard treatment to replace missing proteins in people with rare genetic diseases. And there’s no placebo arm to it. All you need to join is a date of birth that makes you over 35 – and a spare $8000.

For your money, you are infused with 2 litres of plasma left over from young people who have donated to blood centres. Unlike the trials looking at young blood’s effects on specific diseases, Ambrosia has a softer target: the general malaise of being old. In addition to measuring changes in about 100 biomarkers in blood, the firm is also ‘looking for general improvements’, says Jesse Karmazin, who runs the start-up.

The methodology falls short of the normal standards of scientific rigour, so it’s unsurprising that scientists and ethicists have accused Karmazin’s team of taking advantage of public excitement around the idea. ‘I don’t think the Ambrosia trial can be called a trial at all, since they treat healthy people and they have no clear read-outs,’ Wyss-Coray says.

This makes any findings virtually unpublishable, which may explain why Karmazin announced his first results to a room full of technologists at the Silicon Valley Code Conference in early 2017 instead of at a medical conference or in a journal. The numbers were as unverifiable as they were impressive: one month after treatment, 70 participants saw reductions in blood factors associated with risk of cancer, Alzheimer’s disease and heart disease, and reductions in cholesterol were on par with those from statin therapy.

Karmazin says this could explain his observations during the trial: a woman with chronic fatigue syndrome is now able to get out of bed and live normally; another participant, who had early-stage Alzheimer’s when he enrolled, no longer meets the clinical criteria for having the disease.

‘Whatever is in young blood is causing changes that appear to make the ageing process reverse,’ Karmazin told me. Even healthy participants ‘just have more energy’.

JR agrees. ‘I do feel it a bit,’ he says. ‘I am starting to run again now.’ However, although Karmazin says that the effects in the blood are identical regardless of participant age, JR says his 39-year-old girlfriend feels no different after two treatments. As for his youthful looks, JR says he tries out many of the therapies his company invests in.

Not that any of this should be taken at face value. Many of Ambrosia’s claimed improvements could be down to the placebo effect. Even so, the numbers are proof enough for Karmazin. He originally aimed to recruit 600 participants. However, the results have made him so optimistic that he is expanding the business. When I travelled to Monterey in June 2017, Karmazin had just opened his third clinic, and thanks to recent infusions of investor cash, he is planning a total of six in the US in 2018.

I ended up in Monterey because, for the past year, I’ve been preparing myself to enrol in the trial too.

As it did with much of the public, the idea of the glittering Silicon Valley ‘blood spa’ captured my imagination. The reality of this clinic is somewhat different, though. The one-storey building shares an intersection with a flaking self-storage facility and a pockmarked parking lot.

The interior is also modest; patients pass through a wood-panelled kitchen on their way to the reception. In the main room, a row of armchairs, each with an IV drip stand, faces a window overlooking washed-out scenery that culminates with the Pacific Ocean, barely discernible under the fog. Most of the elderly clients occupying those chairs are not getting plasma, but IV fluids.

When I visit, the trial is being run by Craig Wright, Karmazin’s erstwhile partner. Karmazin has a medical degree but not a licence to practise, so he teamed up with Wright, an immunologist formerly at Walter Reed Army Medical Center in Washington DC, who is licensed to run one of the West Coast’s few non-hospital affiliated infusion clinics.

Although at 67 he could be retired, Wright maintains the clinic to look after his patients. ‘Healthcare in this country doesn’t give a crap about old people,’ he says. One of his clients has dementia that used to regularly put him into the emergency room for dehydration. Another, after several rounds battling lymphoma, had a compromised immune system that left her struggling with recurring infections.

Eventually Wright enrolled her in the Ambrosia trial. Her infections went away. But by the time I get to the clinic, I am rethinking whether I want to go through with this. Wright and I go into his office so he can give me the consent form. I tell him I’m starting to reconsider, and I am surprised to find he doesn’t try to talk me round. ‘You need to think long and hard before you do this,’ he says. For some of his older, sicker patients, plasma has proved beneficial. For younger would-be participants like me, however, he lists a litany of potential side effects.

Risks commonly associated with plasma transfusion include transfusion-related acute lung injury, which can be fatal; transfusion-associated circulatory overload; and allergic reactions. Rare complications include catching an infectious disease: blood products carry a less than 1 in a million chance of HIV transmission. Even that is too risky for JR, who tells me that before every treatment he takes a dose of the HIV prophylactic PrEP.

Karmazin had previously assured me that none of the risks associated with plasma transfusion exceed 1 or 2 per cent, a statistic borne out in the trial: in his Code presentation, he told the room that ‘none’ of his participants had reported any negative effects. ‘Not one.’

Complications

But when I meet up with JR and Wright on the second day of my visit, both are visibly shaken. A participant had arrived earlier that day from Moscow to get the infusion. As he started on his second unit of plasma, the man had an anaphylactic reaction. His face and tongue swelled up, and he developed a rash all over his body. ‘Even the whites of his eyes turned red. He was in a lot of trouble,’ says JR. Wright administered emergency treatment to stabilise him and sent him back to his hotel.

I am astonished that my visit has coincided with the first complication of the treatment. There’s an uncomfortable silence as JR and Wright exchange glances. ‘It’s not the first one,’ says Wright. When I press him for more information, he demurs. ‘You’ll have to talk to Jesse.’

When I call Karmazin, he clarifies that there was also an eyelid rash and a case of pneumonia that was probably present before the patient got the treatment. But in later conversations, Wright tells me of worse cases. Without published data, it’s impossible to get to the bottom of it.

Either way, those are the known knowns. There are also known unknowns associated with injecting material from someone who is genetically different to yourself, says Irina Conboy, a co-author on the early Stanford work that put young blood on the map, and now at the University of California, Berkeley.

There could be risks of developing autoimmune disorders. And some fear that pumping stimulating proteins into people for years could lead to cancer. ‘If you keep infusing blood, the risk of reactions goes up,’ says Dobri Kiprov, an immunologist at California Pacific Medical Center in San Francisco. ‘Many of these people are just eager to get younger – they don’t have a particular disease, so it’s not justified.’

It’s easy to criticise Ambrosia for charging people to receive an unproven procedure, but there is also no evidence yet that the other trials are yielding anything more promising. When Alkahest presented results from its Alzheimer’s trial at a conference in November 2017, it revealed that the control group had been abandoned, something critics said made any positive findings irrelevant.

The man who wants to transplant human heads

Neurosurgeon Sergio Canavero shocked the world when he announced plans to perform the first ever human head transplant. The maverick surgeon likens himself to Peter Parker and Victor Frankenstein, and in 2017 told Nic Fleming that a successful procedure was imminent. What happened next?

There’s a story doing the rounds about Sergio Canavero. One day, as a 9 year old, he sought refuge from playground bullies in the pages of a superhero comic. ‘I myself have surgically rejoined severed neurolinkages,’ declares Dr Strange in the November 1976 issue of Marvel Team-Up. The brilliant but egotistical fictional surgeon continues: ‘The nerve endings have been fused – the healing process begun.’ The young Canavero was captivated. Four decades on, he announced plans for the world’s first human head transplant …

If this story sounds a little too neat to be true, that’s because it probably is. The Italian neurosurgeon, who describes himself as a ‘big comics nut’, says he didn’t read that issue until adulthood. He claims the publication that reported the comic as inspiration for his work was mistaken.

Whatever the true picture, Canavero does not flinch at comparisons with fictional characters. Quite the opposite, he encourages them. He sees a lot of himself in Peter Parker – aka Spider-Man – who, as a nerdy student, was bullied by classmates and shunned by girls. After dismissing the Marvel Team-Up story, he sends me PDFs of the relevant comic frames, along with a screen grab of a doctor discussing fusing severed spinal cords from the 2016 film Doctor Strange. Canavero says he was the inspiration for that scene. ‘I have good ties with Hollywood and I can tell you for a fact that line came out of my book.’

Since 2013, Canavero has been promoting the idea that head transplants – better understood as body transplants – are feasible, and should be offered to people with conditions involving muscular and nerve degeneration, for example. The response, in the West in particular, has ranged from disbelief and opposition to the questioning of his motives and scientific credibility.

Unperturbed, in 2015 Canavero agreed to help set up a team to carry out the procedure in China, working with Xiao-Ping Ren, an orthopaedic surgeon at Harbin Medical University who helped with one of the first hand transplants in 1999. In mid-2017, Canavero claimed that several papers supporting the feasibility of human head transplantation would soon be published, and that an operation could go ahead by the end of the year. (Spoiler: it did, although perhaps disappointingly, it was performed on a pair of corpses.)

‘The team in China is ready to roll,’ says Canavero, who worked as a neurosurgeon at Turin University Hospital in Italy until 2015. ‘All the preclinical and clinical studies have been conducted successfully.’ Much of this work will not be published, he says, but insists that what will be published ‘will be more than enough to show where China stands’. The precise date of a transplant attempt depends on finding a donor of the right height, build and complexion, he says. ‘The problem now is only organisational.’

Canavero calls his proposed procedure the head anastomosis venture, or HEAVEN. He says it would begin by cooling the donor body and recipient’s head to delay tissue death. The heads would be detached and the donor body attached to the recipient’s head. Polyethylene glycol (PEG) would help fuse the severed spinal cords by encouraging the fat in adjoining cells to mesh together. Stimulation from implanted electrodes would help to strengthen nerve connections. That’s the plan, anyway.

Canavero doesn’t agree with mainstream medical thinking that movement below the neck depends primarily on bundles of long-range nerve fibres in the spinal cord. Inspired by research dating back to the first half of the twentieth century, he believes a person undergoing a head transplant could regain close to full movement after those nerves have been severed, thanks to the regeneration and fusing of short-range nerve fibres that are part of an additional, interconnected network of cells in the spinal cord called propriospinal neurons. Think of a fire brigade passing buckets of water along a line if their main hose has been severed.

C-Yoon Kim of Konkuk University in Seoul, South Korea, is part of Canavero’s group and has led animal experiments that use PEG to encourage regrowth of severed spinal cords. In a paper published in 2016, Kim claimed that five of eight mice whose spinal cords were severed and treated with PEG regained some movement after four weeks, while none of the mice in the control group did. In another paper published that year, Ren’s team reported a similar experiment, claiming that five of nine PEG-treated mice ‘regained independent ambulation, with two basically normal’. And in a widely criticised experiment in 2016, Kim’s team claimed that a beagle with around 90 per cent of its spinal cord cut at the neck regained 90 per cent of its motor function within three weeks, with the aid of PEG. Critics pointed out there was no control in the study and no data proving the degree of damage to the dog’s spinal cord.

Ren and Canavero also published a paper in the journal Surgery, in which they claimed to have carried out a monkey head transplant, described as successful on the basis of ‘unpublished observations’. ‘There’s not such a big difference between animals and humans – the basic functions, physiology and the possibility of recovery are the same,’ says Kim. ‘I believe we will succeed in the human operation in the near future.’

Many scientists refuse to comment on the record about Canavero’s work, often because they have serious doubts about the published papers. Most who have commented say a successful human head transplant is far from feasible. ‘There is no way that I know of, that has been published, that would allow fusion of the spinal cord,’ says José Oberholzer, director of the Charles O. Stickler Transplant Center at the University of Virginia, Charlottesville. ‘The rest, including reconnecting the blood vessels and the airways, is theoretically possible but very challenging, and carries major risks of complications from leaks and immune system rejection.’

Others have used words such as ‘charlatan’ and ‘self-promoter’. Canavero dismisses his critics as too short-sighted to share his vision. ‘The history of science is full of people being called nuts and then going on to prove their point,’ he says. ‘The academe tried to destroy me, they tried everything to stop me, to slander me, but they failed. It’s no problem: I practice ju-jitsu and the mindset is let your enemy come to you, and then exploit his momentum to bring him to the ground.’

Canavero claims the media has failed to report his work properly. And in his efforts to seek favourable coverage he tries tactics more often employed by spin doctors. For example, he offers me ‘exclusive’ early access to as-yet unpublished papers if I interview specified people, avoid others who have criticised his work, show him my article in advance and we agree ‘mutually acceptable’ terms. I politely refuse his offer.

Canavero, now 52, does not want to talk about his early years, but the few details he does give me could help explain why he self-identifies as an outsider. He describes growing up as an only child in a poor neighbourhood of Turin in a ‘difficult family situation’. He says he was bullied by classmates because he was bright, which made him resentful. ‘I’m a loner, I’m a maverick,’ he says. ‘I believe the initial suffering when I was a child had a lot to do with it.’

When I ask Canavero about what motivates him, he does not mention helping patients. ‘The goal is to understand the nature of consciousness and to answer the basic question of what happens when we die. I believe consciousness is not generated in the brain, which merely acts as a filter.’

Canavero believes the mind and body are separate and that so-called near-death experiences support this view. He wants to show that those who dismiss these reports as hallucinations are wrong.

‘My idea is to generate near-death experiences,’ he says. ‘When you detach a head or a brain, the brain is cooled so there is no electrical activity and no blood flow, that brain is clinically dead. Patients will tell us about their near-death experiences and we will know for a fact that they could not have been generated by the dying brain.’ That will change how we consider ourselves as human creatures, he says. ‘That will trigger the greatest revolution ever. That is the final goal, the real goal.’

‘Of course, there is another goal,’ he continues, ‘which is life extension.’ The book Canavero says inspired Hollywood’s scriptwriters is his 2014 work Head Transplantation and the Quest for Immortality, in which he predicted the transfer of human brains to artificial bodies ‘possibly by 2025’. He also claims to be assembling a team to perform a human brain transplant, and that this procedure will happen within three years.

Many believe the chances are remote that Canavero’s aspirations will become reality any time soon. ‘The complexity of what he is proposing is enormous,’ says neurophysiologist Peter Ellaway, an emeritus professor at Imperial College London. ‘I think his aims are almost in the realm of science fiction. In my opinion this will never happen.’

Others fear indirect downsides to Canavero’s actions, even if no head transplant is ever attempted. ‘Being a transplant donor is one of the very few purely altruistic things a human can do,’ says Oberholzer. ‘When someone wants to be the hero by putting on this audacious show, people may be put off donating their organs or bodies if they think doctors are going to do something crazy like this.’

Superheroes aren’t the only fictional characters Canavero likens himself to. Victor Frankenstein is another. When I confess I haven’t read Mary Shelley’s Frankenstein, he emails me quotes from the novel. Such as: ‘Whence, I often asked myself, did the principle of life proceed? […] I was surprised, that among so many men of genius who had directed their inquiries towards the same science, that I alone should be reserved to discover so astonishing a secret.’

Dr Strange-meets-Victor Frankenstein, with a dash of Spider-Man, might make for an entertaining science fiction character. And Canavero certainly loves to entertain, playing on his renegade scientist image. ‘Are you sitting tight?’ he asks, in full showman mode, at the start of a recording of his 2014 TEDxLimassol talk. ‘I’m about to give you one hell of a ride.’ He smiles at his audience’s nervous laughter. But fast-forward to today, with the scene shifting from the stage to the operating theatre, and the Canavero carnival risks turning into a horror show. With real-life patients at stake, no one’s laughing any more.

Weird earthquake warning lights

For millennia, people have reported strange, baleful lights appearing before and during earthquakes. Stephen Battersby investigates the origin of these ill portents.

Glowing orbs that drift through the air; blue-white sheets of light; sparks and flashes and flames licking up from the ground … all may be signs of disaster to come. In 1746, the flames dancing on San Lorenzo Island in Peru impressed prison governor Manuel Romero so much that he briefly released the detainees to let them watch. Three weeks later a huge quake hit nearby Lima and a tsunami washed 5 kilometres inland.

There is plenty of photographic evidence for earthquake lights. They tend to accompany large quakes – with magnitudes above 6 – centred at fairly shallow points in Earth’s crust. It is not clear how the lights are produced, but Friedmann Freund of NASA’s Ames Research Center in Moffett Field, California, thinks that when rocks in the crust are squeezed, chemical bonds break to produce a pulse of electrical charge that travels up to the surface. ‘The rocks become like a battery and produce an enormous amount of electric power,’ he says.

This process only generates a low voltage, but Freund thinks that the charge forms an ultra-thin layer at the surface. Since the charge is concentrated over a small distance, it would create a strong electric field, perhaps enough to ionise the air and create a luminous discharge that travels up, away from the ground – explaining the orbs, flames and aurora-like sheets of light.
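
A back-of-envelope estimate suggests why a thin sheet of charge could plausibly ionise air. Using the textbook formula for the field either side of a charged sheet, and taking about 3 megavolts per metre as the breakdown field of surface air (a standard figure, assumed here rather than quoted in the article):

```python
EPS0 = 8.854e-12     # permittivity of free space, F/m
E_BREAKDOWN = 3e6    # rough dielectric strength of surface air, V/m

# Field either side of a thin sheet of charge: E = sigma / (2 * eps0),
# so the surface charge density needed to start ionising the air is:
sigma = 2 * EPS0 * E_BREAKDOWN
print(sigma)         # ~5e-5 C/m^2 -- a few dozen microcoulombs per square metre
```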

Freund does not know why the charge should form such a thin layer, or how the wave of ionisation is maintained for any distance through the air. But experiments are encouraging: crushing rocks in the lab produces electric charge and flashes of light. And low-frequency radio waves have been measured in earthquake zones, suggesting that there are currents underground.

As earthquake lights are so rare, it is hard to show that ionisation and the emission of radio waves coincide with them. Freund has yet to secure funding for a network of cameras and a data-processing centre to monitor such events, but he hopes that such a system, along with satellite imagery, could one day provide earthquake warnings akin to weather forecasts.

Lights may not be the only aerial omens of impending doom. In 2004, a curious linear gap appeared in the clouds above a fault line in Iran. An earthquake followed 69 days later. The gap opened again in 2005, and this time an earthquake followed after six days. Two Chinese geophysicists, Guangmeng Guo and Bin Wang, have suggested that hot gas escaping from the fault might cut through the clouds.

The computer that goes beyond logic

For 75 years, computers have worked within limits defined by Alan Turing. Michael Brooks reveals how work has now begun to fulfil his prophecy of a machine that can solve the unsolvable.

He called it the ‘oracle’. But in his PhD thesis of 1938, Alan Turing specified no further what shape it might take. Perhaps that is fair enough: aged just 26, the British mathematician had already lit the fuse of a revolution. His blueprint for a universal computing machine, published two years earlier, set the specs for every computer that followed, from the humblest pocket calculator to the mightiest supercomputer – via laptop, smartphone and all points in between.

So absorbed have we been in exploring this rich and varied legacy, and transforming our world with the machines and applications that built on it, that we have rather overlooked the oracle. Turing had shown with his universal machine that any regular computer would have inescapable limitations. With the oracle, he showed how you might smash through them.

In his short life, Turing never tried to turn the oracle into reality. Perhaps with good reason: most computer scientists believe anything approximating an oracle machine would soon fall foul of fundamental restrictions on how information and energy flow in the universe. You could never actually make one.

In a laboratory in Springfield, Missouri, two researchers are now seeking to prove the sceptics wrong. Building on theoretical and experimental advances of the past two decades, Emmett Redd and Steven Younger of Missouri State University think a ‘super-Turing’ computer is within our grasp. With it, they hope, could come insights not just into the limits of computation in the cosmos, but into the most intriguingly powerful computer we know of within it: the human brain.

Computers as we know them are in essence very capable, rigorous and efficient renderings of what we humans might be capable of if given precise instructions, a high boredom threshold and a limitless supply of paper and pencils. They excel at successive additions, multiplications, logical decisions, if x then y, that sort of thing. Indeed, the first ‘computers’ were young researchers employed by astronomers for the tedious and time-consuming task of working out the orbits of comets or calculating the brightness cycles of variable stars.

A universal computing machine – often known simply as a Turing machine – does the same things, only without the tedium. ‘Electronic computers are intended to carry out any definite rule-of-thumb process which could have been done by a human operator working in a disciplined but unintelligent manner,’ as Turing himself wrote in the programmer’s handbook for the University of Manchester’s Mark II computer in 1950.

So computers have their blind spots just as we do. No matter how disciplined, well-schooled or patient we are, certain questions defy our logic. What is the truth of the statement, ‘This statement is false’? You can spend a lifetime puzzling over the answer, as many a philosopher has. In 1931, the mathematician Kurt Gödel demonstrated that this problem was universal with his infamous incompleteness theorems, showing that any system of logical axioms would always contain such unprovable statements.

Similarly, as Turing showed, a universal computer built on logic alone always encounters ‘undecidable’ problems that never yield straight answers, no matter how much processor power you throw at them. One example is the halting problem. A computer can never tell if any program will run to the end, or get stuck in some infinite loop or at some instruction, without trying out the program first – and possibly getting stuck. The ‘blue screen of death’ feared by many a PC user is just one consequence of this fundamental undecidability.
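
Turing’s argument for why the halting problem is undecidable is short enough to sketch in code. Suppose some function halts(program, input) always answered correctly; the following program, written here as illustrative Python, defeats it:

```python
def paradox(halts, prog):
    """halts(p, x) is the hypothetical perfect checker: True iff p halts on x."""
    if halts(prog, prog):
        while True:       # the checker said we halt -- so loop forever
            pass
    return                # the checker said we loop -- so halt at once

# Run paradox on its own description: whatever answer halts() gives is wrong,
# so no program can decide halting for all programs.
```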

An oracle, as Turing envisaged it, was essentially a black box whose unspecified contents would be able to solve undecidable problems. An ‘O-machine’, he proposed, would exploit whatever was in this black box to go beyond the bounds of conventional human logic – and so surpass the abilities of every computer ever built.

That is as far as he went in 1938. ‘Turing realised that models for more powerful computing machines may exist,’ says Younger. ‘But he did not present any super-Turing computational models.’

Over two decades ago, Hava Siegelmann came up with one by accident. In the early 1990s, she was working on her PhD in computer science at Rutgers University in Piscataway, New Jersey, just a 40-minute drive from Princeton, where Turing had presented his thesis. Her subject was neural networks, circuits designed to mimic the human brain and the myriad neurons connected by synapses that realise its unparalleled computing power. In a neural net, many simple processors are wired together so that the output of one can act as the input of others. These inputs are weighted to have more or less influence, and the idea is that the network ‘talks’ to itself, using its outputs to alter its input weightings until it is performing its task optimally – in effect, learning as it goes along, just as the brain does. Neural nets have scored some notable successes performing tasks that cannot easily be reduced to a set of straightforward instructions, from reading medical scans and diagnosing illnesses to driving cars.
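As a rough illustration of how such a network learns – a minimal toy in Python, not a model of any system discussed here – a handful of weighted connections can teach themselves the XOR rule, a task no single weighted sum can capture:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny network: 2 inputs -> 4 hidden 'neurons' -> 1 output,
# every connection carrying an adjustable weight.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

# The XOR rule: output 1 only when exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for _ in range(10000):
    hidden = sigmoid(X @ W1 + b1)    # outputs of one layer...
    out = sigmoid(hidden @ W2 + b2)  # ...feed the next as inputs
    # Nudge every weighting slightly to shrink the error:
    # the 'learning as it goes along' of the main text.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```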

Siegelmann’s initial aim was to prove theoretically the limits of neural networks: to show that, for all their flexibility, they could never have the full logical capabilities of a conventional Turing machine. She failed time and again. Eventually, she proved the reverse. One of the hallmarks of a Turing machine is that it is incapable of generating true randomness. By weighting a network with the infinite, non-repeating number strings of irrational numbers such as pi, Siegelmann showed you could, in theory, make it super-Turing. In 1993, she even showed how such a network could solve the halting problem.

Her fellow computer scientists met the idea with coolness, and in some cases downright hostility. Various ideas had been floated for ‘hypercomputers’ that might exploit exotic physics to go super-Turing, but they always seemed to lie on a scale from implausible to utterly wacky. Siegelmann eventually published her proof in 1995, but she soon lost interest, too. ‘I believed it was mathematics only, and I wanted to do something practical,’ she says. ‘I turned down giving any more talks on super-Turing computation. I told everyone, “I’m out of this field now”.’

Redd and Younger had been aware of Siegelmann’s work for a decade before they realised that their own research was heading in the same direction. In 2010, they were building neural networks using analogue inputs that, unlike the conventional digital code of 0 (current off) and 1 (current on), can take a whole range of values between fully off and fully on. There was more than a whiff of Siegelmann’s endless irrational numbers in there. ‘There is an infinite number of numbers between 0 and 1,’ says Redd.

Powered by chaos

In 2011, they approached Siegelmann, by then director of the Biologically Inspired Neural & Dynamical Systems lab at the University of Massachusetts in Amherst, to see if she might be interested in a collaboration. She said yes. As it happened, she had recently started thinking about the problem again, and was beginning to see how irrational-number weightings weren’t the only game in town. Anything that introduced a similar element of randomness or unpredictability might do the trick, too. ‘Having irrational numbers is only one way to get super-Turing power,’ she says.

The route the trio chose was chaos. A chaotic system is one whose response is very sensitive to small changes in its initial conditions. Wire up an analogue neural net in the right way, and tiny gradations in its outputs can be used to create bigger changes at the inputs, which in turn feed back to cause bigger or smaller changes, and so on. In effect, the system becomes driven by an unpredictable, infinitely variable noise.
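The logistic map, a textbook chaotic system – and only a stand-in here for the team’s actual circuitry – shows how such feedback blows up vanishingly small differences:

```python
# The logistic map x -> r*x*(1-x) in its chaotic regime.
# Two starting points differing by one part in a billion
# soon bear no resemblance to each other.
r = 3.9
a, b = 0.5, 0.5 + 1e-9

for step in range(1, 41):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:2d}:  a = {a:.6f}   b = {b:.6f}")
# The feedback amplifies the initial difference exponentially;
# by around step 30 the two trajectories have fully decoupled.
```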

The researchers are working on two small prototype chaotic machines. One is a neural network based on standard electronic components, with three ‘neurons’ in the form of integrated circuit chips and 11 synaptic connections on a circuit board a little larger than a hardback book. The other, with 11 neurons and around 3600 synapses, uses lasers, mirrors, lenses and photon detectors to encode its information in light.

That should be enough, the team thinks, to take them beyond Turing computation, if only on a small scale. It is a claim that invites plenty of scepticism. Scott Aaronson of the Massachusetts Institute of Technology voices the concern that mathematical models involving any sort of infinity always run into problems when they are forced to deal with reality. ‘People ignore the fact that the physical system cannot implement the idea with perfect precision,’ he says. Jérémie Cabessa of the University of Lausanne, Switzerland, who used to work with Siegelmann, is similarly doubtful about super-Turing machines in practice. ‘To me, at the moment they are unbuildable,’ he says. Again, it’s not that the maths doesn’t work – it is just a moot point whether true randomness is something we can harness, or whether it even exists. ‘Does nature achieve some intrinsic randomness? If so, perhaps there really is some super-Turing ability in nature,’ he says.

That question was clearly on Turing’s mind: he often speculated about a connection between intrinsic randomness and the origin of creative intelligence. In 1947, he went so far as to suggest to his astounded bosses at the UK National Physical Laboratory near London that they should put radioactive radium into the Automatic Computing Engine he had devised, in the hope that its seemingly random decays would give its inputs the desired unpredictability. ‘I don’t think he intended to build the oracle machine,’ says Siegelmann. ‘What he had in mind was to build something that’s more like the brain.’

Since then, building a computer with brain-like qualities has been a perennial aim, with the latest large-scale initiative being part of the Human Brain Project based at the Swiss Federal Polytechnic School in Lausanne. These endeavours, though, are all about building replica neurons with standard, digital Turing-machine technology. Younger is convinced the less-rigid approach of their chaotic neural networks is more likely to bear fruit. ‘Applying this might take us towards brain-like intelligence,’ he says.

Hypercomputer hype

Younger and Redd are aware they are shooting for the moon. Even if their machine works significantly differently from a standard computer, proving that the difference is down to super-Turing computation will be pretty tough. At the moment, their best idea lies in a side-by-side comparison of the outputs of their machine and a standard computer, given the same inputs. A super-Turing machine can, in theory, produce outputs identical to those of chaotic systems, but a standard Turing machine must sooner or later round them off.
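A toy version of that comparison – in the spirit of the proposed test, not its actual protocol – is to run the same chaotic recurrence at two precisions and watch the coarser, rounding machine part company with the finer one:

```python
import numpy as np

# The same chaotic recurrence at two precisions. Any digital
# machine must round; here the 32-bit 'machine' parts company
# with the 64-bit one after a few dozen steps.
r64, r32 = np.float64(3.9), np.float32(3.9)
x64, x32 = np.float64(0.6), np.float32(0.6)

for step in range(1, 61):
    x64 = r64 * x64 * (np.float64(1) - x64)
    x32 = r32 * x32 * (np.float32(1) - x32)
    if step in (10, 30, 60):
        print(f"step {step:2d}:  64-bit {x64:.6f}   32-bit {x32:.6f}")
```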

There has always been a lot of hype about what hypercomputers might do if they were ever to get off the ground – for example, breaking through the boundaries of conventional computation might give us a new hold on other things that currently befuddle human logic, such as quantum theory.

Most of us would be happy if an oracle would just put an end to the blue screen of death and its equivalents. While promising nothing specific about the first trials, Redd is bullish about the outcome. ‘I’m actually kind of confident we’ll see something significant,’ he says.

The real monsters of the deep

They were dismissed as sailors’ tall tales, writes Stephen Ornes, but they’re real: huge waves that rise without warning and can destroy ships. Is there any way to predict them?

When the cruise ship Louis Majesty left Barcelona in eastern Spain for Genoa in northern Italy, it was for the leisurely final leg of a hopscotching tour around the Mediterranean. But the Mediterranean had other ideas.

On 3 March 2010, storm clouds were gathering as the boat ventured eastwards out of the port at around 1 p.m. The sea swell steadily increased during the first hours of the voyage, enough to test those with less-experienced sea legs, but still nothing out of the ordinary.

At 4.20 p.m., the ship ran without warning into a wall of water 8 metres or more in height. As far as events can be reconstructed, the boat’s pitch as it descended the wave’s lee tilted it into a second, and possibly a third, monster wave immediately behind. Water smashed through the windows of a lounge on deck 5, almost 17 metres above the ship’s water line. Two passengers were killed instantly and 14 more injured. Then, as suddenly as the waves had appeared, they were gone. The boat turned and limped back to Barcelona.

A few decades ago, rogue waves of the sort that hit the Louis Majesty were the stuff of salty sea dogs’ legend. No more. Real-world observations, backed up by improved theory and lab experiments, leave no doubt any more that monster waves happen – and not infrequently. The question has become: can we predict when and where they will occur?

Science has been slow to catch up with rogue waves. There is not even any universally accepted definition. One with wide currency is that a rogue is at least double the significant wave height, itself defined as the average height of the tallest third of waves in any given region. What this amounts to is a little dependent on context: on a calm sea with significant waves 10 centimetres tall, a wave of 20 centimetres might be deemed a rogue.
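The definition takes only a few lines of code to apply; the sea state below is randomly generated purely for illustration:

```python
import numpy as np

def significant_wave_height(heights):
    """Average height of the tallest third of waves."""
    h = np.sort(np.asarray(heights))
    return h[-max(1, len(h) // 3):].mean()

def is_rogue(wave, heights):
    """A rogue is at least double the significant wave height."""
    return wave >= 2 * significant_wave_height(heights)

# A synthetic calm sea: Rayleigh-distributed wave heights (metres).
sea = np.random.default_rng(0).rayleigh(scale=0.05, size=1000)
hs = significant_wave_height(sea)
print(f"significant wave height: {hs:.2f} m")
print(f"rogue threshold:         {2 * hs:.2f} m")
print("is a 0.25 m wave a rogue here?", is_rogue(0.25, sea))
```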

If that seems a little lackadaisical, consider that for a long time the models oceanographers used to predict wave heights suggested anomalously tall waves barely existed. These models rested on the principle of linear superposition: that when two trains of waves meet, the heights of the peaks and troughs at each point simply sum. It was only in the late 1960s that Thomas Brooke Benjamin and J. E. Feir of the University of Cambridge spotted an instability in the underlying mathematics. When longer-wavelength waves catch up with shorter-wavelength ones, all the energy of a wave train can become abruptly concentrated in a few monster waves – or just one.

Longer waves travel faster in the deep ocean, so this is a perfectly plausible real-world scenario. The pair went on to test the theory in a then state-of-the-art 400-metre-long towing tank, complete with wave-maker, at the UK National Physical Laboratory facility on the outskirts of London. Near the wave-maker, which perturbed the water at varying speeds, the waves were uniform and civil. But about 60 metres on they became distorted, forming into short-lived, larger waves that we would now call rogues (though to avoid unwarranted splashing, the initial waves were just a few centimetres tall).
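The premise that longer waves travel faster is the standard dispersion relation for deep-water gravity waves at work:

$$
\omega^2 = gk \quad\Longrightarrow\quad c = \frac{\omega}{k} = \sqrt{\frac{g}{k}} = \sqrt{\frac{g\lambda}{2\pi}},
$$

where c is the phase speed, g the gravitational acceleration and λ the wavelength. A 100-metre wave moves at roughly 12.5 metres per second, twice the speed of a 25-metre wave, so wave trains of different lengths really can catch one another up.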

It took a while for this new intelligence to trickle through. ‘Waves become unstable and can concentrate energy on their own,’ says Takuji Waseda, an oceanographer at the University of Tokyo in Japan. ‘But for a long time, people thought this was a theoretical thing that does not exist in the real oceans.’

Theory and observation finally crashed together in 1995 in the North Sea, about 150 kilometres off the coast of Norway. New Year’s Day that year was tumultuous around the Draupner sea platform, with a significant wave height of 12 metres. At around 3.20 p.m., however, accelerometers and strain sensors mounted on the platform registered a single wave towering 26 metres over its surrounding troughs. According to the prevailing wisdom, this was a once-in-10,000-year occurrence.

The Draupner wave ushered in a new era of rogue-wave science, says physicist Ira Didenkulova at Tallinn University of Technology in Estonia. In 2000, the European Union initiated the three-year MaxWave project. During a three-week stretch early in 2003, it used boat-based radar and satellite data to scan the world’s oceans for giant waves, turning up 10 that were 25 metres or more tall.

We now know that rogue waves can arise in every ocean. The North Atlantic, the Drake Passage between Antarctica and the southern tip of South America, and the waters off the southern coast of South Africa are particularly prone. Rogues possibly also occur in some large freshwater bodies such as the Great Lakes of North America. That casts historical accounts in a new light, and rogue waves are thought to have had a part in the unexplained losses of some 200 cargo vessels in the two decades preceding 2004. More recently, what is thought to have been a freak wave struck the cruise ship Marco Polo in the English Channel in 2014, smashing windows in a restaurant on deck 6 and killing a passenger.

Rogue elements

So rogue waves exist, but what makes one in the real world?

Miguel Onorato at the University of Torino, Italy, has spent more than a decade trying to answer that question. His tool is the non-linear Schrödinger equation, which has long been used to second-guess unpredictable situations in both classical and quantum physics. Onorato uses it to build computer simulations and guide wave-tank experiments in an attempt to coax rogues from ripples.
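In one common dimensionless form, the focusing variant of the equation used for deep-water wave envelopes reads

$$
i\,\frac{\partial \psi}{\partial t} + \frac{1}{2}\,\frac{\partial^{2} \psi}{\partial x^{2}} + \lvert\psi\rvert^{2}\psi = 0,
$$

where ψ(x, t) is the slowly varying envelope of the wave train. The final, non-linear term is what allows energy to pile up locally rather than simply superpose; the equation’s ‘breather’ solutions, which swell out of an almost uniform wave train and then vanish again, are the mathematical prototype of a rogue wave.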

Gradually, Onorato and others are building up a catalogue of real-world rogue-generating situations. One is when a storm swell runs into a powerful current going the other way. This is often the case along the North Atlantic’s Gulf Stream, or where sea swells run counter to the Agulhas current off South Africa. Another is a ‘crossing sea’, in which two wave systems – often one generated by local winds and a sea swell from further afield – converge from different directions and create instabilities.

Crossing seas have long been a suspect. A 2005 analysis used data from the maritime information service Lloyd’s List Intelligence to show that, depending on the precise definition, up to half of ship accidents chalked up to bad weather occur in crossing seas.

In 2011, the finger was pointed at a crossing sea in the Draupner incident, and Onorato thinks it might also have been the Louis Majesty’s downfall. When he and his team fed wind and wave data into his model to ‘hindcast’ the state of the sea in the area at the time, it indicated that two wave trains were converging on the ship, one from a north-easterly direction and one more from the south-east, separated by an angle of between 40 and 60 degrees.

Simpler situations might generate rogues, too. In 2013, Waseda revisited an incident in December 1980 when a cargo carrier loaded with coal lost its entire bow to a monster wave with an estimated height of 20 metres in the ‘Dragon’s Triangle’, a region of the Pacific south of Japan notorious for accidents. A Japanese government investigation had blamed a crossing sea, but when Waseda used a more sophisticated wave model to hindcast the conditions, he found it likely that a strong gale had poured energy into a single wave system far larger than conventional models allowed.

He thinks such single-system rogues could account for other accidents, too – and that the models need further updating. ‘We used to think ocean waves could be described simply, but it turns out they’re changing at the same pace and same time scale as the wind, which changes rapidly,’ he says. In 2012, Onorato and others showed that the models even allow for the possibility of ‘super rogues’ towering as much as 11 times the height of the surrounding seas, a possibility since borne out in water-tank experiments.

Early warning

With climate change potentially whipping up more intense storms, such theoretical possibilities are becoming a serious practical concern. From 2009 to 2013, the EU funded a project called Extreme Seas, which brought shipbuilders together with academic researchers including Onorato, with the aim of producing boats with hulls designed to better withstand rogue waves.

That is a high-cost, long-term solution, however. The best defence remains simply knowing when a rogue wave is likely to strike. ‘We can at least warn that sea states are rapidly changing, possibly in a dangerous direction,’ says Waseda.

Various indices have been developed that aim to convert raw satellite and sea-state data into this sort of warning. One of the most widely used is the Benjamin–Feir index, named after the two pioneers of rogue-wave research. Formulated in 2003 by Peter Janssen of the European Centre for Medium-Range Weather Forecasts in Reading, UK, it is calculated for sea squares 20 kilometres by 20 kilometres, and is now incorporated into the centre’s twice-daily sea forecasts. ‘Ship routing officers use it as an indicator to see whether they should go through a particular area,’ says Janssen.
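In schematic terms – conventions for the exact constant vary – the index compares how steep the waves are with how narrow a band of frequencies they occupy:

$$
\mathrm{BFI} \sim \frac{\text{average wave steepness}}{\text{relative spectral bandwidth}},
$$

so a steep, narrow-banded sea pushes the index towards and beyond 1, the regime in which the Benjamin–Feir instability can siphon a wave train’s energy into a few outsized crests.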

The ultimate aim would be to allow ships to do that themselves. Most large ocean-going ships now carry wide-sweeping sensors that determine the heights of waves by analysing radar echoes. Computer software can turn those radar measurements into a three-dimensional map of the sea state, showing the size and motions of the surrounding swell. It would be a relatively small step to include software algorithms that can flag up indicators of a sea about to go rogue, such as quickly changing winds or crossing seas. Such a system might let crew and passengers avoid at-risk areas of a ship.
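A toy sketch of what such onboard logic might look like; the function, its thresholds and its ten-minute cadence are invented for illustration and are not drawn from any real shipboard system:

```python
def rogue_risk_flags(hs_now, hs_prev, dir1_deg, dir2_deg,
                     hs_rate_limit=0.5, cross_band=(40, 60)):
    """Toy warning logic; every threshold here is illustrative only.

    hs_now, hs_prev: significant wave height (metres) from two
        successive radar-derived sea-state maps, ten minutes apart.
    dir1_deg, dir2_deg: directions of the two dominant wave systems.
    """
    flags = []
    # Indicator 1: a rapidly changing sea state.
    if abs(hs_now - hs_prev) > hs_rate_limit:
        flags.append("sea state changing quickly")
    # Indicator 2: a crossing sea in the angular band implicated
    # in the Louis Majesty hindcast.
    angle = abs((dir1_deg - dir2_deg + 180) % 360 - 180)
    if cross_band[0] <= angle <= cross_band[1]:
        flags.append(f"crossing sea at {angle:.0f} degrees")
    return flags

print(rogue_risk_flags(hs_now=2.8, hs_prev=2.0, dir1_deg=45, dir2_deg=100))
# -> ['sea state changing quickly', 'crossing sea at 55 degrees']
```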

The main bar to that happening is computing power: existing models can’t quite crunch through all the fast-moving fluctuations of the ocean rapidly enough to generate fine-grained warnings in real time. For Waseda, the answer is to develop a central early warning system, such as those that operate for tsunamis and tropical storms, to inform ships about to leave port. Thanks to our advances in understanding a phenomenon whose existence was doubted only decades ago, there is no reason now why we can’t do that for rogue waves, says Waseda. ‘At this point it’s not a shortage of theory, but a shortage of communication.’

Five mythical animals that turned out to be real

Since we began exploring the world, travellers have returned with stories of distant lands filled with improbable creatures. Michael Marshall chronicles five that turned out to be true.

In 2017, biologists in Taiwan discussed whether or not to re-introduce the Formosan clouded leopard, a creature so mysterious that some have claimed it may never have existed. It’s not an entirely unusual state of affairs. Explorers have claimed to have seen bizarre animals over the centuries, only to be exposed as hoaxers. But not always. Sometimes the most outlandish creatures turn out to be extremely real.

Duck-billed platypus

It is perhaps no surprise that the platypus was once thought to be a hoax. It looks a bit like a mole but has a duck’s bill. Not only did this strange-looking mix of mammal and bird not fit with what was then known of biology, it was also immediately obvious how the hoax might have been achieved, with little more than scissors, thread and a sewing needle.

The platypus was scientifically described for the first time in 1799 by the British naturalist George Shaw, based on a skin sent by John Hunter, then the governor of New South Wales. Shaw admitted to being suspicious: ‘it naturally excites the idea of some deceptive preparation by artificial means,’ he wrote.

But Shaw couldn’t find any telltale stitching. Over the years more specimens followed, as well as descriptions of the animals in the wild. By 1823, the anatomist (and grave robber) Robert Knox could write that, while the extraordinary nature of the platypus had been ‘sufficient to rouse the suspicions of the scientific naturalist’, nevertheless ‘these conjectures were immediately dispelled by an appeal to anatomy’.

Since then, the story of the platypus has only grown stranger. Its genome is deeply peculiar; it lays eggs much like birds do, has venomous spurs on its hind legs, and is descended from ‘King Kong platypuses’, which were a metre long.

Okapi

For centuries, European travellers in Central Africa – particularly in what is now the Democratic Republic of the Congo – reported glimpses of a mysterious animal in the forest.

The descriptions were sketchy. It was hoofed, perhaps looked a little like a deer, but had stripes on its rear end that suggested it might be a forest-dwelling zebra. Nobody could catch one or even get a good look at one. For want of a better name, people began referring to it as the ‘African unicorn’.

From 1871 onwards, the British explorer Henry Morton Stanley undertook several expeditions to Africa. In his 1890 book In Darkest Africa, he mentioned that a group called the Wambutti knew of ‘a donkey’ called the ‘atti’. There the matter rested until 1901, when explorer and colonial administrator Harry Johnston mounted a determined search. Local people had told him about a forest animal called the ‘o’api’ – the name Stanley had evidently misheard. He managed to obtain some skins to send to London, where they caused a great deal of confusion and were briefly misidentified as zebra skins.

Eventually, it was recognised that the okapi belonged to a new genus, and it was given the name Okapia johnstoni. It remains incredibly elusive, and very few photographs of okapis in the wild exist. It is a threatened species, due to illegal logging and the continuing unrest in the DRC. Unexpectedly, its closest living relatives are giraffes.

Giraffe

Speaking of giraffes, there is a much-repeated canard that when Europeans first found out about giraffes, they thought they were a cross between a camel and a leopard, not a species in their own right. They didn’t, but there is a tale of confusion here nonetheless.

The scientific name of the giraffe is Giraffa camelopardalis. The latter part of the name dates back to the Roman Empire, when Julius Caesar brought a giraffe back to Rome from Alexandria: the first time in recorded history that a giraffe visited Europe. Various writers described the animal in terms of camels and leopards. The Roman senator and historian Cassius Dio, in his Roman History, described ‘the so-called camelopard’.

But it seems Dio didn’t think the giraffe was any kind of cross. Instead, he was just helping his readers picture what it looked like using animals they would have known. He describes the animal as ‘like a camel in all respects’ except for its unusual height and proportions, and notes that ‘its skin is spotted like a leopard, and for this reason it bears the joint name of both animals’.

Camelopard, then, is not so much a misidentification as a neologism.

Modern science has revealed that giraffes hum and that baby giraffes are inveterate milk thieves. It has also found answers to the long-standing mystery of how giraffes got such long necks.

Colossal squid

For centuries, sailors told tales of enormous tentacled creatures that could drag entire ships down to Davy Jones’ Locker. The Scandinavian legend of the kraken is just one example.

Such things remained mythical until 1925, when G. C. Robson published a description of a squid called Mesonychoteuthis hamiltoni. Robson based his description on two tentacles found in the stomach of a sperm whale. Only a few specimens have been found in the intervening 90 years, so our knowledge of this squid is still sketchy.

What is clear is that M. hamiltoni is the largest known species of squid, with at least one specimen measured at 4.5 metres long. It has been given the moniker ‘colossal squid’, not to be confused with the jumbo squid and giant squid, which are different species.

However, the idea that it could sink a ship or pose any kind of threat to humans on the surface appears to be a fantasy. Colossal squid live deep underwater where the pressure is intense, and they have adapted accordingly. If they find themselves in surface waters, their bodies become floppy and helpless.

Narwhal

Thanks to their huge tusks, narwhals are often called ‘unicorns of the sea’, and for a long time people seem to have thought that was literally true. The earliest attempt at a scientific description may have been that by Nicolaes Tulp, a doctor and anatomist working in Amsterdam in the 1600s, who was famously painted by Rembrandt.

Tulp wrote a monumental medical book called Observationes Medicae, in which he included a few snippets of natural history. The book contains what may be the first Western illustration of an orangutan, and a drawing of the horn of a ‘unicornum marinum’, or ‘marine unicorn’. It is almost certainly a narwhal tusk. These were often presented as belonging to unicorns, and were popular in cabinets of curiosities.

It was not until 1758, more than 100 years after Tulp’s description, that Linnaeus described narwhals for the first time in the tenth edition of his Systema Naturae. He correctly identified them as related to whales. Today narwhals are classed as ‘near threatened’ because of continued hunting and the risk that climate change will melt the pack ice where they live.

Bonus beast: King Louie

Disney gave King Louie an upgrade in its 2016 remake of The Jungle Book when it reimagined the king of the jungle as an impossibly large orangutan. In fact, the creature was closer to reality than might first appear.

Once upon a time, the forests of East Asia were home to the largest of all apes. Gigantopithecus reached 3.5 metres tall and weighed 540 kilograms. At that size, it wasn’t quite as large as King Kong, but would have looked down on Chewbacca from on high.