Science is a journey, a process of generating ideas and testing them. And it’s not always plain sailing. As John William Strutt’s work demonstrates in this chapter, it can be a mystery tour needing real determination and painstaking work to stay on course. On the other hand, occasionally a single piece of intelligence can immediately illuminate a large area of terra incognita. There’s an old saw which claims that ‘getting there is half the fun’. In at least one of the following vignettes that does seem to be the case – even though it is a quest for the ultimate in boring things to do.
At first blush it seems that the universe is lopsided. Light can be blindingly bright and sound deafeningly loud. But you can’t get darker than dark or quieter than silence. Likewise, while stars can burn at 1 billion °C, there’s a temperature below which the universe cannot go. There’s a single reason for all these things – the nature of energy – yet it took centuries to understand what was really going on. Physicist Michael de Podesta picks up the story.
Absolute zero is an ideal and unattainably perfect state of coldness – the ultimate in cool. Since the concept first emerged in the mid-19th century, people have been driven to get ever closer to it. Along the way they have uncovered states of unparalleled beauty and order, developed engineering marvels and enhanced scientific insight, not least about notions of temperature and matter itself.
The idea of temperature is something we become familiar with at an early age. It is a parental rite to ensure a baby’s room is stiflingly warm, its bathwater is ‘just right’, and that it learns that some things are ‘Hot! Don’t touch’. Later on we associate numbers with different temperature sensations, and learn that 20 °C describes a warm day and 37 °C is a biochemical Mecca.
This familiarity makes it difficult to appreciate what an astonishing concept temperature embodies. Yet if you approach it with the naiveté of early natural philosophers such as Galileo, Newton and Robert Boyle, you will no longer laugh at early notions of heat. Some thought it a kind of fluid, called caloric, and we still speak of heat ‘flowing’. Others thought that cold was caused by the presence of coldness, sometimes envisaged as ‘frigorific atoms’. To the untutored eye, are these ideas any more absurd than the notion that light is a wave?
One experiment performed back in 1791 by Swiss physicist Marc-Auguste Pictet illustrates how baffling even simple things must have seemed. Pictet used two parabolic mirrors facing each other 21 metres apart. Each mirror reflects the light that hits it towards a focal point. He placed a thermometer at the focus of one mirror and a hot object at the focus of the other. The thermometer showed a rise in temperature, indicating that ‘calorific rays’ of some kind were being transmitted – an impressive experiment. More amazingly, when snow replaced the hot object, the thermometer reading fell several degrees. Witnesses at the time were reluctant to conclude that snow emitted cooling frigorific rays, but given the knowledge of the day it would have been hard to conclude anything else!
Early efforts at measuring temperature were purely empirical. There existed a standardised method – within each laboratory at least – for determining ‘degrees of heat’ in a reproducible manner. The most useful thermometers exploited the thermal expansion of liquids constrained in glass bulbs and narrow tubes. The level of the liquid was marked at two ‘fixed temperatures’, such as the melting and boiling temperatures of water. Unknown temperatures were then read off in ‘degrees of heat’ from a scale etched between the two fixed points.
The biggest problem for early workers was a ‘thermal catch-22’. The scale-marking process assumes that the liquid expands an equal amount for every unit rise in temperature. But this assumption cannot be verified unless one measures the thermal expansion of the liquid, and to do that one requires … a thermometer.
By the early 19th century, no solution to this circularity was in sight. Instead, different workers simply asserted that one thermometer or another was better than the others. Early thermometers used ‘spirit’ – essentially brandy – and this was generally inferior to mercury. However, exhaustive comparisons in the 1840s by French scientist Henri Victor Regnault showed that an ‘air thermometer’ – which measures changes in pressure of dry air in a sealed container – was superior to both in its reproducibility and inter-comparability.
Different designs of air thermometer calibrated at the freezing and boiling points of water gave consistent estimates of temperatures. In contrast, liquid-in-glass thermometers varied in their performance depending on the properties of glass, and the type of liquid. Slowly the air thermometer, which was difficult to use, began to be viewed as definitive and was used to calibrate other, more practical thermometers.
Crude as early measurements were, they brought some order to the thermal world. Reproducible readings aided everything from cooking to industrial processes. But still no one really knew what it was they were measuring!
As practical confusion lessened, theorists could turn their attention to this problem. And William Thomson, later to become Lord Kelvin, focused on the possibility of constructing a temperature scale that did not depend on the materials from which thermometers were made – an absolute scale of temperature. Kelvin’s recipe for an absolute temperature scale was obscure, resting on the operation of an ideal heat engine, first imagined by the French scientist Nicolas Léonard Sadi Carnot. But a more powerful and ultimately successful ‘meme’ was emerging: the explanation of the physical properties of matter in terms of atoms.
It is hard to imagine a time when even the greatest scientific pioneers did not understand that everyday objects are made of atoms, that heat is the kinetic energy of moving atoms, and that temperature is a measure of the speed with which atoms move – specifically, of the average of the squared molecular speeds. Although ideas of this kind were advanced by the likes of John Herapath in 1820 and John James Waterston in 1845, they were roundly rejected by London’s Royal Society. Yet by the time the book Heat: A mode of motion was published by John Tyndall in 1865 the idea was taught as fact.
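In modern notation this is a standard kinetic-theory result (a textbook relation, added here for clarity rather than taken from the original): for a monatomic ideal gas,

$$\frac{3}{2}k_BT = \frac{1}{2}m\langle v^2\rangle,$$

where $m$ is the molecular mass, $\langle v^2\rangle$ the mean square speed and $k_B$ Boltzmann’s constant. Halve the mean square speed and you halve the absolute temperature.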
To put this advance into a modern context, consider recent discussions about the existence of the Higgs particle. This particle is supposed to give rise to the property of matter we call ‘mass’ – a property so familiar that most people barely think it needs explanation. Similarly the idea that the motion of hypothetical atoms was the source of heat was posited but unconfirmed for many years. The idea that heat needed a microscopic explanation was not obvious, but once established it offered astonishing insight into the role of atoms in everyday life: when we feel the temperature of a substance we are literally sensing the ‘buzzing’ of matter.
And once the idea of molecules jiggling within a substance is accepted, the concept of absolute zero becomes inevitable: it is the temperature at which, at least in this classical picture, atoms become completely still.
The Fahrenheit and Celsius temperature scales assign arbitrary numbers to different phenomena. Daniel Fahrenheit used the freezing temperature of brine as a ‘zero’ because it was the coldest temperature he could achieve. How could we hope to identify the location of absolute zero if we couldn’t get close to it? Clues were around for those who knew where to look.
Guillaume Amontons, a 17th-century French instrument-maker, investigated the way the pressure of gas sealed in a vessel changed with temperature. He noted that the pressure fell by ‘around a quarter’ when the gas was cooled from the boiling point of water to around the ice temperature. He then speculated that if cooled further, the pressure might eventually disappear. This would happen, he calculated, at what we would now describe as –300 °C, which is not far off! Later experiments of a conceptually similar kind refined this answer.
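Amontons’ reasoning can be reconstructed in one line (my paraphrase in modern notation, not his own working). If the pressure at the ice point is about three-quarters of its value at the boiling point, and the fall continues linearly as the gas cools, the pressure vanishes at

$$\theta_{\text{zero}} \approx 0\,^{\circ}\mathrm{C} - \frac{P_{0}}{(P_{100}-P_{0})/100\,^{\circ}\mathrm{C}} = -\frac{3/4}{1/4}\times 100\,^{\circ}\mathrm{C} = -300\,^{\circ}\mathrm{C}.$$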
Scientists now use two temperature scales, the familiar Celsius scale and the Kelvin scale; the magnitude of a degree is the same on both scales. The Kelvin scale starts at 0 K, which translates as –273.15 °C. The melting temperature of ice (0 °C) is at 273.15 K.
With the concept of absolute temperature becoming clearer, and the possible location of ‘zero’ identified, the race to reach zero mirrored the race to Earth’s poles – a journey into the unknown.
One gas after another was cooled under pressure before being allowed to expand rapidly, which lowered its temperature further and condensed it like steam on a window. Using cascades of gases, Louis-Paul Cailletet liquefied oxygen at –183 °C and nitrogen at –196 °C. (It is doubtful whether scientists realised at this point how commonplace liquid oxygen and liquid nitrogen would become in the 20th century. Applications may have been envisaged at that time, but I would bet that making instant ice cream and destroying warts would not have been among them.)
The penultimate conquest was hydrogen by James Dewar in 1898 at –250 °C. The race to liquefy helium – the most incondensable of gases – was won by Dutchman Heike Kamerlingh Onnes at the University of Leiden, who on 10 July 1908 reached a temperature of 4.2 K. This was an astonishing technical achievement, and while it marked the end of one race, it started another that continues today – to ever lower temperatures. The few cubic centimetres of almost perfectly transparent liquid that Kamerlingh Onnes produced that day were so precious that it must have seemed inconceivable that helium would one day be used routinely in hospitals and laboratories.
Shortly after liquefying helium, Kamerlingh Onnes discovered that at very low temperatures metals become superconductive – their electrical resistivity falls to a value indistinguishable from zero. The change is enormous – at least 15 orders of magnitude. Superconducting technology is not as commonplace as many had hoped it would become, but it is widely used in magnetic resonance body scanners where a huge magnetic field is created by an electric current in a coil of superconducting wire.
Kamerlingh Onnes did not realise that perhaps the most astonishing low temperature phenomenon of all was taking place in front of his eyes. Through small gaps in the insulated glass vessel, the precious liquid could be seen boiling. By sucking out helium vapour from the space above the liquid, the fastest helium molecules were removed and the liquid cooled even further; the vigour of the boiling increased. And then suddenly, below what we now know was 2.17 K, the bubbling stopped and the liquid became eerily still. This phenomenon was observed on the first day that helium was liquefied, but it was years before anyone understood it. A fraction of the liquid had changed to a new state known as a superfluid which has a thermal conductivity indistinguishable from infinity – so that whenever a region of the liquid became marginally hotter and began to form a bubble, the superfluid carried the heat away before the bubble could form.
As well as two protons, the nuclei of helium usually contain two neutrons (4He). Thousands of times rarer than this is its isotope 3He, which has only a single neutron. It was expected that the 25 per cent difference in mass would change the properties of a liquid made from 3He rather than 4He. But the change was more profound than imagined. The lighter atoms of 3He condensed at 3.2 K, instead of the 4.2 K of 4He, and once liquefied it behaved completely differently, becoming more and more viscous as the temperature fell.
The difference between 3He and 4He reveals an insight that can be gained only at low temperatures. Who would have guessed that the presence or absence of a neutron could so transform the physical properties of a liquid made from those atoms? It is only when random thermal vibrations are reduced that the fantastic nature of atoms themselves is manifest. I would call the properties ‘extraordinary’ but they are not – they are completely ordinary. We are just unaware of how astonishing ‘ordinary’ matter is.
The truth is that the world in which we live is described by quantum mechanics – the laws of Newton and Lagrange that rule our familiar classical world are only approximations. The cooling of a substance exposes the quantum mechanical nature of matter. In helium, the consequences are dramatic. The electric repulsion between helium atoms is so weak that the quantum uncertainty in the position of the atoms lets them literally swap places without having to experience the inconvenience of going around each other. This ability of atoms to swap places in a structure is a characteristic property of a liquid – it’s how liquids change shape so easily. And this quantum swapping of places causes both types of helium to remain liquid to the lowest temperatures investigated at normal pressures, and they are expected to remain liquid even at absolute zero.
The properties of 3He and 4He can be exploited in what’s called a dilution refrigerator which uses the superfluidity of 4He to allow 3He to behave like a gas – effectively ‘evaporating’ into a 4He ‘vacuum’. With this set-up, matter can be cooled to below 0.001 K, which led to further discoveries. Tungsten became superconducting at 0.012 K and 3He itself became a superfluid at just 0.003 K. Clearly significant physical changes are still taking place within materials even at these ultra-cold temperatures.
The quest for lower temperatures in large pieces of material has stalled on the fact that the thermal conductivity and heat capacity of all materials plummet as temperature falls. This means it takes longer and longer to remove even tiny amounts of heat from a substance. Also, any experimental technique you use to study the properties of a substance will warm it up. If a butterfly happened to find itself in a refrigerator containing a cubic centimetre of copper at 0.001 K, the very act of the butterfly falling 10 centimetres would raise the copper’s temperature 100-fold.
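A back-of-envelope sketch shows why (the numbers are my own illustrative choices: a half-gram butterfly, and copper’s low-temperature heat capacity taken as the electronic term alone, which dominates at millikelvin temperatures):

```python
# How much heat warms 1 cm^3 of copper from 1 mK to 100 mK (a 100-fold rise)?
# At these temperatures copper's molar heat capacity is roughly C = gamma*T.
gamma = 0.70e-3              # J/(mol K^2), electronic coefficient for copper
n = 8.96 / 63.5              # moles in 1 cm^3 (density g/cm^3 over molar mass g/mol)

T1, T2 = 0.001, 0.1          # kelvin
Q = n * gamma * (T2**2 - T1**2) / 2   # integral of n*gamma*T dT from T1 to T2

# Energy released by a ~0.5 g butterfly falling 10 cm
E = 0.5e-3 * 9.81 * 0.10     # m*g*h, joules

print(f"heat for a 100-fold warming:    {Q:.1e} J")  # ~5e-7 J
print(f"energy of the falling butterfly: {E:.1e} J")  # ~5e-4 J
```

On these rough numbers, less than a thousandth of the butterfly’s energy of impact, delivered as heat, would be enough for the 100-fold rise.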
For smaller amounts of material – up to just a million atoms or so – we can cool them atom by atom using laser light. This has slowed atoms from moving at around 1 metre per second at 1 millikelvin to roughly 1 millimetre per second at 1 nanokelvin. Although applications of such technology seem unlikely at the moment, given the last century of progress we would be unwise to bet against future widespread application.
This state-of-the-art technique can get us very close to absolute zero, and I don’t doubt that we will eventually get colder still. Which raises the most common question asked of cryogenic scientists: why can’t we reach absolute zero? The impossibility of cooling an object to absolute zero is the essence of the Third Law of Thermodynamics, and there is no way around this.
Here’s one way to understand why: conventional fridges work by placing a target to be cooled in ‘thermal contact’ with a cooler substance, typically a recirculating fluid. We know that the fluid must be colder than the target so that heat can flow from the target. By the same principle, to get heat flowing out of a target that you want to reach absolute zero, the fluid coolant would have to be colder than 0 K to begin with! Being below absolute zero is – of course – nonsense: it is clearly impossible to make molecules move slower than not moving at all.
Techniques such as laser cooling seem to overcome the limitation of conventional cooling by simply damping the motion of atoms, but in fact all that has changed is the level of sophistication of the coolant. Even at 1 nanokelvin, atoms are moving at about 1 millimetre per second – slow, but still a long way from stationary.
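That residual speed follows from the same kinetic-theory relation as before: the root-mean-square speed of an atom of mass $m$ at temperature $T$ is $v_{\text{rms}}=\sqrt{3k_BT/m}$. Taking rubidium-87, a workhorse of laser cooling and my illustrative choice here, gives

$$v_{\text{rms}} = \sqrt{\frac{3\times(1.38\times10^{-23}\,\mathrm{J/K})\times(10^{-9}\,\mathrm{K})}{1.44\times10^{-25}\,\mathrm{kg}}} \approx 0.5\ \mathrm{mm/s}.$$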
It might seem odd that a century after Kamerlingh Onnes took us to 4.2 K, we are still investigating what happens in those few degrees above absolute zero. But this is perhaps because slowing down the vibration of atoms creates the equivalent of a quiet room in which one can hear tiny noises, and the logarithmic scale of the decibel – which we use to measure sound levels – could also describe the realm of cryogenic investigation. We shouldn’t think about the single degree between 1 K and absolute zero, but about the factor-of-1,000 difference in temperature between 1 K and 1 millikelvin. Cooling through this range, one encounters as many changes in properties as in the change from 1 K to 1,000 K.
For each factor of 10 we cool a substance, we probe atomic interactions at a new level of subtlety. So even at 1 nanokelvin, there is plenty of room for further cooling – to picokelvin, femtokelvin and beyond. And we really have no idea what we will find when we get there!
If you want to read more about the strange world around absolute zero, go to ‘The world of superstuff’ on page 207.
Need something to do? Why not try watching paint dry or grass grow: at least they’re better than doing nothing. You think that’s a joke? Intrepid reporter Valerie Jamieson set off to discover how tedious these activities really are. In the process she discovered a whole new field of science.
Rain clouds roll ominously overhead, the wind plasters my hair across my face and I wonder what I have done to deserve this. I am slowly sinking into a muddy field just outside the Welsh seaside town of Aberystwyth. I have tied plastic bags round my feet to keep my shoes clean. I am cold, tired and, to be honest, a little bit bored. But that’s the point: this is the first stop in my quest to find the most boring thing on Earth.
From the warmth and comfort of the New Scientist office, it all sounded like a bit of light-hearted fun. Just how tedious is watching paint dry? Does ditchwater deserve its dreary reputation? How I laughed when some smarty-pants called it boringology. Little did I know that I would be the one to draw the short straw, but here I am in a field at the Institute of Biological, Environmental and Rural Sciences (IBERS), watching grass grow.
As soon as Danny Thorogood, a turf-grass breeder here, leads me into the middle of the field I realise that not all grass is equal. Stretching in front of us are rows of different grasses that Thorogood and his colleagues have bred to be more nutritious for cows, to resist droughts, or simply to stay green. In the distance I spot giant miscanthus waving in the wind, a hybrid grass whose dry, leafless stems are a promising biofuel. Miscanthus grows at the impressive rate of 4 metres a year. ‘You can even hear it growing,’ says Mervyn Humphreys, a plant breeder at IBERS. ‘It crackles.’
I don’t know what comes over me. Suddenly I am on my hands and knees stroking the plants, examining their length, texture and colour. The diversity is remarkable: the AberNile ‘stay green’ grass is lush without the slightest hint of brown, while the bluegrass favoured by North American gardeners is dark green and bushy. ‘For parks and lawns, you want dense coverage that doesn’t grow too fast,’ says Thorogood. ‘But for grazing, you want a grass that grows quickly.’
There are more than 9,000 known species of grass, but they are united in one thing, Thorogood tells me: how they grow. Unlike many other plants that sprout new shoots from the tops of mature stems, grass grows from the bottom up. Grass’s growth happens near ground level in embryonic tissue called the meristem. As the plant absorbs nutrients and water, the meristematic cells divide and multiply. The cells expand as they mature, pushing older ones upwards like toothpaste squeezing out of a tube. That’s why mowing your lawn doesn’t stop it growing – unless you scalp it to within a centimetre of the ground and damage the meristem.
Of course, none of this means that it’s interesting to watch grass grow. But I’m already suspecting that the people here find it far from dull. In fact, some of them have invented a way to measure just how fast the growth happens. ‘You can’t just lie in a field and measure it with a ruler,’ says plant scientist Helen Ougham. ‘That would be silly.’
Nearly 27 years ago, Ougham and her colleagues at what was then the Welsh Plant Breeding Station needed a controlled way to study how cooling and heating the meristem affects growth. To do this, they plucked a grass seedling from the greenhouse and sandwiched its meristem between brass plates heated or cooled with ethylene glycol, an ingredient in antifreeze. Next they clamped the youngest leaves between the jaws of a crocodile clip attached to a string looped round a pulley. To keep the string taut, they tied a counterweight to the end of it.
In the warmth of the laboratory, I get the chance to try it out for myself with a darnel grass seedling. As the plant grows, the dangling counterweight descends an equivalent distance. To measure this fall, we tie an iron cylinder halfway along the string and place it inside a ‘displacement transducer’ that converts imperceptible movements into voltages.
Then we wait. And wait. I stifle a yawn and glance surreptitiously at my watch. Surely nothing is going to happen. The darnel grass seedling has a different idea. Within minutes, the digital voltmeter flickers into life. Grass is growing in front of my very eyes.
Every hour, it grows another 3.5 millimetres. If the temperature stays steady, my seedling will be standing over 17 millimetres taller by the time I get home tonight. Bizarrely, I am brimming with pride.
Yesterday I spent ten hours on a train, just for the chance to watch grass grow – and I don’t regret a moment. How can ditchwater measure up to that? At first, Jane Fisher is not very confident that it can. ‘Ditches are not very glamorous,’ she admits.
I am at the Centre for Ecology and Hydrology in Wallingford, near Oxford. Fisher, a freshwater ecologist and a specialist in algae who has since moved to Liverpool John Moores University, has already done the dirty work for me. She has filled two jars with water taken from ditches that run into the river Thames.
I begin to sense that Fisher is warming to the boringology challenge. The air is filled with the powerful stench of manure but that, she says, is partly what makes ditches so fascinating: nutrients from the fields leach into the water, making them a rich food source for all sorts of flora and fauna. ‘The diversity per millilitre is huge,’ she enthuses.
And it turns out she’s right: I can already see movement in the first jar of ditchwater. Aside from a few roots and the odd dead leaf, the water is surprisingly clear. This water comes from a ditch that runs through woodlands, and trees soak up many of the nutrients. But there is still plenty of food left over for the ‘ditchlife’. A water snail inches up the side of the jar and a white streak zips past. It is probably a cyclops, a type of zooplankton. These strange creatures swim around grazing on green, soupy algae, removing nutrients and squirting out pellets of excrement that sink to the bottom of the ditch as sediment. Zooplankton are the reason the water is so clear.
I am really hoping to see a water bear, the toughest animal on earth. Water bears can withstand crushing pressures, shrug off lethal radiation and survive being boiled alive or chilled to near-absolute zero. They do this by completely shutting down their metabolism and then coming back to life. And I’ve heard that they look quite cute as they swim by, thrashing the water with their eight paws.
We place a droplet under the microscope and I peer into an alien world. Treading water is a Keratella rotifer, a transparent microscopic creature shaped like a rectangle with spiny corners. Its mouth is covered in rotating hairs called cilia that draw water in and strain it for algae. More than 2,000 species of rotifer have been identified and they come in all shapes and sizes. But it’s not just their strange appearance that makes them so fascinating to biologists.
Most species are all-female. Bdelloid or ‘leech-like’ rotifers give birth to their young without ever having sex, so they have no need for males in their lives. How rotifers have survived so long without sex has baffled evolutionary biologists: most species that spawn offspring that are clones of themselves become extinct within a few hundred thousand years, but rotifers have been around for 70 million years.
Watching the whirr of the rotifer’s cilia and seeing its internal organs is mesmerising. But my concentration is broken by blobs of algae that are bouncing around the field of view like manic ping-pong balls. These microscopic plants, Trachelomonas, are swimming around anxiously trying to move into the light to photosynthesise, Fisher tells me.
But to be honest, I’m not really listening. A nematode has just swum into view. This roundworm is less than a millimetre long and is feeding on invisible bacteria from rotting leaves. I shouldn’t be surprised to see one, apparently: nematodes turn up anywhere moist, and there’s even a species that loves beer mats.
There’s no water bear, however, and I move on to the other jar of ditchwater. This one is much murkier and full of detritus. Under the microscope, Fisher points out the culprits, long strands of cyanobacteria and colonies of four green plant cells stuck together called Scenedesmus. They are there because the water has drained from a cow field, and is rich in nutrients. Not only do the cows’ hoofs churn up the earth and release extra nutrients from the soil, but the cow dung is a rich source of phosphorus that seeps into the water.
The water is also full of diatoms, single-celled algae that convert light and nutrients into elaborate glassy shells. The ones I am staring at look like transparent coffee beans. Although they are plants, they move through the water by oozing slime from the slit in their shells. Next time you slip on a rock, you can blame it on diatoms’ shiny shells and excretions.
When planning this week’s excursions, I gave myself a day off in the middle, just in case I needed to recover from the tedium. So far, however, boringology has failed to bore me.
That may be about to change. I’m heading for the Oxford office of Infinitesima, a specialist imaging company whose press releases exclaim ‘We really can watch paint dry!’ To me, that sounds like one long bore-fest. But when I told Celia Taylor, formerly of AkzoNobel paints, what I’m going to Oxford to do, she warned me to pay attention from the start. ‘Most of the exciting stuff is done in twenty minutes,’ she says.
Exciting stuff? What she means by that is the process of forming a film. Emulsion paint, for instance, consists mainly of binder, millions upon millions of acrylic polymer particles dispersed in water. As the water evaporates, the particles merge, packing together like a stack of oranges on a fruit stall, with water filling the gaps. As the paint dries further, the particles squash together until they coalesce into a film.
Chemists like Taylor use all sorts of techniques to watch processes like these. They are always looking for ways to make paint tougher, more environmentally friendly and able to sport new finishes, and it all comes down to understanding paint chemistry and what happens when the water (or organic solvents, in the case of gloss paints) evaporates. Taylor’s toolkit includes mass spectrometers, which sniff the molecules given off as gloss dries, and magnetic resonance imaging, which measures the amount of water left in emulsion.
But to actually see the all-important paint surface, you need something special: an atomic force microscope (AFM). This works in much the same way that a record player’s needle runs along the grooves on a vinyl disc. The AFM builds up an image of individual molecules on a surface by feeling its way around with a sharp tip less than 10 nanometres wide. Making such images is a painstaking task, though. And I could miss some of the action in the minute or so it takes.
Which is why I’m here: Infinitesima’s VideoAFM works at 1,000 times the speed of traditional machines, producing video images at 15 frames per second. Paint Drying: The movie may not be this year’s Christmas blockbuster, but plenty of people pay good money to watch it.
Next time you are in the bath, ponder this. Your thumbnails are growing approximately a tenth of a millimetre a day. You can thank the late American physician William Bean for that nugget of information.
Born in 1909, Bean should perhaps be crowned the founding father of boringology. His study of his own fingernails culminated in a paper published in 1980 called ‘Nail growth: 35 years of observation’.1
Bean began his analysis when he was 32 by filing a horizontal line just above the cuticle of his left thumbnail. He then recorded how long it took for the mark to reach the tip of his finger. From this, he worked out that his nail grew on average 0.123 millimetres a day. Or, if you prefer, 1.4 nanometres a second.
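The unit conversion is easy to check (a two-line sketch of the arithmetic, not anything from Bean’s paper):

```python
# 0.123 mm/day expressed in nanometres per second
mm_per_day = 0.123
nm_per_s = mm_per_day * 1e6 / (24 * 60 * 60)  # 1 mm = 1e6 nm; 86,400 s per day
print(f"{nm_per_s:.2f} nm/s")                 # ~1.42 nm/s
```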
As head of the department of internal medicine at the University of Iowa, Bean dutifully marked his thumbnail and jotted down his measurements for the next 35 years, publishing papers after the first 25 and 30 years. It didn’t matter where Bean was – his nails grew at the same steady rate all year round.
Only two factors slowed down the growth of his talons: fungal infections and advancing years. By the age of 61, his thumbnail had slowed to 0.100 millimetres a day. And in his final paper on the subject six years later, his nails had decelerated by another 0.005 millimetres a day.
Sadly, though, when I arrive at Infinitesima, no one is watching paint dry. Instead, the VideoAFM is being used to study a molten polymer crystallising. But it gives me a flavour for what molecular-scale movies of paint drying would look like. Before my eyes, I see molecules creeping across the display. OK, so it isn’t Harry Potter and the Goblet of Fire, and there’s not much more I can say about it. But I am watching molecules move around on a surface, for crying out loud. That’s pretty cool.
Maybe there’s something wrong with me, but this really hasn’t been the dullest week of my life. In fact, I’m rather stimulated and looking forward to regaling my friends with fascinating facts this weekend. They’ll surely be mesmerised by the fact that cows fed on clover produce milk bursting with healthy polyunsaturated fats. And there’s a species of parasitic nematode that can grow more than 10 metres long in sperm whales, while another species lives only in vinegar. And paint continues to harden for a whole week after it dries . . . Hang on, I’m not boring you, am I?
Here’s a conundrum. How do you find an element that does nothing – that doesn’t interact with anything else? It’s like trying to solve the perfect murder when there are no witnesses, almost no forensic evidence, and no body. In such cases you need two things: an alert mind and dogged determination. These ingredients led to the discovery of not one element but six. Cosmochemist David E. Fisher elaborates.
Noble gases are so called because, like the nobility, they do nothing. You might also call them rare gases, because they are so rare on Earth as to be nearly non-existent. The one exception is argon, which we inhale as 1 per cent of every breath, though it has no effect on our bodies whatsoever. Helium, neon, argon, krypton, xenon and radioactive radon are odourless, tasteless, practically non-reactive wisps of unconnected atoms. In this material universe, they amount to just about nothing at all.
And yet . . . it would be hard to make a case that any other group of elements has had a greater impact on our understanding of the universe. For example, Darwin’s theory of evolution needs an Earth many millions of years old in order for it to have had time to work. Yet the Bible placed a limit on Earth’s age at a mere 6,000 years. How was this argument resolved? The answer was helium, which is generated in rocks containing uranium and thorium.
When these elements undergo radioactive decay they release alpha particles, which are really just helium nuclei that easily pick up electrons to create the gas. In 1906, armed with this idea and the rate of production of alpha particles by uranium, thorium and their decay products, Ernest Rutherford and Frederick Soddy dated several rocks at up to 500 million years; Earth would have to be at least that old. (Later work with lead isotopes pinned down the age to around 4.5 billion years.) Not only did Rutherford and Soddy create the concept of radioactive dating, they also kick-started our modern understanding of the cosmos and its great age.
What if you want to probe the interior of the sun? The answer is to use argon, as physicist Ray Davis did. He focused on solar neutrinos – ghostly particles created by nuclear fusion in the sun’s core – as a way to test models of nuclear reactions in stars. Neutrinos reverse the natural decay of the radioactive isotope argon-37 into chlorine-37. So in 1958, Davis set up a huge vat of cleaning fluid containing chlorine-37 deep in a mine in South Dakota and used a Geiger counter to detect any argon created. Davis’s pioneering work revealed much about not only the sun but also the peculiar nature of neutrinos. It won him the Nobel prize for physics 44 years later.
Xenon, meanwhile, can tell you about the formation of the solar system. Xenon-129 is an isotope produced by the radioactive decay of iodine-129, which is created in quantity only in supernovas and has the relatively short half-life, in cosmological terms, of 16 million years. The discovery of unexpectedly large amounts of xenon-129 in meteorites was the first evidence that the solid bodies of the solar system formed within the surprisingly short time of a hundred million years after a nearby supernova seeded the material that made them. That shocked theorists who thought it could never have happened so quickly.
Even though the noble gases are rare on Earth, they are not rare in the universe as a whole. This tells us that Earth’s atmosphere must have formed after the planet itself. As Earth formed it was too small to retain gases, which drifted away into the cosmos. The main components of today’s atmosphere – nitrogen, oxygen, water and carbon dioxide – must have been locked away in non-volatile forms. Water was trapped in hydrated minerals, carbon dioxide in carbonates, and so on. Only as the Earth and its gravitational attraction grew did these gases, escaping from volcanic eruptions, create the atmosphere.
They may be lazy loners, but the noble gases have found useful roles. Just think how dull it would be downtown without the red glow of neon lights or the blue-white of krypton. They play more profound roles, too. Superconductivity, for example, was discovered while searching for the coldest temperatures on Earth using liquid helium.
In the Second World War, the Allies wanted to know how Hitler’s attempts to build an atomic bomb were going. So they attached a trap beneath a bomber and flew it over suspect German sites in search of xenon-133. This is a fission product of uranium that doesn’t react with anything else and has a half-life of five days, so should hang around long enough to be detected. A positive result would have been definitive, but the negative result they obtained meant that they were looking at the wrong sites, or the experiment was somehow flawed, or – as proved to be the case – Hitler didn’t have the bomb.
Xenon-133 is also valuable in medicine, where it is used as a radioactive marker to identify pulmonary embolisms; xenon gas itself is an excellent anaesthetic, used today in Russia and Germany.
So these wisps of nearly nothing reveal much about Earth and its place in the universe. Yet for me the most fascinating aspect of the noble gases is how they were discovered. By the 1860s more than fifty elements had been found, often revealing themselves when subjected to the actions of other chemicals, heat or even electricity. We now know that the noble gases are, in the main, stubbornly non-reactive because they contain a full outer shell of electrons – a prerequisite for stability. But back in mid-Victorian times their aloofness meant the noble gases had completely eluded detection.
The first hint of their existence appeared in 1868 as a faint line in the spectrum of light from the sun, indicating the presence of an element not known on Earth. This was given the name helium, after Helios, the Greek god of the sun. At the time, it raised speculation about elements in the stars being different from those on Earth, but a few years later the same line was found when a uranium mineral called cleveite was heated, and the Earth and sun were once more united.
Nothing happened for a while, until the trail was picked up from a different direction with a different end in view, when the British physicist John William Strutt – himself a noble, Lord Rayleigh – began to wonder why the atomic weights of the elements seemed to be nearly whole-number multiples of that of hydrogen. Why whole numbers? And even more puzzling, why only ‘nearly’ whole numbers?
His attitude was, if you don’t understand something, measure it. He spent ten years making precise measurements of the densities of the gases, from which their atomic weights could be calculated, starting with hydrogen, oxygen and then nitrogen. No reason to expect a breakthrough here; it was a routine experiment. Rayleigh bubbled air through liquid ammonia, NH3, and then passed it through a tube containing red-hot copper. That stripped the air of its oxygen, which combined with hydrogen from the ammonia, leaving just nitrogen.
Rayleigh did what a good scientist does: he carried out this experiment again and again to check the results. He then repeated it with a difference. Initially, some of his nitrogen would have come from the ammonia he used; this time he got rid of the ammonia so all the nitrogen came from air. ‘To my surprise and disgust the densities of the two methods differed by a thousandth part,’ he wrote.
Nitrogen from air was apparently heavier than that from ammonia by just 0.1 per cent. I would have put it down to experimental error and moved on. But as Rayleigh said, ‘It is a good rule in experimental work to seek to magnify a discrepancy when it first appears rather than to follow the natural instinct to trying [sic] to get quit of it.’
That’s just what he did, this time replacing air with oxygen so that the nitrogen he collected came only from ammonia. He found that the discrepancy was indeed magnified: it was now 0.5 per cent. Something real was happening, but what? He wrote a letter to the journal Nature, asking for help. It began, ‘I am much puzzled by some recent results as to the density of nitrogen, and shall be obliged if any of your chemical readers can offer suggestions as to the cause.’
First suggestion: nitrogen in air is nothing but nitrogen, while nitrogen in ammonia is chemically combined with hydrogen. So perhaps he had nitrogen in two different chemical states, each with a different density. But how?
No answer. Bad idea. Start again.
Finally, after other suggestions led nowhere, came the idea that a heavier gas might be mixed in with nitrogen from air. This contradicted Occam’s razor, which in colloquial terms means ‘keep it simple’. Invoking an unknown substance, a cryptic gas heavier than nitrogen, to explain the results had shades of phlogiston and the ether – illusory substances invented in other contexts as a fig-leaf for our lack of understanding.
But there is an even more hallowed tenet of science: test your ideas, experiment and observe. So in 1894, together with William Ramsay, Rayleigh passed electrical sparks through air augmented with pure oxygen to produce nitrogen oxides. They removed these by dissolving them in a weak alkali solution. Lo and behold, when all the nitrogen and oxygen were gone, a small amount of colourless gas remained, which they named argon. That comes from the ancient Greek for a lazy thing, since the gas wouldn’t react with anything. It showed a pattern of emission lines never seen before, so argon was not only a previously unsuspected component of air, it was an entirely new element.
Ramsay moved on to investigate whether the gas seeping out of uranium-bearing rocks was argon, but in 1895 identified it as helium. Arguing from his understanding of the then-primitive periodic table, he suggested that helium and argon might represent a new family of elements. He went so far as to predict another such element with a mass of 20. He soon discovered it and named it neon. Krypton and xenon were found in the same year, and in 1904 both men received a Nobel prize, Rayleigh in physics and Ramsay in chemistry. This is the only time an element or column of elements has been the basis for these two prizes in the same year.
In 1910, Ramsay collected the full set by producing and characterising radon. This nasty radioactive gas had been noticed before, but it was Ramsay who proposed and then demonstrated that it was another noble gas.
The discovery of the noble gases fascinates me because it is about the whole fabric of science and the roots of discovery. Rayleigh was not looking for a new element, he was trying to solve the riddle of nearly whole-number atomic weights. In this he failed: the explanation awaited the discovery of both protons and neutrons. The discovery of argon, which opened the door to the other noble gases, was serendipitous, the result of chance combined with careful experimentation and an open, inquiring mind. Like many other important scientific advances, it happened not as a result of purposeful planning, but while trying to understand something else.
So, if you want to succeed in science, keep in mind the advice offered by cosmochemist Michael Lipschutz to his students: ‘Obey the Biblical injunction: seek and ye shall find. But seek not to find that for which ye seek.’
Impossible reaction
If there is one half-remembered chemical fact that most of us carry from our schooldays, it is that the inert or noble gases do not react.
The early history of these elements, which are ranged in the right-hand column of the periodic table, provided ample support for that view. Just after the noble gas argon was discovered in 1894, the French chemist Henri Moissan mixed it with fluorine, the viciously reactive element that he had isolated in 1886, and sent sparks through the mixture for good measure. Result: nothing. In 1924, the Austrian Friedrich Paneth pronounced the consensus. ‘The unreactivity of the noble gas elements belongs to the surest of all experimental results,’ he wrote. The theory of chemical bonding explained why. The noble gases have full outer shells of electrons, and so cannot share other atoms’ electrons to form bonds.
The influential chemist Linus Pauling was one of the chief architects of that theory, yet he didn’t give up on the noble gases immediately. In the 1930s, he managed to get hold of a rare sample of xenon and persuaded his colleague Don Yost at the California Institute of Technology in Pasadena to try to get it to react with fluorine. After much cooking and sparking, Yost succeeded only in corroding the walls of his supposedly inert quartz flasks.
After that, it was a brave or foolish soul who still tried to make noble-gas compounds. The late British chemist Neil Bartlett, working at the University of British Columbia in Vancouver, was not trying to defy conventional wisdom, he was just following common logic.
In 1961, he discovered that the compound platinum hexafluoride (PtF6), first made three years earlier by US chemists, was an eye-wateringly powerful oxidant. Oxidation, the process of removing electrons from a chemical element or compound, bears oxygen’s name because oxygen has an almost unparalleled ability to perform the deed. But Bartlett found that PtF6 could even oxidise oxygen, ripping away its electrons to create a positively charged ion.
Early the next year, Bartlett was preparing a lecture and happened to glance at a textbook graph of ‘ionisation potentials’. These numbers quantify the amount of energy required to remove an electron from various substances. He noticed that xenon’s ionisation potential was almost exactly the same as oxygen’s. If PtF6 could oxidise oxygen, might it oxidise xenon, too?
Mixing red gaseous PtF6 and colourless xenon supplied the answer. The glass vessel was immediately covered with a yellow material. Bartlett found it to have the formula XePtF6 – xenon hexafluoroplatinate, the first noble-gas compound.
Other compounds of xenon and then krypton followed. Some are explosively unstable: Bartlett nearly lost an eye studying xenon dioxide. Radon, a heavier, radioactive noble gas, forms compounds too, but it wasn’t until 2000 that the first argon compound, argon fluorohydride, was reported to exist at low temperatures by a group at the University of Helsinki.1 Even now, the noble gases continue to produce surprises. Nobel laureate Roald Hoffmann of Cornell University in Ithaca, New York, admits to being shocked when, also in 2000, chemists in Berlin reported a compound of xenon and gold – the metal gold is supposed to be noble and unreactive too.
So don’t believe everything you were told at school. Noble gases are still the least reactive elements out there; but it seems you can coax elements to do almost anything.
Philip Ball
To learn more about ‘things’ that do nothing, just carry on reading!
In 1966, Gregg Hill took the world’s laziest summer job. First he was poked and prodded and had his fitness assessed by every technique then known to medicine. Then, for 20 days, he and four other student volunteers became the ultimate couch potatoes, confined to bed – not even allowed to walk to the toilet. The goal was to investigate how astronauts would respond to space flight, but when Hill and his fellows finally staggered to their feet, their drastic deterioration helped spark a revolution in medical care here on Earth. As Rick A. Lovett explains, before the experiment took place, bed rest was recommended for people with weak hearts. Afterwards, doctors knew that it made them worse.
The five men were the image of American youth, circa 1966: well groomed and confident. America was racing for the moon, but these young men were looking beyond, doing their bit for astronauts in orbiting space stations and perhaps eventually on a trip to Mars.
They had volunteered for what is now known as the Dallas Bed Rest and Training Study. The goals were twofold: to simulate the effects of weightlessness on astronauts and to determine how quickly the body recovered when normal life resumed. As an aside, the scientists who were monitoring the effects of such slothfulness hoped to find out why hospital patients feel as weak as kittens after lengthy stays in bed. Speculation at the time focused on extended inactivity causing blood to pool in the limbs, producing a dizzying drop in blood pressure when you stood up. But maybe it was something more insidious, such as changes in the heart or lungs. In 1966, nobody knew.
One of the volunteers was Gregg Hill, a college student with an interest in exercise physiology. He was also a runner who could do the mile in 4 minutes 45 seconds – admittedly not Olympic standard, but no slouch either.
Initially, the study leader, Carleton Chapman of the University of Texas Southwestern Medical School, had signed up six volunteers: three athletes and three less active students, to see how they compared. But one of the athletes, ‘a big handsome hunk of a swimmer’, backed out, Hill says, when he discovered how many needles would be stuck in his body.
The tests were in part inspired by the findings of Archibald Hill, the British pioneer of biophysics. Forty years earlier he had discovered that during exercise the body reaches a state of maximal oxygen uptake which cannot be exceeded no matter how hard you work. If you try to go faster, you fall into what athletes call ‘oxygen debt’, in which you can briefly sprint, but must then stop to recover.
Maximal oxygen uptake is referred to as VO2max. The standard test monitors your oxygen consumption as you run on a treadmill, with a technician gradually turning up the grade until you have to pack it in. For competitively inclined people who try to ‘beat’ the test, it’s a brief but brutal workout.
In addition to VO2max, Chapman’s team wanted to know everything possible about the students’ hearts, lungs and overall fitness. They took chest X-rays to determine the volume of their hearts. They measured lung capacity by making them exhale into a device called a spirometer. They weighed them underwater to calculate how much body fat they were carrying.
The tests that frightened off the swimmer were designed to measure the amount of blood pumped by each beat of the heart (the ‘stroke volume’) and the fraction of oxygen removed from it by the leg muscles. Today, there are non-invasive ways to measure these, but in 1966, one needle had to be stuck into a vein in the right arm and another into an artery in the left one. Squirts of green dye were injected into the vein, and the amount by which the dye was diluted when it appeared in the artery revealed the volume of blood with which it had mixed in the heart. Measuring how much oxygen the legs were using required yet another needle to extract samples from a leg vein – all while running on a treadmill. Definitely not a test for the squeamish.
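The arithmetic behind the dye measurement is the classic Stewart–Hamilton indicator-dilution relation: cardiac output equals the dye injected divided by the time-integral of its downstream concentration, and dividing by heart rate gives stroke volume. Here is a minimal sketch with invented numbers (not data from the study):

```python
# Stewart-Hamilton indicator-dilution estimate (illustrative numbers only)
dye_mg = 5.0                                   # dye injected into the vein, mg
conc_mg_per_l = [0.0, 2.0, 6.0, 9.0, 7.0, 4.0, 2.0, 0.5]  # arterial samples, mg/L
dt_s = 2.0                                     # sampling interval, seconds

area = sum(conc_mg_per_l) * dt_s               # rough integral of concentration, mg*s/L

cardiac_output_l_min = dye_mg / area * 60      # litres of blood pumped per minute
heart_rate_bpm = 70
stroke_volume_ml = cardiac_output_l_min / heart_rate_bpm * 1000

print(f"cardiac output ~ {cardiac_output_l_min:.1f} L/min")    # ~4.9
print(f"stroke volume  ~ {stroke_volume_ml:.0f} mL per beat")  # ~70
```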
Preliminaries completed, Hill and his friends went to bed. Their diets were monitored so that they wouldn’t gain weight, but exercise was strictly forbidden. The only concession was a single brief shower halfway through the experiment.
Bored, Hill and his wardmates read a lot, watched TV and listened to music – although that sometimes caused energy-consuming arguments. ‘I like classical,’ Hill says, ‘but I tolerated pop for the sake of peace.’
When they were finally released, the men were placed on trolleys and wheeled to the sports lab for a repeat of the initial tests. The results were stunning. Chapman’s team found that a mere three weeks of inactivity had cut VO2max by 28 per cent and stroke volume by 25 per cent – more than 1 per cent per day. Overall, their hearts had shrunk 11 per cent, and two of the non-athletes fainted during their first efforts on the treadmill.
As word filtered out, hospital doctors began prodding surgical patients out of bed as soon as possible and cardiologists began prescribing exercise rather than bed rest for heart patients. Hill’s boring summer job had changed the face of medicine.
Back on his feet, however, Hill now had to work harder, as the study entered its ‘training’ phase. For the next 55 days, he endured intense workouts, including time trials on the track. ‘That was rough,’ he says. Early on, he even had trouble driving because his legs were so sore from the training that they trembled when he pushed the pedals. By the end, though, he and the other athletes had fully recovered, and the three non-athletes were in better shape than at the start of the study.
An academic paper, published in 1968, reports these findings in dozens of pages of charts and dry language. Hill puts it more succinctly. ‘The heart is a tremendously flexible organ,’ he says. ‘It remodels itself to meet changing conditions very quickly – much more quickly than muscles respond to weightlifting.’
In later years, Hill maintained his interest in exercise physiology but went on to become a college instructor in computer science. Then, in 1996, he received a phone call. Was he willing to be part of a follow-up study? This time there would be no needles and no need for bed rest.
The follow-up was the brainchild of Darren McGuire and Benjamin Levine from the University of Texas Southwestern Medical Center in Dallas. Their interest wasn’t in bed rest, but in the effect of age on cardiovascular fitness.
Few had looked at this before, and the studies that existed tended to focus on professional athletes, making it difficult to separate the effects of ageing from those of retirement from sport. Hill’s group provided a unique opportunity because no such group of relatively ordinary people had ever been so comprehensively studied for so long.
A few months later, all five men were back on the treadmill – this time minus the needles, since sophisticated imaging techniques can now show how well the heart is functioning. Then they were put on tough training routines.
The follow-up findings, published in 2001, were nearly as spectacular as the original ones. First, McGuire and Levine discovered that 30 years of ageing had taken less of a toll on Hill and the others than 20 days of bed rest. Even though all five men had lost condition (and gained weight), the decades had reduced their VO2max only half as much as their stints in bed.
That was interesting, but more important was what happened when the men were put on exercise programmes building up to about three to five hours per week. Within six months, their VO2max levels rebounded all the way to what they had been at the end of the 1966 study. ‘We reversed thirty years of ageing with six months of training,’ Levine said.
This did not, however, give Hill back his ability to run a 4:45 mile, most probably, he suspects, because his ageing tendons have lost elasticity. Still, he likes the fact that doctors and nurses often tell him he has the vital signs of a teenager. ‘There is a fountain of youth,’ he says. ‘It’s just that you have to work hard to drink from it.’
If you’re interested in more things that do nothing, try ‘The workout pill’ on page 198.