THE COLDEST place on earth is not, as you might imagine, the south pole, but somewhere in the middle of the east Antarctic ice sheet, nearly 1,300 kilometers from the pole. There, winter temperatures routinely plummet to many tens of degrees Celsius below zero. The lowest temperature ever measured on earth, −89.2°C, was recorded there on 21 July 1983, earning the region the title of “the Southern Pole of Cold.” In temperatures this low, steel shatters and diesel fuel has to be cut with a chainsaw.
The extreme cold freezes any moisture out of the air, which, together with the strong winds that blow unceasingly across the frozen plains, probably makes the east Antarctic the most inhospitable place on the planet.
But it wasn’t always such a hostile place. The landmass that forms Antarctica was once part of the supercontinent known as Gondwanaland and was in fact located near the equator. It was covered by a thick vegetation of seed ferns, ginkgo trees and cycads that were grazed on in turn by dinosaurs and herbivorous reptiles, such as the rhino-like Lystrosaurus. But about eighty million years ago the landmass started to break up and a fragment drifted southward, eventually settling over the south pole to become Antarctica. Then, about sixty-five million years ago, a massive asteroid hit the earth, wiping out all the dinosaurs and giant reptiles and leaving ecological space for warm-blooded mammals to become dominant. Despite being very far from the impact site, Antarctica’s fauna and flora were radically altered as ferns and cycads were replaced by deciduous forests. These were inhabited by now extinct marsupials, reptiles and birds, including giant penguins. Fast-flowing rivers, and deep lakes teeming with bony fish and arthropods, filled the valleys.
But as greenhouse gas levels dropped, so did the temperature in Antarctica. Circulating ocean currents encouraged further cooling, and about thirty-four million years ago the surface waters of the rivers and inland lakes started to freeze in the winter. Then about fifteen million years ago the winter ice finally failed to melt in the summer, locking the lakes and rivers beneath a solid frozen roof. As our planet continued to cool, massive glaciers marched over Antarctica, extinguishing all its terrestrial mammals, reptiles and amphibians, and burying the land, lakes and rivers beneath gigantic sheets of ice several kilometers thick. Antarctica has remained locked in a deep freeze ever since.
It was only in the nineteenth century that the American sealer Captain John Davis became the first human known to have set foot on the continent, and only in the twentieth that permanent settlement began, as several countries raced to establish their territorial claims by building research stations there. The first Soviet Antarctic station, Mirny, was established near the coast on February 13, 1956, and it was from here, the following year, that an expedition left for the interior of the continent with the aim of setting up a base at its geomagnetic pole. The expedition was dogged by snowstorms, loose snow, extreme cold (−55°C) and lack of oxygen, but finally arrived at the geomagnetic south pole on December 16, during the southern hemisphere’s summer, and established the Vostok station.
Since then, that research base has been manned nearly continuously with a team of between twelve and twenty-five scientists and engineers who make geomagnetic and atmospheric measurements. One of the main purposes of the station is to drill into the underlying glacier to capture a frozen record of past climates. In the 1970s the engineers drilled a set of cores up to 952 meters deep, reaching ice laid down in the last ice age, tens of thousands of years ago. New rigs arrived in the 1980s, allowing the researchers to reach a depth of 2,202 meters. By 1996 they had managed to drill down to 3,623 meters: a hole in the ice over two miles deep to a level laid down as surface ice 420,000 years ago.
But then the drilling was stopped, because something odd had been detected lying not far beneath the bottom of the borehole. In fact, the discovery that something unusual lay beneath the Vostok station had been made a couple of decades earlier, in 1974, when a British airborne radar survey of the region had revealed anomalous readings for a large area covering 10,000 square kilometers and lying about 4 kilometers below the ice. The Russian geographer Andrey Petrovich Kapitsa suggested that the radar anomaly was caused by a huge lake trapped beneath the ice and kept warm by the underlying geothermal energy, with the extreme pressure preventing freezing. Kapitsa’s proposal was eventually confirmed by satellite measurements of the area in 1996, which revealed a subglacial lake up to 500 meters deep (from the top of its liquid surface to its bottom) and the size of Lake Ontario. The team named it Lake Vostok.
With an ancient lake buried beneath the ice, the drilling operations at the Vostok station took on a wholly different significance as the borehole approached a unique environment. Lake Vostok had been locked away from the earth’s surface for hundreds of thousands, if not millions, of years*1—a lost world. What had happened to all those animals, plants, algae and microbes that thrived in the lake before it was shut off, trapping any surviving organisms in absolute darkness and cold? Had all life been extinguished, or could some creatures have survived and even adapted to life several kilometers beneath the surface of the glacier? Such hardy organisms would have had to cope with an extreme environment: bitterly cold and totally dark, in water compressed by the weight of the thick ice sheet to more than three hundred times the pressure of any surface lake. However, surprisingly diverse life does manage to eke out a living in other unlikely places, such as the scorching sulphurous edges of volcanoes, acid lakes and even deep, dark submarine trenches thousands of meters below the ocean surface. Perhaps Vostok too could support its own ecosystem of extremophiles.*2
The discovery of a lake under the deep ice acquired even greater significance thanks to another discovery nearly half a billion miles away in 1979, when the Voyager 2 spacecraft photographed the surface of Jupiter’s moon Europa, revealing an icy surface with tell-tale signs of a liquid ocean lying beneath it. If life could survive for hundreds of thousands of years in waters buried kilometers beneath an Antarctic glacier, then maybe Europa’s submerged oceans could support alien life. The search for life within Lake Vostok became a rehearsal for the even more thrilling hunt for life beyond our planet.
The drilling was halted in 1996, just 100 meters above the surface of the lake, to prevent its pristine waters from coming into contact with the kerosene-saturated drill bit, potentially contaminated with plants, animals, microbes and chemicals from the surface. However, Lake Vostok’s water had already been studied from previously extracted ice cores. Thermal currents drive the water in the lake so that just beneath its icy ceiling it is going through a continual cycle of freezing and thawing. This process has continued ever since the lake was sealed off, so its roof is made up not of glacier ice, but of frozen lake water—known as accretion ice—that extends to tens of meters above the liquid surface of the lake. The cores extracted from the earlier drilling operations had penetrated down to this level of ice and, in 2013, the first detailed study of the Vostok accretion ice cores was published.1 The conclusion of the work was that the ice-locked lake contains a complex web of organisms, including single-celled bacteria, fungi and protozoa, along with more complex animals such as molluscs, worms, anemones and even arthropods. Scientists have even managed to identify what kind of metabolisms were used by these creatures, as well as their likely habitats and ecology.
What we want to focus on in this chapter is not the undeniably fascinating biology of Vostok, but the means by which any ecosystem could survive, locked away, for thousands or even millions of years. Indeed, Vostok can be considered to be a kind of microcosm of the earth itself, which has been virtually locked away from inputs, apart from solar photons, for four billion years and yet has maintained a rich and diverse ecosystem in the face of challenges from massive volcanic eruptions, asteroid impacts and climate shifts. How does the vast complexity of life manage to thrive and endure through extreme shifts in its environment for thousands or even millions of years?
A clue can be found in some of the material that was studied by the Vostok biology team: a few micrograms of a chemical extracted from frozen lake water. This chemical is crucial to the continuity and diversity of all life on our planet and contains the most extraordinary molecule in the known universe. We call it DNA.
The group that performed the Vostok DNA study is based at Bowling Green State University in the United States. To read the sequence of millions of fragments of Vostok DNA molecules recovered from the lake water, they used the kind of DNA sequencing technology that had previously been used to decipher the human genome. They then compared the Vostok DNA to databases packed full of gene sequences read from the genomes of thousands of organisms collected from around the globe. What they discovered was that many of the Vostok sequences were identical or very close matches to genes from bacteria, fungi, arthropods and other creatures that live above the ice, particularly those inhabiting cold lakes and deep, dark marine trenches—environments that are probably a bit like Lake Vostok. These gene similarities allowed them to make educated guesses about the likely nature and habits of the kind of organisms that had left their DNA signatures under the ice.
But remember that the Vostok organisms have been locked under the ice for many hundreds of thousands of years. The similarity of their DNA sequences to those of organisms that live above the ice is thus a consequence of shared ancestry from organisms that must have lived among the flora and fauna of Antarctica before the lake and its inhabitants were locked away beneath the ice. The gene sequences of those ancestral organisms were then copied, independently, both above and below the ice, for thousands of generations. Yet despite this long chain of copying events, the twin versions of the same genes have remained nearly identical. Somehow, the complex genetic information that determines the shape, characteristics and function of the organisms that live both above and beneath the ice has been faithfully transmitted, with hardly any errors, over hundreds of thousands of years.
This ability of genetic information to replicate itself faithfully from one generation to the next—what we call heredity—is, of course, central to life. Genes, written into DNA, encode the proteins and enzymes that, via metabolism, make every biomolecule of every living cell, from the photosynthetic pigments of plants and microbes to the olfactory receptors of animals or the mysterious magnetic compasses of birds, and indeed every feature of every living organism. Many biologists would argue, indeed, that self-replication is life’s defining feature. But living organisms could not replicate themselves unless they were capable of first replicating the instructions for making themselves. So the process of heredity—high-fidelity copying of genetic information—makes life possible. You may remember from chapter 2 that the mystery of heredity—how genetic information can be transmitted so faithfully from one generation to the next—was the puzzle that convinced Erwin Schrödinger that genes were quantum mechanical entities. But was he right? Do we need quantum mechanics to account for heredity? This is the question to which we will now return.
We tend to take for granted the ability of living organisms to replicate their genomes accurately, but it is in fact one of the most remarkable and essential aspects of life. The rate of copying errors in DNA replication, what we call mutations, is usually less than one in a billion. To get some idea of this extraordinary level of accuracy, consider the one million or so letters, punctuation marks and spaces in this book. Now consider one thousand similarly sized books in a library and imagine you had the job of faithfully copying every single character and space. How many errors do you think you would make? This was precisely the task performed by medieval scribes, who did their best to hand-copy texts before the invention of the printing press. Their efforts were, not surprisingly, riddled with errors, as shown by the variety of divergent copies of medieval texts. Of course, computers are able to copy information with a very high degree of fidelity, but they do so with the hard edges of modern electronic digital technology. Imagine building a copying machine out of wet, squishy material. How many errors do you think it would make in reading and writing its copied information? Yet when that wet squishy material is one of the cells in your body and the information is encoded in DNA then the number of errors is less than one in a billion.
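For readers who like to see the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The scribe’s error rate is an assumed, purely illustrative figure; the one-in-a-billion figure is the DNA replication error rate quoted above.

```python
# Back-of-the-envelope comparison of copying fidelity (illustrative figures only).
characters = 1_000_000 * 1_000   # ~a million characters per book, a thousand books

scribe_error_rate = 1e-3         # assumed: one slip per thousand characters copied by hand
dna_error_rate = 1e-9            # roughly one error per billion letters, as quoted above

print(f"Characters to copy:             {characters:,}")
print(f"Expected errors by a scribe:    {characters * scribe_error_rate:,.0f}")
print(f"Expected errors by DNA copying: {characters * dna_error_rate:,.0f}")
# A scribe at that rate would make about a million slips; DNA replication, about one.
```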
High-fidelity copying is crucial for life because the extraordinary complexity of living tissue requires an equally complex instruction set, in which a single error may be fatal. The genome in our cells consists of about three billion genetic letters that encode about twenty thousand genes, but the genomes of even the simplest self-replicating microbes, such as those that live under the Vostok ice, consist of several thousand genes written into several million genetic letters. Although most organisms tolerate a few mutations at each generation, allowing more than a handful into the next generation can lead to severe problems, such as the genetic diseases we humans suffer, or even to nonviable offspring. Also, whenever the cells in our body—blood cells, skin cells, etc.—replicate, they must also replicate their DNA to pass on to their daughter cells. Errors in this process can lead to cancer.*3
But to understand how quantum mechanics is central to heredity we must first visit Cambridge in 1953 where, on February 28, Francis Crick rushed into the Eagle pub and declared that he and James Watson had “discovered the secret of life.” Later that year they published their landmark paper,2 which unveiled a structure and described a set of simple rules that provided the answers to two of the most fundamental mysteries of life: how biological information is encoded and how it is inherited.
What tends to be emphasized in many accounts of the discovery of the genetic code is arguably a feature of secondary significance: that DNA adopts a double-helical structure. This is indeed remarkable, and the elegant structure of DNA has rightly become one of the most iconic images in science, reproduced on T-shirts and websites and even in architecture. But the double helix is essentially just a scaffold. The real secret of DNA lies in what the helix supports.
As we outlined briefly in chapter 2, the helical structure of DNA (figure 7.1) is provided by a sugar–phosphate backbone that carries the actual message of DNA: the strings of nucleic acid bases, guanine (G), cytosine (C), thymine (T) and adenine (A). Watson and Crick recognized that this linear sequence formed a code—and this, they proposed, was the genetic code.
In the last line of their historic paper, Watson and Crick suggested that the structure of DNA also provided a solution to the second of life’s great mysteries: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” What hadn’t escaped their notice was a crucial feature of the double helix, that the information on one of its strands—its sequence of bases—is also present as an inverse copy on the other strand: an A on one strand is always paired with a T on the other and a G is always paired with a C. The specific pairing between the bases on opposite strands (an A:T pair or a G:C pair) is actually provided by weak chemical bonds, called hydrogen bonds. This “glue” holding two molecules together is essentially a shared proton and is central to our story, so we will be considering its nature in more detail shortly. But the weakness of the bonding between the paired DNA strands immediately suggested a copying mechanism: the strands could be pulled apart and each could act as a template on which to build its complementary partner to make two copies of the original double-strand. This is precisely what happens when genes are copied during cell division. The two strands of the double helix with their complementary information are pulled apart to allow an enzyme called DNA polymerase access to each separated strand. The enzyme then attaches to a single strand and slides along the chain of nucleotides, reading each genetic letter and, with almost unerring accuracy, inserting a complementary base into the growing strand: whenever it sees an A it inserts a T, whenever it sees a G it inserts a C, and so on until it has made a complete complementary copy. The same process is repeated on the other strand, giving rise to two copies of the original double helix: one for each daughter cell.
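As a toy illustration of that copying rule, the short Python sketch below builds a complementary strand from a template by pairing A with T and G with C. It is, of course, only a caricature of what DNA polymerase actually does (no chemistry, no proofreading, no strand direction), but it captures the template logic.

```python
# Toy model of template copying: each base on a template strand dictates its
# complementary partner on the new strand (A pairs with T, G pairs with C).
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def copy_strand(template: str) -> str:
    """Build the complementary strand for a single template strand."""
    return "".join(PAIRING[base] for base in template)

def replicate(strand1: str, strand2: str) -> list[tuple[str, str]]:
    """Pull the two strands apart and build a new partner for each, giving two
    daughter double helices (ignoring proofreading, direction and chemistry)."""
    return [(strand1, copy_strand(strand1)), (copy_strand(strand2), strand2)]

original_strand = "ATGGTACC"
for daughter in replicate(original_strand, copy_strand(original_strand)):
    print(daughter)
# Both daughters carry exactly the same paired sequence as the original helix.
```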
Figure 7.1: The structure of DNA: (a) shows Watson and Crick’s double helix; (b) shows a close-up of the paired genetic letters A and T; (c) shows a close-up of the paired genetic letters G and C. In both instances, the hydrogen bonds—shared protons—that link the two bases are indicated as dotted lines. In this standard (canonical) Watson and Crick base pairing the bases are in their normal, non-tautomeric form.
This deceptively simple process underpins the propagation of all life on our planet. But when Schrödinger argued in 1944 that the extraordinarily high degree of fidelity of heredity could not be accounted for by classical laws—genes, he insisted, were just too small for their regularity to be based on the “order from disorder” rules—he proposed that genes must instead be some kind of aperiodic crystal. Are genes aperiodic crystals?
Crystals, such as salt grains, tend to have distinctive shapes. Sodium chloride (common salt) crystals are cubes, whereas water molecules in ice form hexagonal prisms that grow into the marvelously diverse forms of snowflakes. These shapes are a consequence of the ways molecules can pack together inside the crystal, so, ultimately, they are determined by the quantum laws that determine the shape of molecules. But standard crystals, although highly ordered, don’t encode much information because each repeated unit is the same as all the others—a bit like a tessellated wallpaper pattern—so a simple rule can describe the entire crystal. Schrödinger proposed that genes were what he called aperiodic crystals: that is, crystals with a similar repeated molecular structure to standard crystals, but modulated in some way, for example with different intervals or periods (hence “aperiodic”) between the repeats or different structures in the repeats—more like complex tapestry than wallpaper. He proposed that these modulated repeated structures encode genetic information, and that, like crystals, their order would be encoded at the quantum level. Remember that this was a decade before Watson and Crick: years before the structure of a gene, or even what genes were made of, was known.
Was Schrödinger right? The first obvious point is that the DNA code is indeed made of a repeated structure—the DNA bases—that is aperiodic in the sense that each repeating unit can be occupied by one of four different bases. Genes really are aperiodic crystals, just as Schrödinger predicted. But aperiodic crystals don’t necessarily encode information at the quantum level: the irregular grains on a photographic plate are made of silver salt crystals, yet the image they record is not stored at the quantum level. To see if Schrödinger was also right about genes being quantum entities we need to take a deeper look at the structure of DNA bases, and in particular at the nature of the complementary base-pair bonding, T to A and C to G.
The DNA pairing that holds the genetic code is rooted in the chemical bonds that hold the complementary bases together. As we have already mentioned, these bonds, called hydrogen bonds, are formed by single protons, essentially nuclei of hydrogen atoms, which are shared between two atoms, one in each of the complementary bases on opposite strands: it is these that hold the paired bases together (figure 7.1). Base A has to pair with base T because each A holds protons at precisely the right positions to form hydrogen bonds with a T. An A base cannot pair with a C base because the protons would not sit in the right places to make the bonds.
This proton-mediated pairing of nucleotide bases is the genetic code that is replicated and passed on at each generation. And this isn’t just a one-off transfer of information—like a coded message written on a “one-time” pad that is destroyed after use. The genetic code has to be continuously read throughout the life of the cell to direct the protein-making machinery to make the engines of life, enzymes, and thereby orchestrate all the other activities of the cell. This process is performed by an enzyme called RNA polymerase that, like DNA polymerase, reads the positions of those coding protons along the DNA chain. Just as the meaning of a message or the plot of a book is written into the position of letters on a page, so the positions of protons on the double helix determine the story of life.
The Swedish physicist Per-Olov Löwdin was the first to point out what seems obvious in hindsight: that the protons’ position is determined by quantum, not classical, laws. So the genetic code that makes life possible is inevitably a quantum code. Schrödinger was right: genes are written in quantum letters, and the fidelity of heredity is provided by quantum rather than classical laws. Just as the shape of a crystal is determined ultimately by quantum laws, so the shape of your nose, the color of your eyes and aspects of your character are determined by quantum laws operating within the structure of a single molecule of DNA that you inherited from one or other of your parents. As Schrödinger predicted, life works via order that goes all the way down from the structure and behavior of whole organisms to the position of protons along its DNA strands—order from order—and it is this order that is responsible for the fidelity of heredity.
But even quantum replicators make the occasional mistake.
Life couldn’t have evolved on our planet and adapted to its many challenges if the process of copying the genetic code was always perfect. For example, the microbes swimming in those temperate Antarctic lakes many hundreds of thousands of years ago would have been well adapted to life in a relatively warm and bright environment. When the ice roof sealed their world, those microbes that copied their genomes with 100 percent fidelity would almost certainly have perished. But many microbes made a few mistakes in the copying process and generated mutant daughters slightly different from themselves. Those daughters whose differences better equipped them to survive in a colder, darker environment would have thrived and gradually, over thousands of not-quite-perfect copying events, the descendants of those trapped microbes would have become well adapted to life in the submerged lake.
Once again, this process of adaptation through mutation (DNA replication errors) within Lake Vostok is a microcosm of the process that has been taking place around the globe for billions of years. The earth has suffered many major catastrophes throughout its long history, from huge volcanic eruptions to ice ages and meteor impacts. Life would have perished if it hadn’t adapted to change via copying errors. Just as important, mutations have also been the driver of the genetic changes that turned the simple microbes that first evolved on our planet into the hugely diverse biosphere of today. A little infidelity goes a long way, given sufficient time.
As well as proposing that quantum mechanics was the source of the fidelity of heredity, Erwin Schrödinger made another bold suggestion in his 1944 book, What Is Life?. He speculated that mutations may represent some kind of quantum jump within the gene. Is this plausible? To answer this question we need first to explore a controversy that goes to the heart of evolutionary theory.
It is often stated that evolution was “discovered” by Charles Darwin, but the fact that organisms have changed over geological time had been familiar to naturalists for at least a century before Darwin through the study of fossils. Indeed, Charles’s grandfather, Erasmus Darwin, had been a keen evolutionist. But probably the most famous pre-Darwinian evolutionary theory was put forward by a French aristocrat with the impressive title of Jean-Baptiste Pierre Antoine de Monet, Chevalier de Lamarck.
Born in 1744, Lamarck was trained as a Jesuit priest, but on his father’s death inherited just enough money to buy a horse on which he rode off to become a soldier and fight in the Pomeranian War against Prussia. His soldiering career was cut short when he was wounded, and he returned to Paris to work as a bank clerk, while studying botany and medicine in his spare time. He eventually found a job as a botanical assistant at the Jardin du Roi (the King’s Garden), until the revolution removed the head of his employer. But Lamarck thrived in postrevolutionary France, gaining a chair at the University of Paris, where he switched the focus of his studies from plants to invertebrates.
Lamarck is one of the most underappreciated of the great scientists, at least in the Anglo-Saxon world. As well as coining the term “biology” (from the Greek bios, life) he came up with a theory of evolution that did at least provide a plausible mechanism for evolutionary change, half a century before Darwin. Lamarck pointed out that organisms are able to modify their bodies in response to the environment during their lifetimes. For example, farmers accustomed to hard physical toil generally develop more muscular bodies than bank clerks. Lamarck then claimed that these acquired changes could be inherited by offspring and descendants, and thereby drive evolutionary change. His most famous and most mocked example is that of the imaginary antelope that stretched its neck to graze on the highest leaves in the tree. Lamarck proposed that the antelope’s descendants inherited the acquired characteristic of the elongated neck and their progeny went through the same process until they eventually evolved into giraffes.
The Lamarckian theory of inherited adaptive change was generally ridiculed in the Anglo-Saxon world, as there was abundant evidence that characteristics acquired during an animal’s lifetime are not usually inherited. For example, fair-skinned northern Europeans whose ancestors migrated to Australia a couple of hundred years ago become suntanned if they spend a lot of time outdoors but, out of the sun, their children will be just as pale as those ancestors. The adaptive change in response to strong sunshine, a suntan, is clearly not inherited. So, after the publication of On the Origin of Species in 1859, Lamarckian evolutionary theory was eclipsed by Darwin’s theory of natural selection.*4
It is Darwin’s version of evolution that is emphasized today—the notion of the survival of the fittest, with an unforgiving nature honing the well adapted from its less perfect progeny. But natural selection is only half the story of evolution. For evolution to be successful, natural selection needs a source of variation on which to cut its teeth. This was a great puzzle for Darwin because, as we have already discovered, heredity is characterized by a remarkably high degree of fidelity. This may not be immediately apparent in sexual organisms that appear to be different from their parents, but sexual reproduction only reshuffles existing parental traits to generate offspring. In fact, in the early nineteenth century it was generally believed that the mixing of traits in sexual reproduction proceeds rather like the mixing of paint. If you take several hundred tins of paint of varied colors and mix half a tin of one with half a tin of another and repeat this process thousands of times, then you will eventually end up with several hundred tins of grey paint: the individual variation will be blended toward a population average. But Darwin needed variation to be continually maintained and indeed added to, if it was to be the source of evolutionary change.
Darwin believed that evolution proceeded very gradually by natural selection acting on tiny heritable variation:
Natural selection can act only by the preservation and accumulation of infinitesimally small inherited modifications, each profitable to the preserved being; and as modern geology has almost banished such views as the excavation of a great valley by a single diluvial wave, so will natural selection, if it be a true principle, banish the belief of the continued creation of new organic beings, or of any great and sudden modification in their structure.3
But the source of this raw material for evolution—the “infinitesimally small inherited modifications”—was a great mystery. Oddities or “sports” with heritable characteristics were well known to nineteenth-century biologists: for example, a sheep with extremely short legs was born on a New England farm in the late eighteenth century and was bred from to produce a short-legged variety called Ancon sheep that are easier to manage because they cannot jump fences. However, Darwin believed that these sports couldn’t be the drivers of evolution because the changes involved were too big, generating often bizarre creatures that would be very unlikely to survive in the wild. Darwin had to find a source of smaller, less dramatic, heritable changes to provide the infinitesimally small variations needed for his theory to work. He never really resolved this problem in his lifetime. Indeed, in later editions of the Origin of Species he even resorted to a form of Lamarckian evolutionary theory to generate heritable minor variation.
Part of the solution had already been discovered during Darwin’s lifetime by the Czech monk and plant breeder Gregor Mendel, whom we met in chapter 2. Mendel’s experiments with peas demonstrated that small variations in pea shape or color were indeed stably inherited: that is—crucially—these traits did not blend but bred true generation after generation, though often skipping generations if the character was recessive rather than dominant. Mendel proposed that discrete heritable “factors,” what we now call genes, encode biological traits and are the source of biological variation. So instead of seeing sexual reproduction in terms of tins of paint being mixed, think of pots of marbles of an immense variety of colors and patterns. Each mixing generation swaps half of the marbles from one pot with half from another pot. Crucially, even after thousands of generations, the individual marbles retain their distinct colors, just as traits may be transmitted without change for hundreds or thousands of generations. Genes thereby provide a stable source of variation on which natural selection can act.
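The difference between the two pictures is easy to see in a toy simulation. In the Python sketch below (a hedged illustration with made-up population sizes and trait values, not a model of real genetics), “paint-pot” blending averages the trait values of two parents, while “marble” inheritance passes on intact allele values; after fifty generations the blended population has lost nearly all its variation, whereas the particulate population keeps most of it.

```python
import random
import statistics

# Illustrative comparison of blending ("paint") versus particulate ("marble")
# inheritance; population size, trait values and generation count are made up.
random.seed(1)
POP, GENERATIONS = 500, 50

# Blending: each offspring's trait value is simply the average of two random parents.
paint = [random.uniform(0.0, 1.0) for _ in range(POP)]
for _ in range(GENERATIONS):
    paint = [(random.choice(paint) + random.choice(paint)) / 2 for _ in range(POP)]

# Particulate: each offspring draws one intact "marble" (allele) from each of two parents.
marbles = [(random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)) for _ in range(POP)]
for _ in range(GENERATIONS):
    marbles = [(random.choice(random.choice(marbles)),
                random.choice(random.choice(marbles))) for _ in range(POP)]

print("Trait variance after blending inheritance:   ",
      round(statistics.pvariance(paint), 5))
print("Trait variance after particulate inheritance:",
      round(statistics.pvariance([(a + b) / 2 for a, b in marbles]), 5))
# Blending collapses variation toward a population average ("grey paint");
# particulate inheritance preserves it for natural selection to act on.
```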
Mendel’s work was mostly ignored during his lifetime and forgotten after it; so, as far as we know, Darwin was not aware of Mendel’s theory of “heritable factors” and its potential solution to the blending puzzle. So the problem of finding the source of the heritable changes that drive evolution led to a decline in support for Darwinian evolutionary theory toward the end of the nineteenth century. But as the century turned, Mendel’s ideas were revived by several botanists studying plant hybridization who discovered laws governing the inheritance of variation. Like all good scientists who think they have found something new, they searched through the existing literature before publishing their results; and they were astonished to discover that their laws of inheritance had been described several decades earlier by Mendel.
The rediscovery of Mendelian factors, now renamed “genes,”*5 provided a solution to Darwin’s blending puzzle, but they didn’t immediately solve the problem of finding the source of novel genetic variation needed to drive long-term evolutionary change, since genes appeared to be inherited without alteration. Natural selection can act to change the mix of the gene marbles at each generation but, on its own, it doesn’t make any new marbles. This impasse was broken by one of the botanists who rediscovered Mendelian genetics, Hugo de Vries, who was walking through a potato field when he spotted a completely novel variety of the evening primrose, Oenothera lamarckiana, taller than the usual plant and with oval-shaped petals rather than the familiar heart-shaped petals. He recognized this flower as a “mutant”; and, more important, he showed that the mutant traits were passed on to the plant’s progeny, so they were inherited.
The geneticist Thomas Hunt Morgan took the study of de Vries’s mutations into the laboratory at Columbia University in the early 1900s, working with the ever-amenable fruit fly. He and his team exposed the flies to strong acids, X rays and toxins in an effort to create mutants. Finally, in 1910, a fly emerged from its pupa with white eyes, and the team demonstrated that, as with de Vries’s oddly shaped primroses, the mutant trait bred like a Mendelian gene.
The marriage of Darwinian natural selection with Mendelian genetics and mutation theory eventually led to what is often known as the neo-Darwinian synthesis. Mutation was understood to be the ultimate source of heritable genetic variations that are mostly of little effect and sometimes even harmful, but occasionally make mutants fitter than their parents. The process of natural selection then kicks in to weed out less-fit mutants from a population while allowing the more successful variants to survive and proliferate. Eventually, fitter mutants become the norm and evolution proceeds by “the preservation and accumulation of infinitesimally small inherited modifications.”
A key component of the neo-Darwinian synthesis is the principle that mutations occur randomly; variation is not generated in response to an environmental change. So when the environment changes, a species has to wait for the right mutation to come along—through random processes—in order to track that change. This is in contrast to the Lamarckian idea of evolution, which proposed that heritable adaptation—the giraffe’s longer neck—arises in response to an environmental challenge and is then passed on to offspring.
In the early twentieth century it wasn’t yet clear whether heritable mutations occurred randomly, as the neo-Darwinians believed, or were generated in response to environmental challenges, as the Lamarckians believed. Remember that Morgan treated his flies with noxious chemicals or radiation to generate mutations. Perhaps, in response to these environmental challenges, the flies generated novel variations that helped them survive. Like Lamarck’s giraffe, they might have metaphorically stretched their necks, and then passed this adaptive trait on to their descendants as a heritable mutation.
Classic experiments performed by Salvador Luria, James Watson’s PhD supervisor, and Max Delbrück at Indiana University in 1943 set out to test the rival theories. By this time bacteria had replaced fruit flies as the favored subjects of evolutionary studies because of their ease of growth in the laboratory and fast generation times. It was known that bacteria could be infected with viruses but that, if repeatedly exposed, they would rapidly evolve resistance by acquiring mutations. This offered an ideal situation in which to test the rival neo-Darwinian and Lamarckian theories of mutation. Luria and Delbrück asked whether bacterial mutants able to resist viral infection already existed in the population, as predicted by neo-Darwinism, or arose only in response to an environmental challenge by a virus, as predicted by Lamarckism. The two scientists found that the mutants occurred at pretty much the same rate whether the virus was present or absent. To put it another way, the mutation rate was not affected by the selective pressure of the environment. Their experiments earned them a Nobel Prize in 1969 and established the principle of the randomness of mutation as a cornerstone of modern evolutionary biology.
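The logic of their fluctuation test can be captured in a short simulation. The sketch below (using numpy, with made-up rates and population sizes, not a reconstruction of the original experiment) shows the contrast: if resistant mutants arise at random during growth, a few cultures inherit early “jackpot” mutations and the mutant counts vary wildly from culture to culture; if resistance were induced only on exposure to the virus, the counts would cluster tightly, Poisson-fashion, around the mean. Luria and Delbrück saw the jackpots.

```python
import numpy as np

# Toy fluctuation test in the spirit of Luria and Delbrück; all numbers are illustrative.
rng = np.random.default_rng(0)

MUTATION_RATE = 1e-7   # assumed chance of a resistance mutation per cell division
GENERATIONS = 21       # each culture grows from one cell to about two million cells
CULTURES = 200

def grow_culture() -> int:
    """Mutations arise at random during growth (the neo-Darwinian picture); an early
    mutation founds a large resistant clone, producing an occasional 'jackpot' culture."""
    sensitive, resistant = 1, 0
    for _ in range(GENERATIONS):
        sensitive *= 2
        resistant *= 2
        new_mutants = rng.binomial(sensitive, MUTATION_RATE)
        sensitive -= new_mutants
        resistant += new_mutants
    return resistant

random_during_growth = [grow_culture() for _ in range(CULTURES)]
mean_count = np.mean(random_during_growth)

# The rival (Lamarckian) picture: resistance is induced only on exposure, so each
# culture's mutant count is an independent Poisson draw about the same mean.
induced_on_exposure = rng.poisson(mean_count, CULTURES)

print(f"Random during growth: mean {mean_count:.1f}, variance {np.var(random_during_growth):.1f}")
print(f"Induced on exposure:  mean {np.mean(induced_on_exposure):.1f}, "
      f"variance {np.var(induced_on_exposure):.1f}")
# The wildly inflated variance (jackpot cultures) is the signature Luria and Delbrück observed.
```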
Yet when Luria and Delbrück were performing their experiments in 1943, no one yet knew what these gene marbles were made of, let alone what physical mechanisms were responsible for generating mutations—for changing one marble into another. That all changed in 1953 when Watson and Crick unveiled the double helix. The gene marbles were shown to be made of DNA. The principle that mutations were random then made perfect sense, since well-established causes of mutation, such as radiation or mutagenic chemicals, would tend to damage the DNA molecule randomly along its entire length, causing mutations in whatever genes they affected, irrespective of whether or not the change provided an advantage.
In their second paper on the structure of DNA,4 Watson and Crick suggested that a process called tautomerization, which involves the movement of protons within a molecule, could also be a cause of mutation. As you will be well aware by now, any process that involves the movement of subatomic particles, such as protons, can be quantum mechanical. So, was Schrödinger right? Are mutations a kind of quantum jump?
Take another look at the bottom half of figure 7.1. You will see that we have drawn the hydrogen bond—which, remember, is a shared proton—as a dotted line between two atoms (oxygen, O, or nitrogen, N) on the paired bases. But isn’t a proton a particle? Why, then, is it drawn as a dotted line rather than a single dot? The reason is, of course, that protons are quantum entities that have both particle and wave character: so the proton is delocalized, behaving like a smeared-out entity or a wave that sloshes between the two bases. The position of the H in figure 7.1—denoting the most likely position of the proton—is not halfway between the two bases, but rather is offset to one side: closer to either one strand or the other. This asymmetry is responsible for an extremely important feature of DNA.
Figure 7.2: (a) A standard A–T base pair with the protons in their normal positions; (b) here the paired protons have jumped across the double helix to form the tautomeric form of both A and T.
Let’s consider one possible base pair, such as A–T, with the A on one strand and the T on the other, held together by two hydrogen bonds (protons) where one proton is closer to a nitrogen atom in A and the other is closer to an oxygen atom in T (figure 7.2a), allowing the formation of the A:T hydrogen bond. But remember that “closer than” is a slippery concept in the quantum world where particles don’t have fixed positions but inhabit a range of probabilities of being in many different places at once, including those that can only be reached by tunneling. If the two protons that hold the genetic letters together were each to jump to the other side of their respective hydrogen bonds, then they would each end up closer to the opposite base. This results in the formation of alternative forms of each base called tautomers (figure 7.2b). Each of the DNA bases can therefore exist both in its common canonical form, as seen in Watson and Crick’s double helix structure, and in the rarer tautomer, with its coding protons shifted across to new positions.
But remember that the protons forming the hydrogen bonds in DNA are responsible for the specificity of base-pairing that is used to replicate the genetic code. So, if the pair of coding protons move (in opposite directions), they are effectively rewriting the genetic code. For example, if a genetic letter in a DNA strand is a T (thymine) then in its normal form it pairs, correctly, with A. However, if a double proton swap occurs then both T and A will adopt their tautomeric forms. Of course, the protons may jump back again, but if they happen to be in their rare tautomeric forms*6 at the time the DNA strand is being copied then the wrong bases may be incorporated into the new DNA strands. The tautomeric T can pair with G, rather than A, so G will be incorporated into the new strand where there was an A in the old strand. Similarly, if A is in its tautomeric state when the DNA is being replicated then it will pair with C, rather than T, so the new strand has C where the old strand had T (figure 7.3). In either case, the newly formed DNA strands will carry mutations—changes in the DNA sequence that will be inherited by progeny.
Figure 7.3: In its tautomeric (enol) form, indicated by T* in the figure, T can pair incorrectly with G, rather than its usual partner, A. Similarly, the tautomeric form of A (A*) can pair incorrectly with C, rather than T. If these errors are incorporated during DNA replication then a mutation will result.
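To make the mispairing rules of figure 7.3 concrete, here is a toy extension of the earlier template-copying sketch (a sketch only; the asterisked tautomer notation is our own shorthand): a base caught in its tautomeric form at the moment of copying dictates the wrong partner, and the error is then locked into the sequence at the next round of replication.

```python
# Canonical Watson-Crick pairing versus the mispairing allowed by the rare
# tautomeric forms (written here with a trailing "*"), as in figure 7.3.
CANONICAL = {"A": "T", "T": "A", "G": "C", "C": "G"}
TAUTOMERIC = {"T*": "G", "A*": "C"}   # tautomeric T pairs with G, tautomeric A with C

def copy_base(template_base: str) -> str:
    """Base inserted opposite a template base, honouring a tautomer if one is present."""
    return TAUTOMERIC.get(template_base, CANONICAL[template_base.rstrip("*")])

# A template strand in which one T happens to be tautomeric (T*) as it is copied:
template = ["A", "T*", "G", "C", "A"]
new_strand = [copy_base(base) for base in template]
print(new_strand)                                 # ['T', 'G', 'C', 'G', 'T'] -- a G where an A should be

# At the next replication the protons are back in place, so the misplaced G simply
# templates a C: the original T:A pair has become a G:C pair, a heritable mutation.
print([copy_base(base) for base in new_strand])   # ['A', 'C', 'G', 'C', 'A']
```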
Although this hypothesis is entirely plausible, it has been difficult to obtain direct evidence for it; but, in 2011, nearly sixty years after Watson and Crick published their paper, a group based at Duke University Medical Center in the United States managed to demonstrate that incorrectly paired DNA bases with protons in the tautomeric position can indeed fit into the active site of DNA polymerase (the enzyme that makes new DNA), so are likely to be incorporated into newly replicated DNA to cause mutations.5
So tautomers—with alternative proton positions—appear to be a driver of mutation, and thereby of evolution; but what makes protons move to the wrong position? The obvious “classical” possibility would be that they are occasionally “shaken” across by the constant molecular vibrations going on all around them. However, this requires the availability of sufficient thermal energy to provide the impetus, the “shake.” Just as in the enzyme-catalyzed reactions discussed in chapter 3, the proton has to overcome quite a steep energy barrier to make the move. Alternatively, the protons may be knocked across by a collision with nearby water molecules; but there aren’t many water molecules close to the coding protons in DNA to provide them with such a kick.
But there is another route—one that was found to play an important role in the way enzymes transfer electrons and protons. One of the consequences of the wave-like nature of subatomic particles such as electrons and protons is the possibility of quantum tunneling. The fuzziness in the position of any particle allows it to leak through an energy barrier. We saw in chapter 3 how enzymes utilize quantum tunneling of electrons and protons by bringing molecules close enough together for tunneling to take place. A decade after Watson and Crick published their seminal paper, the Swedish physicist Per-Olov Löwdin, whom we met earlier in this chapter, proposed that quantum tunneling could provide an alternative way for protons to move across hydrogen bonds to generate the tautomeric, mutagenic, forms of nucleotides.
It is important to emphasize that DNA mutations are caused by a variety of different mechanisms, including damage caused by chemicals, ultraviolet light, radioactive decay particles, even cosmic rays. All of these changes take place at a molecular level and so are bound to involve quantum mechanical processes. As yet, however, there is no indication that the weirder aspects of quantum mechanics play a role in these sources of mutations. But if quantum tunneling is shown to be involved in the formation of DNA base tautomers, then quantum weirdness could be playing a role in the mutations that drive evolution.
However, tautomeric forms of DNA bases account for about 0.01 percent of all natural DNA bases, potentially leading to errors of the same scale. This is far higher than the roughly one-in-a-billion mutation rate we find in nature, so if tautomeric bases are indeed present in the double helix then most of the resulting errors must be removed by the various error correction (“proofreading”) processes that help to ensure the high fidelity of DNA replication. Even so, those errors promoted by quantum tunneling that escape the correction machinery may be a source of the naturally occurring mutations that drive the evolution of all life on earth.
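The rough arithmetic behind that claim is worth spelling out (orders of magnitude only, on the assumption that every uncorrected tautomer would cause a miscopy):

```python
# Order-of-magnitude arithmetic only; assumes every uncorrected tautomer causes a miscopy.
tautomer_fraction = 1e-4          # ~0.01 percent of bases in the rare tautomeric form
observed_mutation_rate = 1e-9     # roughly one error per billion copied letters

raw_error_rate = tautomer_fraction
fraction_escaping_repair = observed_mutation_rate / raw_error_rate

print(f"Uncorrected error rate:         ~{raw_error_rate:.0e} per base")
print(f"Fraction escaping proofreading: ~{fraction_escaping_repair:.0e}")
# On these assumptions the proofreading and repair machinery must catch all but
# roughly one in a hundred thousand of the tautomer-induced errors.
```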
Discovering the underlying mechanisms of mutation is not only important for our understanding of evolution; it may also provide insight into how genetic diseases arise or how cells become cancerous, as both of these processes are caused by mutations. However, the problem with testing whether quantum tunneling is involved is that, unlike other known causes of mutations such as chemical mutagens or radiation, it cannot be simply turned on or off. It is therefore not easy to measure mutation rates with and without tunneling to see if they are different.
But there may be an alternative way of detecting a quantum mechanical origin to mutation, one that goes back to the difference between classical information and quantum information. Classical information can be read and reread over and over again without changing its message, whereas quantum systems are always perturbed by measurement. So when the DNA polymerase enzyme scans a DNA base to determine the position of coding protons, it is carrying out a quantum measurement, no different in principle from when a physicist measures the position of a proton in the laboratory. In both processes, the measurement is never innocuous: according to quantum mechanics, any measurement, whether performed by the DNA polymerase enzyme inside a cell or by a Geiger counter in a laboratory, inevitably changes the state of the particle being measured. If the state of that particle corresponds to a letter of the genetic code, then measurement, particularly frequent measurement, would be expected to change that code and potentially cause a mutation. Is there any evidence for this?
Although our entire genome is copied during DNA replication, most readings of our genes take place not during DNA replication but during the processes whereby genetic information is used to direct the synthesis of proteins. The first of these two processes, known as transcription, involves the copying of DNA-encoded information into RNA, a chemical cousin of DNA. The RNA then travels to the protein synthesis machinery to make proteins: this is the second process, known as translation. To distinguish these processes from the copying of genetic information during DNA replication we will refer to them as reading DNA.
A key characteristic of this process is that some genes are read much more often than others. If reading the DNA code during transcription constitutes a quantum measurement, then the more frequently read genes would be expected to be subject to more measurement-induced perturbations, leading to higher rates of mutation. This is indeed what is claimed to have been found in some studies. For example, Abhijit Datta and Sue Jinks-Robertson, from Emory University in Atlanta in the United States, manipulated a single gene in yeast cells such that it was read either just a few times to make small quantities of protein in the cell or lots of times to make loads of protein. They discovered that the rate of mutation in the gene was thirty times higher when it was read at the higher levels.6 A similar study in mouse cells found the same effect,7 and a recent study of human genes concluded that those of our genes that are read at the highest levels tend to be mutated the most.8 This is at least consistent with a quantum mechanical measurement effect, but of course it does not prove that quantum mechanics is involved. The reading of the DNA involves biochemical reactions that may disturb or damage the molecular structure of genes in many different ways, causing mutations, without any recourse to quantum mechanics.
To test whether quantum mechanics is involved in a biological process we require evidence that is hard or impossible to make sense of without quantum mechanics. In fact, it was a puzzle of this sort that first got the two of us interested in the role that quantum mechanics might play in biology.
In September 1988, a paper on bacterial genetics written by a very eminent geneticist called John Cairns, working at the Harvard School of Public Health in Boston, was published in Nature.9 The paper appeared to contradict that fundamental tenet of neo-Darwinian evolutionary theory: the principle that mutations, the source of genetic variation, occur randomly and that the direction of evolution is supplied by natural selection—the “survival of the fittest.”
Cairns, an Oxford-educated British physician and scientist, worked in Australia and Uganda before taking a sabbatical in 1961 at the world-famous Cold Spring Harbor Laboratory in New York State. From 1963 to 1968 he served as director of the laboratory, which was a hotbed for the emerging science of molecular biology, particularly in the 1960s and 1970s when the scientists working there included figures such as Salvador Luria, Max Delbrück and James Watson. Cairns had actually met Watson many years earlier, when the rather disheveled future Nobel laureate delivered a rambling presentation at a meeting in Oxford, and had not been hugely impressed; in fact, Cairns’s overall impression of one of science’s immortals was: “I thought he was a complete nutter.”10
At Cold Spring Harbor, Cairns carried out several landmark studies. For example, he demonstrated how DNA replication starts at a single point and then moves along the chromosome, rather like a train running on a track. He must also have eventually warmed to James Watson, because in 1966 they jointly edited a book on the role of bacterial viruses in the development of molecular biology. Then, in the 1980s, he took an interest in that earlier Nobel Prize–winning study by Luria and Delbrück that appeared to have proved that mutations occur randomly, before an organism is exposed to any environmental challenge. Cairns reckoned that there was a weakness in Luria and Delbrück’s experimental design, through which they were supposed to have proved that bacterial mutants resistant to a virus were already present in the population, rather than arising in response to exposure to the virus.
Cairns pointed out that any bacteria that weren’t already resistant to the virus wouldn’t have had time to develop new mutations adaptively in response to the challenge because they would have been killed very quickly by the virus. He came up with an alternative experimental design that gave the bacteria a better chance of developing mutations in response to a challenge. Instead of looking for mutations that conferred resistance to a deadly virus, he instead starved the cells and looked for mutations that would allow bacteria to survive and grow. Like Luria and Delbrück, he saw that a few mutants managed to grow straight away, showing that they were preexisting in the population; but, in contrast to the earlier study, he observed many more mutants appearing much later, apparently in response to starvation.
Cairns’s result contradicted the well-established principle that mutations occur randomly; his experiments appeared to demonstrate that mutations tended to occur when they were advantageous. The findings seemed to support the discredited Lamarckian theory of evolution—the starved bacteria weren’t growing long necks but, just like Lamarck’s imaginary antelope, they appeared to be responding to an environmental challenge by generating heritable modifications: mutations.
Cairns’s experimental findings were soon confirmed by several other scientists. Yet the phenomenon had no explanation within contemporary genetics and molecular biology. There was simply no known mechanism that would allow a bacterium, or indeed any creature, to choose which genes to mutate and when. The finding also appeared to contradict what is sometimes called the central dogma of molecular biology: the principle that information flows only one way, from DNA through RNA out to proteins and thence to the cell or organism and its environment, never in the reverse direction. If Cairns’s results were right, then cells must also be capable of reversing the flow of genetic information, allowing the environment to influence what is written in DNA.
The publication of Cairns’s paper unleashed a storm of controversy and an avalanche of letters to Nature attempting to make sense of the finding. As a bacterial geneticist, Johnjoe was profoundly puzzled by the phenomenon of “adaptive mutations,” as they came to be known. At the time, he was reading a lay account of quantum mechanics, John Gribbin’s popular In Search of Schrödinger’s Cat,11 and couldn’t help pondering whether quantum mechanics, particularly that enigmatic process of quantum measurement, could provide an explanation of the Cairns result. Johnjoe was also familiar with Löwdin’s claim that the genetic code is written in quantum letters; so, if Löwdin were right, the genome of Cairns’s bacteria would have to be considered as a quantum system. And if that were true, then inquiring whether a mutation was present would constitute a quantum measurement. Could the perturbing influence of quantum measurement provide an explanation of Cairns’s odd result? To explore this possibility we need to take a closer look at Cairns’s experimental setup.
Cairns had introduced millions of cells of the gut bacterium E. coli*7 onto the surface of a gel in dishes containing only lactose sugar as food. The particular strain of E. coli that Cairns used had an error in one of its genes that made it incapable of eating lactose, so the bacteria starved. But they didn’t die; they just hung around on the surface of the gel. What surprised Cairns and caused all the controversy was that they didn’t stay that way for long. After several days he observed colonies appearing on the surface of the gel. Each colony was composed of mutants descended from a single cell in which a mutation had corrected the error in the DNA code of the defective lactose-eating gene. The mutant colonies continued to appear over several days, until the plates eventually dried out.
According to standard evolutionary theory, as exemplified by the Luria–Delbrück experiment, evolution of the E. coli cells should have required the presence of preexisting mutants in the population. A few of these did indeed appear early on in the experiment, but they were far too few to account for the abundant lactose-eating colonies that appeared several days later, after the bacteria were placed in the lactose environment (in which the mutations could provide an adaptive advantage to the cells—hence the term “adaptive mutations”).
Cairns ruled out trivial explanations of the phenomenon, such as a generally increased rate of mutation. He also demonstrated that adaptive mutations would occur only in environments where the mutation provided an advantage. Yet his results could not be accounted for by classical molecular biology: mutations should occur at the same rate irrespective of whether lactose was present or not. However, if, as Löwdin argued, genes are essentially quantum information systems, then the presence of lactose would potentially constitute a quantum measurement as it would reveal whether or not the cell’s DNA had mutated: a quantum-level event dependent on the positions of single protons. Could quantum measurement account for the difference in mutation rates that Cairns observed?
Johnjoe decided to offer his ideas up for scrutiny in the Physics Department at the University of Surrey. Jim was in the audience and, although skeptical, was nevertheless intrigued. We decided to work together to investigate whether the idea had any quantum legs and eventually came up with a “hand-wavy”*8 model that we proposed could account for adaptive mutations; this we published in the journal Biosystems in 1999.12
The model starts from the premise that protons can behave quantum mechanically, so that those in the DNA of the starving E. coli cells will occasionally tunnel over into the tautomeric (mutagenic) position, and can just as easily tunnel back again to their original positions. Quantum mechanically, the system must be considered to be in a superposition of both states, tunneled and not tunneled, with the proton described by a wave function that is spread over both sites, but which is asymmetrical—giving a much greater probability of finding the proton in the nonmutated position. Here, there is no experimental measuring device or apparatus to record where the proton is; but the measurement process we discussed in chapter 4 is carried out by the surrounding environment. This is taking place all the time: for example, reading of DNA by the protein synthesis machinery forces the proton to “make up its mind” on which side of the bond it is sitting—either in the normal (no growth) or in the tautomeric (growth) position; and mostly it will be found in the normal position.
Let us imagine Cairns’s plate of E. coli cells as a box of coins, with each coin representing the proton in the key nucleotide base in the lactose utilization gene.*9 This proton can exist in one of two states: “heads,” corresponding to the normal, nontautomeric position, or “tails,” corresponding to the rare tautomeric position. We start off with all the coins being heads-up, corresponding to the start of the experiment with the proton in the nontautomeric position. But, quantum mechanically, the proton is always in a superposition of both normal and tautomeric positions, so our imaginary quantum coins will similarly be in a superposition of heads and tails, with most of the probability wave favoring the heads-up, normal state. But the proton position will eventually be measured by its surrounding environment within the cell, forcing it to choose where it is, which we can imagine as a kind of molecular coin toss, with an overwhelming probability of throwing a head. The DNA may occasionally be copied,*10 but any new strand will carry only the genetic information already there, which nearly always encodes the defective enzyme, so the cell will continue to starve.
But remember that the coin represents a quantum particle, a proton in the DNA strand; so even after measurement it is free to slip back into the quantum world to reestablish the original quantum superposition. So after our coin has been tossed and landed on heads, it will be tossed again, and again and again. Eventually, it will land on tails. In this state, the DNA may again be copied, but now it will make the active enzyme. In the absence of lactose, this will still not make any difference because, without lactose, the gene is useless. The cell will continue to starve.
However, if lactose is present then the situation will be very different, because the corrected gene made by the cell will allow the cell to consume lactose, grow and replicate. A return to the quantum superposition state will no longer be possible. The system will be irreversibly captured into the classical world as a mutant cell. We can conceive of this as—only in the presence of lactose—taking those rare coins that fall on tails out of the box and placing them in another box, marked “mutants.” Back in the original box, the remaining coins (the E. coli cells) will continue to be tossed and, whenever tails turns up, the coin will be scooped out and transferred to the mutant box. Gradually, the mutant box will accumulate more and more coins. Translated back into the experiment, the mutants able to grow on lactose will continuously appear in the experiment, precisely as Cairns discovered.
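For readers who like to see the arithmetic, here is a minimal sketch (in Python) of the coin-box picture. Every number in it (the cell count, the odds of a “tails” measurement, the number of measurement rounds) is invented purely for illustration; this is not our published model, just the analogy turned into a toy simulation. The point it makes is simply that mutants accumulate steadily when lactose is present and not at all when it is absent.

import numpy as np

def simulate(n_cells=100_000, n_measurements=200, p_tails=1e-4,
             lactose_present=True, seed=1):
    """Toy version of the coin-box analogy. Each 'coin' is the coding
    proton of one starving cell; a 'tails' outcome stands for the rare
    tautomeric position when the environment measures the proton."""
    rng = np.random.default_rng(seed)
    active, mutants, history = n_cells, 0, []
    for _ in range(n_measurements):
        # each round of environmental 'measurement' finds a small
        # fraction of protons in the tautomeric ('tails') position
        tails = rng.binomial(active, p_tails)
        if lactose_present:
            mutants += tails      # growth captures the mutation irreversibly
            active -= tails
        # without lactose the proton simply tunnels back, so nothing accumulates
        history.append(mutants)
    return history

print(simulate(lactose_present=True)[-1])    # mutant count keeps climbing
print(simulate(lactose_present=False)[-1])   # stays at zero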
We published our model in 1999, but it did not gather many converts. Undeterred, Johnjoe went on to write the book Quantum Evolution,13 claiming a wider role for quantum mechanics in biology and evolution. But remember, this was before the role of proton tunneling in enzymes was widely accepted, and quantum coherence hadn’t yet been discovered in photosynthesis, so scientists were rightly skeptical about the idea of weird quantum phenomena being involved in mutation; and in truth, we skipped over several issues.14 Also, the phenomenon of adaptive mutations became messy. It was discovered that the starved E. coli cells in Cairns’s experiment were eking out a living from the trace nutrients of dead and dying cells and would occasionally replicate and even exchange DNA. Conventional explanations of adaptive mutations started to appear, which claimed to account for the raised mutation rates by a combination of several processes: a general increase in the mutation rate of all genes; cell death and release of the dead cells’ mutated DNA; and, finally, selective uptake and amplification of the mutated lactose gene by surviving cells that managed to incorporate it into their genome.15
Whether these “conventional” explanations can fully account for adaptive mutations remains unclear. Twenty-five years after Cairns’s original paper appeared, the phenomenon remains puzzling, as evidenced by the continued appearance of papers investigating its mechanism,16 not only in E. coli but also in several other microbes. As things currently stand, we do not exclude the possibility that quantum tunneling is involved in adaptive mutations; but we cannot claim that it is the only explanation.
In the absence of a strong need to implicate quantum mechanics in adaptive mutations, we recently decided to take a step back and investigate the more fundamental question of whether quantum tunneling plays a role in mutation at all. As you will remember, the case for quantum tunneling being involved in mutation was first made on theoretical grounds by Löwdin and has since been supported by several theoretical studies,17 and also by experimental studies of what are called “model base pairs,” which are chemicals designed to have the same base-pairing properties as the bases in DNA, but are more amenable to experimentation. However, no one has yet proved that proton tunneling causes mutation. The problem is that it has to compete with several other causes of mutations and mutation repair mechanisms, which makes unraveling its role, if it exists, all the more difficult.
To investigate this issue, Johnjoe has borrowed ideas from the enzyme experiments described in chapter 3, where you may recall that the involvement of proton tunneling was inferred from the discovery of “kinetic isotope effects.” If quantum tunneling is involved in speeding up an enzyme reaction, then replacing a hydrogen nucleus (a single proton) with a deuterium nucleus (consisting of a proton and a neutron) should slow the reaction, since quantum tunneling is highly sensitive to doubling the mass of the particle trying to tunnel. Johnjoe is currently attempting a similar approach for mutation, investigating whether rates of mutation are different in deuterated water (D2O) rather than in ordinary water (H2O). As we write, it seems the rates are indeed changed by the substitution; but much more work needs to be done to be sure that the effect really is due to quantum tunneling, as replacing hydrogen with deuterium could affect many other biomolecular processes that have nothing to do with quantum tunneling.
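The logic behind the deuterium substitution can be made concrete with a back-of-the-envelope calculation. The sketch below uses the textbook WKB formula for tunneling through a rectangular barrier; the barrier height, width and particle energy are guesses chosen only to give plausible orders of magnitude, not values measured for any real DNA hydrogen bond. What it illustrates is that merely doubling the mass of the tunneling particle can cut the tunneling probability by a factor of hundreds, which is why a large isotope effect is taken as a fingerprint of tunneling.

import numpy as np

HBAR = 1.054571817e-34        # J*s
EV = 1.602176634e-19          # joules per electron-volt
M_PROTON = 1.672621924e-27    # kg
M_DEUTERON = 3.343583772e-27  # kg

def wkb_transmission(mass, barrier_ev=0.5, width_m=0.5e-10, energy_ev=0.1):
    """Crude WKB transmission through a rectangular barrier.
    Barrier height, width and particle energy are illustrative guesses only."""
    kappa = np.sqrt(2.0 * mass * (barrier_ev - energy_ev) * EV) / HBAR
    return np.exp(-2.0 * kappa * width_m)

t_h = wkb_transmission(M_PROTON)
t_d = wkb_transmission(M_DEUTERON)
print(f"proton   T ~ {t_h:.1e}")
print(f"deuteron T ~ {t_d:.1e}")
print(f"kinetic isotope ratio T_H/T_D ~ {t_h / t_d:.0f}")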
Jim has focused on investigating whether quantum tunneling of protons in the DNA double helix is feasible on theoretical grounds. When a theoretical physicist tackles a complex problem such as this, he or she tries to create a simplified model that is mathematically tractable while still retaining what are thought to be the most important features of the system or process. Such models can then be ramped up in sophistication and complexity as more of the details are added in order to get ever closer to mimicking the real thing.
The model chosen as the starting point for mathematical analysis in this case can be pictured as a ball (representing the proton) held in place by two springs attached to walls (figure 7.4), one on each side, pulling the ball in opposite directions. The ball tends to rest at the position where the pull from both springs is the same; so if one spring were slightly stiffer (less stretchy) than the other, then the ball would sit closer to the wall to which the stiffer spring is attached. However, there would still have to be some “give” in this spring, such that it would also be possible for the ball to settle in a less stable position closer to the other wall. This then corresponds to what in quantum physics is called a double potential energy well and maps to the situation of a coding proton in the DNA strand, with the left-hand well in the diagram corresponding to the normal position of the proton, whereas the right-hand well corresponds to the rarer tautomeric position. Considered classically, although the proton will be found mostly in the left-hand well, if it receives an energetic enough kick from an outside source, it can occasionally be knocked over to the other (tautomeric) side. But it will always be found in one well or the other. However, quantum mechanics allows the proton to spontaneously tunnel through the barrier, even if it has insufficient energy to clamber over the top: it doesn’t necessarily need a kick. Not only that, but the proton can therefore be in a superposition of two position states (left and right wells) simultaneously.
Figure 7.4: The proton of a hydrogen bond linking two DNA bases can be regarded as being held by two springs, such that it can oscillate from side to side. It has two possible stable positions, modeled here as a double energy well. The left-hand well (corresponding to the unmutated position) is slightly deeper than the right-hand well (the tautomeric position), and so the proton prefers to sit in the left one.
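Readers comfortable with a little numerics can see this lopsided superposition for themselves. The sketch below is a deliberately stripped-down version of the double-well picture: a tilted quartic potential like figure 7.4 (its shape and every number in it are arbitrary, chosen only to give two unequal wells), with the Schrödinger equation discretized on a grid and the lowest-energy state computed. Most of the probability ends up in the deeper, “normal” well, with only a small amount leaking into the shallower, “tautomeric” one.

import numpy as np

# A tilted quartic double well standing in for figure 7.4. The left
# minimum (normal proton position) is deeper than the right (tautomeric).
# Units are dimensionless (hbar = mass = 1); all parameters are illustrative.
x = np.linspace(-2.5, 2.5, 800)
dx = x[1] - x[0]
V = 6.0 * (x**2 - 1.0)**2 + 0.8 * x

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V(x)
diag = 1.0 / dx**2 + V
off = np.full(x.size - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies, states = np.linalg.eigh(H)
prob = states[:, 0]**2          # ground-state probability distribution
prob /= prob.sum()
print(f"probability in left (normal) well:      {prob[x < 0].sum():.3f}")
print(f"probability in right (tautomeric) well: {prob[x >= 0].sum():.3f}")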
Of course, drawing a picture is much easier than writing down a mathematical model that accurately describes the situation. To understand the proton’s behavior we need to map the shape of this potential well, or energy surface, very accurately. This is no trivial matter, as its precise shape depends on many variables. Not only is the hydrogen bond typically part of a large and complex DNA structure consisting of hundreds or even thousands of atoms, it is also immersed in a warm bath of water molecules and other chemicals inside the cell. Moreover, molecular vibrations, thermal fluctuations, chemical reactions initiated by enzymes and even ultraviolet or ionizing radiation can all affect the behavior of the DNA bond both directly and indirectly.
One way of tackling this level of complexity, adopted by Adam Godbeer, a PhD student of Jim’s, involves a powerful mathematical technique, currently popular with physicists and chemists for modeling complex structures, called density functional theory (DFT). This allows the shape of the hydrogen bond’s energy well to be calculated very accurately by taking into account as much of the structural information of the DNA base pair as is computationally possible. Think of DFT’s job as providing a map of all the forces acting on the hydrogen bond due to the pulling, pushing and wobbling of the surrounding atoms of the DNA. This information is then used to calculate the way proton tunneling behaves over time. An added complication is that the presence of the surrounding atoms in the DNA, and of water molecules, continuously affects the proton’s behavior and its ability to quantum tunnel across from one strand of DNA to the other. But this constant influence of the external environment can also be included in the quantum mechanical equations. Godbeer’s calculations18 suggest that, although it is possible for the two protons to tunnel across to their tautomeric positions in the A–T base pair, the probability of their doing so is rather small; further computational modeling work is needed. What the theoretical models do show, however, is that the action of the surrounding environment within the cell actively assists, rather than hinders, the tunneling process.
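The flavor of such calculations can be captured in drastically simplified form: reduce the proton to just two states (normal and tautomeric), couple them by a small tunneling term, and replace the whole cellular environment by a single “dephasing” rate that keeps checking which well the proton is in. The sketch below (with arbitrary numbers, and making no pretence of representing the real DFT energy surface or Godbeer’s actual equations) integrates the resulting master equation. In this toy model a moderate amount of environmental noise increases the population transferred into the tautomeric state, echoing the finding that the environment can assist rather than hinder tunneling.

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # tunneling coupling
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # which-well operator
I2 = np.eye(2, dtype=complex)

def tautomer_population(bias, tunneling, dephasing, t):
    """Two-state proton: |left> = normal, |right> = tautomeric.
    Evolves the density matrix under a Lindblad master equation with
    pure dephasing; all parameters are in arbitrary, illustrative units."""
    H = 0.5 * bias * sz + 0.5 * tunneling * sx
    # Liouvillian superoperator, column-stacking (Fortran-order) convention
    L = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))
         + 0.5 * dephasing * (np.kron(sz.T, sz) - np.kron(I2, I2)))
    rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # start in the normal well
    rho_t = (expm(L * t) @ rho0.reshape(-1, order="F")).reshape(2, 2, order="F")
    return rho_t[1, 1].real                            # tautomeric population

for gamma in (0.0, 0.5):
    p = tautomer_population(bias=2.0, tunneling=0.2, dephasing=gamma, t=50.0)
    print(f"dephasing rate {gamma:.1f}: tautomeric population at t=50 is {p:.3f}")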
What, then, can we surmise at this point about the link between quantum mechanics and genetics? We have seen that quantum mechanics is fundamental to heredity, since our genetic code is written in quantum particles. Just as Erwin Schrödinger predicted, quantum genes encode the classical structure and function of every microbe, plant and animal that has ever lived. This is not an accident, nor is it irrelevant, because high-fidelity copying of genes simply would not work if they were classical structures: they are too small not to be influenced by quantum rules. The quantum nature of genes allowed those Vostok microbes to faithfully replicate their genome over thousands of years, just as it allowed our ancestors to copy their genes over the many millions, indeed billions, of years that stretch back to the dawn of life on our planet. Life could not have survived and evolved on earth if it hadn’t, billions of years ago, “discovered” the trick of encoding information in the quantum realm.*11 On the other hand, whether quantum mechanics plays an important and direct role in genetic mutations, the infidelity in the copying of genetic information that is so vital for evolution, remains to be seen.
*1 The bottom of the glacier that sits on the lake today was laid down more than four hundred thousand years ago but the lake may have been frozen for a lot longer. It isn’t clear whether the current glacier replaced earlier glaciers or the lake experienced ice-free periods between ice ages.
*2 Organisms that live in extreme (from our perspective) environments.
*3 Cancers are caused by mutations in genes that control cell growth, leading to uncontrolled cell growth and thereby tumors.
*4 Of course, it could as easily be called Wallace’s theory of natural selection, after the great British naturalist and geographer Alfred Russel Wallace who, during a bout of malarial fever while traveling in the tropics, came up with virtually the same idea as Darwin.
*5 The term “genetics” was coined in 1905 by William Bateson, an English geneticist and a proponent of Mendel’s ideas; the term “gene” was suggested four years later by Danish botanist Wilhelm Johannsen to distinguish between the outward appearance of an individual (its phenotype) and its genes (its genotype).
*6 The alternative tautomeric forms of guanine and thymine are known as enol or keto, depending on the position of the coding protons; whereas cytosine and adenine tautomers are known as amino or imino forms.
*7 Escherichia coli.
*8 By which we mean one lacking a rigorous mathematical framework.
*9 In reality there will be more than one hydrogen bond holding the base pair together, but the argument holds equally well if we simplify the picture to just one.
*10 Starved and stressed cells may continue to attempt to copy their DNA, but the replication is likely to be aborted because of the limited resources available, so only short stretches corresponding to a few genes are made.
*11 This is currently a hot issue in quantum biology—namely, did life discover its quantum advantages, or is quantum mechanics just along for the ride?