11
Thinking Molecularly, Anything Goes
From Mummies to Oil Spills, Doubts to New Directions

To venture an idea is like moving a chess piece forward on the game board; it may be defeated, but it initiates a game that will be won.

—Johann Wolfgang von Goethe, 1749–1832, German writer, philosopher, and scientist
From Maximen und Reflexionen, Kunst und Altertum (1821)

We shall never cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.

—T. S. Eliot, 1888–1965, British (U.S.-born) critic, dramatist, and poet
From “Little Gidding” (1942)

Though the biomarker saga began with attempts to understand the ancient provenance of petroleum and with the concept of “fossil molecules” and search for early forms of life, the explorations of the past 50 years have led organic geochemists far afield of these first endeavors. As Geoff and Max Blumer recognized back in the 1960s, and as microbiologists began to realize in the early 1980s, the usefulness of the biomarker concept is not restricted to geologic time. Most organic geochemists have, at one time or another, applied their techniques and expertise to the resolution of environmental problems, or found a way to address some archaeological mystery. One of the Bristol group’s most vibrant research programs now has its chemists brushing shoulders not with geologists and oceanographers, but with archaeologists and anthropologists concerned with the evolution of human civilizations and societies.

Much of the impetus for the application of biomarker concepts to archaeologists’ questions in the 1970s and 1980s came from petroleum geochemists, not least from Arie Nissenbaum, a geochemist at Israel’s Weizmann Institute of Science who developed a keen interest in the role that geological events and circumstance might have played in the history of civilizations in the Fertile Crescent region. Nissenbaum was fascinated by the bizarre geology and chemistry of the Dead Sea Basin area, where oil seeps and impressive raftlike chunks of asphalt floating on the surface of the lake had long tempted oil prospectors to no avail. Renewed interest in the area’s oil potential in the early 1980s attracted a wave of geochemical studies, and Israeli geochemists scrambled for laboratory resources and funding from abroad. Jürgen, still with Dietrich Welte’s group, did a detailed biomarker study at the behest of an Israeli colleague, and when Nissenbaum saw the results he suggested to Jürgen that they apply Jülich’s considerable GC-MS capability to solving an entirely different sort of mystery.

Excavations of archaeological sites in the vicinity of the Dead Sea had turned up solid chunks of black, sticky material that was used as early as 3000 B.C., either in materials used for construction or as a glue to attach tool heads to wooden handles. Nissenbaum attempted the first geochemical analyses of this material in 1984, using the biomarker techniques that petroleum geochemists employed for correlating source rocks and oils to ascertain that it derived from the Dead Sea asphalt. Some form of asphalt use is evident in early civilizations throughout the Fertile Crescent, and indeed, the material that ranks lowest on the scale of coveted petroleum products in the modern world appears to have been a highly valued commodity for earlier civilizations. That there was an active trade in this commodity was apparent from some of the earliest written records: carved into a stone tablet by a merchant named “Lukulla” in 2039 B.C. is a list of prices for different types of asphalt, which range from “raw bitumen” to some form of processed, ready-to-use building material. But archaeologists had no way of knowing if the black, sticky stuff they often found on the surface of ancient artifacts was in fact asphalt or whether it was some sort of plant resin or wax. Both the extent of asphalt use and the ancient trade routes were subjects of much debate, as Jürgen learned when Nissenbaum contacted him in the mid-1980s and suggested that they do a biomarker analysis of samples from, of all things, Egyptian mummies.

Mummies have been the subject of conjecture and myth throughout the ages, the embalmer’s art shrouded in mystery even while it was still being practiced, with secret potions developed by individual embalmers and passed down to a chosen few, but never recorded. It was an art intended only for the eyes of the gods, and, indeed, the embalmers succeeded in protecting many of their secrets for more than three millennia, notwithstanding frequent human scrutiny of their handicraft. Twentieth-century archaeologists and chemists determined that the bodies were first thoroughly dehydrated with various mixtures of salts, and then treated with some sort of organic balm that sealed them off from moisture and protected them from microbial invasion—but the nature of this balm was unclear, even in the 1980s. The most extensive historical account, written in the fifth century B.C. by the Greek Herodotus, mentions that myrrh, cassia, palm wine, and cedar oil were used. The Sicilian historian Diodorus later reported that asphalt from Palestine’s Dead Sea was a major component, and it was often assumed that this was why the mummies often appeared black. But archaeologists were unclear about the extent of trade between the region and Egypt, and many were convinced that wood ash and pitch, rather than asphalt, had been used to seal the bodies and were the source of the black color. Nissenbaum had obtained samples from the black balm on the coffins, baseboards, and bodies of four mummies in the British Museum and wanted Jürgen to compare their sterane and hopane fingerprints with those from the Dead Sea asphalts.

Their first analyses revealed that fresh plant materials and asphalt had been used in preparing the bodies. The samples contained n-alkanes with a typical leaf wax distribution, on the one hand, and steranes and triterpanes typical of mature geochemically altered material, on the other. The mass fragmentograms of steranes and hopanes from three of the samples, dating from 200 B.C. to 150 A.D., were nearly identical to those from extracts of Dead Sea asphalts and oils—high in gammacerane and lacking diasteranes, as is typical of oils formed in highly stratified, low-oxygen and high-salt environments. The balm from the oldest mummy, prepared around 900 B.C., clearly contained asphalt, but it hadn’t come from the Dead Sea. Later analyses indicated that it may have come from a seep that was on the north shore of the Red Sea. More extensive studies by Jacques Connan, a petroleum geochemist with the French oil company Elf Aquitaine, showed that Dead Sea asphalt was used for the preparation of Egyptian mummies as early as 1100 B.C., indicating that some sort of trade between Egypt and Palestine must have been operating at the time. But his work, like Nissenbaum’s and later analyses by Richard Evershed and his students in Bristol, also revealed that asphalt was not a requisite component of the preservative balms, which contain relatively large amounts of abietic acid and the related tricyclic diterpenoid compounds typical of conifer resins. Fatty acids with a distribution typical of plant oils are also common ingredients, as are the particular series of wax esters found in beeswax, and it appears that there may have been trends in mummification technology over the centuries: plant oils, resins, and beeswax were apparently à la mode during the heyday of mummification between 1350 and 1000 B.C., with asphalt coming into use toward the end of this period and gaining popularity during Roman times. 
The use of asphalt for more mundane purposes, however, clearly extends far back into prehistory, and far beyond the borders of ancient Egypt.

In an attempt to trace the development of asphalt technologies in early civilizations, Connan analyzed scrapings of black, potentially asphaltlike materials from hundreds of artifacts in the Louvre Museum’s extensive collection. These studies revealed that some groups may have used asphalt to glue handles to their stone tools as early as the Paleolithic period, some 40,000 years ago, and such use was quite common in Neolithic times. From Neolithic through Roman times, extensive use of asphalt occurred primarily in areas where building materials such as stone and wood were scarce, and the asphalt was mixed with straw and sand to create a type of brick. The magnificent Mesopotamian city of Babylon was constructed of such bricks, but in Egypt, where stone was plentiful, asphalt seems to have been an exotic luxury item, reserved for the mummifiers’ balms or for waterproofing palace baths—though according to the Bible, the basket where the Israelite slave Jochebed left her baby Moses was also lined with heimar, usually translated as bitumen or asphalt. Whatever the case, Connan’s investigations indicate that the asphalt used in Egypt was mostly imported from the Dead Sea, while more local sources went undeveloped. By comparing hydrocarbon fingerprints and δ13C values of asphalt scraped from artifacts found at different sites around Mesopotamia with those of petroleum from seeps in the region, Connan determined that a lively trade in this material had been established by 5500 B.C. 
The Ubaid and Uruk civilizations that thrived in what is now Iraq from the end of the Neolithic into the Bronze Age were apparently heavy users of the material, and Connan has been able to track the changes in their regional trade routes between 5800 and 3500 B.C.: first to nearby seeps in Iran, distinguished by their oleanane content and distinct sterane and hopane fingerprints; then several hundred kilometers north to seeps on the Tigris River, where oleanane is lacking and δ13C values are distinctly negative; and finally, based on a subtle change in the hydrocarbon fingerprints, to seeps on the Euphrates River to the northwest.
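The seep matching described above amounts to comparing multivariate fingerprints. A minimal sketch of the idea, using cosine similarity over relative peak intensities: the compound list and every number below are invented for illustration, not Connan's data, and real correlations use dozens of peaks plus δ13C values.

```python
import math

# Illustrative sterane/hopane "fingerprints": relative peak intensities for a
# few diagnostic compounds. All values are invented for this sketch.
PEAKS = ["C27 sterane", "C29 sterane", "gammacerane", "oleanane", "C30 hopane"]

artifact = {"C27 sterane": 0.18, "C29 sterane": 0.22, "gammacerane": 0.35,
            "oleanane": 0.00, "C30 hopane": 0.25}

seeps = {
    "Iran (oleanane-rich)": {"C27 sterane": 0.20, "C29 sterane": 0.15,
                             "gammacerane": 0.10, "oleanane": 0.30,
                             "C30 hopane": 0.25},
    "Tigris":               {"C27 sterane": 0.17, "C29 sterane": 0.23,
                             "gammacerane": 0.33, "oleanane": 0.00,
                             "C30 hopane": 0.27},
}

def cosine(a, b):
    """Cosine similarity between two fingerprints over the shared peak list."""
    dot = sum(a[p] * b[p] for p in PEAKS)
    na = math.sqrt(sum(a[p] ** 2 for p in PEAKS))
    nb = math.sqrt(sum(b[p] ** 2 for p in PEAKS))
    return dot / (na * nb)

# The seep whose profile most resembles the artifact extract:
best = max(seeps, key=lambda name: cosine(artifact, seeps[name]))
print(best)  # the oleanane-free, gammacerane-rich Tigris profile wins here
```

The key diagnostic in this toy case is the same one Connan exploited: the presence or absence of oleanane immediately separates the candidate sources.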

Around the same time that Jürgen and Nissenbaum were trying to resolve the origin of the black stuff on mummies, members of the Bristol lab developed a sudden interest in maritime history. The wreck of Henry VIII’s prize flagship, the Mary Rose, had just been recovered from the floor of the English Channel where it had lain since 1545, when, newly refitted with the most modern guns, it set sail from Portsmouth Harbor and, as the king watched from shore in horror, heeled over and sank with the first small breeze. The British media reported on the ship’s contents in intimate detail, and when they mentioned that among the bows and arrows, guns, and other Tudor period relics, there were large barrels full of some sort of gooey black pitch, Geoff decided to join the fun. The black stuff coated the seams of the ship and some of the relics, and there was speculation that it was partially responsible for their excellent preservation. One couldn’t tell from looking at it whether it was asphalt or some sort of wood resin, but it was an easy enough task to figure this out in the laboratory—just the thing for an undergraduate student’s research project. Steranes, terpanes, and the generally unresolved hump of hydrocarbons that one typically found in asphalt were missing from the tar, but it did contain large concentrations of telltale tricyclic terpenoids. These were derivatives of the compounds that Guy Ourisson had studied so extensively in the 1950s, of interest for their antibacterial and insect-repelling qualities, and for their often genus-specific distributions—in the case of the Mary Rose tars, dominated by the abietic and pimaric acids indicative of a pine origin. The sixteenth-century manufacturers had apparently known how to distill and collect the volatile compounds from the wood tar, and they had produced a concentrated mixture of terpanes and mono-, di-, and triaromatic diterpenoids that had excellent waterproofing and preservative qualities. 
Comparative analysis of the tar on an Etruscan ship indicated that such technologies were known for thousands of years before Henry’s ill-fated warship sank to its resting place on the floor of the English Channel.

The Mary Rose study inaugurated what would turn out to be a career-encompassing interest in the chemical analysis of archaeological relics—not for the undergraduate who prepared the extracts of tar, but for Richard Evershed, the postdoc who supervised her. Evershed had started out his career in natural products but spent several years in the Bristol lab developing analytical techniques for porphyrins and, as it turned out, analyzing the black goo from the Mary Rose. In 1984, two years after the Mary Rose was recovered, the body of the so-called “Lindow Man” was found in a bog near Manchester, again making headline news in England … and Evershed, then with a position in Liverpool, ostensibly developing analytical methods in biochemistry, somehow found himself extracting and analyzing lipids from a 2,000-year-old dead man. “I didn’t actually know why I was analyzing this body,” Evershed tells me. “It was sort of a nightmare.” He admits that he got the idea while sitting in a pub in Liverpool, where Lindow Man was the main topic of conversation and everyone was rehashing the latest rumors about his identity, which ran the gamut from a recent murder victim to a Celtic druid who had been sacrificed to placate the gods and keep invading Romans at bay. However frivolous its inception, Evershed’s study led to a quite rigorous analysis of the chemistry of tissue preservation and decomposition … and from there to analyses of the waterlogged, anoxic peat bogs where such bodies were so well preserved, and then to the application of the analytical techniques he’d developed—a mix of those learned in the Bristol lab and those of the natural products chemist—to the organic chemistry of soils, and eventually, in the late 1990s, to a call from someone at the English Heritage Foundation wondering if Evershed could do anything with the broken bits of pottery that archaeologists carefully sifted from their sites, but usually couldn’t make heads or tails of.

The ceramic vessels left by former civilizations are one of the traditional mainstays of archaeological information, but determining how they were used is not always an easy task, especially when all that’s left is a tiny fragment. Evershed and his colleagues found that significant amounts of lipids remained absorbed in the pores of the clay and could provide some clues. Pottery fragments found among the remains of late Saxon and early medieval settlements contained leaf wax lipids with a distinctive pattern—a simple trio of the C29 n-alkane and the corresponding mid-chain ketone and alcohol—that bore witness to the longstanding prominence of cabbage in northern European diets. Small ceramic cups found among the remains of the Minoan civilization that thrived in Crete between about 2700 and 1450 B.C. were used, according to the archaeologists, as portable lamps, and contained an array of wax esters that was clearly indicative of beeswax. Archaeologists had thought that the first portable lamps marked the early exploitation of the olive—but there was no sign of the fatty acids associated with olive oil in the cups. Here, in these artifacts that are only a few thousand years old, one can still glean some information from fatty acid distributions and wax esters. In some cases, even the ester bonds are preserved, and Evershed and his group have started using a system of high-temperature gas chromatography and mass spectrometry that allows them to detect the intact long-chain esters of glycerol fats, providing even more information about the nature of the original waxes and fats. But just as diagenesis and thermal heating can transform the biological molecules over geologic time, cooking and the “unnatural” mixtures of substances created by even the most primitive cuisines can often obscure the sources of lipids found in archaeological remains. 
Evershed’s team has done extensive analyses of potential foodstuffs and the isomerization and condensation reactions that occur during heating—but, as in geochemical studies, some of the most valuable molecular information in archaeological remains is obtained by combining compound-specific isotope analysis with structural information and homologue distributions.
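The compound-specific δ13C values invoked here are simply normalized isotope ratios, reported as the per mil deviation of a sample's 13C/12C ratio from that of a standard. A minimal sketch of the definition (the VPDB reference ratio below is a commonly quoted value, used here only for illustration):

```python
# 13C/12C ratio of the VPDB standard; a commonly quoted value, assumed here.
VPDB_R13 = 0.0111802

def delta13c(r_sample, r_standard=VPDB_R13):
    """delta-13C in per mil: the deviation of a sample's 13C/12C ratio
    from the standard, scaled by 1000. Negative values mean the sample
    is depleted in the heavy isotope relative to the standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample matching the standard sits at exactly 0 per mil:
print(delta13c(VPDB_R13))
# A 13C-depleted compound, such as a typical plant lipid:
print(round(delta13c(0.0108447), 1))  # about -30 per mil
```

Plant lipids, milk fats, and petroleum hydrocarbons occupy characteristically different ranges on this scale, which is why the δ13C dimension adds source information that structure alone cannot provide.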

The distributions of fatty acids and intact glycerol fats, when considered together with the compound-specific δ13C values for the fatty acids, have allowed the Bristol team to distinguish the fats of ruminant animals such as cattle, sheep, and goats from those of pigs and other nonruminants, which have an entirely different diet and system of processing their food intake. The animals that a civilization consumed or made use of can thus be determined from analysis of the traces of fat in a broken pot, even when bones and other indicators are absent. The initiation of dairying is a pivotal development in human prehistory, and yet it has proven difficult to ascertain when and where humans started making use of animal milk in different parts of the world. Egyptian and Mesopotamian pictorial records from 4000–2900 B.C. offer the oldest clear evidence of dairying, but it appears that sheep were domesticated as early as 9000 B.C., and cattle and goats around 7000 B.C., so milking may have been practiced much earlier. Though the fatty acid distributions of milk and cooked meat cannot be readily distinguished, Evershed and crew have found a way to recognize traces of milk fats in pottery shards based on a subtle difference in the biosynthetic processes that animals use to produce milk fats and body fats. Animals make both their milk fats and their body fats from a mixture of fatty acids that they biosynthesize and that they obtain prefabricated from their diets. In milk fats, however, a larger proportion of the C18 acid comes from de novo biosynthesis, and as this generates compounds that are significantly more depleted in 13C than those obtained from plants, C18 fatty acids from milk fats and body fats have different isotopic signatures. In the hope of determining when and where dairying was first practiced in the British Isles, Evershed’s group is using this distinction to identify traces of milk fats in pottery fragments from the Celtic period.
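The isotopic distinction between milk and body fats reduces, in practice, to comparing the δ13C values of the two dominant fatty acids in an absorbed residue. The following sketch uses hard cutoffs on the C18:0 minus C16:0 difference; the threshold values are illustrative figures of the kind quoted in the archaeological-lipid literature, not numbers from this text, and real studies compare residues against reference ranges measured on modern authentic fats.

```python
def classify_fat(d13c_c16, d13c_c18):
    """Classify an absorbed animal fat from compound-specific delta-13C
    values (per mil) of its C16:0 and C18:0 fatty acids.

    Milk fats derive more of their C18:0 from de novo biosynthesis, which
    yields 13C-depleted carbon, so their C18:0 is markedly lighter than
    their C16:0. Cutoffs below are illustrative assumptions.
    """
    delta = d13c_c18 - d13c_c16  # separation between the two acids
    if delta <= -3.3:
        return "ruminant dairy fat"
    elif delta <= -1.0:
        return "ruminant adipose fat"
    else:
        return "non-ruminant (e.g. porcine) fat"

print(classify_fat(-28.0, -32.5))  # strongly depleted C18:0 -> dairy
print(classify_fat(-27.5, -29.5))  # moderate depletion -> ruminant adipose
print(classify_fat(-26.0, -26.3))  # similar values -> non-ruminant
```

The point of the relative measure is that it is robust to the animals' diets: both acids shift together with dietary δ13C, while the offset between them reflects the biosynthetic route.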

Archaeologists’ attempts to understand the development and extent of early agricultural practices have long been frustrated by a lack of structural evidence in ancient soils. But Evershed and his cohorts have been able to identify medieval farmlands where manure or some sort of compost was used from the “unnaturally” high concentrations of common lipid components left in the ancient soils. An excess of sitosterol and leaf wax alkanes indicates that plant manures were applied, whereas the reduced stanols typically produced by intestinal microbes indicate that animal manure was applied, with 5β-stigmastanol pointing to feces from ruminant animals that eat grasses and plants, and 5β-cholestanol pointing to feces from omnivorous animals like pigs or humans. More specific and reliable information about manure comes from the so-called bile acids, di- and trihydroxy steroid acids that are biosynthesized from sterols in animals’ livers. Ruminant animals, such as cows, and omnivorous animals, such as humans, produce different sets of homologues. Bile acids are not produced by microbes or by diagenesis in the environment, and in dry conditions they are quite persistent and make excellent biomarkers on the archaeological timescale. They have been identified in the tissue of a 4,000-year-old Nubian mummy, used to map the latrines and sewage culverts in Roman ruins, and they prove that 2,000-year-old feces found in a North American cave came from human beings … among other things.

One potentially powerful but little developed method for tracking human land use patterns in prehistoric and historic time involves the same multiproxy biomarker techniques that organic geochemists have used on marine sediments to elucidate changing paleoenvironments. The thick layers of river deposits in coastal regions or the sediments at the bottom of lakes can often provide high-resolution records of the past 10,000 years, and distributions of leaf wax hydrocarbons, sterols, and triterpenoids, along with compound-specific isotope analyses, may reflect forest clearing, burning, fertilizer application, and settlement or urbanization in an associated drainage basin. Phil Meyers and his group at the University of Michigan have found that sediment cores from lakes in North America’s Great Lakes region contain a record of the region’s changing human populations and their effects on lake ecologies since the onset of European settlement. The distributions and relative amounts of long-chain leaf wax alkanols and alkanes; algal sterols and C15-C19 alkanols and alkanes; total organic matter, total hydrocarbon and phytol concentrations; compound-specific δ13C values; and carbon to nitrogen ratios tell a story of changing vegetation type, erosion, and lake ecosystems over the past 200 years. Sediments deposited during the early nineteenth century, when large tracts of forest were cleared for agriculture, register a pronounced increase in the amount of erosion from land and algal productivity in the lakes, presumably due to the large input of nutrient-rich soils and organic matter. Between 1950 and 1975, a period that saw rapid industrial and population growth in the region’s cities and widespread use of chemical fertilizers on its farms, the algae went haywire and produced so much organic matter that the bottom water became anoxic. 
And, finally, lake sediments from the last 30 years bear witness to the success of environmental controls imposed in the late 1970s, namely a marked decrease in algal productivity and a return to oxic conditions in the deep water, with accordant improvement in the health of lake ecosystems.

Sediment records of changing land use patterns—and of sewage, fertilizer, or petroleum product residues—are not, of course, just of interest to archaeologists and historians. Biomarker techniques have long found application in the assessment of pollution problems and environmental management and protection, and in the past decade they have become one of the scientific mainstays of “environmental forensics,” used in litigation for and against polluters. Some of these applications grew naturally out of the molecule collecting of the 1970s. Others derived from concerted efforts on the part of organic geochemists who were concerned about the spread of contaminants in the environment and noted that their techniques for identifying traces of organic compounds in complex mixtures also worked well for tracing pollutants. The studies of dust that Bernd Simoneit did while he was a student in Geoff’s lab indicated that PAHs from fires, automobiles, and factories could be carried long distances on the wind and deposited in the middle of the Atlantic Ocean, just like the leaf waxes. Simoneit’s recent work uses more specific biomarkers to gauge the contributions from specific sources of air pollution in populated areas: diterpenoids like abietic acid might come from a fire in a coniferous forest or from the heavy use of wood stoves in an urban area; flowering plant triterpenoids like β-amyrin or lupeol might point to a fire in a deciduous forest or to agricultural burning; the particular fingerprint of hopanes and steranes might point to the emissions of a particular petroleum-burning industrial source; and so forth.

As it happened, the development of the biomarker concept in the 1960s and 1970s had coincided with the growing public concern about environmental issues. For his part, Geoff says it was Rachel Carson’s 1962 book Silent Spring, about the dangers of DDT, that really got him to thinking, as a chemist, about the long-term fate of environmental pollutants. But it wasn’t until around 1970, after he set up shop in Bristol, that he found a chance to apply his laboratory’s analytical prowess to such problems. The Severn Estuary, where he and his students did their first extensive studies of sterol diagenesis in sediments, was not only Bristol’s gateway to the sea but also the outlet for waste from all the factories and cities that line the rivers of the industrial English Midlands and the coast of South Wales. So along with analyzing the steroids in the estuary’s marshes, the Bristol researchers tried to analyze the petroleum hydrocarbons in the sludge, sediments, and outflow from a sewage treatment plant, and, of course, they used their GC-MS to look for DDT and other man-made chemicals. Indeed, Geoff says, some of these studies had to be curtailed because local industries got wind of what they were doing and limited their access or put pressure on the university for them to stop. Nevertheless, these and similar studies on the Clyde Estuary made it clear that the conversion of sterols to stanols during early diagenesis might be of more than academic interest. The microbes in the surface sediments began generating small amounts of stanols after a few months, and these gradually gave way to sterenes as the alcohol group was eliminated. In a natural aquatic sedimentary environment, the stanols never dominated the steroid distributions, and their existence was so ephemeral that most geochemists came to think of them as diagenetic intermediates. In the sewage sludge, however, certain stanols were present in anomalously large amounts, particularly 5β-cholestanol. 
As it turned out, microbes in the human intestine also reduced the sterols that the body was eliminating, mostly cholesterol, to stanols—and, unlike the sediment microbes, which produced a mixture of 5α and 5β forms, the intestinal microbes produced only 5β-stanols.

GC-MS made it easy to analyze large numbers of samples for 5β-cholestanol, and the compound, often known as coprostanol, quickly came into use as an indicator of sewage pollution. In 1990, Joan Grimalt and the environmental science group in Barcelona suggested that complete steroid distributions and ratios provided a more foolproof and source-specific indicator of sewage contamination, and Evershed’s group has shown that bile acids can also provide rather specific information about sources of waste pollution, be they sewage treatment plants, a local pig farm, or an overpopulation of seagulls.
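Because sediment microbes make a mix of 5α and 5β stanols while gut microbes make only the 5β form, sewage indicators are usually expressed as ratios rather than raw coprostanol concentrations. A minimal sketch of such an index; the 0.7 cutoff is a rule-of-thumb value of the sort used in the environmental literature, assumed here for illustration.

```python
def sewage_index(coprostanol, cholestanol):
    """5beta-cholestanol (coprostanol) as a fraction of total cholestanols.

    Inputs are concentrations in any consistent units. A value near 1
    suggests the stanols were made in an animal gut; a low value is
    consistent with ordinary in-situ diagenesis in the sediment.
    """
    return coprostanol / (coprostanol + cholestanol)

def looks_like_sewage(coprostanol, cholestanol, cutoff=0.7):
    # The cutoff is an illustrative rule of thumb, not a value from the text.
    return sewage_index(coprostanol, cholestanol) > cutoff

print(looks_like_sewage(85.0, 10.0))  # stanols dominated by the 5beta form
print(looks_like_sewage(5.0, 20.0))   # mixture typical of sediment microbes
```

Ratios of this kind are what Grimalt's group generalized: full steroid distributions, rather than a single compound, make the source assignment far less ambiguous.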

Another of the founders of the biomarker concept, Max Blumer, turned its nascent petroleum applications upside down in 1969, when he used it to track the fate of an oil spill off the coast of Cape Cod. Blumer’s studies showed, for the first time, that petroleum hydrocarbons lurked in the coastal sediments long after visible signs of the oil had disappeared from the water and the beaches and tidal flats were clear of dead birds, fish, worms, and clams. Moreover, using the rudimentary GC techniques available at the time, he could determine which compounds evaporated or were broken down by microbes, and which persisted. In the years that followed, John Farrington continued and expanded on this work at Woods Hole, as did Ian Kaplan at UCLA, learning to distinguish between hydrocarbons from natural sources, like the seeps off the coast of Southern California, and those from specific types of oil pollution. Their work made it possible to track the spread of oil pollution—from an oil spill, a harbor, a shipping lane, a city, or a contaminated river—into the marine environment. Ironically, their methods developed in tandem with those that the petroleum geochemists were using for exploration: ever more sophisticated mass fragmentograms of steranes and terpanes, compound-specific and group-specific isotopic analyses, and various indices of maturity. These were the techniques that Joan Albaigés developed in his environmental chemistry group at the University of Barcelona in the late 1970s together with Pierre Albrecht in Strasbourg, that Keith Kvenvolden and others used in their analyses of Gulf of Alaska sediments after the catastrophic Exxon Valdez oil spill in 1989, and that Albaigés’ group have employed to track the fate of oil from the Russian tanker Prestige, which sank off the north coast of Spain in 2002.

Chris Reddy, keeper of the long legacy of environmental research at Woods Hole, went a step further when he and his research team returned to the scene of the 1969 Cape Cod oil spill that Blumer and Farrington had studied so extensively, now equipped with a cutting-edge new instrument. They used a method that involves linking two capillary columns in sequence—so-called comprehensive two-dimensional gas chromatography, developed in the mid-1990s—and can separate many of the compounds in the unresolvable “hump” that has frustrated every chemist who ever tried to characterize the hydrocarbons in petroleum. Reddy’s team obtained a core from a salt marsh that was particularly hard-hit by the 1969 spill and found that there was just as much oil present in 2003 as there had been at the time of the last study in 1975, albeit now covered by a fresh layer of uncontaminated sediments and relatively healthy marsh. Though the earlier studies indicated that a significant amount of the oil had degraded or evaporated in the first five years and the n-alkanes had been completely degraded by microorganisms, the new analyses revealed that some groups of branched alkanes, which earlier studies hadn’t detected, as well as the cycloalkanes and aromatic compounds, remained. Apparently, the microbes had found better things to eat, or the oil degraders hadn’t liked the anoxic sediment conditions that prevailed after the first few years, or they had depleted some key nutrient or energy source. Whatever the case, degradation seemed to have hit a plateau or been damped out within five years after the spill.

By the start of the twenty-first century, it would seem that the natural history of molecules had finally come of age, but it was and is still prone to remarkable spurts of growth—and self-doubt—usually inspired by the deployment of some new analytical tool, like Reddy’s two-dimensional GC. The introduction of compound-specific carbon-13 measurements in the late 1980s was, as it turns out, just a first step toward including the isotopic dimension in biomarker analyses. In the late 1990s, Tim Eglinton and his colleagues found a way to measure the amount of 14C—the radioactive isotope of carbon—in individual compounds … and soon found themselves mounting a new attack on one of the most insidious problems of the earth sciences. Radiocarbon dating, originally developed for use on fossil bones and archaeological relics and useful for dating anything younger than 60,000 years, had long been used to determine the ages of Pleistocene sediments. But these ages were based on the carbon in the calcium carbonate of foram shells—no one had ever tried to date individual organic compounds in the sediments. 14C in the environment is less than a billionth as plentiful as 13C, and there was no way to detect it with a GC-irm-MS. Instead, Tim and his group reverted to a glorified version of what Kate Freeman and John Hayes did before they developed the instrument, and what Geoff and others had done before the first GC-MS was invented: they collected the compounds from the end of a GC column until they had enough for analyses. Now, however, they used a wide-bore capillary GC column that afforded an excellent separation and allowed them to inject more sample, and after burning each compound to CO2, they reduced the CO2 to graphite, which could be introduced into an accelerator mass spectrometer designed for 14C analysis.
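The conversion from a measured 14C abundance to an age is the same for a single compound as for a foram shell or a bone: by convention, the radiocarbon age is computed from the fraction of modern carbon using the Libby mean life of 8033 years. A minimal sketch:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; by convention, from the Libby half-life of 5568 yr

def radiocarbon_age(fraction_modern):
    """Conventional radiocarbon age (years BP) from the measured 14C
    abundance, expressed as a fraction of the modern standard."""
    if fraction_modern <= 0:
        raise ValueError("no measurable 14C -- beyond the dating limit")
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# A compound retaining half its original 14C dates to one Libby half-life:
print(round(radiocarbon_age(0.5)))  # 5568 years BP
# At a few hundredths of a percent modern carbon, the age approaches the
# ~60,000-year practical limit mentioned in the text:
print(round(radiocarbon_age(0.0006)))
```

The steep logarithm is why the method runs out near 60,000 years: past that point the surviving 14C is too scarce to distinguish from background, even with an accelerator mass spectrometer.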

The first compound-specific 14C analyses showed that compounds in the same layer of sediment could have decidedly different ages, depending on whether they came from the algae in the surface waters directly above the sediments, from land plants, or from old eroded continental sediments—in other words, depending on how far they had to travel before deposition. Tim’s student Ann Pearson did extensive compound-specific 14C and 13C analyses of recent sediments that showed the remarkable power of the combined isotopic and structural information to reveal not only the sources of biomarkers, but also the metabolic processes of recently discovered, difficult-to-study microorganisms. Pearson took advantage of an inadvertent, man-made twist in the isotopic makeup of atmospheric CO2: intensive nuclear weapons testing in the 1950s and 1960s raised the abundance of 14C in atmospheric CO2 significantly above natural levels. Perhaps the only positive outcome of such testing is that it has provided scientists with global tracers for natural processes. Oceanographers had been tracking this pulse of so-called “bomb 14C” as it moved from the atmosphere into the oceans, using it to determine the rate of uptake of atmospheric CO2 in different regions and then to track the slow movements of deeper water masses in the great ocean conveyor. For Pearson’s purposes, it was enough to know that the bomb 14C had generally infiltrated the photic zone but had not yet arrived in the intermediate or deep layers of water in the Pacific where she obtained her sediment samples. She compared the 14C abundance in compounds extracted from very recent sediments near the sediment-water interface with those extracted from the older, “prebomb” layers. 
Dinosterol, alkane diols, fatty acids, C24 to C30 n-alcohols, and various hopanols were significantly more enriched in 14C in the contemporary sediments than in the prebomb sediments, clearly indicating that their main source organisms—algae, zooplankton, bacteria—had lived near the surface and utilized some form of carbon derived directly or indirectly from the contemporaneous atmospheric CO2. The odd-carbon-number long-chain n-alkanes were likewise enriched in the postbomb sediments, indicating their provenance from contemporary land plant detritus, whereas n-alkanes with even carbon numbers showed no such enrichment. The latter had a distribution that indicated they derived from a geochemically altered source and may have come from pollution or from the seafloor petroleum seeps that pepper the region off the coast of California, where the samples were taken. But it was the combined 13C and 14C measurements in the isoprenoid ethers—specifically in the biphytanyl components of crenarchaeol—that gave the most surprising and provocative result. The marine crenarchaea apparently utilized a distinct source of carbon: they had not incorporated the dissolved CO2 or bicarbonate from the surface waters, like the phytoplankton, nor did they recycle organic matter generated by other organisms in the surface water, like the bacteria. Rather, these marine crenarchaea made use of “old” bicarbonate from the deeper water beneath the photic zone. It was the first solid, in situ evidence that the organisms are autotrophic, as hypothesized by the NIOZ group, and it supported the observations that Ed DeLong’s group in Monterey had been making, that they were most plentiful at intermediate depths.

The range of potential applications for such multidimensional biomarker analyses in both recent and ancient sediments is vast, and their utility in identifying the provenance of biomarkers would seem to foretell a new growth spurt for the molecular lexicon. But in 2002 when Nao Ohkouchi, a postdoc in Tim Eglinton’s group at Woods Hole, determined the 14C ages of alkenones in a thick layer of sediments from the Bermuda Rise in the northwest Atlantic, all hell broke loose in the geochemical community, and it seemed instead that the lexicon might end up buried in the sediments that gave rise to it. The most precise dates for late Pleistocene and Holocene marine sediments are obtained from the 14C ages of planktonic foraminifera, which are relatively large and sink to the sediments quickly, recording the time of deposition of a given sediment layer. The premise was that other microfossils and biomarkers in the same layer had been deposited at around the same time, and the validity of correlations between proxies—between UK37 and foram δ18O proxy temperatures, for example—obviously depended on this fact. The Bermuda Rise was one of the few open ocean sites where the rate of deposition was high enough that one could obtain an uninterrupted, high-resolution record of temperature change over the past ice age cycle. But Ohkouchi’s results indicated, unequivocally, that the alkenones and other organic constituents were thousands of years older than the forams in the same sediment horizon. The alkenones had not been formed by coccolithophores living side by side with the foraminifera in the surface waters above the site of deposition, as supposed. They had, rather, been made thousands of years earlier by coccolithophores that lived, apparently, hundreds of kilometers to the north … in much colder waters.
Currents or turbulence in the deep waters must have carried the tiniest particles—organic detritus that remained suspended in the water or fine clay particles with organic molecules adsorbed onto their surfaces—far from their homes and deposited them in a great pile on the Bermuda Rise, like a sort of sand dune on the seafloor.

Were biomarkers leading oceanographers and climatologists astray? Was the UK37 proxy trustworthy? Tim Eglinton found himself in the unenviable position of questioning some of his father Geoff’s most cherished contributions to earth science. But Ohkouchi’s finding did not put an end to biomarker use or multiproxy studies, any more than exposure of the insidious nature of hydrocarbon contamination did 40 years earlier. Rather, it raised a flag to oceanographers and geochemists to take care with their interpretations: the geochemical proxies are not mere tools to be used the way one might use a computer or an automobile or even a plastic stool, without understanding how they operate and how they were made. Context is everything, and in the case of the biomarker, the context is biological, geological, chemical, and physical—there can be no segregation of disciplines. In Pleistocene sediments, at least, 14C measurements now allow one to directly determine the age of the different organic components, as well as the carbonate, in a given sediment horizon. But one must also look for other, more indirect signs of diverse timescales of organic matter deposition—physical evidence of lateral sediment transport, anomalous signs of diagenetic maturity for different compounds, and other clues to a distant origin.

Julian Sachs and his students at MIT recently made use of yet another isotopic dimension of the alkenones to gain insight into where the coccolithophores that made them had lived. Like 16O and 18O, natural abundances of the two stable isotopes of hydrogen, 1H and 2H, vary in natural waters. Alex Sessions, one of John Hayes’s last Indiana students, had studied hydrogen fractionation in plants and algae and adapted the GC-irm-MS for measuring the relative amount of the rare 2H, commonly known as deuterium, in individual lipids. Variations in deuterium enrichment, δD, reflect both biochemical and environmental effects but are also strongly dependent on the δD values of the hydrogen in the water the organism uses. Hydrogen fractionation occurs during evaporation, precipitation, and freezing, so the water in different regions of the ocean has varying δD signatures, and once biosynthetic fractionation is accounted for, these are mirrored by the δD values of lipids produced by aquatic organisms. Sachs’s analyses of alkenones from coccolithophores grown in water with different δD signatures and from suspended organic matter collected in different parts of the Atlantic were consistent with this prediction, and analysis of the alkenones in the Bermuda Rise sediments pointed to a main source in the waters off the coast of Nova Scotia. Not only does the method provide a way to ensure the geographic consistency of proxy measures at a given site, but it may also offer clues about ancient current regimes. Compound-specific δD analyses of leaf wax and algal lipids in marine and lake sediments are also turning out to be a treasure trove of information about changing global patterns of rainfall and evaporation, a particularly elusive aspect of past climate systems.
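The inference from lipid δD back to source-water δD rests on the standard delta-notation fractionation relation. A minimal Python sketch (the ε value of -200 per mil is a placeholder for illustration, not a measured alkenone fractionation) shows the round trip:

```python
# Illustrative sketch of the standard delta-notation relation between water
# and lipid hydrogen isotopes. Values are per mil vs. the VSMOW standard;
# the epsilon used below is a placeholder, not a measured fractionation.

def lipid_dD(water_dD, epsilon):
    """Predict lipid deltaD from source-water deltaD and fractionation epsilon."""
    alpha = 1.0 + epsilon / 1000.0  # fractionation factor implied by epsilon
    return alpha * (water_dD + 1000.0) - 1000.0

def source_water_dD(measured_lipid_dD, epsilon):
    """Invert the relation: infer the water deltaD a lipid was made in."""
    alpha = 1.0 + epsilon / 1000.0
    return (measured_lipid_dD + 1000.0) / alpha - 1000.0

# Once the biosynthetic offset is accounted for, a measured lipid value
# points back at the water the organism grew in:
predicted = lipid_dD(0.0, -200.0)              # water at 0 per mil -> lipid near -200
recovered = source_water_dD(predicted, -200.0) # recovers ~0 per mil
```

This is why the method works only "once biosynthetic fractionation is accounted for": the inversion needs an ε for the organism in question, usually established by culture experiments like those Sachs ran.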

Like the inclusion of isotopic dimensions in biomarker analyses, the combination of gene and lipid studies that opened the door to understanding microbial ecosystems has only just begun to make its mark. The ability to amplify and analyze tiny traces of nucleic acid fragments extracted from sediments and natural waters has allowed chemists to revisit the question of whether the ultimate biomarkers, DNA and RNA, might, after all, be of some direct use in paleontology or paleoenvironmental studies. Nucleic acids don’t have nice carbon skeletons that can persist in the sediments and they do break down quickly, but even a small fragment of a DNA or RNA molecule can provide as much information as the most complex of hydrocarbons, and the new techniques can amplify and detect the tiniest trace of a fragment. Of course, they also magnify contaminants from modern organisms, and there have been many false reports of DNA preserved in ancient fossils. But in recent years, techniques for analyzing fossil DNA have been rigorously constrained, and it is now apparent that under very cold, dry, dark conditions, fragments of DNA in fossil bones can survive up to a million years, offering paleontologists and evolutionary biologists a real-time view of genetic change and its relationship to environmental change. For example, widespread extinctions of large mammals at the end of the Pleistocene epoch have been attributed to the expansion of Homo sapiens and increased predation, but the DNA of fossil mammoth and bison bones indicates that genetic diversity among these large mammals decreased much earlier, at the onset of the last ice age and long before the human expansion. 
On more recent timescales, DNA extracted from fossil fecal material has provided information about the dietary diversity of ancient humans, and DNA from fossil bones, plants, and grains from archaeological sites chronicles the spread of agriculture and domestication of crops and animals since the end of the last ice age.

Marco Coolen, now at Woods Hole, and microbiologist Jörg Overmann in Munich have been able to amplify fragments of fossil DNA and RNA extracted from Holocene and Pleistocene marine sediments. Their combined nucleic acid and lipid biomarker analyses—using the complementary information from the species-specific nucleic acids, on the one hand, and group-specific lipid biomarkers, on the other—can provide relatively reliable, detailed accounts of paleoenvironments and are viable in sediments up to about 200,000 years old, depending on conditions. Coolen and Jaap Sinninghe Damsté’s NIOZ group have used such methods to track the changing populations of algae and bacteria in Antarctica’s Ace Lake and were able to document the shifts in the lake’s ecology over the past 10,000 years: from freshwater lake to open marine basin, and eventually to the contemporary lake’s heavily stratified, enclosed basin, with its saline bottom water and active methane cycle. In similar studies on Mediterranean sapropels, they examined the distributions of isorenieratene and chlorobactene, and of 16S rRNA sequences from green sulfur bacteria. Surprisingly, the genes they detected were most closely related to those of green sulfur bacteria that, in contemporary times, live only in fresh and brackish water, raising the possibility that bacterial cells and debris washed into the open sea from anoxic coastal regions—in which case, the Mediterranean might not have been as anoxic during these periods of sapropel deposition as the presence of isorenieratene would otherwise seem to imply. In another study, the DNA sequences of haptophyte algae and alkenone distributions in Pleistocene sediments were used to assess the relative contributions of alkenone producers other than Emiliania huxleyi, a method that allows one to gauge the reliability of UK37 temperatures in regions and epochs where Emily may not have been the dominant alkenone producer.

The persistence of fossil gene fragments in the geological record appears to be limited to a million years and to fossils preserved in permafrost or sediments formed in anoxic and hypersaline environments. Genetic analyses of living organisms may actually make a greater contribution to tracing the activities of organisms over geologic time in that they provide information about the sources and biochemistry of lipids that leave persistent fossil molecules. Historically, natural products methods for determining biomarker source organisms have been somewhat random, and their success has been overly dependent on serendipity. But the hundreds of microbial genomes that are now available permit one to recognize genes that encode particular enzymes and identify organisms that have the basic wherewithal to produce a given type of lipid. This facilitates a more judicious choice of organisms for lipid analyses, on the one hand. And, on the other, it allows for a more educated guess about the general nature of source organisms for fossil lipids even when specific taxonomic groups can’t be identified. In other words, it makes it easier to determine who is making what. The genes can also help in elucidating the physiological roles of key lipids and provide another layer of information for interpretation of fossil lipid records. Ann Pearson’s group at Harvard has begun applying some of these ideas to the study of hopanoids and steroids and found that, despite the fact that hopanoids are relatively rare constituents of anoxic sediments and absent from most of the anaerobic bacteria that had been analyzed to date, a number of groups of anaerobic bacteria that are common in anoxic sediments have the wherewithal to synthesize hopanoids—just as Volker Thiel and his colleagues in Walter Michaelis’s lab began to suspect when they found 13C-depleted hopanoids near a Black Sea methane seep, and Sinninghe Damsté predicted when he found they were major constituents of the anammox lipids.

A couple of years ago, sometime around his 77th birthday, Geoff was going through a presentation he’d been preparing with Richard Pancost, trying to edit it down to size, and he said to me, “A while back I was a bit dispirited, thinking that there wasn’t much more to be found. I thought that we’d reached the limits.” He was being awarded the Wollaston Medal, the Geological Society of London’s highest honor—and a sure sign that geologists had finally recognized the value of his “stamp collecting”—and the geologists had asked him to give a talk on “the future of biomarkers” as part of the award ceremony. Geoff had taken the assignment seriously and spent a couple of months talking to young scientists, brushing up on the newest developments—but the speech was only supposed to take a half hour, and now he was dismayed because he had to leave a few things out of his densely packed, 100-slide presentation.

Working with biomarkers means thinking about molecular architecture as an information source in time and space, about molecular design and structure as nature’s persistent trademarks of biochemical function and biological source. But how much time? What sort of environment and what kinds of sources? Organic geochemistry’s entrepreneurs have taken the concept and run with it, in every imaginable direction: as we try to complete this book in 2007, our worry is not that there is nothing left to discover, nor even that the book should stay short and readable, but rather that we can’t keep ahead of this avalanche of new science.