Name now our names, praise us. We
are your mother, we are your
father. Speak now:
“Hurricane,
Newborn Thunderbolt, Raw
Thunderbolt
Heart of Sky, Heart of Earth,
Maker, Modeler,
Bearer, Begetter …”
—Popol Vuh, The Mayan
book of Creation
Thunderbolt steers all things.
—Heraclitus
Ever since Darwin—since well before Darwin, actually—biological evolution has been portrayed as a pageant of steady progress from simpler to more complex life forms, with human intelligence representing its ultimate attainment. So viewed, the biosphere resembles a marvelously smooth-running machine, its wheels grinding slowly to spin blue-green algae into carp, carp into therapsids, and so forth, until an intelligent creature finally rolls off the assembly line. Understandably, the doctrine that evolution progresses toward intelligence has long been popular among Homo sapiens, whose chosen species name means “wise” and who naturally were pleased to imagine that billions of years of evolution, rather than being blindly purposeless, could be vindicated for having finally built something as magnificent as the human brain. As Alexander Pope put it, in the first epistle of his Essay on Man:
Far as Creation’s ample range extends,
The scale of sensual, mental pow’rs ascends;
Mark how it mounts, to Man’s imperial race …
The fossil record, however, provides scant support for a gradualist and progressive view of evolution. It presents a picture, not of orderly ascent toward ever higher forms, but of unpredictable tumult and change. Consider the ice ages. Any inquiry into the origins of human intelligence—the only example from which we can as yet extrapolate in theorizing about how extraterrestrial intelligence might have emerged—must take into account that the period during which our ancestors’ brains abruptly enlarged, quadrupling in size in less than three million years, coincided with an epoch of remarkable climatic oscillations, when glaciers gripped a quarter of the earth every one hundred thousand years or so. Unless this was a coincidence, it appears that our intelligence, at least, emerged not from gradual evolution in a stable environment, but from the impetus of unforeseeable, large-scale changes in the environment.
If we enlarge the time scale and look back over hundreds of millions of years, we find evidence of even more traumatic changes rending great holes in the evolutionary tapestry. Long periods of relative stasis, it appears, were punctuated by spasms of extinction that cleared the way for sudden bloomings of new species. It is becoming clear that repeatedly throughout history something terrible has happened to our planet, something that altered the global environment and thus doomed the majority of the exquisitely well-adapted creatures that had flourished in the old climate. Novelty, it seems, most often arises from the ruins of what had been a virtually perfect world, and we owe our existence not entirely to the excellence of our ancestors, but also to their destruction.
Missing from the progressive paradigm was an appreciation of the significance of catastrophic change. The historical reasons for this oversight are not difficult to identify. The nineteenth-century evolutionists (of whom Darwin was only one; he contributed not the idea of evolution but the mechanism, natural selection, by which it works) were opposed by catastrophists, who maintained that the earth was only a few thousand years old. The catastrophists argued that the earth’s brief history was fraught with floods, earthquakes, and volcanic eruptions capable of depositing thousands of layers of geological strata almost overnight. The evolutionists, for their part, rallied around the standard of gradualism, which held that the strata had been laid down evenly over billions of years, by gentle rains, easygoing rises and falls in sea level, and other incremental processes much like those we still see operating today. The victory of Darwinism thereby became a victory, too, of gradualism over catastrophism. As Darwin writes, in the conclusion of his Origin of Species, “We may feel certain that the ordinary succession by generation has never once been broken, and that no cataclysm has desolated the whole world…. And as natural selection works solely by and for the good of each being, all corporeal and mental endowments will tend to progress towards perfection.”
Only in recent years have geologists and paleontologists begun to grasp the importance of catastrophe in our planet’s history. The theory of “punctuated equilibrium,” postulated by the paleontologists Niles Eldredge and Stephen Jay Gould in 1972, asserts that evolution proceeded not gradually, but in fits and starts, with long periods of stasis interrupted by explosions of new life forms. The geological record suggests that sudden proliferations of new species were made possible by massive dieouts, which in turn may have been caused by devastating impacts of extraterrestrial objects. The result of all this research has been the rise of a new catastrophism. Its implications go to the very question of human origins, and thus bear on considerations of where and how often intelligence may have arisen on other planets.
I discussed this issue over lunch at Berkeley one October afternoon with the geologist Walter Alvarez. Originally we’d planned to get together the previous Wednesday, but on Tuesday a major earthquake had struck the Bay Area. Now, a week later, San Franciscans were still walking around in a daze, as if seeking their sea legs, while aftershocks every few hours set the ground to rolling like rangy swells in a sluggish ocean. Buildings had collapsed during the quake, as had the upper deck of a freeway, and sixty-two persons had died. But the effects of the earthquake had not all been bad. People talked of pulling together, helping their neighbors, and coming to realize how much they felt a part of the city in which they lived. “For cultures to be mature there has to be some internalization of a tragic metaphor,” Kevin Starr, a historian, told Robert Reinhold of The New York Times. Starr suggested that the earthquake “gives a certain depth to a culture in San Francisco that was in terminal pursuit of the trivial.”
It was in these circumstances that Walter and I got to talking about the efficacy of unwelcome change. “When I was a student I read a lot about the ancient Greeks,” Walter said. “It really surprised me to learn that many of their great cultural contributions coincided with the collapse of their society during the Peloponnesian War.”
“There’s a lot to that,” I replied. “It reminds me of the monologue Graham Greene wrote for Orson Welles in The Third Man. Welles, you’ll recall, plays Harry Lime, a racketeer who’s selling adulterated penicillin on the black market in Vienna just after the war. It’s bad business; children are dying because they’ve been given this penicillin. Anyway, Harry is confronted by his old friend, played by Joseph Cotten, in the scene on the Ferris wheel, and Harry says something like, ‘In Italy for thirty years under the Borgias they had warfare, terror, murder, and bloodshed, but they produced Michelangelo, Leonardo da Vinci, and the Renaissance. In Switzerland they had brotherly love. They had five hundred years of democracy and peace. And what did that produce? The cuckoo clock.’”
Walter had become something of an expert on catastrophes. In the early 1970s his work as a geologist took him to a gorge in Italy, near the city of Gubbio in the Umbrian Apennines. Here strata laid down over the aeons have been reared up above ground and tilted conveniently on their side, so that simply by walking down a path one can examine a geological record spanning tens of millions of years. During the weeks that he worked near Gubbio those summers, Walter became increasingly interested in a particular set of strata that dated from the so-called KT boundary—the break between the Cretaceous period and the Tertiary, 65 million years ago, when the dinosaurs died out. (The “K” stands for Kreide, the German word for “Cretaceous”; the “T” for Tertiary.) Not only the dinosaurs but many other species met their fate at that time: The majority of species were rendered extinct.
The shadow of death is marked in the Gubbio cliffs by a half-inch-thick layer of red clay. Walter chiseled out a piece of this KT clay, took it back to Berkeley, and showed it to his father, the physicist Luis Alvarez. An inventive man—he held forty patents, on everything from bifocal spectacles to a stabilizing system for video cameras, and once whiled away a stay in the hospital by designing a new piece of medical equipment—Alvarez père had been working with his son on developing ways of age-dating rock samples. He suggested they try to determine how long it had taken to deposit the clay by examining its content of the element iridium.
Iridium is classified as a “precious” metal, meaning that it is rare. It is rare because it bonds readily with iron: When the earth was still molten most of its iron sank to the core, carrying the planet’s original iridium with it, and consequently little iridium remains in the crust. Such was not the case with comets, asteroids, and meteors. Because these objects are small they lack much of a gravitational field, and consequently their heavy metals, rather than being drawn to the core when they formed, remained homogenized throughout them.
The earth accumulates tons of meteorite debris every day, most of it grains of microscopic size; the earth’s surface is constantly being bathed in iridium-bearing cosmic dust. The Alvarezes’ idea was to use the iridium as a clock. Assuming that the rain of meteorite dust had fallen more or less steadily over the centuries, a slightly higher concentration of iridium in a given stratum would indicate that it had taken longer to form than had another, similar stratum that was poorer in iridium.
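The Alvarezes’ clock amounts to a simple proportion: under a steady rain of cosmic dust, the iridium in a stratum scales with the time the stratum took to accumulate. A minimal sketch in Python (the concentration figures are illustrative, not the actual Gubbio measurements):

```python
# Iridium as a clock: assuming a constant rain of cosmic dust, the iridium
# content of a stratum is proportional to how long the stratum took to form.
# All numbers here are illustrative assumptions, not measured values.

def deposition_time_ratio(iridium_a, iridium_b):
    """Relative time to deposit stratum A versus stratum B,
    for strata of equal thickness under a steady iridium rain."""
    return iridium_a / iridium_b

# A stratum with 1.5 times its neighbor's iridium took ~1.5 times as long:
print(deposition_time_ratio(9.0, 6.0))    # -> 1.5

# The surprise at Gubbio: hundreds of times the iridium of adjacent layers,
# implying an absurdly long deposition time at any steady rate -- and
# pointing instead to a sudden delivery of iridium from above.
print(deposition_time_ratio(300.0, 1.0))  # -> 300.0
```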
The idea seemed promising, but when Walter Alvarez’s sample of the KT layer was painstakingly analyzed by two Berkeley chemists, Frank Asaro and Helen Michel, the results were startlingly different from what anyone had expected. The layer of clay coinciding with the death of the dinosaurs turned out to contain not a little more or less iridium, but hundreds of times as much iridium as did the layers above and below it. So high an iridium abundance could not have been created by a constant drizzle of cosmic dust over any plausible amount of time. Instead, there must have been a sudden downpour.
After a year of pondering the matter, the Alvarezes hypothesized that a large extraterrestrial object struck Earth 65 million years ago, triggering the KT dieouts. Either a comet or an asteroid (many Earth-approaching asteroids are the hulks of exhausted comets) would suffice. A comet nucleus six miles in diameter that impacted the earth at a velocity of forty-five thousand miles per hour would unleash far more force than an all-out nuclear war. Authorities in mass destruction, of whom there are many in our troubled times, say that while so formidable an explosion would have killed virtually everything within sight, dispatching three-hundred-mile-per-hour winds across whole continents and sending tsunamis racing through the oceans of the world, its most dolorous effects would have manifested themselves over the ensuing weeks and months. The fireball, they calculate, lofted tons of debris into orbit, producing millions of ballistic missiles, the heat from which, on reentry, set forests ablaze all over the globe. Dust sucked into the upper atmosphere turned the earth dark and cold, while soot from thousands of fires touched off by the blast compounded the damage; the air became sufficiently opaque to eclipse a brontosaur’s view of its own feet. If the comet happened to hit an ocean—as is likely, given that four-fifths of the surface of the earth is covered by water—huge quantities of water vapor were injected into the atmosphere, so that impact winter was followed by greenhouse summer. Nitrogen and oxygen combined in the heat of the explosion to produce a nitric acid rain fatal to marine plants and invertebrate animals, whose calcium carbonate shells are acid-soluble. These and other drastic changes in the climate presumably would suffice to doom many species of plants and animals, killing off some directly and others through starvation.
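The comparison with nuclear war is back-of-the-envelope physics: kinetic energy is one half the mass times the velocity squared. A sketch of the arithmetic, assuming an icy nucleus density of about 1,000 kilograms per cubic meter (a figure I have supplied; the text gives only the size and speed):

```python
import math

# Kinetic energy of a six-mile comet nucleus striking at 45,000 mph.
# Assumed (not from the text): a density of ~1000 kg/m^3 for dirty ice.
MILE_M = 1609.34        # meters per mile
MPH_TO_MS = 0.44704     # (m/s) per mph
MEGATON_J = 4.184e15    # joules per megaton of TNT

radius_m = 6 * MILE_M / 2
density = 1000.0                                   # kg/m^3, assumed
mass_kg = density * (4 / 3) * math.pi * radius_m**3
speed_ms = 45_000 * MPH_TO_MS

energy_j = 0.5 * mass_kg * speed_ms**2
energy_mt = energy_j / MEGATON_J
print(f"{energy_mt:.1e} megatons of TNT")
# On the order of ten million megatons -- thousands of times the combined
# yield of the world's nuclear arsenals at their Cold War peak.
```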
After lunch I examined a piece of KT strata under a stereo microscope at Walter’s lab. At the bottom of the sample was a thick white limestone layer rich in “forams”—Foraminifera, the skeletons of one-celled animals shaped like little spiral galaxies, the same organisms whose shells are found in the White Cliffs of Dover. (Forams proliferated so abundantly in the age of the dinosaurs that an entire geological period is named for them: Cretaceous means “chalky.”) Bisecting Walter’s sample was the iridium-bearing layer, a turbulent streak of frozen trouble, undulating grays going to brick red. Above that came a bland, salmon-colored stratum; it was inhabited by only a few forams, all members of a single species. “All existing forams are descendants of just a few guys that made it through,” Walter said, looking over my shoulder.
A good scientific theory has at least two things in common with a good work of literature. First, it aspires to be accurate (or, more ambitiously, “true,” as one might say of a poem or a novel). Second—and this is less widely appreciated outside of science—the theory should be evocative. The impact theory may well be accurate—the preponderance of evidence supports it—but it also did a good job of stirring up scientists’ emotions. People care about the history of life on our planet, and when you present a new idea of how millions of species met their doom, you get their attention. It was the emotional force of impact theory, as much as its intellectual rigor, that got astronomers, paleontologists, volcanologists, atmospheric scientists, geologists, physicists, and chemists working and thinking about it, producing fresh ideas and evidence pro and con.
Geologists in particular had reason to be grateful for the upheaval. So long as our planet’s past had been presumed placid, theirs had been a tedious science, shunned by undergraduates as dull and dirty. “Geology was dull in the fifties,” Walter Alvarez conceded, as we talked over lunch. “I’m a little bit embarrassed that I started out as a geologist.” Then plate tectonics, the once reviled and later resurrected theory that the continents rest on floating plates that move, began to revitalize the field. The relentless grinding of the plate that supports the Pacific Ocean against the one that holds up North America had produced the recent San Francisco earthquake and put geologists on the six o’clock news, their research suddenly perceived as having immediate bearing on questions of life and death. Impact theory brought new excitement to the discipline that had once lulled students to sleep with recitations about dripstone deposits and the chemical formula for hornblende.
The geologists, some of them inspired by impact theory and others scandalized by it, went to work looking for iridium excesses in KT boundary strata at sites elsewhere than Gubbio. (This was an important test: Were the iridium excess limited to a single site, it presumably would have come from too local an event to have caused global extinctions.) They found them—KT strata dug up in Spain, Denmark, New Zealand, on dry land and on the floor of the Pacific north of Hawaii yielded high iridium concentrations—and they found additional evidence of impact as well, in the form of shocked quartz, grains of soot, and tiny spheres of fused silicates like those left behind by nuclear weapons tests.
Meanwhile, others searched for a “smoking gun” crater that would provide direct evidence of the impact itself. So old a crater would long since have been buried in sediment, but could be located by studying deviations in the gravitational and magnetic fields—sediments are lighter and less metal-rich than the stone removed by the impact—then dated by taking core samples. In 1990 two geophysicists identified a promising candidate in the Yucatan peninsula, where circular gravitational and magnetic anomalies had long attracted the attention of scientists and mystics alike. Dubbed Chicxulub in honor of the Mayans whose civilization flourished in the Yucatan a millennium ago, this 110-mile-diameter crater would be the largest yet found on Earth. Even so, it could account for only half the energy estimated to have been released in the KT catastrophe. Researchers hypothesized that the Chicxulub crater was gouged out by one among two or more objects—a fragmenting comet nucleus, perhaps—that hit the earth simultaneously, and they began looking for other craters. By 1991, signs of candidate craters had been found near Haiti, in Iowa, and in the Soviet Union.
Evidence continued to mount suggesting that impacts may have been responsible, not only for the KT catastrophe, but for scores of the dieouts that dot the earth’s long and troubled history. Iridium or other hints of bombardment from space were identified in strata that coincide with the Frasnian catastrophe, 367 million years ago, when giant meteorites punched holes in Sweden and Canada and tsunamis devastated shallow-water ecosystems around the world; at the Permian-Triassic mass extinction, 250 million years ago, when nine tenths of all sea-dwelling species perished; at the Turonian-Coniacian boundary, 90 million years ago, when the oceans all but drowned the continents and the sea urchins were especially hard hit; at the Eocene-Oligocene transition, 35 million years ago, when winter descended and the polar ice caps grew, while the departure of numerous families of tree-dwelling and burrowing mammals cleared the way for the rise of rabbits, dogs, squirrels, gophers, and shrews; and at the mid-Miocene, some 12 million years ago, when the Antarctic ice cap expanded and mammals died out in great numbers.
Critics of the Alvarez impact theory argued that mass extinctions seem to have transpired “stepwise,” over the course of millions of years, and they emphasized that a single comet impact cannot produce a gradual dieout. A number of impacts spread over a few million years could have done the job, but no astronomical mechanism was then generally known that could produce such an effect. Certainly one asteroid or comet may hit the earth from time to time, but why might there have been showers of such objects?
In search of an answer to this question let us raise our sights from the fossil-bearing strata and eroded craters of Earth, and look out to the limits of the solar system. There, far beyond the orbits of Neptune and Pluto and the hypothetical Planet X, lies the habitat of the comets. Trillions of comets are believed to reside there, in a spherical assemblage called the Oort cloud (after Jan Oort, the Dutch astronomer who postulated its existence). The Oort cloud is big: It begins at about a thousand times the distance of Neptune from the sun, and extends a third of the way to the nearest star. The comets there are not the glowing apparitions we see in astronomical photographs; it takes proximity to the sun to make them sprout tails. Comets in the Oort cloud are naked and unglamorous, each a dirty iceberg a mile or so in diameter, inky as lampblack, a clutch of dirt and snow that has remained frozen since the solar system was born.
Normally the comets of the Oort cloud plod in stable orbits around the distant sun. But once in a while something—the gravitational pull of a passing star, perhaps—tugs at the cloud and perturbs their orbits. Billions of comets, shaken loose by the disturbance, then fall toward the sun. Most of these never make it past Jupiter and Saturn; the gravitational fields of these giant planets either herd them into new orbits in the inner Oort cloud or fling them out of the solar system altogether. But some settle into new, smaller orbits that intersect those of Earth and the other inner planets. Just how many comets are thus escorted into a threatening position is uncertain, owing in part to our ignorance of the true population of the Oort cloud and of all the dynamic variables involved, but as a rough estimate we might expect such a shower to send as many as a billion comets into Earth-crossing orbits, of which anywhere from two to several dozen would hit the earth.
Envision the fatal passacaglia. It takes two or three million years for the perturbed comets to fall into the inner solar system. During this long prelude the night skies of earth very gradually bloom with comets; eventually scores are visible, like some mysteriously multiplying species of celestial paramecia. Nearly all of these comets wander harmlessly past, first growing brighter as they approach the sun and then fading away as they depart a few months later. But then comes one that does not fade, that instead keeps getting larger and brighter, night after night, a dreadful milk-white eye that grows until it embraces the sky, putting the stars to flight and banishing the darkness. This comet is coming straight at you, doomed creature.
When it hits, the heavens deliver up hell on earth: Waves shatter undersea coral reefs, dust and soot blanket the sky, water vapor sucked into the air sets off greenhouse heating, acid rain defoliates the forests, and flora and fauna die in wholesale lots, of starvation or poisoning or sheer disaccommodation. Nor is that the end of it. There is likely to be another impact soon, and another, an average of ten or so during the next million years. Creatures that survive one catastrophe expire in the next; by the time the shower abates, the lands of Earth are nearly as sterile as a bacterial colony bathed in penicillin.
This grim scenario fits the fossil record pretty well. Repeated impacts occasioned by a comet shower can produce the “stepwise” patterns that some scientists think they see in the geological record of each extinction event. At the Eocene-Oligocene boundary, for instance, species of plankton appear to have become extinct in four distinct and sudden episodes that took place over some one to three million years. This would be expected, if indeed the extinctions were caused by four or more comet impacts occurring within a comet shower that lasted three to four million years.
Left unexplained at that stage in the development of impact theory was the triggering mechanism that touched off each comet shower in the first place. An obvious candidate was the chance encounter of the sun with another star, but such random celestial flybys occur infrequently. Stars are scarce where we live, out near the edge of the galactic disc—scatter a few grains of sand across all of North America and you have a pretty good representation of the enormous volumes of space surrounding the average star in our precinct of the Milky Way galaxy—and dieouts have occurred more often in the earth’s history than chance stellar encounters would permit. Something was missing.
A clue came in 1984, when two University of Chicago geologists, David Raup and John Sepkoski, published a seminal paper suggesting that cosmic bombardments occur periodically. Raup and Sepkoski analyzed data on the tenures of some 3,500 families of marine animals. Charted on a graph, the extinction rates of these families showed sharp peaks—dieouts—coming in cycles of every 26 million years or so. “The implications of periodicity for evolutionary biology are profound,” the two geologists wrote. “The most obvious is that the evolutionary system is not ‘alone’ in the sense that it is partially dependent upon external influences more profound than the local and regional environmental changes normally considered.”
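The statistical claim is easy to caricature in a few lines: list the dates of the extinction peaks, take the spacings between successive peaks, and ask whether they cluster around one value. The dates below are illustrative round numbers, not Raup and Sepkoski’s actual series:

```python
# Hypothetical extinction-peak dates, in millions of years ago.
dieouts_mya = [11, 38, 65, 91, 117, 144]

spacings = [b - a for a, b in zip(dieouts_mya, dieouts_mya[1:])]
mean_period = sum(spacings) / len(spacings)

print(spacings)               # -> [27, 27, 26, 26, 27]
print(round(mean_period, 1))  # -> 26.6
# A tight cluster around ~26 million years is what "periodic" means here;
# impacts at random would scatter the spacings far more widely.
```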
The suggestion that mass extinctions had been visited upon the earth at regular intervals, as if by the tolling of a cosmic bell, was so outlandish that Luis Alvarez himself did not believe it at first. He stormed into the office of his prize student, the physicist Richard Muller. “Rich,” he said, as Muller recalls their conversation, “I just got a crazy paper from Raup and Sepkoski. They say that great catastrophes occur on the earth every 26 million years, like clockwork. It’s ridiculous. I’ve written them a letter pointing out their mistakes. Would you look it over before I mail it?” Muller reviewed the evidence, but found it more likely that Raup and Sepkoski were right and Alvarez wrong. When he said so, Alvarez reacted with indignation. The extinctions, Alvarez maintained, were due to asteroid or comet impacts, and astronomers know that such impacts occur at random intervals. Muller held his ground. There ensued one of the more interesting combinations of scientific and philosophical discussion in the recent history of science.
“Suppose someday we found a way to make an asteroid hit the earth every 26 million years,” Muller suggested to Alvarez. “Then wouldn’t you have to admit that you were wrong?”
“What is your model?” Alvarez demanded. He wanted concrete hypotheses, not airy speculations.
“It doesn’t matter!” Muller replied. “It’s the possibility of such a model that makes your logic wrong, not the existence of any particular model.”
“How could asteroids hit the earth periodically?” Alvarez repeated, his voice quavering with anger. “What is your model?”
Muller writes that he thought, “‘Damn it! … If I have to, I’ll win this argument on his terms. I’ll invent a model.’ Now my adrenaline was flowing. After another moment’s thought, I said: ‘Suppose that there is a companion star that orbits the sun. Every 26 million years it comes close to the earth and does something, I’m not sure what, but it makes asteroids hit the earth.’
“I was surprised by Alvarez’s thoughtful silence,” Muller recalled. “He seemed to be taking the idea seriously and mentally checking to see if there was anything wrong with it. His anger had disappeared.” Alvarez decided not to send in his letter after all.
Muller’s further work on the idea, much of it in collaboration with the astronomers Marc Davis and Piet Hut, evolved into what he dubbed the “Nemesis hypothesis.” The idea was that the sun is a double star, its companion—Nemesis—a dwarf in a highly elliptical orbit that brings it close to the sun every 26 million years. When Nemesis swings past the sun its gravitational field perturbs the Oort cloud, touching off a comet shower. The triggering mechanism behind the dieouts had been found—maybe.
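Kepler’s third law pins down roughly how large such an orbit must be: taking the combined mass of sun and companion as about one solar mass (an assumption; a dwarf of a tenth of a solar mass barely changes the answer), a period P in years gives a semi-major axis of about P to the two-thirds power, in astronomical units. A quick check:

```python
# Kepler's third law: a^3 = M * P^2, with a in AU, P in years,
# M in solar masses. Taking M ~= 1 is an assumption for this sketch.
AU_PER_LY = 63_241   # astronomical units per light-year

period_yr = 26e6
semi_major_au = (period_yr**2) ** (1 / 3)

print(f"{semi_major_au:.2e} AU")                       # roughly 9e4 AU
print(f"{semi_major_au / AU_PER_LY:.2f} light-years")  # about 1.4 ly
# A highly elliptical orbit with this semi-major axis swings out toward
# twice that distance at its far end -- deep into the Oort cloud, which
# is just where the Nemesis hypothesis needs it to reach.
```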
To test the hypothesis, Muller and his colleagues set out to find Nemesis. They reckoned that it would have a mass of only about one tenth that of the sun (were it more massive it would be brighter, and astronomers would have discovered it already) and that it must now be near its maximum distance from the sun, inasmuch as the last dieout, that of the mid-Miocene, came some 12 million years ago, roughly half of Nemesis’ putative 26-million-year orbital period. They equipped an old thirty-inch telescope in the hills near Berkeley with three computers, consulted a catalog of red dwarf stars, and started taking pictures of their positions with a CCD electronic imaging system. Two images were made of each star, on nights six months apart, when the earth was at opposite sides of its orbit around the sun. If one of the dwarfs were Nemesis, it would lie only 2.4 light years from the sun, much closer than any previously known star, and so would betray itself by a marked shift in its apparent position against the background stars every six months. This displacement, which astronomers call parallax, is caused by the changing position of the earth in its orbit around the sun, and is the basis for measurement of the distances of all stars in our celestial neighborhood.
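The parallax arithmetic behind the search is compact: a star’s parallax in arcseconds is the reciprocal of its distance in parsecs, and images taken from opposite sides of the earth’s orbit show twice that angle. A sketch using the 2.4-light-year figure from the text:

```python
# Parallax in arcseconds = 1 / (distance in parsecs).
LY_PER_PARSEC = 3.2616

def parallax_arcsec(distance_ly):
    return LY_PER_PARSEC / distance_ly

nemesis = parallax_arcsec(2.4)   # Nemesis near its putative maximum distance
proxima = parallax_arcsec(4.24)  # nearest known star, for comparison

print(round(nemesis, 2))         # -> 1.36
print(round(2 * nemesis, 2))     # -> 2.72  (shift across the six-month baseline)
print(round(proxima, 2))         # -> 0.77
# Nemesis would jump by well over an arcsecond between the paired images --
# an unmistakable signature among the cataloged red dwarfs.
```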
I talked with Muller one afternoon in 1989 in a redwood grove at the Lawrence Berkeley Lab. He was close to completing his survey of the northern skies; if he found nothing, the next step would be an all-sky survey from the southern hemisphere. “I doubt that the Nemesis hypothesis is taken seriously by most astronomers,” he said. “I have mixed feelings about that,” he added. On the one hand, Muller said, he would prefer that his theory found adherents among the astronomers, but “on the other hand I’m not really sad that the crowd isn’t out there looking for Nemesis, because we’d really like to find it.” If an inventory of all the red dwarfs in both hemispheres failed to identify Nemesis, he added, he would abandon the theory and begin thinking anew about what might cause periodic comet showers.
Personally, I was less pessimistic than Muller about the ease with which his hypothesis could be disproved. Our galaxy is known, from dynamical studies, to harbor twice as much mass as the visible stars can account for, and this “dark matter” might take the form of brown dwarfs, stars too dim to be seen in visible light at all. If Nemesis is such a star, it would require a special sort of telescope—an infrared telescope in space would do nicely—to detect it. So Nemesis might exist even if Muller’s initial search failed to locate it.
After bidding Muller good-bye I strolled in the grove, watching the sun dapple through the redwood branches. If Rich is right, I thought, the sun will never look the same again. No longer will we look at it and say, “There is our sun.” Instead we will say, “There is one of our suns, the bright one, the giver of life. Out in space lurks another one, a dark star, the star of death.”
Impact theory is still the subject of active controversy, and there is no shortage of capable critics who question whether impacts caused any of the great dieouts, whether they have occurred periodically, and whether they could have been caused by a dwarf star orbiting the sun. But whatever may be its fate, the theory has worked to call fresh attention to the old lesson that life owes a debt to death. The prospect of a celestial mechanism behind the dieouts has opened up a radical—and to me, refreshing—new way of thinking about the history of life on Earth.
In Darwin’s day, when evolution was thought of as gradual and progressive, the ascent of each new species was viewed as an upwardly mobile process in which ever-better life forms came, however slowly, to take up their rightful place as superior to the less well-adapted forms of life that they displaced. Extinct creatures represented failures, and surviving species—notably ourselves—represented success. To succeed is better than to fail, and so species that survived were thought of as somehow better than those that perished. (Herbert Spencer’s term “survival of the fittest” encouraged this assumption; strictly speaking it referred simply to organisms that better “fit” their environment, but to the general public, especially in Victorian England, it connoted physical fitness and superiority.) And so evolution came to be characterized in the schoolbooks as an ascending staircase of ever-better organisms—in other words, as progress. From fish to mammals, from little horses to big horses, and—most satisfyingly of all—from apes to primitive man to Homo sapiens, nature was thought to be patiently improving her handiwork.
All this changes utterly once we entertain the notion that evolution dances to a jazzier rhythm than the stately waltz that Darwin and his contemporaries imagined. The key to understanding the new outlook is to appreciate that massive dieouts spell deliverance from the tyranny of the extant. What was catastrophic for the species that expired becomes wonderful for those that survive: They are presented with a Garden of Eden, a clean slate, a frontier of opportunity.
Consider the economics of survival. Our planet offers only so many ways for a creature to make a living. Given a reasonable amount of time in a fairly stable environment, various species will arise and adapt until they have filled every ecological niche. The end result is a steady-state situation in which all the available jobs are held by creatures expert at doing that job. Interlopers need not apply; the chance is small that a new species will appear that can do the job better than the existing species do and thus find an opening in a saturated ecologic market.
Stasis is especially uncomfortable for the freaks—the mutants within each species. These are much more common than is generally assumed: On the rare occasion when a species is given an opportunity to flourish without significant competition, mutants are found in surprising numbers. This was the case, to cite one instance, when Bairdiella, a small marine fish from the Gulf of California, was introduced to the Salton Sea in 1952. With food abundant and competition nonexistent, Bairdiella flourished—the mutants as well as the normal fish. Nearly a quarter of the first spawning of Bairdiella were visibly deformed. Some were blind, or had no lower jaw, or were hunchbacked, or had two or three spines. As soon as food supplies began to run short, however, the incidence of mutated fish dropped to a few percent; less fit for the environment, the mutants could not compete successfully for food.
But when the environment changes radically, the freaks, some of them, have a chance. If, say, plankton disappear and only fish scales are available for food, a fish that has a deformed jaw may turn out to be better able to feed than are those with normal jaws; the tribe of deformed fish will then multiply, until a freak jaw becomes the norm. Now the formerly normal varieties will tend to die out, so that wholesale disappearances of taxa are succeeded by an explosion of strange new forms. Catastrophe sets the stage for the revenge of the nerds.*
All the high-paying, big-animal jobs in the Cretaceous were filled by dinosaurs. They held on to them for 130 million years (which is twice as long as all the time that has elapsed since their demise) and did them so well as to inspire our enduring respect. The mammals, meanwhile, eked out a living at marginal, furtive, small-animal jobs; in the age of the dinosaurs, no mammal grew larger than a house cat.
Then came catastrophe—a comet intruder, we presume—and the beautifully adapted dinosaurs found that they had no home in a newly altered world. They died out, but the small mammals that made it through found themselves in vastly improved circumstances. Competition was all but nonexistent; the extinction of most of the earlier species left an enormous variety of positions open, and many a freak mammal found a paradise in the recently ruined Earth. Wild-eyed insectivores took wing as bats, navigating skies abandoned by the extinct pterosaurs; hoofed ruminants grazed on plains newly freed from dinosaur predation; and primates took to the now-safe trees, eventually to be aided in swinging from branch to branch by their opposable thumbs, which were to come in handy in making hand-axes and radio telescopes.
Such was our Eden, and we live in it still—if, as Ben Franklin said of the new republic, on his way out of a session of the Constitutional Convention, we can keep it—but it arose from the violence of a falling star. “Had it not been for the large comet that hit 65 million years ago, mammals might never have wrested the earth from the dinosaurs,” Rich Muller writes. And if the mammals had not flourished, who is to say whether intelligence would have appeared? The message of the new catastrophism is that biological inventions like the opposable thumb and burgeoning neocortex arise by celestial accident. If so, we owe our existence—and, therefore, that of intelligent life on Earth—to star-engendered violence.
To search for intelligence elsewhere in the universe is, therefore, to look not for the predictable fulfillment of a progressive plan but for the unpredictable aftermath of a disaster. If impact theory is correct, the most probable abodes for intelligent life are not the placid, untroubled planets, but dangerous worlds fraught with catastrophe. In celestial as in mundane life, the place to seek wisdom is where life is hard.
In a nuclear age, such a fable must bear the cautionary moral that if we press the wrong button we will see what catastrophe looks like from the losers’ side. We will not kill off all life on Earth; rhetoric to the contrary notwithstanding, that feat remains beyond our capacity for destruction. We could, however, extinguish many species, especially the land-dwellers highest on the food chain, us in particular. There is a fearful symmetry to the situation, in that the damage we did would harm us more than anyone else. Once the dust had settled, the radiation levels had died down, and the cockroaches had run through their initial rampages, new species would appear. Some of these—the termites, perhaps, gifted with behavioral flexibility and impressive engineering skills—would do quite nicely, their Eden having arisen from the ashes of our demise. Their prayers would have been answered. But would intelligence arise among them? And would they be better off if it did?
*A point that was missed, incidentally, in the film of the same name. When in the movie’s climactic scene a “nerd” takes the microphone at a football rally to protest campus persecution of his kind, he tells the “beautiful people,” the football players and cheerleaders, “There are more of us than there are of you.” The real beauty of nerds, however, is not that they are numerous but that they are different. Because they are different they have skills that may prove valuable in a changing world—when, for instance, they leave school and enter the wider world, where the ability to program a computer may ultimately yield more rewards than the ability to throw a football forty yards.