The Manichean Heresy

Dear Posterity, If you have not become more just, more peaceful, and generally more rational than we are (or were)—why then, the Devil take you.

—Albert Einstein, message for a time capsule

Some races wax and others wane, and in the short space the tribes of living things are changed, and like runners hand on the torch of life.

—Lucretius                                                           



King Belshazzar of Babylon was into some serious partying—he and his several thousand lords, princes, wives, and concubines dancing and carousing and drinking wine from bejeweled goblets and praising false gods of gold and silver and wood and stone—when the moving hand appeared and inscribed the words MENE, MENE, TEKEL, UPHARSIN. Belshazzar’s face fell, his knees knocked together—and that was before he knew what the words meant. His countenance darkened further once the translators interpreted the message: “Thou art weighed in the balance, and art found wanting.” Needless to say, the moving hand knew what it was writing about: Belshazzar was murdered before sunrise, or so says the Book of Daniel. The year was 539 B.C.

These days (to continue the sermon) we in the technologically developed world have been treating ourselves to quite a party, and are beginning to sense that the hour is growing late. We’ve plundered the resources of our little planet as if there were no tomorrow, polluted the air and water, torn holes in the atmosphere, killed off entire species of living things without even bothering to learn they were there, and wired up the world with enough lethal weapons to wreck civilization overnight. It’s hardly surprising that we sometimes wonder whether the human race is being weighed in the balance, and found wanting.

The fossil record offers little solace. Ninety percent of the species that ever lived on Earth eventually vanished, many of them the victims of global catastrophes that in some ways resemble nuclear war, global warming, ozone depletion, and the other unpalatable futures we’re busily making possible for ourselves. We are just one more species; what is to prevent us from joining the silent majority?

As this is a book about science, I will seek to quantify our quandary in terms of a scientific formula. Called the Drake equation, after the astronomer Frank Drake, whom we encountered earlier as the first to run a SETI search, it represents a thumbnail way of estimating the number of intelligent civilizations in the galaxy. It looks like this:

N = N* × fp × ne × fl × fi × fc × L

The Drake equation aims at estimating N, the number of communicative worlds in the Milky Way galaxy today. It does this by asking seven questions, represented by the seven terms on the right side of the equation. They are:

N*: How many stars are in the Milky Way galaxy? (About 400 billion.)

fp: How many of these stars have planets? (Perhaps half, but to be conservative, let’s make it ten percent, or 40 billion.)

ne: How many of these planets are suitable for life? (If the solar system is typical, then each star has about ten planets, one of which, like Earth, orbits in a temperate zone where water is found in all three forms—as liquid, solid, and vapor. There life as we know it might exist. Making the perhaps overgenerous assumption that every such system has one such planet, we estimate ne at one per system. If so, the galaxy contains roughly 40 billion fertile planets.)

fl: On how many of the planets where life can develop has life actually appeared? (Life began quite early in Earth’s history, so this fraction might approach one hundred percent. But even if we estimate it at only one in ten, the yield is still four billion life-bearing planets.)

fi: On how many planets does intelligent life develop? (As discussed earlier, the origin of intelligence is not well understood, and may be due in large measure to chance. If the odds against intelligence are a hundred to one, we still would have 40 million planets with intelligent life in the Milky Way galaxy.)

fc: How many of these intelligent species acquire interstellar communications technology? (We went from the stone age to radio telescopes in only ten thousand years, so perhaps the leap from intelligence to communications technology typically occurs rapidly. I’ll estimate fc at one in ten, or four million planets.)

L: How long do technologically adept species typically survive?

It is here, at the last term in the Drake equation, that SETI turns most poignant—for two reasons. The first is that since we know of no civilization other than ours, assigning a value to L amounts to guessing at our own prospects for survival. The second is that the solution to the Drake equation—our best estimate of how many communicative worlds there are in the galaxy today—turns out to be highly sensitive to their average lifetime. If, for instance, technologically advanced societies characteristically endure for ten million years, then our calculations suggest that there are something like four thousand of them in our galaxy right now, and the prospects of finding one via several decades of a SETI effort are not bad. If, on the other hand, technological societies typically last only about ten thousand years, then the same equation concludes that there are only four in the galaxy today, and SETI becomes a forbidding matter of searching for centuries.
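The arithmetic above can be sketched in a few lines of Python. This is an illustrative reconstruction, not part of the original text; the division by an assumed ten-billion-year galactic age is what converts the total count of communicative worlds that ever arise into the number "on the air" at any one moment, as the chapter's figures of four thousand and four imply.

```python
GALAXY_AGE_YEARS = 10e9  # assumption: the galaxy is roughly 10 billion years old

def drake(n_stars=400e9, f_planets=0.1, n_habitable=1.0,
          f_life=0.1, f_intelligence=0.01, f_communication=0.1,
          lifetime_years=1e7):
    """Estimate N, the number of communicative civilizations in the galaxy now."""
    # Total communicative worlds that ever arise, per the chapter's estimates:
    # 400 billion stars -> 40 billion with planets -> 40 billion habitable
    # -> 4 billion with life -> 40 million intelligent -> 4 million communicative.
    worlds_ever = (n_stars * f_planets * n_habitable *
                   f_life * f_intelligence * f_communication)
    # The fraction lifetime/age tells us how many are broadcasting today.
    return worlds_ever * lifetime_years / GALAXY_AGE_YEARS

print(drake(lifetime_years=1e7))  # societies lasting 10 million years: ~4,000
print(drake(lifetime_years=1e4))  # societies lasting 10 thousand years: ~4
```

The steep sensitivity to L is plain: every other term held fixed, shrinking the average lifetime by a factor of a thousand shrinks N by the same factor.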

The L clock begins running once a society becomes capable of sending and receiving radio signals across interstellar space; we on Earth achieved this ability a few decades ago, so the local value for L is as yet less than a hundred years. If that is typical—if, in other words, we are destined to die out or to collapse into a pre-radiotelescope stage of technology within the next century or so, and if such a fate is typical of communicative worlds—then we are alone in the galaxy, and the prospects for SETI are bleak, a situation summed up in a little SETI ditty:

Of all the sad tales
That SETI might tell,
The saddest would be
A small value for L.

We can imagine a number of nonfatal reasons that alien civilizations might go off the air. They might pollute their surroundings with enough radio noise to block interstellar communication (something like that is beginning to threaten human SETI efforts today), or lose interest in SETI after having listened for a long time and heard nothing, or succumb to a jaded, inward-looking mind-set and turn their backs on the rest of the universe. But what troubles SETI thinkers most deeply is the possibility that advanced civilizations typically fall silent because they self-destruct. If such is the case—if technologically competent species are like cosmic mayflies that live but a day—then SETI is hopeless (and so is just about everything else).

There is something deeply satisfying about delving into doomsday scenarios, and centuries of such scab-picking have produced an abundance of opinions about why there may be no hope for the human race. Four of the more enduringly popular pessimistic scenarios have to do respectively with power, fallibility, aggression, and fate.

The power scenario argues that technology itself promotes self-destruction. To possess high technology is to manipulate power; power can devastate as well as create, and once a species possesses enough power to destroy itself, a single mistake may suffice to doom it to extinction. I call this the Phaëthon syndrome. Phaëthon, you may recall, was the mortal youth who tried to steer the chariot of the sun across the sky, a chore normally handled by his father, the god Apollo. The boy lost his grip on the reins and the sun fell from its appointed orbit, killing him and roasting the earth below. (As Ovid tells the story in his Metamorphoses, a ruined Mother Earth cried in lamentation, “See my charred hair. Ashes are in my eyes, across my face; have I earned this for my Fertility?”)

The more power technology puts in our hands, the greater looms the danger that calamity will result once we lose control of it. Our experience here on Earth to date certainly seems to bear out this point. In less than a century we have increased our energy resources a thousandfold, bringing unprecedented ease to the lives of hundreds of millions of people but so affronting Mother Earth that we have raised the specter of global ecological collapse. During the same period the power of the world’s weaponry has increased by more than a million times, owing principally to the proliferation of thermonuclear weapons—which, apropos the Phaëthon myth, work by virtue of nuclear fusion, the same mechanism that powers the sun. In the woeful catalog of lethal technological dangers, from global warming to chemical and biological warfare, nothing yet approaches the threat posed by nuclear weapons. The detonation of even a fraction of them would result in the greatest catastrophe in human history, one that could press Homo sapiens and many other species to the brink of extinction and perhaps beyond. As the physicist Kosta Tsipis of MIT observes, “We have the power to inaugurate events totally beyond our control.”

At this writing, owing to a welcome thaw in the cold war, the perception is becoming widespread that the threat of nuclear disaster has abated. The correct way to assess a hazard, however, is to multiply the probability of its occurring by the severity of the prospective outcome, and since the severity of nuclear war is for all practical purposes infinite, little comfort is to be taken in marginally reducing the odds that it will happen at a given time and place. For nuclear deterrence to work it must never fail, and never is a long time. Imagine that every day of your life you are required to bet on one spin of a roulette wheel. If a certain number comes up, the world will be destroyed; otherwise nothing will happen, and life will go on for another day. Imagine, further, that twenty years ago there were three fatal numbers, and that today there is but one. That means you are statistically safer than you used to be, in that the daily odds of annihilation have dropped from three in thirty-eight (there are thirty-eight numbers on an American roulette wheel) to only one in thirty-eight. Nevertheless, you cannot expect to keep winning forever; sooner or later your number is going to come up, and when it does the penalty will be horrible enough to overshadow whatever satisfaction you may have garnered from living on borrowed time. That is the predicament in which the human species remains, so long as we have anything like our present arsenals of fifty thousand nuclear warheads.
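The roulette arithmetic can be made concrete with a short calculation—a hypothetical sketch, not from the original text, assuming one spin per day on a 38-number American wheel:

```python
def survival_probability(fatal_numbers, days, wheel_size=38):
    """Probability of never landing on a fatal number over `days` daily spins."""
    return (1 - fatal_numbers / wheel_size) ** days

# Dropping from three fatal numbers to one improves the daily odds,
# but over years of daily spins the cumulative chance of escape
# remains vanishingly small.
year, decade = 365, 3650
print(survival_probability(3, year))    # the old wheel, after one year
print(survival_probability(1, year))    # the "safer" wheel, after one year
print(survival_probability(1, decade))  # the "safer" wheel, after ten years
```

Even at one fatal number in thirty-eight, the chance of getting through a single year of daily spins is under one in ten thousand—which is the chapter's point about never being a long time.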

It is too late, however, to hand the reins back to Apollo. Although we can and should reduce the size of our nuclear arsenals, we cannot unlearn the secret of nuclear fusion, or of genetic engineering, or the other varieties of power that threaten our future. We have no choice but to keep driving the chariot; our only hope is to learn to drive it competently. Here the salutary qualities are skill, foresight, and nerve. It may be worth remembering that Phaëthon crashed because he lost not his strength but his composure:

   When the unlucky Phaëthon looked down
From the top rim of heaven to small and far
Lands under him, he turned weak, pale, knees shaking,
And, in the blazing light, dark filled his eyes:
He wished he had not known his father’s horses,
Nor who his father was, he wished undone
His prayer
….
Then in quick terror he saw the sky’s scattered islands,
Where monsters rise: Scorpion’s arms and tail
Opening, closing across two regions of
The Zodiac itself; he saw the creature
Black, shining with poisoned sweat, about to sting
With arched and pointed tail. Then Phaëthon,
Numbed, chilled, and broken, dropped the reins.

That is where we find ourselves today—staring into the frightening blackness of an indifferent universe, face to face with a future in which we must somehow learn how to husband the power of the stars. Do we have what it takes to survive at so dangerous a juncture?

The second pessimistic paradigm, the fatal flaw, proposes that we do not. Arthur Koestler made this point in general terms, without worrying too specifically about what our fatal flaw might be: “Evolution has made countless mistakes,” he wrote.

For every existing species hundreds must have perished in the past; the fossil record is a waste-basket of the Chief Designer’s discarded hypotheses. It is by no means unlikely that Homo sapiens, too, is the victim of some minute error in construction—perhaps in the circuitry of his nervous system—which makes him prone to delusions, and urges him toward self-destruction.

Koestler’s formulation may err insofar as other species are concerned: As we have seen, multitudes perished in global catastrophes not because they were imperfect but if anything because they were too perfectly adapted to conditions that abruptly changed. But it makes better sense when applied to human beings, in that we, unlike the other animals, have amassed the power required to make ourselves extinct. Maybe we really are too dumb, too shortsighted, too provincial or selfish or loutish or frivolous to manage that power wisely, and are destined to get ourselves into a fix from which not even our vaunted adaptability can deliver us.

If so, I don’t see that there is much to be done about our plight. Conceivably we might get in touch with an older and wiser alien civilization that would teach us how to improve our chances of survival, but I doubt that their sagacity would help us out. We already know what we ought to be doing—we ought to love one another, treat the earth with reverence, act in ways of which our grandparents and grandchildren would approve—but too often we don’t do it. Barring death by cosmic accident, the question of whether we will survive will most likely depend on whether we deserved to survive. If we don’t, we didn’t. In that sense, we are indeed being weighed in the balance.

As the sole species in the universe responsible for the fate of the human species, we should, I think, reject any proffered solution that requires us to surrender our humanity. The application of eugenics—genetically altering humans in order to “improve” their dispositions—is one such alternative. Another is to abdicate our responsibilities to an alien or artificial intelligence, as in the plot of the science fiction film The Day the Earth Stood Still, which depicts an intelligent species that has delivered a measure of its destiny into the hands of robots irrevocably programmed to subdue or destroy anyone who commits a violent act. Solutions like these might improve our chances of success, but at a cost of rendering the victory hollow. Were the world’s best engineers to craft a “perfect” humanoid, a being free from all human error and frailty, and were this gleaming prototype set free to roam the streets, I suspect that a mob would promptly set upon it and tear it to pieces. We may not be perfect, but we are who we are, and survival is worthless if purchased at the price of our identity as human beings. What shall it profit a species to gain the whole universe, and surrender its soul?

Apocalyptic fatalism, the crudest of all imaginable roads to ruin, postulates that our demise is predetermined and so cannot be forestalled by anything we do or think. Fatalism is popular among religious fundamentalists who assert that we need not worry about the long-term future of our species because there isn’t any. Reasonably harmless so long as it is confined to prophets in sandals and sandwich boards, apocalyptic fatalism can be dangerous in the corridors of power. James Watt, President Reagan’s Secretary of the Interior, told a Congressional committee that Americans need not concern themselves with the long-term consequences of their environmental policies; “I don’t know how many future generations we can count on before the Lord returns,” he said. Reagan himself entertained the belief that newspaper headlines fulfilled the prophecies of ancient books predicting the imminent end of the world. Call me a nervous Nellie, but it makes me uneasy when decisions that affect the future of the global environment are being made by people who think the planet won’t last much longer anyway.

Apocalyptic fatalism has its roots in the religious opinion that Man is unworthy of God, a view readily corrupted into the heretical conviction that Man is unworthy of existence itself. (I call this heresy because it permits us to abdicate responsibility for the welfare of our fellow creatures and our descendants, and if that’s not a sin, I don’t know what is.) But fatalistic prognostications have proliferated in secular circles as well. Thirty years ago the English physicist C. P. Snow predicted that nuclear war was imminent, “adding,” as Charles Krauthammer recalls, “that his view was not a matter of opinion or speculation, but scientific certainty.” In 1968, the biologist Paul Ehrlich predicted that by 1983 the American wheat harvest would have dropped below 25 million metric tons, due to pollution, overpopulation, and pesticides, and that food rationing would have been imposed. (The 1983 U.S. wheat crop surpassed 76 million tons.) The Club of Rome, in a widely quoted study based on sophisticated computer models, predicted in 1972 that the world would run out of gold, silver, mercury, and tin by the year 1990; that hasn’t happened, either.

My intention in taking these prognosticators to task is not to belittle their effort, but to point out how hard it is; as the physicist Niels Bohr remarked, “It is very difficult to make an accurate prediction, especially about the future.” Human affairs are hard to predict because human beings are adaptable and creative, and these qualities do not lend themselves to computer forecasts. All rising curves that show unwelcome trends in human affairs—whether of population growth, mineral depletion, or CO2 in the atmosphere—will approach infinity if extended far enough, but it is we who dictate the curves and not vice versa. Intelligence renders the future uncertain for better as well as for worse; it is no more realistic to assert that we are riding hidebound toward inescapable doom than to insist that our species is assured of a bright and cheerful future.
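The point about extrapolated curves can be illustrated with a toy comparison (hypothetical, not from the text): an exponential trend and a logistic curve that levels off are nearly indistinguishable early on, yet diverge wildly when projected far ahead—which is why a forecast built on the early data alone says little about the distant future.

```python
import math

def exponential(t, rate=0.03):
    """A trend that grows without limit."""
    return math.exp(rate * t)

def logistic(t, rate=0.03, ceiling=10.0):
    """The same early growth, but self-limiting: it levels off at `ceiling`."""
    return ceiling / (1 + (ceiling - 1) * math.exp(-rate * t))

# Nearly identical over the span of the observed data...
print(exponential(10), logistic(10))
# ...but wildly different when extrapolated far beyond it.
print(exponential(300), logistic(300))
```

Both curves start at 1 and track each other closely at first; only far down the road does one run toward infinity while the other flattens out.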

The fourth argument against our long-term success as a species—the most ominous one, in my view—maintains that we are held hostage by our own aggressiveness. It depicts nature as red in tooth and claw, as the Victorians used to say, and attributes the ascent of Man to Man’s ruthlessness. Ruthless we indubitably are; neither cottonmouth moccasin nor great white shark nor typhoid bacillus can hold a candle to our status as the most efficient gang of killers, torturers, and exploiters this world has yet produced. And, dangerous though we are as individuals, more threatening yet are our nation states, which arose from tribes that made their mark by making war on one another. Nations have drawn stoutly defended boundaries all over the geometrically unbounded surface of the planet, and their conduct toward one another is lawless and high-handed to a degree that, were they individuals, would warrant their incarceration. Slight is the hope, we may well fear, that lasting peace can be forged among agencies so arbitrary, barbarous, heartless, and hypocritical as are nation states composed of human beings.

The view of nature as red in tooth and claw can be generalized to the interstellar scale, with dismaying implications. If Homo sapiens owe their domination of Earth to their malevolent genius for violence, then our example suggests that any species that rules its planet is presumptively too vicious to keep the peace. Advanced civilizations therefore may be expected to self-destruct simply because they are in the destruction business; having lived by the sword, they die by the sword. This grim possibility can be elevated (if that is the term) to the level of star wars: One can argue that if there are many civilizations in a galaxy, some hostile and some peace-loving, the hostile ones will have conquered the peaceful ones, so that any society from which we receive a message is by definition suspect of hostile intent. If the Milky Way is ruled by jackboots and war wagons, we’d be lucky to find ourselves alone in the galaxy (though destined to kill ourselves off anyway).

I fear there may be some substance to this argument, but would argue that hope may yet shine through its dark clouds. Here on Earth we find that violent individuals can become more peaceable: When times change the pirate may settle down and buy himself a governorship, the highwayman don a badge, or the drug lord turn his energies to investing in mutual funds and writing checks to pay his daughter’s college tuition. Perhaps something similar can happen to nations, once it becomes clear that relentless militarism is no longer a profitable strategy. Arms races and campaigns of conquest ultimately are like any other growth curve: They cannot go on forever without running into compelling forces that militate against them. Where there is life there is hope that peace can come, even to the violent.

If, as I have been saying, SETI presents us with a mirror in which to ponder the potentialities of our fate, it may also provide a way of estimating our chances for survival. A prolonged SETI search that heard nothing would hint that technological development is indeed a high-risk endeavor. If, on the other hand, we found even one alien civilization, its very existence would be cause for optimism: Quite apart from the issue of whether its experience had anything to teach us about survival, the fact that it was out there would indicate that technology, though dangerous, is not invariably lethal.

The literary critic Edmund Wilson used to caution his friends against what he called “The Manichean heresy—giving oneself over to the idea that the fate of the world is in doubt and that the forces of evil can triumph.” Manicheanism, founded by a third-century Persian called Mani, “The Illuminator,” is a dualistic Gnostic religion that divides the moral universe into a kingdom of God, ruled by understanding, reason, music, and peace, and a kingdom of Evil, ruled by disorder, stupidity, noise, and war. The Manichean heresy to which Wilson referred is the belief that the forces of darkness might win—whereas God rules the Christian universe, and Satan survives only by His forbearance (perhaps because we could not otherwise exercise free will, or, less reverently, because a world vouchsafed from evil would be too simperingly boring for God to tolerate).

Theological considerations about the nature of the heretical are given short shrift in SETI circles—understandably so—and the only honest answer to the question of whether the universe of life is Manichean is that we do not know. But when we are ignorant of the answer to an important question, one way to proceed is to ask which path of inquiry promises best to facilitate learning. The British astrophysicist Arthur Stanley Eddington took this position in the 1920s, when confronted with the scientific riddle of the spiral nebulae. Some astronomers believed that each nebula was a galaxy of stars, comparable to our own; others believed that the nebulae were solar system-sized whirlpools of gas, located in our galaxy, which in turn constituted the entirety of the universe. The former hypothesis could be confirmed if the nebulae were resolved into individual stars, but this was beyond the power of the telescopes of the day; the latter would triumph if spectra of the nebulae showed that they were gaseous—but the spectra, confusingly, were stellar. This meant that the nebulae, if gaseous, were made of a substance as yet unknown. Eddington seized on this discrepancy, and argued for the galaxy hypothesis on the grounds that it was the intellectually more fertile of the two: “If the spiral nebulae are within the stellar system, we have no notion of what their nature may be,” he wrote.

That hypothesis leads us to a full stop…. If, however, it is assumed that these nebulae are external to [our galaxy], that they are in fact systems coequal with our own, we have at least an hypothesis which can be followed up, and may throw some light on the problems that have been before us.

SETI is somewhat similar. If we assume that technically advanced civilizations are doomed, we are discouraged both from searching for them with our radio telescopes—having concluded, in our wisdom, that they do not survive—and from envisioning a bright future for our own species. Better to hope for the best, to imagine that intelligence and technical facility are rewarded in the universe at large, and, therefore, to keep our eyes and ears open. He is wisest who remembers how little he knows. Einstein wrote, in answer to a child who inquired of him about the end of the world, “I advise: Wait and see!”