I live in Colorado, near the mountains. A few years ago, I had occasion to hike to the top of one of the smaller peaks, whose summit was just above the tree line. As I sat, observing the scenery while eating my lunch, an odd question crossed my mind: How did all these trees get up here? Conifers lined the slopes nearly to the summit of every peak in sight. How did a mob of immobile trees ever climb those steep heights?
As I munched away, pondering this, I noticed a group of chipmunks, scurrying about, carrying pine cones. The answer was thus made apparent. The chipmunks had transported the seeds uphill. Interesting. Every mountain in sight was covered with trees. By moving the seeds upslope, the chipmunks had enormously expanded their “natural habitat.” In fact, if by “natural habitat” one means the habitat that would support a chipmunk population that exists prior to and independent of their seed-spreading activity, it's unclear whether any such place exists at all. The chipmunk habitat does not exist “naturally.” It exists because the chipmunks (together with a host of other participating species) have created it. That's how life works.
A major challenge that humans will face as we become a Type II and then Type III civilization is that of transforming the environments found on other planets to more Earthlike conditions. This must be done because environments friendly to life are a product of the activity of life. Thus, as humans move out into space, it is unlikely that we will find environments that perfectly suit our needs. Instead, as life and humanity have done historically on Earth, we will have to improve the natural environments we find to create the worlds we want. Applied to other planets, this process of planetary engineering is termed “terraforming.”1
Some people consider the idea of terraforming other planets heretical—humanity playing God. Others would see in such an accomplishment the most profound vindication of the divine nature of the human spirit—dominion over nature, exercised in its highest form to bring dead worlds to life. Personally, I prefer not to consider such issues in their theological form, but if I had to, my sympathies would definitely be with the latter group. Indeed, I would go further. I would say that failure to terraform constitutes failure to live up to our human nature, and a betrayal of our responsibility as members of the community of life itself.
These may seem like extreme statements, but they are based in history, about four billion years of history. The chronicle of life on Earth is one of terraforming—that's why our beautiful blue planet is as nice as it is. When the Earth was born, it had no oxygen in its atmosphere, only carbon dioxide and nitrogen, and the land was composed of barren rock. It was fortunate that the sun was only about 70 percent as bright then as it is now, because if the present-day sun had shined down on that Earth, the thick layer of CO2 in the atmosphere would have provided enough of a greenhouse effect to turn the planet into a boiling, Venus-like hell. Fortunately, however, photosynthetic organisms evolved that transformed the CO2 in Earth's atmosphere into oxygen, in the process completely changing the surface chemistry of the planet. As a result of this activity, not only was a runaway greenhouse effect on Earth avoided, but the evolution of aerobic organisms, which use oxygen-based respiration to provide themselves with energetic life styles, was enabled (though a primeval Environmental Protection Agency dedicated to preserving the status quo on the early Earth might have regarded this as a catastrophic act of environmental destruction). This new crowd of critters, known today as animals and plants, then proceeded to modify the Earth still more—colonizing the land, creating soil, and drastically modifying global climate. Life is selfish, so it's not surprising that all the modifications that life has made to the Earth have contributed to enhancing life's prospects, expanding the biosphere, and accelerating its rate of developing new capabilities to improve the Earth as a home for life still more.
Humans are the most recent practitioners of this art. Starting with our earliest civilizations, we used irrigation, crop seeding, weeding, domestication of animals, and protection of our herds to enhance the activity of those parts of the biosphere most efficient in supporting human life. In so doing, we have expanded the biospheric basis for human population, which has expanded our numbers and thereby our power to change nature in our interest in a continued cycle of exponential growth. As a result, we have literally remade the Earth into a place that can support billions of people, a substantial fraction of whom have been sufficiently liberated from the need to toil for daily survival that they can now look out into the night sky for new worlds to conquer.
It is fashionable today to bemoan this transformation as destruction of nature. Indeed, there is a tragic dimension to it. Yet it is nothing more than the continuation and acceleration of the process by which nature was created in the first place.
Life is the creator of nature.
Today, the living biosphere has the potential to expand its reach to encompass a whole new world, on Mars, and the Type II interplanetary civilization that develops as a result will have the capability of reaching much further. Humans, with their intelligence and technology, are the unique means that the biosphere has evolved to allow it to blossom across interplanetary and then interstellar space. Countless beings have lived and died to transform the Earth into a place that could give birth to a species with such capabilities. Now it's our turn to do our part.
It's a part that four billion years of evolution has prepared us to play. Humans are the stewards and carriers of terrestrial life, and as we spread out, first to Mars and then to the nearby stars, we must and shall bring life to many worlds, and many worlds to life.
It would be unnatural for us not to.
MARS-LIKE WORLDS
Mars is the first extraterrestrial planet that will be terraformed. As discussed in detail in my earlier book, The Case for Mars, the engineering methods by which this can be done are relatively well understood. The first step will be to recreate the atmosphere of early Mars by setting up factories to produce artificial greenhouse gases, such as perfluoromethane (CF4), for release into the atmosphere. If CF4 were produced and released on Mars at the same rate CFC gases are currently being produced on Earth (about a thousand tons per hour), the average global temperature of the Red Planet would be increased by 10°C within a few decades. This temperature rise would cause vast amounts of carbon dioxide to outgas from the regolith, which would warm the planet further, since CO2 is a greenhouse gas. The water vapor content of the atmosphere would vastly increase as a result, which would warm the planet still more. These effects could then be further amplified by releasing methanogenic and ammonia-creating bacteria into the now-livable environment, as methane and ammonia are very strong greenhouse gases. The net result of such a program could be the creation of a Mars with acceptable atmospheric pressure and temperature, and liquid water on its surface within fifty years of program start. Even though such an atmosphere would not be breathable by humans, this transformed (essentially rejuvenated) Mars would offer many advantages to settlers: they could now grow crops in the open; space suits would no longer be necessary for outside work (just breathing gear); large supplies of water would be much more accessible; aquatic life could flourish in lakes and ponds oxygenated by algae; and city-sized habitation domes could be constructed, as there would be no pressure difference between their interior and the outside world. These short-term advantages would be more than sufficient to motivate Martian settlers to initiate the required terraforming operations to obtain them.
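The scale of the gas-release program quoted above can be sketched with some simple arithmetic. This is a rough illustration only: the Martian atmosphere mass used here (~2.5 × 10^16 kg) is an assumed round figure, not a number from the text.

```python
# Back-of-envelope estimate of CF4 accumulation on Mars at the
# production rate quoted in the text (~1,000 tons per hour).
# MARS_ATM_MASS_KG is an assumed round value for the present,
# thin Martian atmosphere.

RATE_KG_PER_HOUR = 1_000 * 1_000   # 1,000 metric tons/hour, in kg
HOURS_PER_YEAR = 24 * 365.25
MARS_ATM_MASS_KG = 2.5e16          # assumption, not from the text

def cf4_ppm_by_mass(years):
    """Cumulative CF4 released after `years`, expressed as parts
    per million of the present Martian atmosphere by mass."""
    released = RATE_KG_PER_HOUR * HOURS_PER_YEAR * years
    return released / MARS_ATM_MASS_KG * 1e6

for years in (10, 30, 50):
    print(f"{years:3d} yr: {cf4_ppm_by_mass(years):6.1f} ppm CF4 (by mass)")
```

Even concentrations of a few parts per million matter here, because CF4 is a far more potent greenhouse gas, molecule for molecule, than CO2; the decades-long warming timescale in the text is consistent with accumulation rates of this order.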
In the longer term, plants spreading across the surface of such a partially terraformed Mars would put oxygen in its atmosphere. Using the most efficient plants available today, it would take about a thousand years for enough oxygen to be released to create an atmosphere that humans can breathe. However, future biotechnology may allow the creation of more efficient plants, or other technologies might become available which could accelerate the oxygenation of Mars considerably.
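The thousand-year oxygenation estimate can also be checked for rough plausibility. Every constant below is an assumed round value chosen for illustration (a ~120 mbar minimal breathable O2 level, and a global plant cover producing net oxygen at roughly Earth-biosphere scale); none comes from the text.

```python
# Rough scale of the Mars oxygenation task. All constants are
# assumed round values for illustration.

MARS_GRAVITY = 3.71            # m/s^2
MARS_AREA = 1.44e14            # m^2
O2_PARTIAL_PRESSURE = 1.2e4    # Pa (~120 mbar, a minimal breathable level)

o2_column = O2_PARTIAL_PRESSURE / MARS_GRAVITY   # kg of O2 per m^2 of surface
o2_total = o2_column * MARS_AREA                 # kg of O2 planet-wide

# Assumed net O2 output of a young global plant cover, taken to be
# roughly Earth-biosphere scale. This is the weakest assumption here.
O2_PRODUCTION_PER_YEAR = 5e14  # kg/yr

years = o2_total / O2_PRODUCTION_PER_YEAR
print(f"O2 needed: {o2_total:.1e} kg -> ~{years:.0f} years")
```

Under these assumptions the answer comes out on the order of a thousand years, matching the timescale given above; faster plants or supplementary technologies shorten it proportionally.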
It may be observed that if humans had encountered Mars not in its current condition but in its warm, wet youth, terraforming would have been much simpler. Essentially, we could have skipped the large-scale industrial engineering required to greenhouse the planet with CF4, and gone directly to the stage of using self-replicating systems such as bacteria and plants to further warm and oxygenate the planet. Thus, a team of interstellar explorers who chance upon a “young Mars” could initiate massive terraforming operations with little more equipment than an appropriate array of bacterial cultures, seeds, and a bioengineering lab.
(See plate 14.)
Mars-like and young-Mars-like worlds may be quite common in nearby interstellar space. If so, then the terraforming of Mars will give us not only one new world but the tools for creating many.
In January 2019, I received an email from a nurse named Patte, who asked how I thought it could be moral to terraform Mars when we have done so much damage to the Earth. Here is my reply.
Patte,
Mars is a dead or nearly dead world, with at most some microbes living deep underground.
I'm sure you would agree that it would be a horrible act of destruction to turn our own glorious living Earth into a dead world like Mars.
That being the case, then you must see what a wonderful act of creation it would be to transform a dead world like Mars into another glorious living world like Earth.
Furthermore, just as a doctor who chooses not to save a life when he could because he has problems of his own is acting immorally, so failing to bring Mars to life when we could would also be deeply immoral.
Humans are not the enemies of life. Humans are the vanguard of life.
Life to Mars, and Mars to life!
That's the wonderful calling that Nature has given us.
All the best,
Robert
WORLDS TOO HOT
Within our own solar system, the other planet that has attracted serious attention from would-be terraformers is Venus.
Venus was once thought to be Earth's sister planet, since it is about 95 percent the diameter of the Earth and has about 88 percent of Earth's gravity. The fact that it orbits the sun at 72 percent of Earth's distance implied that it would be warmer than Earth, but not necessarily fatally so, and visions of Venus as a world rich with steaming jungles beneath cloudy skies filled astronomy books through the late 1950s. However, in the early 1960s, NASA and Soviet probes reached Venus and discovered that the fair planet of the love goddess not only lacked jungles, but was a pure hell, sporting a mean surface temperature of 464°C—hot enough to melt lead. This was especially surprising, since Venus is masked by highly reflective clouds—so reflective, in fact, that the planet actually absorbs less solar radiation than the Earth! Based on the amount of sunlight it absorbs, Venus should be colder than Canada. Instead, it is as hot as a self-cleaning oven. The explanation for this paradox was soon found, however; Venus is hot because what heat it does absorb is kept captive due to the “greenhouse effect” caused by its thick CO2 atmosphere. (This is how the “greenhouse effect” phenomenon currently of concern on Earth was first discovered.)
Well then, if the CO2 atmosphere is baking Venus to death, why not just get rid of it? This was the genesis of Carl Sagan's seminal 1961 proposal to terraform Venus with aerial algae.2 According to Sagan, Venus could be cooled by dispersing photosynthetic organisms in its atmosphere that could convert it from greenhousing CO2 to transparent (and breathable) oxygen. This proposal was a landmark, in that it was the first serious discussion of terraforming within the world of science and engineering, as opposed to in science fiction. However, it would not have worked for a number of reasons.
In the first place, while algae have been found in rainwater, there are no plants that actually live in an aerial habitat. Perhaps they could be engineered, especially for a planet with as thick an atmosphere as Venus. Sagan's proposal faces a bigger problem, however. Though based on the scientific knowledge available at the time, his concept grossly overestimated the amount of water available on Venus. Photosynthesis involves combining water molecules with CO2 molecules in accordance with:

CO2 + H2O + light → CH2O + O2    (8.1)
As can be observed in equation 8.1, to get rid of a molecule of CO2 using photosynthesis, you need to use up a molecule of water. On Venus today, there is much more CO2 than water, so if you could find organisms to perform reaction 8.1 on a mass scale, you would simply rid Venus of the small amount of water it retains while leaving the large bulk of the CO2 atmosphere basically untouched. It would take the equivalent of a global ocean two hundred meters deep to provide enough water to react away the CO2 in Venus's atmosphere via photosynthesis; in fact, Venus only has enough water to cover itself with a layer five centimeters deep. Sagan's idea could not have worked because there just isn't enough water on Venus to do the job.
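The water shortfall can be estimated directly from the one-for-one stoichiometry of reaction 8.1. The constants below (Venus's ~92 bar surface pressure and surface gravity) are assumed round values; the crude column estimate they yield is a few hundred meters of global ocean, the same order as the ~200 meter figure cited above, against the 5 centimeters actually present.

```python
# Stoichiometric check of the Venus water shortfall. Photosynthesis
# consumes one H2O molecule per CO2 molecule fixed. Constants are
# assumed round values for illustration.

VENUS_SURFACE_PRESSURE = 9.2e6   # Pa (~92 bar, nearly all CO2)
VENUS_GRAVITY = 8.87             # m/s^2
M_CO2, M_H2O = 0.044, 0.018      # molar masses, kg/mol
WATER_DENSITY = 1000.0           # kg/m^3

co2_column = VENUS_SURFACE_PRESSURE / VENUS_GRAVITY   # kg CO2 per m^2
water_needed = co2_column * (M_H2O / M_CO2)           # kg H2O per m^2
depth_needed = water_needed / WATER_DENSITY           # meters of global ocean

print(f"Water needed to fix all CO2: ~{depth_needed:.0f} m global layer")
print(f"Water actually on Venus:     ~0.05 m global layer")
print(f"Shortfall factor:            ~{depth_needed / 0.05:.0f}x")
```

However the constants are chosen, the shortfall comes out at three to four orders of magnitude, which is the heart of the objection to aerial algae on present-day Venus.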
Importing the water isn't an option. It would require moving ninety-two million iceteroids, each with a mass of a billion tons. So, in fact, the only way to cool Venus is to block the sun with a huge solar sail. If we were to make a sail twice the diameter of Venus out of aluminum 0.1 microns thick, 124 million tons of processed materials would be required, to which we would need to add several billion tons of ballast. Manufacturing this object out of asteroidal material would be a huge job, but probably not beyond the means of an advanced Type II civilization. Such a sail could be stationed at the position between Venus and the sun where their gravitational fields, sunlight force, and heliocentric centrifugal force all balance (near the Venus-sun L1 point, about a million kilometers from Venus). An alternative approach might be to put thick clouds of dust in orbit around Venus, thereby blocking sunlight. The advantage of this approach is that simple unprocessed material could be used. The disadvantage would be that a lot more mass would be required, and orbital operations could be seriously impaired for some time. In either case, we would aim to block more than 90 percent of the sun's rays, leading to the precipitation of Venus's CO2 atmosphere as dry ice after a cooling-off period of about two hundred years. The dry ice could then be buried, and by moving the sail around in a small orbit about the L1 point, we could create an acceptable surface temperature and day/night cycle on Venus. But the planet would still be incredibly dry. (In contrast to Venus's 0.05 meters of water, Mars has 200 meters and Earth 2,000 meters. Only the moon, at 0.00003 meters, is dryer.) All this would tend to imply that terraforming Venus is a very big project offering modest payoff.
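The sail mass quoted above is easy to reproduce. Reading "twice the diameter of Venus" as the diameter of the disk (so the sail's radius equals Venus's diameter) and taking standard values for aluminum density, the arithmetic recovers the 124-million-ton figure:

```python
# Mass of the Venus sun shade described in the text: a disk twice
# Venus's diameter across, made of 0.1-micron aluminum film.

import math

VENUS_RADIUS = 6.05e6            # m
SAIL_RADIUS = 2 * VENUS_RADIUS   # sail diameter = 2 x Venus diameter
THICKNESS = 1e-7                 # m (0.1 micron)
AL_DENSITY = 2700.0              # kg/m^3

area = math.pi * SAIL_RADIUS**2              # m^2
mass_tons = area * THICKNESS * AL_DENSITY / 1000.0
print(f"Sail mass: ~{mass_tons / 1e6:.0f} million tons of aluminum")
```

Note that the film itself is only a small fraction of the total: the several billion tons of ballast mentioned in the text dominate the mass budget.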
However, this was not always so. The young Venus had lots of water, comparable, in fact, to the water inventory of the Earth. According to the “moist greenhouse” theory proposed by planetary scientist James Kasting in 1988 and now generally accepted, the early Venus featured oceans of water at temperatures between 100° and 200°C. The pressure of the thick overlying atmosphere prevented these oceans from boiling away. The rapid cycling of water in this ultratropical environment would have caused most of Venus's CO2 supply to rain out and react with minerals to form carbonate rocks. This moist greenhouse Venus could exist because the early sun was only about 70 percent as luminous as the sun is today.3 But after a billion years or so, the sun's luminosity increased to 80 percent of its present value, and temperatures on Venus rose above 374°C, the critical temperature for water. Once this happened, liquid water could no longer exist on Venus, and all of its oceans turned to steam. With so much water vapor in the atmosphere, water loss from the planet due to ultraviolet dissociation of upper atmospheric water molecules occurred at a rapid rate. Moreover, once it stopped raining, geologic recycling was able to release the vast supplies of CO2 stored in Venus's carbonate rocks, thereby creating the hellish runaway greenhouse environment that curses Venus today.
If human explorers had arrived on the scene when Venus was still in its young, moist greenhouse phase, terraforming could have been accomplished by building the sun shade described above. That would be a significant engineering project, but well worthwhile, since the result would be an Earth-sized planet complete with temperate oceans and a moderate-pressure, nitrogen-dominated atmosphere, fully ready for the rapid propagation of life.
As discussed in The Case for Mars, solar sails used as reflectors to increase solar flux will also be a useful auxiliary technique in melting the permafrost and activating the hydrospheres of cold Mars-like planets. A solar sail mirror fifteen thousand kilometers in radius (190 million tons of 0.1 micron aluminum) positioned with Titan at its focal point would increase Titan's solar flux to Mars-like levels, which, together with the strong greenhouse effects produced by Titan's methane atmospheric fraction, should be enough to raise that moon to Earthlike temperatures. A similar mirror would be sufficient to melt surface water ice into oceans and vaporize dry ice to create a CO2 atmosphere on Callisto. Thus, the ability to manufacture large, thin solar sails in space, a central skill for engineering both Type II interplanetary transportation and emergent Type III interstellar spaceship propulsion, is also a key technology for terraforming Mars-like, Titan-like, Callisto-like, and young-Venus-like moist greenhouse worlds.
It should be noted that while Sagan's proposal for terraforming Venus with algae would not have worked, there is a class of planet for which it would: young Earths. The early Earth also had a thick CO2 atmosphere, which was fortunate because the early sun was weak. But there could be many other young Earths out among the stars that are receiving solar input comparable to what the Earth gets today, but which are too hot to be habitable because of heavy CO2 atmospheres. In the case of such worlds, appropriately selected or bioengineered photosynthetic organisms might be able to rapidly terraform the planet to fully Earthlike conditions without any additional macroengineering effort. This is important, because the Earth has only had its current high-oxygen/low-CO2 atmosphere for the past six hundred million years, or the most recent 14 percent of the history of the planet. If humans had encountered the Earth during the other 86 percent of its history, we would have needed to adopt Sagan-like terraforming strategies in order to make it habitable.
So, Sagan didn't really have the wrong idea—he had the right idea but applied it to the wrong world. There are undoubtedly many right worlds out there waiting for its application. Based on our own history, young Earths, uninhabitable but ready for Sagan-style biological terraforming, are likely to outnumber already habitable Earths by a considerable margin.
ENGINEERING THE EARTH
The ability of humans to change planetary environments can be demonstrated by the effect we are already having on the Earth. Considering this necessarily brings us into a discussion of climate change, a scientific issue that unfortunately has been corrupted by political actors seeking to obscure, cherry-pick, or exaggerate various aspects of the case for partisan purposes. Nevertheless, despite the fact that any objective discussion of this matter is sure to evoke outrage by militants hailing from both wings of the spectrum, I will try to discuss this important matter in as level a manner as possible.
First of all, global warming is quite real. Indeed, it is demonstrable that the Earth has been warming for the past four hundred years. During Elizabethan times, the Thames River used to freeze over every winter, creating a place for “frost fairs” and other festivities. The last such fair was held in the mid-1600s, but as late as the mid-1800s, Charles Dickens could describe snowy winters in London, which no longer exist. During the American Civil War, Confederate soldiers stationed as far south as Georgia amused themselves by holding massive snowball fights pitting one regiment against another. By the early twentieth century, such climates were things of the past. Given the limitations of human industry over the period from 1600 to 1900, it's pretty clear that the warming that occurred during that span was natural rather than anthropogenic. That said, in the twentieth century, the rate picked up in a manner that is consistent with the predictions of the more conservative climate models for CO2-driven global warming. On the basis of the available data, it would appear that coincident with a 33 percent rise of atmospheric CO2 levels from three hundred to four hundred parts per million (ppm) over the past century (which is consistent with human fossil fuel use), global temperatures have risen an average of about 0.8°C. This result may be doubted by some because measuring an average global temperature to that degree of accuracy is a difficult task, with the results easily influenced by the choice of location for the thermometers. Nevertheless, it is clear that substantial warming has occurred because the average length of the growing season (i.e., the time between the last killing frost of the spring and the first one of the fall, an easy measurement to make) has expanded markedly over this period.
For example, as shown by the EPA data presented in figure 8.1, the average length of the growing season in the United States has expanded by about twenty days since 1910, which is about as clear a demonstration of warming as you can get.4
Figure 8.1. The length of the growing season in the United States has expanded markedly since 1895, a clear proof of climate change. Image courtesy of the US Environmental Protection Agency.
Unfortunately, since this is a beneficial change, those attempting to make the case for a climate emergency never mention it, leaving their argument hanging on unconvincing claims of statistically averaged precision temperature measurements.
Global warming should also lead to greater net rainfall on average, and this indeed has also been measured, although the degree of change varies locally, with some regions actually experiencing drought despite an overall increase in the total.5
In addition to driving warming, the easily measured and clearly anthropogenic 33 percent increase in atmospheric CO2 is also having a powerful direct effect on the biosphere, with rates of plant growth worldwide accelerated by about 20 percent. This result, which is consistent with the well-established theory of photosynthesis, is supported by innumerable lab studies, field studies, and, most strikingly, satellite observations, as presented by the NASA data in plate 15 in the photo insert, which show a dramatic increase in the rate of plant growth over the past thirty-six years.6
(See plate 15.)
These findings provide grounds for dismissing some of the more strident warnings of climate activists. But let's not be too hasty. While the moderate warming and CO2 enrichment of the atmosphere we have experienced thus far has been, on the whole, rather beneficial, unconstrained temperature and CO2 increases could be another matter altogether. Much larger increases are possible, as simply raising the rest of the world to the current US standard of living would require quintupling global energy production, with even more needed in reality due to population increase and the continued rise of living standards in the advanced sector. Furthermore, the happy picture presented in plate 15 only shows what is occurring on land. Most of the Earth is covered by oceans, and they show little evidence of enhanced biological productivity due to increased CO2. On the contrary, significant damage to coral reefs appears to be occurring due to CO2-driven ocean acidification (although conventional pollution and overfishing may also be to blame in some cases).
So it would appear that massive anthropogenic CO2 emissions are fertilizing the land while harming the oceans. This is happening because while CO2 availability is a limiting factor for the growth of land plants, in most of the ocean the rate of growth of the phytoplankton that stand at the base of the food chain is controlled by the availability of trace elements, such as iron, phosphorus, and nitrates. This is why well over 90 percent of the biological productivity of the world's oceans comes from the less than 10 percent of their area that is fertilized with runoff from the coasts or continental shelf upwelling, with the vast open seas left a virtual desert. As a result, the ocean's CO2 levels simply increase, with no useful and some potentially seriously harmful results.
What to do? The conventional answer from most of the global warming activist community has been to propose increased taxes on fuel and electricity, thereby dissuading people of limited means from making much use of such amenities. This program seems to me to be both unethical and impractical, and regardless of anyone's opinion on the matter, it is quite clear that it is failing to impact the growth of global carbon emissions in any significant way.
The basis for a much more promising approach was demonstrated by the British Columbia–based Haida First Nations tribe, who in 2012 launched an effort to restore the salmon fishery that has provided much of their livelihood for centuries. Acting collectively, the Haida voted to form the Haida Salmon Restoration Corporation, financed it with $2.5 million of their own savings, and used it to support the efforts of American scientist-entrepreneur Russ George to demonstrate the feasibility of open-sea mariculture through the distribution of 120 tons of iron sulfate into the northeast Pacific to stimulate a phytoplankton bloom, which in turn would provide ample food for baby salmon.
By 2014, this controversial experiment proved to be a stunning, over-the-top success. In that year, the number of salmon caught in the northeast Pacific more than quadrupled, going from 50 million to 219 million. In the Fraser River, which only once before in history had a salmon run greater than 25 million (about 45 million in 2010), the number of salmon increased to 72 million.
“Up and down the West Coast fisheries scientists and fishers are reporting they are baffled at the miraculous return of salmon seen last fall and expected this year,” commented George. “It is of course all because when we take care of our ocean pasture. Replenish the vital mineral micronutrients that we have denied them through our high and rising CO2 just one old guy (me) with a dozen Indians can bring the ocean back to health and abundance.”7
In addition to producing salmon, this extraordinary experiment yielded a huge amount of data. Within a few months after the ocean-fertilizing operation, NASA satellite images taken from orbit showed a powerful growth of phytoplankton in the waters that received the Haida's iron.8 It is now clear that as hoped, these did indeed serve as a food source for zooplankton, which in turn provided nourishment for multitudes of young salmon, thereby restoring the depleted fishery and providing abundant food for larger fish and sea mammals as well. In addition, since those diatoms that were not eaten went to the bottom, a large amount of carbon dioxide was sequestered in their calcium carbonate shells.
Unfortunately, the experiment, which should have received universal acclaim, was denounced by many leading environmental activists. For example, Silvia Ribeiro, of the international environmental watchdog ETC group, objected to it on the basis that it might undermine the case for carbon rationing. “It is now more urgent than ever that governments unequivocally ban such open-air geoengineering experiments. They are a dangerous distraction providing governments and industry with an excuse to avoid reducing fossil fuel emissions.” Writing in the New York Times, Naomi Klein, the author of a book on “how the climate crisis can spur economic and political transformation,” said that “at first,…it felt like a miracle.”9 But then she was struck by a disturbing thought:
If Mr. George's account of the mission is to be believed, his actions created an algae bloom in an area half of the size of Massachusetts that attracted a huge array of aquatic life, including whales that could be “counted by the score.”…I began to wonder: could it be that the orcas I saw were on the way to the all-you-can-eat seafood buffet that had descended on Mr. George's bloom? The possibility…provides a glimpse into the disturbing repercussions of geoengineering: once we start deliberately interfering with the earth's climate systems—whether by dimming the sun or fertilizing the seas—all natural events can begin to take on an unnatural tinge…. A presence that felt like a miraculous gift suddenly feels sinister, as if all of nature were being manipulated behind the scenes.
But the salmon are back.
Not only that, but contrary to those who have denounced the experiment as reckless, its probable success was predicted in advance by leading fisheries scientists. “While I agree that the procedure was scientifically hasty and controversial, the purpose of enhancing salmon returns by increasing plankton production has considerable justification,” Timothy Parsons, professor emeritus of fisheries science at the University of British Columbia, told the Vancouver Sun in 2012. According to Parsons, the waters of the Gulf of Alaska are so nutrient poor they are a “virtual desert dominated by jellyfish.” But iron-rich volcanic dust stimulates growth of diatoms, a form of algae that he describes as “the clover of the sea.” As a result, volcanic eruptions over the Gulf of Alaska in 1958 and 2008 “both resulted in enormous sockeye salmon returns.”10
The George/Haida experiment is of historic significance. Starting as a few bands of hunter-gatherers, humanity expanded the food resources afforded by the land a thousandfold through the development of agriculture. In recent decades, the bounty from the sea has also been increased through rapid expansion of aquaculture, which now supplies about half our fish. Without these advances, our modern global civilization of seven billion people would not be possible.
But aquaculture makes use only of enclosed waters, and commercial fisheries remain limited to the coasts, upwelling areas, and other small portions of the ocean that have sufficient nutrients to be naturally productive. The vast majority of the ocean, and thus the Earth, remains a desert. The development of open-sea mariculture could change this radically, creating vast new food resources for both humanity and wildlife. Furthermore, just as increased atmospheric carbon dioxide levels have accelerated the rate of plant growth on land, so increased levels of carbon dioxide in the ocean could lead to a massive expansion of flourishing sea life—more than fully restoring the world's depleted wild fisheries—provided humans make the missing critical trace elements needed for life available across the vast expanse of the oceans.
The point deserves emphasis. The advent of higher carbon dioxide levels in the atmosphere has been a boon for the terrestrial biosphere, accelerating the rate of growth of both wild and domestic plants and thus expanding the food base supporting humans and land animals of every type. Yet in the ocean, increased levels of carbon dioxide not exploited by biology could lead to acidification. By making the currently barren oceans fertile, however, mariculture would transform this apparent problem into an extraordinary opportunity.
Such an effort would more than suffice to limit global warming. Indeed, were 3 percent of the Earth's open-ocean deserts enlivened by mariculture, the entirety of humanity's current CO2 emissions would be turned into phytoplankton, with available worldwide fish stocks greatly increased as a result.11
The situation is ironic. In some places in the ocean, excessive nutrients delivered by runoff of agricultural fertilizers cause local algae blooms that are so massive as to destroy all other aquatic life. Yet when delivered in the right amounts, such “pollutants” become the key to creating a vibrant marine ecology.
You can irrigate a farm to make it productive, or you can flood it and destroy the crops. You can fertilize land with horse manure, or you can…well, you get the idea.
“Pollution” is simply the accumulation of a substance which is not being put to good use. Carbon dioxide emissions are neither good nor bad in themselves. They are good for parts of the biosphere that are ready to make use of them and bad for those that are not.
Say what you will, humans are not going to stop using fossil fuels any time soon. So we need to prepare the Earth to take full advantage of the resulting emissions.
We live on fertile islands in a saltwater desert planet. If we apply some creativity, we can make those deserts bloom.
ROBOTICS, BIOENGINEERING, NANOTECHNOLOGY, AND PICOTECHNOLOGY
Terraforming will require a lot of work, and the people who attempt it will want helpers. There is thus no doubt that in terraforming, as in all other extraterrestrial engineering projects, robotics will play an important role. No commodity will be in shorter supply in an early Martian colony than human labor time, and as the frontier moves outward among the planets and then to the stars, the labor shortage will grow ever more pressing. The space frontier will thus serve as a pressure cooker for the development of robotics and other forms of labor-saving technologies.
But robots that must be manufactured still demand human labor. This will make them expensive, as space labor will be dear and transportation from Earth will be costly. Expensive robots are acceptable for assisting in certain tasks, such as exploration, where large numbers are not required. But terraforming will need multitudes. The only solution would be robots that make themselves.
Back in the 1940s, the mathematician John von Neumann proved that self-replicating automatons are possible. That is, he proved that there is no mathematical contradiction that precludes the existence of such systems. But creating them is another issue altogether.
No one today has a clue as to how to do it, but it would not be too big a leap of faith to believe that a machine could be built and programmed that, if let loose in a room filled with gears, wires, wheels, batteries, computer chips, and all its other component parts, could assemble a copy of itself. But who would make the parts? Consider what is necessary to make even a simple part, such as a stainless steel screw.
To make the steel for the screw, iron, coal, and alloying elements from all over the world need to be transported to a steel mill. They need to be transported by rail, ship, truck, or plane, and all of these contrivances must be made in factories or shipyards of great complexity, each of which involves thousands of components shipped in from all over the world, by various devices, made in various facilities, and so on. So just supplying the steel for the screw involves the work of thousands of factories and millions of workers. If we then consider who made the food, clothing, and housing used by all those workers, who taught them, and who wrote the books that educated them, we find that a large fraction of the present and past human race was involved. And that's just the steel for the screw. If we now consider the processes needed to put the thread on the screw…but I think you get my point. Self-replicating machines cannot exist unless the parts they require are ready-made. This will never be the case for machines built out of factory-produced gadgets.
The only self-replicating complex systems known to exist are living things. Organisms can reproduce themselves because they are made of cells that can reproduce themselves using naturally available molecules as parts for their component structures. Because they can reproduce themselves, bacteria, protozoa, plants, and animals have extraordinary power as terraforming agents; a few of the right kinds, released under the right conditions, can multiply exponentially and radically transform an environment. Of course, for the transformation to be beneficial, some aspect of the organism's self-directed activity must contribute to the terraforming program. As we have seen, in such cases as methanogenic bacteria producing greenhouse gases, or photosynthetic plants eliminating them (while producing useful oxygen), the metabolisms of many forms of life make them natural servants of the terraforming process. This is to be expected, because as discussed earlier in this chapter, life would not exist if it did not terraform.
That said, current bacteria, plants, and animals are not specifically adapted to terraforming virgin planets; their adaptations are focused on terraforming and living on the current Earth. Their ancestors pioneered the early Earth, and they retain some of the necessary skills, but they are by no means the ideal candidates for pioneering new worlds.
However, since the domestication of the dog, twenty thousand years ago, humans have practiced modification of other species to meet our needs, primarily through the practice of selective breeding. In recent years, a series of advances—first the development of genetics, then the discovery of DNA, and now the actual reading of the genetic code and mastery of recombinant DNA techniques—has enormously expanded our abilities in this area. As a result, it will soon be within our capabilities to design ideal pioneering microorganisms and ultraefficient plants well suited to transform a wide variety of extraterrestrial environments.
But microorganisms and plants have their limits. They are all based on water/carbon chemistry, which cannot function beyond the temperature boundaries defined by the freezing and boiling points of water. If temperatures are sustained below 0°C, life survives but goes dormant; above the boiling point (100°C at Earth's sea-level pressure, rising to a maximum of 374°C at water's critical pressure), organisms are destroyed. Many extraterrestrial environments of interest exist beyond these narrow limits.
The question thus arises whether it might be possible to develop self-replicating organisms with a fundamental chemistry other than the water/carbon type universal to life as we know it. If in venturing out into interstellar space we should discover novel kinds of life, based on silicon or boron, for example, but with their own equivalent of a genetic code that future human bioengineers can master, this problem would be partially solved, as the new chemistry would undoubtedly define a new set of temperature limits. But it is unclear whether such organisms will ever be found, or what the extent of their utility might be. From the point of view of the planetary engineer, a more intriguing question is whether we can devise from scratch self-reproducing organisms that are not water/carbon.
This is the idea behind “nanotechnology”—the construction of self-replicating microscopic programmable automatons out of artificial structures built to design specifications on the molecular level. Why try to build microscopic self-replicating robots when we don't even know how to build human-scale reproducing automatons? The reason, once again, is that parts for large robots need to be manufactured in advance, while the molecules used as parts for nanorobots (a nanometer is a billionth of a meter) either come ready-made or can be readily assembled from atoms that do. So while building a nanorobot would unquestionably be more difficult than constructing a normal-sized one, nanorobots are the only kind that hold the promise of being potentially self-replicating.
The vision of nanotechnology is described at length by the field's champion, K. Eric Drexler, in his book Engines of Creation.12 The basic idea is that once we learn how to manipulate individual atoms and molecules, machines can be constructed using small clumps of atoms as gears, rods, wheels, and other parts. Because each of these parts would be made from a precisely assembled group of atoms—just as carbon is arranged in a diamond lattice—they potentially could be very strong. Thus nanomachines filled with gears, levers, clockwork, motors, and all kinds of mechanisms could, in principle, be built. Energy to drive the units could be obtained from nanophotovoltaic units and stored in nanosprings or nanobatteries. To go from there to nanorobots, we need nanocomputers. Drexler proposes that these could be built out of mechanical nanomachines, on the same principle as the first mechanical computers, which Charles Babbage and Ada Lovelace proposed in the nineteenth century to build out of brass gears and wheels. Such machines could be programmed with punched tape or cards, and presumably nanoscopic analogs for these mechanical software devices could be found as well.
Babbage's ingenious mechanical computers don't even remotely compare in capability to modern electronic ones, but the parts used by Drexler's nano–Babbage machines would be so small that enormous amounts of computing power could be contained in a microscopic speck. So, once we accomplish the admittedly difficult job of building the first nanorobot, with all its necessary nanomechanisms for locomotion and manipulation, and equip it with a superpowerful version of a Lovelace-programmable Babbage machine built on the nanoscale, we could set this first “assembler” loose and it would multiply itself through exponential reproduction. The vast horde of assemblers would then turn their attention to accomplishing some task they had been programmed to execute, such as inspect a human body for cancer cells and make appropriate adjustments, manufacture huge solar sails from asteroids, or terraform a planet.
Figure 8.2. Hardware and software. Charles Babbage invented the mechanical computer. The mathematician Countess Ada Lovelace, daughter of the poet Lord Byron, realized that Babbage's computers could be programmed to act like mechanical brains. Her insight could enable self-replicating nanorobots, endowing humanity with nearly unlimited power to terraform worlds.
To build macroscopic structures, billions of nanorobots would have to group themselves together to form large robots, perhaps on the human scale or even much bigger. This could lead to the manifestation of systems that would have all of the capabilities of the evil “liquid metal” robot depicted in the movie Terminator 2, able to change its shape and disperse and reassemble itself as required. But it actually would be much more powerful, since when it did choose to disperse, each of the billions of its subcomponents could be used as a seed to reassemble an entire unit from dirt. Even Arnold Schwarzenegger would have had a hard time saving the world from one of those!
It certainly sounds like fantasy, but is it? In defense of the nanotechnology thesis, one can advance the statement that it does not defy any known laws of physics, and therefore, given sufficient technological advance, it should become possible. Against it, one can easily point out the enormous technological difficulties that must be mastered before nanotechnology becomes a reality. Furthermore, while nanotechnology may not violate any laws of physics, controllable self-replicating robots may well violate the laws of biology. Consider: small replicating nanomachines will unquestionably undergo random alterations, or mutations, if you will. Those mutations that produce strains that reproduce more rapidly will swiftly outnumber, and render insignificant, those that don't. Clearly, if the goal is to rapidly reproduce, it would be to a nanomachine's advantage not to have to bother with doing work for the benefit of human masters. Instead, evolutionary pressures will dictate that nanorobots attend only to their own needs. Those nanorobots that continue to slave away in obedience to their human-directed programs will not be able to compete with the wild varieties and will rapidly go extinct. As the saying goes, “Live free or die.”
There is another reason to hold inorganic nanorobots suspect—we don't observe them. If diamond-geared self-replicating assemblers could be built, they would be ideally suited for dispersal across interstellar space using microscopic solar sails for propulsion. If, in the vast sweep of past time, a single species anywhere in the Milky Way developed such micro-automatons, it long since would have been able to use them to colonize the entire galaxy. All life on Earth would be based on nanorobots. But since this is not observed, we are driven toward concluding that either (a) there is no other intelligent life in the galaxy, or (b) nonorganic nanotechnology of the self-replicating micro-Babbage robot type described by Drexler is impossible. Since we know that the evolution of intelligent life is possible, but we do not know that nanotechnology is, I must consider (b) the more likely alternative.
It may be observed that bacteria, the organic nanocritters of nature, are also capable of surviving spaceflight. We would therefore expect that if bacteria had evolved (or been developed) elsewhere in the galaxy, they would be the basis for life on our planet. Interestingly, they are. Not only are bacteria the earliest known inhabitants of the Earth, but the higher eukaryotic cells that compose all animals and plants are clearly evolved from symbiotic colonies of bacteria. The possible broader significance of this will be discussed further in a following chapter. For our purposes here, however, it suffices to say that the omnipresence of organic self-replicating nano-spacefarers (bacteria) and the absence of nonorganic nano-assemblers are evidence for doubting the feasibility of Drexler-style nanotechnology.
But maybe nanotechnology isn't impossible—maybe it's just incredibly difficult. Maybe the reason nobody else has invented it is because they weren't smart enough, or didn't try long and hard enough, or were scared of the consequences of it getting out of control. Maybe everyone else just decided that using bacteria was easier and sufficed for their purposes. Maybe there really is a way to initiate and control nanotechnology, and it's just waiting for someone to invent. In every field of endeavor, someone has to be first. Maybe that someone could be us.
That's a lot of maybes. But it's worth some speculation, because if the promise ever does pan out, programmable self-replicating nanomachines will offer our descendants powers of creation limited only by the rate at which solar flux provides the energy needed to drive work in a given region. If we continue the vector toward ever-growing technological sophistication that necessarily will accompany our transformation into first a Type II and then Type III civilization, the intricate wizardry required to develop nanotechnology might someday fall within our grasp. And who knows? Perhaps, in the still more distant future, even greater capabilities could become possible—building machines not out of atoms or molecules but from subatomic particles such as atomic nuclei. Operating on a scale thousands of times smaller and faster than even nanomachines, such picotechnology might draw its energy not from chemical reactions but from far faster and more powerful nuclear reactions. The capabilities that such programmable picomachines would make available could only be described today as sheer magic.
In the meantime, however, my bet is on bioengineering. Life offers us a tried-and-true type of self-replicating micromachine, and the programming manual is already in our hands. With our brains and their muscle, human-improved microorganisms will do some very heavy lifting in the hard work required to bring dead worlds to life.
LIGHTING STARS
It is better to light a candle than to curse the darkness.
Stars are the sources of life. Enormous engines of nuclear fusion, they pour light out into the cosmos, warming the dead cold of space and providing the antientropic power needed for the self-organization of matter. Starry nights have a mystic beauty, but when considered from a scientific standpoint, they are even more beautiful than they look. For the million specks of light that adorn the black velvet of a dark night sky are, in fact, nothing less than a million fountains of life.
Without question, there are numerous worlds too far from any star to support life. In our own solar system, we find world-sized moons of Jupiter and Saturn that can only be terraformed with the aid of giant reflectors, and worlds beyond, such as Neptune's giant moon Triton, for which the reflectors required would be so huge that the effort is difficult even to contemplate.
Our sun is actually among the brightest 10 percent of stars. Most of the stars we know are much dimmer type K orange dwarfs or type M red dwarfs, and there are likely legions dimmer still: brown dwarfs, too small to ignite fusion, whose dead planets therefore orbit endlessly in frozen darkness.
What if we could light their fires?
If the object in question is an actual luminous star, such as a type M red dwarf, or even a brown dwarf, we could amplify its power by using solar sails to reflect back a small portion of the star's output. The rate at which thermonuclear fusion proceeds in a star goes as a strong power of its temperature. For proton-proton fusion in a star the size and temperature of our sun, reaction rates scale as the fourth power of temperature, while for cooler stars, the temperature dependence is stronger. If the CNO cycle is being used by the star to catalyze fusion, then the reaction rate will increase in proportion to T^20 (!!!). So even increasing the temperature of a star by a small amount through reheating with reflected light can cause a large increase in power generated. This increased output will cause the star's temperature to rise further, which will amplify output yet again.
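To make the feedback concrete, here is a minimal sketch, in Python, of how sharply fusion power responds to temperature. The exponents are the ones quoted above; the 5 percent temperature boost is purely an illustrative assumption.

```python
# How a modest temperature rise boosts a star's fusion power, assuming
# the reaction rate scales as T^n: n = 4 for proton-proton fusion in a
# sun-like star, n = 20 for the CNO cycle (exponents quoted in the text).

def power_boost(temp_increase_fraction, exponent):
    """Relative fusion power after a fractional temperature increase."""
    return (1.0 + temp_increase_fraction) ** exponent

# Illustrative assumption: reflected starlight heats the star by 5 percent.
for n in (4, 20):
    print(f"T^{n} scaling: 5% hotter gives {power_boost(0.05, n):.2f}x the power")
```

With T^4 scaling, a 5 percent temperature rise yields roughly 22 percent more power; with T^20 scaling, it more than doubles the output, which is why even modest reheating by reflected light can snowball.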
Type M stars outnumber all other types three to one. Brown dwarfs may well outnumber all luminous stars put together by several orders of magnitude. If we can set these bodies alight, we can vastly expand the domain of life in the universe.
In his book Star Maker, written in the 1930s, the British philosopher Olaf Stapledon compared the star maker to God.13
Gods we'll never be. But starmaking is a very noble profession.
If we learn to light stars, we will become capable of bringing not only planets but whole solar systems to life. That's not too shabby for the children of tree rats.
In the early universe, nearly all matter was hydrogen or helium. The heavier elements, including the carbon, oxygen, and nitrogen vital to organic chemistry and life, were all made in stars and spread through the cosmos by stars in nova and supernova explosions. We are stardust, warmed to life by a mother star and now ready to leave the nest to seek our fortune and make our mark among her siblings.
Ex astris, ad astra.
Stars have made life. Life should therefore make stars.
FOCUS SECTION: THE KEPLER MISSION
Because not only the sun but almost every large object in our own solar system is encircled by smaller orbiting bodies, it has always been reasonable to suppose that the stars have planetary systems too. Indeed, as far back as the Renaissance, the Italian philosopher Giordano Bruno advanced the claim that the stars were themselves suns, surrounded by worlds that, like our own, are inhabited by intelligent beings. These people, when they look up into the sky, see us, and therefore, “we are in heaven.” In other words, the laws of the heavens are the same as those of the Earth. This being true, the human mind should be able to comprehend the nature of the universe.
For this daring hypothesis, which stands as the fundamental basis of science itself, Bruno was burned at the stake in 1600. Notwithstanding his fate, many others more favorably situated tried to prove it over the centuries that followed, using ever-better telescopes to search for extrasolar planets. However, it was not until the 1990s that a single one was detected.
The reason for this rather extended delay is that any planet orbiting another star would not only be billions of times dimmer than a planet in our own solar system (some of which, like Uranus, Neptune, and Pluto, were not themselves detected until fairly recent times) but also be outshone billions of times over by its own star. This combination made such planets practically invisible, even to very good twentieth-century instruments like the two-hundred-inch-diameter Mount Palomar telescope.
With direct imaging nearly impossible, more sophisticated techniques were advanced. The first to succeed involved looking for the wobble that might be induced in a star's motion by a heavy planet orbiting close by, pulling the star this way and that as it circled about. If the planet's orbital plane lay nearly edge-on as seen from Earth, the star would be pulled toward us when the planet came in front, temporarily Doppler shifting its light toward the blue end of the spectrum. Similarly, when it went around the back, it would pull the star away, shifting its light toward the red. By observing these periodic shifts in spectra, the existence of such planets could be inferred. Using this technique, 51 Pegasi b, the first extrasolar planet found orbiting a normal star, was discovered in 1995, with several more found each year afterward. (An alternative gravity-induced-wobble technique enabled discovery of a planet orbiting a neutron star in 1992.)
However, this Doppler-wobble detection method only works for really massive planets orbiting very close (as in a few percent of the Earth's distance from our sun) to their stars and so is limited to finding uninhabitable “hot Jupiters.” Our own solar system would be undetectable to alien astronomers if they were restricted to this technique.
There had to be a better way, and NASA Ames astronomer William Borucki thought he had it. Instead of looking for the faint reflected light from an orbiting planet, he reasoned, why not look for the much larger loss of light that would occur when a planet passed in front of its star, partially eclipsing it as seen by us? The Earth is a hundredth the diameter of the sun, and so it would eclipse a ten-thousandth of the sun's light as seen by an observer in another solar system. The same would hold looking the other way. By the 1980s, photometers with such sensitivity were available. If we could launch one into space, attached to a good telescope with appropriate gear for recording massive amounts of data to compare light levels observed from thousands of stars over time, we would be able to detect all kinds of planets, including other habitable Earths!
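The eclipse arithmetic works like this: the fraction of starlight blocked equals the planet-to-star area ratio, which is the square of the diameter ratio. A short sketch in Python, using the round numbers above:

```python
# Transit depth: the blocked fraction of starlight is the square of the
# planet-to-star diameter ratio (blocked area scales as diameter squared).

def transit_depth(diameter_ratio):
    """Fraction of a star's light blocked by a transiting planet."""
    return diameter_ratio ** 2

# Earth is roughly 1/100 the sun's diameter, per the text's round figure.
depth = transit_depth(1 / 100)
print(f"Eclipsed fraction: about {depth:.4f}")  # about one part in ten thousand
```

Detecting a one-part-in-ten-thousand dimming is exactly the photometric sensitivity the text says became available in the 1980s.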
So Borucki began to campaign for such a mission, holding the first workshop to discuss the idea in 1984 and pulling together a team. Progress was slow in convincing the NASA establishment, which had other priorities, but things began to open up in the 1990s when NASA administrator Dan Goldin initiated the Discovery program, which invited independent teams with creative mission concepts to seek funding in repeated open competitions, with proposals judged comparatively on the basis of cost, risk, and perceived scientific value. Borucki's team entered the competition and lost. They tried again the second time and lost again, and then again. But the fourth time around, they won. In 2001, the Kepler mission (named after the discoverer of the laws of planetary motion, at the insistence of team members Carl Sagan and Jill Tarter) became the tenth mission to be funded by the Discovery program.
After much hard work, budget struggles, delays, and other heartache, in March 2009 Kepler was finally launched into interplanetary space, following behind the Earth about sixty degrees in its orbital track around the sun. Aiming its 0.95-meter Schmidt telescope toward a region in the sky between the stars Deneb in the constellation Cygnus and Vega in Lyra, it began its scientific reconnaissance in June 2009. Deneb and Vega are two of the three corners of the “Summer Triangle” well-known to stargazers (the third is Altair, in the constellation Aquila). They can be easily seen by observers in the northern hemisphere on a clear summer night. They lie in the Milky Way, but not its densest-packed part, which would be toward the center of our galaxy. Rather, they lie in the plane of our galaxy but in the direction of the sun's orbit around its center, so that when we look toward them, we are seeing stars that co-orbit the galaxy with us at our same distance from the galaxy's core—which is to say, about halfway out. The field of view of Kepler encompasses about ten degrees by ten degrees, allowing it to see about 0.25 percent of the sky at any one time. Within this field of view, there were about 500,000 stars visible to Kepler, which the team cut down to 150,000 candidates for observation, as many of the others had natural variations in brightness that would make Kepler's photometric technique of planet detection ineffective. The telescope then began to collect data and process it onboard to limit the load for transmission, and the astronomers began to watch and wait for the results.
Figure 8.3. Kepler's field of view. Image courtesy of NASA.
A planet will only eclipse its star if it orbits in a plane that points toward the Earth. This fact reduces its chance of detection by Kepler by a factor of two hundred. Furthermore, since Kepler requires three eclipses from a given planet to establish that a single body performed two orbits of the same period while Kepler was watching, only planets with orbital periods shorter than half the total observation time can be detected. So, for example, after the first two months of operation, Kepler could only detect planets with orbital periods (“years,” if you will) of less than one month. To go around so swiftly, a planet would have to orbit very close to its star; in our own solar system, it would circle about halfway between Mercury and the sun. So such superhot worlds are easiest to detect, and, to the joy of the team, Kepler found dozens of them virtually immediately. But the planets of greatest interest to us are those that could possibly harbor life. These orbit further out, and, moving more slowly around a longer path, they take longer to complete a circuit and thus longer to be detected. The longer Kepler could keep its eyes focused on its single field of view, the more interesting its results would become. But Kepler was a low-cost spacecraft built and operated within the limited budget of a Discovery mission: about $500 million, around a fifth as much as flagships like Galileo or Cassini. To find habitable worlds with Earthlike years, it would need to keep working well for at least three years. How long could it last?
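The two selection effects just described can be put in rough numbers. The alignment odds below use the standard geometric approximation, stellar radius divided by orbital radius; that formula is an assumption not spelled out in the text, but it reproduces the text's factor of two hundred for an Earth analog.

```python
# Two Kepler selection effects, in rough numbers.
# 1) Geometric alignment: a transit is visible only from directions nearly
#    in the orbital plane; the standard approximation for the odds is
#    R_star / a (stellar radius over orbital radius). This formula is an
#    assumption here, not stated in the text.
# 2) Three-transit rule: confirming a period takes three eclipses, so only
#    periods up to half the total observing time are detectable.

R_SUN_KM = 696_000          # solar radius, km
AU_KM = 149_600_000         # Earth-sun distance, km

def transit_probability(star_radius_km, orbit_radius_km):
    return star_radius_km / orbit_radius_km

def longest_detectable_period(observing_time):
    return observing_time / 2.0

odds = transit_probability(R_SUN_KM, AU_KM)
print(f"Earth-analog alignment: about 1 in {1 / odds:.0f}")
print(f"After 2 months of observing, longest detectable period: "
      f"{longest_detectable_period(2):.0f} month(s)")
```

The alignment result comes out to about 1 in 215, the text's "factor of two hundred."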
It lasted four years. And not only did it detect many Earthlike worlds orbiting in their stars' habitable zones, it found innumerable worlds of every possible description, orbiting all over the place. It found rocky worlds and gaseous worlds, and water worlds, and ice worlds, and lava worlds, and iron worlds, and diamond worlds: solar systems galore with components and arrangements unanticipated even by science fiction writers.14 In all, it detected some 4,000 candidate planets, with more than 2,500 now confirmed. These statistics are incredible because, given the limits of its detection technique, Kepler could only have been expected to find around 800 planetary systems among its 150,000 targets. The implication is that virtually all stars are surrounded by solar systems with multiple planets. That means there are hundreds of billions of planets to be found in our own galaxy alone. Furthermore, Kepler consigned to the trash can all previous theories of planetary formation built to explain why our own solar system, with its four small rocky planets orbiting close to the sun and surrounded by four large gas giants orbiting further out, necessarily had to be constructed that way. In the words of one scientist: “It is now apparent that any planet which is physically possible exists.”
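The "around 800" expectation follows from that same transit geometry. Here is a hypothetical back-of-the-envelope, assuming for illustration that every target star hosts one planet at an Earthlike distance:

```python
# Naive expectation for Kepler's haul: even if all 150,000 target stars
# had a planet at an Earthlike distance, only about 1 in 200 systems
# would be aligned for transits (the text's geometric factor). The
# one-planet-per-star assumption is purely illustrative.

targets = 150_000
alignment_odds = 1 / 200

expected_detections = targets * alignment_odds
print(f"Expected transiting systems: {expected_detections:.0f}")  # 750
```

That 750 is the text's "around 800"; finding some 4,000 candidates instead is what implies multiple planets per star.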
Figure 8.4. Kepler has discovered more than four thousand planets. The nature of the instrument causes bias toward finding larger planets with short orbital periods. Much larger numbers of smaller, long-period planets may be inferred. Image courtesy of NASA.
We are living in a universe of all possible worlds.
Two of the four reaction wheels necessary for accurately pointing Kepler gave out in 2013, putting an end to its systematic long-duration survey of the Deneb-Vega sector. But the resourceful team came up with a clever idea of using natural forces to keep Kepler pointing away from the sun, allowing it to sweep through the zodiac with a series of sixty-day observing campaigns on each sector. This has allowed it to keep operating, discovering hundreds of additional short-period planets observable by looking outward through our own solar system's ecliptic plane.
The resounding success of Kepler inspired photometric planet detection campaigns to be launched by many large ground-based telescopes, resulting in the discovery of an additional 1,500 planets all over the sky. More powerful follow-up will soon be available from the recently launched TESS, which promises to multiply Kepler's discoveries a thousandfold.
But the summary result is already clear. As noted astronomers Sara Seager and Andrew Howard summed it up delivering the prestigious Pickering Lecture to the American Institute of Aeronautics and Astronautics in September 2018:
“On the basis of the available data, we estimate that 20 percent of stars have Earthlike planets in their habitable zone.”
Wow.
Figure 8.5. Kepler team member Sara Seager delivers the astonishing mission verdict: one in five stars harbors an Earth-sized planet in its habitable zone.