TECHNOTOPIA
Will life be worth living in 2000 AD? . . . Scientists have looked into the future and they can tell you. . . . You will be whisked around in monorail vehicles at 200 miles an hour and you will think nothing of taking a fortnight’s holiday in outer space. . . . Doors will open automatically, and clothing will be put away by remote control. . . . In commercial transportation, there will be travel at 1000 mph at a penny a mile. Hypersonic passenger planes, using solid fuels, will reach any part of the world in an hour. . . . And this isn’t science fiction. It’s science fact—futuristic ideas, conceived by imaginative young men, whose crazy-sounding schemes have got the nod from the scientists.
It’s the way they think the world will live in the next century—if there’s any world left!
—“WILL LIFE BE WORTH LIVING IN 2000 AD?” WEEKEND MAGAZINE, JULY 22, 1961
 
Such visions of endless mechanical progress, such totalitarian utopias, such realistic extrapolations of scientific and technical possibilities all played a more active part in practical day-to-day changes than has usually been realized. These anticipatory subjective promptings were always in advance of actual experience, insistently beckoning, pointing ahead to the next step, breaking down resistance by suggesting that any attempt to reduce the tempo of change or to alter its direction was doomed by the very nature of the universe. . . .
—LEWIS MUMFORD
 
 
 
THE WORD “TECHNOTOPIA” is a portmanteau of “technology” and “utopia.” It conjures up a vision of a world in which automation and various advanced, often hypothetical technologies allow (some) human beings to enjoy a high quality of life and large amounts of leisure time. Of course, all human cultures have had technology, and many indigenous cultures have been marked by a high quality of life and ample leisure as well, but for my scenario I’ll focus on a society driven by high levels of industrial technology. And although some visions of a technological future have been driven by a cornucopian worldview in which technology somehow renders a healthy environment obsolete, for the purposes of our discussion let’s take a slightly less preposterous approach and assume that this future society makes sustainability a primary social value.
To summarize, this technotopia is an “ideal” society driven primarily by technological innovation and marked by a generally high quality of life. In this scenario, high levels of technology can be maintained indefinitely through near-perfect recycling and manufacturing systems. Population and consumption are firmly constrained in an attempt to maintain this society indefinitely. These constraints could be through market forces (having kids is expensive), by government regulations, or by social mores regarding reproduction. For the purposes of this scenario you can choose whichever option seems most appealing—or least dreadful—based on your own political bent. It doesn’t really matter how, just that mass society has somehow come to terms with the fact that it lives on a finite planet where infinite growth simply won’t fly.
This hypothetical society has placed limits on growth in an attempt to keep disruption of remaining ecological systems minimal. However, because of land demands for agriculture, and especially for biofuels, most of the terrestrial surface of the planet is farmed to meet human needs for energy, fiber, and other raw resources. Mining, remember, is based on exploiting nonrenewable resources. In our technotopia, all minerals that are economically accessible have already been mined out, leaving metal recycling and the use of biologically derived materials to supply the manufacturing industry.
Remember that human quality of life is paramount in this technotopia—and we will ignore for now the fact that from the beginning, this culture has been based on the gaining of riches by the wealthy at the expense of the labor of the poor and the land (in other words on slavery), so I’ll do for a moment what most writers in this culture do, and only concern myself with the quality of life of the beneficiaries of this pyramid scheme—but that as an industrial society it’s still based on the activities of machines. Machine manufacturing is usually big, fast, hot, or otherwise hazardous to humans. So, dangerous or toxic industrial jobs are carried out by a vast number of robots, which are made mostly out of materials like biodegradable corn plastic. To sustain their construction, maintenance, and fueling, these robots collectively consume more resources than the entire human population. Any toxins that are produced are either remediated or sealed in underground vaults. Now, unsustainable industrial activities like mining have been largely phased out—mines are now used to store toxic by-products of industry in hermetic containers—but landfills and old industrial sites are mined for the resources they contain.
133
This technotopia does not use—or require—fossil fuels. Because fossil fuels are a finite resource, any significant use of them is unsustainable.250 In modern society, fossil carbon extracted from the ground is physically made into plastic and many other synthetic materials. This technotopia will have to get that carbon from other sources. It’s safe to assume that they will also want to make their synthetic materials as nontoxic and biodegradable as possible. Polymers that conform to these requirements already exist. Do those polymers have some potential to help our technotopians?
I decided to talk to an expert, Dr. Tillman Gerngross. Dr. Gerngross is a professor of engineering at Dartmouth College and holds advanced degrees in chemical engineering and molecular biology. Although Dr. Gerngross has studied biologically derived plastics in great detail, he’s not employed by a bioplastics manufacturer, and so is able to offer us a more honest perspective (never forget Upton Sinclair’s line about how hard it is to make a man understand something when his job depends on him not understanding it).
Bioplastics are plastic materials derived from biological materials, like corn, as opposed to oil, as is the case with most modern plastics. Although they can be biodegradable, bioplastics aren’t necessarily so. Many early plastics were derived from biological materials, but were deliberately designed to be non-biodegradable. Dr. Gerngross and I mostly discussed three specific types of plastics: polyhydroxyalkanoates (PHAs), polylactic acids (PLAs), and various starch composites (which are usually rather brittle).
I asked Dr. Gerngross how these plastics are manufactured. He suggested that we focus on the PHAs and PLAs, which are the dominant approaches in the United States. “The starting materials for any of these approaches are typically carbohydrates, mostly sugars. So of course, various agricultural feedstocks lend themselves to providing these sugars,” he told me. These sugars can be put through a fermentation process and converted either directly into the desired plastic, or indirectly through different chemical intermediates.
Dr. Gerngross told me that one of the key issues in bioplastics manufacturing is that “you need a source of carbohydrates, and when we look at industrial-scale sources of sugar or carbohydrates, corn-derived dextrose or glucose is the primary source.” Because we have a lot of information about growing corn, and about the specific processes used to convert those corn sugars into plastic, we can make a very solid analysis of what’s involved. “And so then you go back and say, ‘How do we make this particular material?’ Well we make it by growing corn in large amounts. And then you can say, ‘We have to make a ton of glucose, to feed it into a process by which we convert that glucose either into an intermediate, to make polylactic acid, or directly into polymer: what’s the environmental impact of generating a ton of glucose?’” We can ask, “‘What are the yields in corn farming? What are the fertilizers? What are the pesticides, herbicides, insecticides, all that stuff that goes into corn farming? What’s the energy required to make the fertilizer?’ And you can do a full analysis of the environmental impact of making a ton of glucose.”
Dr. Gerngross also told me that we have “very solid numbers” on the energy in various steps of bioplastics manufacturing, but that “the unfortunate net result is that going through these multiple processing steps requires a lot of energy to get glucose, and requires even more energy to convert that glucose into a polymer. And our analyses in my lab here have shown that it’s almost a wash with conventional polymer production. In fact, the energy consumption is significantly higher.” Dr. Gerngross pointed out that the energy required to turn oil into a plastic—say, polyethylene—is actually relatively small, and the conversion steps are comparatively energy efficient. In other words, even though a lot of energy goes into plastics manufacturing in the modern industrial economy, a lot more energy would have to go into it if they were making plastics out of corn sugar instead of oil.
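As a rough illustration of how such a cradle-to-gate comparison is put together (the step names and all numbers below are placeholder assumptions of mine, not figures from Dr. Gerngross’s lab), the analysis amounts to summing the energy inputs, per ton of finished polymer, across each processing step:

```python
# A minimal sketch of a cradle-to-gate energy comparison for one ton of
# finished polymer. Every step name and number here is an illustrative
# placeholder assumption, not a measured value from Gerngross's analyses.

bioplastic_steps = {
    "corn farming (fuel, fertilizer, pesticides)": 20.0,  # GJ per ton polymer
    "wet milling corn to glucose": 15.0,
    "fermentation to monomer or intermediate": 18.0,
    "polymerization and finishing": 12.0,
}

polyethylene_steps = {
    "oil extraction and refining": 10.0,  # GJ per ton polymer
    "cracking naphtha to ethylene": 16.0,
    "polymerization and finishing": 8.0,
}

def total_energy(steps):
    """Sum the per-step energy inputs, in GJ per ton of polymer."""
    return sum(steps.values())

print(f"bioplastic route:   {total_energy(bioplastic_steps):.0f} GJ/ton")
print(f"polyethylene route: {total_energy(polyethylene_steps):.0f} GJ/ton")
# With placeholder numbers like these the corn route comes out higher,
# which is the qualitative result Gerngross describes above.
```

The point of the structure, not the numbers: each route is a chain of conversions, and the corn route simply has more energy-hungry links in the chain.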
Dr. Gerngross made another important point. He reminded me that most of the energy used to grow corn and to process bioplastics comes from burning oil and most especially coal. “So really, when you then look at the whole situation from a net environmental impact situation, clearly these biopolymers have a negative net impact on greenhouse gas emissions, clearly they have a net negative impact on land use, because the land you’re using to grow corn is not used to sequester carbon through a natural environment or biotope, and in fact is competing with corn for animal feed and for human use.”
He told me that people in the bioplastics industry are taking the approach that “it’s green because it can be grown again.” It’s true that source crops can be grown over and over, he told me, “But what is not true is that the energy going into the process is renewable. In fact, it’s referred to by many in the industry as ‘the dirty secret.’”
Dr. Gerngross does not have a very high opinion of the ecological implications of the bioplastics industry. “My view is, if the objective is to have a positive environmental impact, the impact of these two technologies does not hold up to any meaningful scrutiny. If your desire is to preserve fossil resources, I think there’s no case to be made, because they’re just going to burn it all.”
He then made another important point. Even though conventional plastics manufacturing involves using oil, coal, and so on, much of the carbon in those energy sources ends up incorporated in the final product. “You can make an argument that making polyethylene involves extracting fossil carbon out of the ground, burning part of it, and then ending up with fossil carbon in the form of a plastic bottle or a plastic bag. And when it goes back into a landfill it goes back into the ground. It’s basically a very efficient carbon sequestration mechanism.” In contrast, bioplastics manufacturing involves “using a lot of agricultural land, and you’re using significantly more energy, which currently is derived by burning fossil carbon (therefore, you have a negative impact on air quality) to make something that is biodegradable, which arguably also is not such a great thing.”
Why doesn’t he think making biodegradable plastics is such a great thing? He told me that if they do produce biodegradable plastics, and “if these materials end up in landfills they will degrade in landfills at a rather slow rate. But because of the anaerobic environment in landfills, the carbon that is released is not released in the form of CO2, it is mostly released in the form of methane. And methane is a very potent greenhouse gas. It’s about twenty times more potent than CO2.” He said that he’s not an advocate on either side, but that he wants people to look at the problem from a more rigorous and analytical perspective, “not this fuzzy ‘it’s green because it comes from a renewable plant’ approach.” Which, he pointed out, doesn’t necessarily mean much, given the unsustainability of current agricultural practices.
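The arithmetic behind that concern is worth making explicit. Here is a minimal sketch using the rough “twenty times” potency figure quoted above and ordinary atomic masses; treating all of the plastic’s carbon as escaping one way or the other is my simplifying assumption:

```python
# Compare the warming effect of one kilogram of plastic carbon released as
# CO2 (aerobic decay) versus as methane (anaerobic landfill decay).
# The GWP of 20 is the rough "twenty times" figure quoted above; published
# 100-year values vary.

C, H, O = 12.0, 1.0, 16.0        # approximate atomic masses (g/mol)
kg_carbon = 1.0

mass_as_co2 = kg_carbon * (C + 2 * O) / C   # ~3.67 kg of CO2
mass_as_ch4 = kg_carbon * (C + 4 * H) / C   # ~1.33 kg of CH4

gwp_ch4 = 20.0                   # kg CO2-equivalent per kg CH4 (rough)
co2e_methane_path = mass_as_ch4 * gwp_ch4

print(f"released as CO2: {mass_as_co2:.1f} kg CO2e")
print(f"released as CH4: {co2e_methane_path:.1f} kg CO2e")
# About 27 kg CO2e versus about 3.7: the same carbon does roughly seven
# times more warming if it leaves the landfill as methane.
```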
I asked him what he thinks is behind the growing popularity of “green plastics,” if those plastics aren’t actually as green as they are marketed to be. Dr. Gerngross told me that there are several aspects at play. “One is that the USDA has been extremely aggressive in finding new ways of getting stuff that farmers grow to consumers. This is sort of an obvious statement, but look what has happened with corn, the ability of making extracts and intermediates of corn commodities that now go into so many different things. I think of the large amount of corn that’s converted to glucose and high-fructose corn syrup: that’s what the USDA wants. They want large outlets for the stuff that farmers make. Whether that’s good for the environment or for society, that’s a completely different question.” He pointed out that the corn lobby has been very successful in getting people to eat more high-fructose corn syrup. Americans now eat about sixty-three pounds per person per year, as a sweetener in everything from soda pop to ketchup to yogurt. This, as Dr. Gerngross pointed out, has ultimately resulted in “not only obesity, but also diabetes.” But remember, you can only dump so much corn product into food. People can only eat so many corn tortilla chips in a year, only chow down on so many hamburgers made from corn-fed feedlot beef, only slurp down so many Coca-Colas. Eventually, if you are, say, Cargill, and you want to keep making more and more profits, you have to find a place to dump all that corn. “I think the USDA is very heavily promoting new ways of using agricultural raw materials to make more than just food. Certainly when you look at their granting programs, they’ve promoted all those very heavily, with the completely ludicrous argument that it’s going to prevent oil imports, et cetera, et cetera.”
He said that another reason “green plastics” are becoming popular is that the public doesn’t take into account the behind-the-scenes energy costs and ecological impacts of different materials. Materials manufacturers aren’t really bothered by this fact. “They’re not in the education business,” he told me.
Dr. Gerngross outlined a study done by McDonald’s in which they examined the environmental impact of Styrofoam fast-food containers and compared them to cardboard containers. What they found was that it actually took significantly more energy, and made a significantly larger environmental impact, to produce cardboard containers. “So, McDonald’s was fully aware of that. Now when you go out and ask the public, ‘What do you think is more environmentally friendly? A polystyrene cup or a cardboard cup made from renewable resources?’ What do you think the answer’s going to be? I think the vast majority of people are going to say, ‘Well, the cardboard is more environmentally friendly.’ So McDonald’s made a marketing decision: they said, ‘We don’t care, really. We would rather do what the public perceives as environmentally friendly than what is really environmentally friendly.’ [Or rather, what is slightly less environmentally hostile.] For them that’s a wise marketing decision, but again it’s not necessarily the best thing for the environment.
“And the dirty secret, of course, is that the US paper industry burns more fossil carbon than the entire chemical industry uses as raw material. Again, this question of whether something is renewable runs into this very misconstrued notion that if that raw material itself is renewable, that’s a good thing. The problem is that the raw material may be renewable, but the energy required to convert the raw material into something useful . . . typically it’s very, very energy intensive. And the paper industry is, I think, a great example of that.”
I asked Dr. Gerngross if these plastics—PLAs and PHAs—were actually biodegradable. What would happen if you put them out in the environment? Would they yield toxic by-products? He told me that these bioplastics, at least, are essentially biodegradable. Many microbes have enzymes that can break them down, and the plastics would be a kind of food source. This wouldn’t produce toxic by-products, but he reminded me that although the methane produced is “natural,” it can still have a negative impact if produced in large quantities.
Since he’s a molecular biologist, I asked him if it would be possible to selectively breed or engineer bacteria that would be capable of breaking down conventional plastics, and here’s what he told me: “It’s possible, but again, why would anyone want to do that and release those bugs into the environment? I don’t think that, necessarily, would be a very good idea. And in addition to that I would argue that degradability is not a good thing. The reason it’s [considered] a good thing is because society contains a distribution of different people, and some of them don’t care about the environment and are willing to throw out plastic bags or plastic bottles. Now, are we trying to support that behavior by saying, ‘The plastic bottle that you are throwing out of your car window, we’re now going to design, engineer in a way where it degrades?’ We can do that, yes. But is that a good thing? I don’t think so. I think we should manage our environment in a more socially responsible way.”
134
The specific shortcomings of bioplastics actually outline some of the overarching problems with the technotopia scenario, and with technofixes in general (a technofix being a proposed solution to a societal problem that relies solely on the introduction of a new technology, without requiring changes in the structure of industrial society). If the culture is, from core to surface, bottom to top, micro to macro, based on exploitation, what sane person could think that yet another piece of technology that facilitates this exploitation will make the culture any less exploitative?
As of this writing, there’s a headline on a popular technology news site about scientists who have managed to get bacteria, with the help of several intermediate steps, to produce gasoline. Below the article is a series of comments about how great it is that “we” won’t be dependent on oil, and about how now “we” can simply make renewable hydrocarbon fuels. The problem with this, and with other biologically synthesized fuels like biodiesel, is that there is only so much photosynthesis that can happen on the earth’s surface. British newspaper columnist George Monbiot has noted that to fuel all of the cars in Britain with biodiesel would require farming every single acre of the country just for biodiesel, leaving no room for anything else.
The planet, having a limited surface area and limited growing seasons, has a finite total capacity for photosynthesis, a capacity measured by the net amount of carbon from the atmosphere “fixed” each year and integrated into organic matter: the amount of carbon fixed in any given area through photosynthesis is called “net primary production” (NPP). A recent study estimated that humans “appropriate” about one quarter of the Earth’s net primary production, through direct consumption in the form of food, indirectly by feeding plants to farm animals, or by land use changes like deforestation that affect NPP. In the dry understatement characteristic of published scientific papers, the authors note that “[t]his is a remarkable impact on the biosphere caused by just one species.” They also warn that their “results suggest that large-scale schemes to substitute biomass for fossil fuels should be viewed cautiously because massive additional pressures on ecosystems might result from increased biomass harvest.”
To some people, one quarter may not seem like a lot. After all, we’re still leaving most of the net primary production for the planet, right? Well, not so fast. First of all, this planet is home to millions or tens of millions of different species. This is something like having a city of millions of people where 99.99999 percent of the people live in modest houses and apartments, and one person lives in a gigantic house that covers a quarter of the entire city. In other words, this arrangement is not winning any awards in the fairness department. Especially when you consider that not only do humans make up the tiny, tiny, tiny minority of all species, but also the tiny, tiny, tiny minority of all biomass, being vastly outweighed by other groups of species, like insects, and even other specific species, like giant squid.
Moreover, it’s not like this carbon appropriation is happening evenly all over the planet. Human tax collectors aren’t dropping down and scooping up a mathematically fair amount of carbon from each square meter on the planet. Rather, some areas are totally destroyed or radically altered for the purposes of resource extraction, which results in major species extinctions.
If the issue of fairness doesn’t faze you, here’s an argument that the more self-interested can get behind. The Earth’s normal “photosynthesis budget” isn’t spent on lottery tickets and potato chips (or at least, the portion that isn’t appropriated by humans isn’t). All of that photosynthesis is spent maintaining healthy ecosystems, replenishing species, nourishing soils, and taking carbon out of the atmosphere so that the planet stays at a reasonable temperature. That photosynthesis is being spent to hold the planet together. Once again, how do you think the planet got to be so beautiful, resilient, and fecund before this culture started to destroy it?
A major cut in the photosynthesis budget can be a big deal, just as a major cut in your household budget can be a big deal. Imagine you make, to choose a nice round number, a thousand dollars every two weeks. During those two weeks, you need to spend two hundred dollars on rent, two hundred dollars on food, two hundred dollars on transportation to work at your job, and three hundred dollars on medical costs, clothing, and general maintenance. One day when you get to work, your boss—he’s a real jerk—tells you that he needs to “appropriate” 25 percent of your salary so he can get a new car. That’s progress, after all. “Don’t worry,” he tells you. “You still get to keep most of it.” So what do you do? Try to find a cheaper apartment? Start dumpster diving? Wear old clothes? Start farming out your children to relatives who are just as strapped? You can adapt to a small cut in your household budget, but a big cut means trouble with your essentials. The same goes for the planetary photosynthesis budget, and the planet doesn’t have any friends it can go to borrow money from if it hits rough times.
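To make the arithmetic of the analogy explicit:

```latex
\[
\underbrace{200 + 200 + 200 + 300}_{\text{essentials}} = \$900,
\qquad
\underbrace{0.75 \times \$1000}_{\text{income after the 25\% cut}} = \$750 \;<\; \$900.
\]
```

The cut does not come out of the slack; there is only $100 of slack. It comes out of the essentials.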
Within certain limits, human civilization can appropriate photosynthesis without major short-term consequences. But as that cooptation reaches those limits, the consequences (in terms of climate change, drought, or general ecological collapse) become overwhelming, even locally: prior to the arrival of this culture, what is now Iraq was covered in cedar forests so thick the sunlight never reached the ground, the Arabian peninsula was oak savannah, and so on. This culture destroys landbases wherever it goes. These consequences become a vicious cycle, in which more primary production is appropriated each year to deal with the consequences caused by last year’s consumption. Furthermore, and this is a running theme in this book, the benefits of this consumption go to a small group in the short term, and the costs are paid by others in the short term and everyone in the long term.
And then there are matters of scale. This is another shared problem between bioplastics in specific and technofixes in general. An act that’s sustainable at one size, clearly, is not necessarily sustainable when grossly enlarged. A very large system has quantitatively and qualitatively different impacts on its environment than a smaller-scale one. That’s why a water strider weighing one-thousandth of an ounce can move effortlessly across the surface of a lake, resting on the water’s surface tension, but a cow weighing one thousand pounds would plunge thrashing through the surface.
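The water strider and the cow are a textbook case of square-cube scaling: weight grows with volume, while the supporting force from surface tension grows only with the length of the contact line.

```latex
\[
W \sim \rho g L^{3}, \qquad F_{\sigma} \sim \sigma L
\qquad\Longrightarrow\qquad
\frac{F_{\sigma}}{W} \sim \frac{\sigma}{\rho g L^{2}}.
\]
```

Scale the creature up a thousandfold in length and the support-to-weight ratio falls by a factor of a million; the same nonlinearity is at work when a backyard practice is enlarged to an industrial one.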
And then there are impacts that an environment has on a creature. As evolutionary biologist J. B. S. Haldane famously wrote: “You can drop a mouse down a thousand-yard mine shaft; and, on arriving at the bottom, it gets a slight shock and walks away, provided that the ground is fairly soft. A rat is killed, a man is broken, a horse splashes.” The vast majority of animals on the earth are tiny invertebrates like worms and flies. And the vast majority of living creatures are microscopic. Clearly there are many advantages to staying small.
If you grossly enlarge any act to an industrial scale you will see destructive impacts to and from the environment. If you take a backyard sugarcane plot and turn it into a 10,000-acre ethanol farm, you are going to cause major ecological and social destruction. And at the same time, that industrial-scale structure is going to be much more brittle than a smaller structure, and more prone to collapse.
Our last common problems, and possibly our worst, have to do with social impact, power relationships, and control. That’s a subject that I’ll return to shortly.
135
If I’m going to talk about technology, it’s a good idea to have a clear definition of what it is. Often people who critique technology, or civilization, or “progress,” get pigeonholed as “antitechnology” or “Luddites.” Part of this stems from mean-spirited and misleading rhetorical techniques, and part of it stems from muddy ideas of what technology actually is.
Some people would define technology as anything humans make. I think this is too miserly a definition (and of course narcissistic, but why should that surprise us?)—a variety of nonhumans use tools creatively, including, among many others, primates, various birds, and invertebrates like octopi. A more inclusive definition is appropriate. I like to say that technology is any structure a creature employs which is not actually part of that creature’s body. This fits (more or less) with the definition of technology we used earlier, when talking about magical thinking.
By this definition, a snail’s shell, for example, would not be a technology, because it is part of the snail’s body. A bird’s song, on the other hand, is a form of communications technology. Birds can adapt songs to different circumstances, and many birds will learn new songs (or other sounds) and potentially employ them in problem solving. A bow and arrow is a technology. A beaver dam definitely qualifies as a technology, and so does a bird’s nest. A bent piece of wire used by a crow to retrieve a tasty piece of meat from out of reach is a technology, as would be the knowledge of how to make it. Some might argue that a crow bending a piece of wire into a tool doesn’t actually count as technology, because the crow is only making a tool out of something it found. But humans do the same—even advanced composite materials are made out of modified precursors, not conjured out of thin air.
Some examples pose fascinating conundrums when considered under this definition. For example, a hermit crab will find empty shells, or even empty cans or containers, and then use one of those as its new home. Does this qualify as technology?
At this point, the lines between what is and is not a technology become blurred (and that, of course, is part of the point). A fox might curl up in a hollow log, and though this is not as sophisticated as the crab’s means of attachment, it is in many ways the same. Is that a technology? How is a fox using a hollow log to keep warm all that different than you sitting in your wood home? The aquatic caddisfly larva will cover itself in a protective coating of available debris, from twigs to pebbles to small fragments of metal. This debris essentially becomes part of the larva’s body, which would from one perspective imply that it is not technology. But how is that different than other forms of armor? How is that different than the pacemaker placed in someone’s chest, or the pharmaceuticals you ingest that become a part of you? Or what about those species of cephalopods, like octopus and squid, who communicate through shifting patterns of color on their skin? If a bird’s song counts, why not a squid’s color patterns? Or a spider’s web, secreted out of the spider’s own body, but an undeniably sophisticated and adaptable structure which surpasses many examples of human high technology?
One of the things that makes technology so interesting—and so deadly—in this culture is speed. Technology now changes quickly, whereas a few centuries ago technological change was not so stunningly fast. Cell phones. Computers. Garage door openers: anyone over about forty remembers when they were unheard of, and now in the US at least they’re everywhere.
This rapid change is central to the issue of accumulating waste and pollution, because humans are creating more and more materials which have no timely or natural means of decomposition. Indeed, often the motivation is to produce materials that resist decay.
For non-technological structures this is generally not an issue. A snail’s shell, or a tree’s leaf, or a bone, are all built from a shared biochemical “palette.” In the billions of years life has been on this planet, living creatures have thoroughly explored the range of molecules that can be produced by living cells. When new molecules were developed that offered a survival advantage, the genes that produced them spread gradually. Sometimes those molecules would have resisted decay. Plants in particular have an incentive to produce indigestible compounds, since that’s one of the few means they have to discourage predators. But as those durable molecules became more commonplace, there would also be a greater incentive for predators and scavengers to develop an ability to digest them.
Imagine that a plant develops a waxy coating that can’t be digested by any other creature and doesn’t break down under normal environmental conditions. If this means the plant has fewer predators, it may produce more offspring and the genes for the waxy coating will spread. But at the same time, that waxy material will begin to accumulate in areas where the plants grow. The more commonplace this plant becomes, the greater the concentration of the wax, and the more incentive another creature has to work out how to eat it.
If it takes a long time for someone to figure out how to eat the wax, and it accumulates in the environment, it may actually start to harm other creatures that share habitat with the wax-producing plant. This may prevent the plant from thriving in areas it already inhabits, but it won’t stop the plant from spreading into new ones. Scenarios in which a living creature actually produces enough waste for it to accumulate to a danger point are exceedingly rare in nature.
Okay, fine, but are there actually any examples of a living creature producing enough of one indigestible material for it to accumulate significantly in the environment? Has any metabolic by-product ever built up to the point where it was a threat to other living creatures on a global scale? The answer to both questions is yes. On the surface, garbage apologists can use this as an excuse—“what we’re doing is natural”—but the way those scenarios played out in nature shows us yet again why our modern-day waste situation is so unnatural, and so destructive. More on this later.
We talked about an inedible waxy coating building up in our hypothetical example above. But there are real-world examples of materials that accumulate because they aren’t very digestible. In fact, close to two thirds of all the nonfossil organic carbon in the world is tied up in just two biological materials—cellulose and lignin.
Cellulose incorporates the majority of that carbon. Cellulose is a crucial building block for plants. Structurally, it has a superficial similarity to synthetic plastics. Cellulose and plastics are both polymers—large molecules made by connecting many identical units. One crucial difference is that cellulose is made out of many sugar molecules. This means it can be broken down into sugar, and then into water and carbon dioxide. In contrast, synthetic plastics are built from nonbiological units, which often include chlorine and fluorine atoms, and those are decidedly not harmless. In fact, some early plastics were made by modifying natural polymers like cellulose: that’s where “cellophane” gets its name.
Living creatures in general are very capable of dealing with sugar molecules, but in cellulose these molecules are linked to each other in a way that is difficult to separate. Because of this, many creatures—humans included—cannot digest cellulose. This isn’t necessarily a problem: even though we can’t digest cellulose, it’s not harmful for us to eat. It is good to get a certain amount—that’s what dietary fiber is made from—but we can’t live off of plants consisting largely of cellulose, such as grasses.
Now, if no one could digest cellulose, that would be a problem, because cellulose would continue to accumulate, and (harmless as it is) we—the inclusive we, including all beings, including plants, including the earth—don’t want the entire biosphere to be made out of cellulose. Fortunately, some creatures have evolved that ability. When a new material shows up, if anyone can figure out how to digest it, it’s probably going to be bacteria or other microbes. They have a couple of advantages: they have short and rapid generations, allowing them to essentially evolve faster, and bacteria can share genes with other bacteria who aren’t their offspring. Larger creatures may eventually work out partnerships with these bacteria. This is the case with cows and other ruminants. Although cows cannot digest the cellulose that makes up much of their food, one of their four stomachs houses bacteria who can eat cellulose. (That stomach is called the rumen: hence the name ruminants.) Those bacteria also synthesize vitamins and other beneficial compounds for the cows.
Although we haven’t evolved to digest cellulose, it’s actually not a tough nut to crack, biochemically speaking, if we compare it to lignin. Lignin is what makes wood, in a word, wood. It’s strong, tough, and very difficult to digest. That’s why if you’re lucky enough to take a stroll in an old growth forest, you can still see fallen logs from decades, or in some cases centuries, ago. Lignin, which makes up about 30 percent of the world’s non-fossil organic carbon, takes a long time to break down. And not many living creatures have mastered the trick. Really, only fungi are adept at breaking it down.
Now, again, if you’re lucky enough to take a stroll in an old growth forest, you can see that this is not a bad thing. In healthy forests in general, stumps and fallen logs play vital ecological roles. They provide shelter for animals from ants and sow bugs all the way to elk and bears. They act as nurseries for young trees and other plants. Have you ever seen a seedling sprouting from the top of a rotting stump? And perhaps in the same forest, have you seen a mature tree with roots branching out in midair, where a stump was when that tree was a seedling? The variation in terrain created by fallen trees yields a variety of ecological niches and microclimates, moist lower areas and dryer mounds, which ensures the ecological diversity of the forest’s plants, fungi, animals, and so on.
The carbon in cellulose has a similar role to play, though at a different scale. Fallen leaves on the forest floor retain moisture and nutrients, and feed the living detritivores there. As they’re broken down, they’re incorporated into the soil, increasing the soil’s ability to hold water, air, and nutrients, and feeding the microbes who feed the tree roots.
If cellulose and lignin were magically altered so they broke down overnight, the results would be disastrous. All of the carbon stored in fallen trees and leaves would be released into the atmosphere. The forest floor would become lifeless as it lost the ability to retain water and nutrients. Many plants and animals would lose their habitats and become extinct. Seedling trees would no longer have suitable places to take root, and the forest would die.
This is what makes these durable, natural polymers so different from synthetic polymers. They’re not waste. Rather, their evolution laid the basis for a great increase in the volume and diversity of life. Fallen trees and leaves are not just in the forest, they are the forest. They are its future as well as its past. Who can say that of a plastic fork?
136
There’s another big obstacle to an ecologically sustainable industrial technotopia: war.
Supposedly green military technologies have been making headlines more and more over the last couple of years. Wind turbines constructed at Guantánamo Bay, programs to develop a biofuel for fighter jets, and a hybrid-electric military jeep are among the most notable examples. It’s up for debate whether such initiatives demonstrate simply a series of shallow public-relations moves or a military recognition of peak oil and a looming energy crisis. In either case, most thinking people realize that war and sustainability don’t go together.251
In the technotopia I’m describing, the people of the future deal with finite supplies of various valuable metals by instituting a perfect cradle-to-cradle approach to industry, similar to that proposed by William McDonough (never mind that it still, several hundred pages later, is not going to happen). All manufactured metals are carefully fabricated, tracked, and recycled when a product reaches the end of its use (I almost wrote life span, except of course products aren’t alive: I mention this to show how deeply we’re all enculturated to identify the products of this culture with life). We can talk about how (im)plausible that is at the best of times, but war is not the best of times. War materiel, from bullets and uniforms, to missiles and bombs, to tanks and fighter jets, is built to be destroyed. There’s a good chance that in a time of war almost anything built by or for the military could be torpedoed, sunk, burned, blown up, nuked, or otherwise obliterated. That’s not a good way to build a cradle-to-cradle industrial system. In fact, various munitions are built specifically to be destroyed and converted into dust and unsalvageable wreckage. They are designed to become waste, and to waste something—or someone—else. The worst munitions don’t just physically destroy the target—they irradiate it. They also irradiate the battlefield, and the landscape. Case in point: depleted uranium.
Depleted uranium (DU) is primarily a by-product of the processes that “enrich” uranium used for nuclear warheads and reactor fuel. The military mostly uses depleted uranium for two purposes—armor and ammunition. The reason bullets are commonly made of lead is that lead is a fairly dense metal. Bullets are made out of dense materials so that a small projectile can carry a lot of energy. The less dense a projectile is, the less damage it can cause—think Nerf guns. Well, depleted uranium is about twice as dense as lead, which means it can do a lot of damage, and is especially effective in armor-piercing munitions. It’s also “self-sharpening,” meaning that when a DU projectile strikes a hard object, it fractures into smaller, sharp shards. Most of a projectile is turned into dust, but the remainder—which may weigh several pounds in larger shells—is a solid lump of uranium left on the battlefield. DU is also pyrophoric, capable of spontaneous ignition. This means when an armor-piercing DU penetrator is fired at an armored vehicle, it may punch through the armor and, by the time it enters the crew compartment, be fragmented into a radioactive dust that will catch fire. Just what you want in an armor-piercing shell. Not what you want if you care about a non-irradiated populace and landscape.
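The density argument can be put in one line: at a fixed muzzle velocity and projectile volume, kinetic energy is proportional to density (uranium is about 19.1 g/cm³, lead about 11.3 g/cm³).

```latex
\[
E_{k} = \tfrac{1}{2} m v^{2} = \tfrac{1}{2} \rho V v^{2}
\qquad\Longrightarrow\qquad
\frac{E_{k,\,\mathrm{DU}}}{E_{k,\,\mathrm{Pb}}}
= \frac{\rho_{\mathrm{U}}}{\rho_{\mathrm{Pb}}}
\approx \frac{19.1}{11.3} \approx 1.7.
\]
```

That ratio of roughly 1.7 is the “about twice” of the paragraph above, in round numbers.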
By the 1970s, the US had stockpiled about half a million tons of depleted uranium. Since DU is radioactive, it was proving expensive to properly store such large amounts. However, at that time the Pentagon was searching for new materials for armor-piercing munitions, and realized it could, as it were, kill two birds with one stone—make more effective weapons, and get rid of large and expensive stockpiles of DU. Isn’t recycling grand?
DU munitions were first used in the 1973 Arab-Israeli War; since then, the US military has used DU in the first Gulf War, in Bosnia, in Kosovo/Serbia, and in the more recent invasion and occupation of Iraq.252
Representatives from those countries have attempted to determine what health, safety, and decontamination measures are appropriate, but the US military has been less than helpful, even refusing to give locations where DU munitions have been used. During the first Gulf War, US forces fired close to a million rounds of DU ammunition, with a total mass of DU somewhere in excess of three hundred tons.253 It’s also been used in training operations in the US, despite the fact that DU use is prohibited outside of combat.
Although the existence and use of DU munitions was largely unknown outside of the military, it gained public attention in the early 1990s because of Gulf War Syndrome. American veterans of the Gulf War began reporting a host of strange symptoms, including immune system disorders and birth defects, and the departments of Defense and Veterans Affairs now recognize brain cancer, amyotrophic lateral sclerosis, and fibromyalgia as potentially connected to the Gulf War.254 A host of potential causes were suggested, including vaccines administered to soldiers, chemical weapons, and infectious diseases. The use of DU munitions was also suggested, since soldiers handled the munitions and were exposed to dust particles on the battlefield. After the Gulf War ended, Dr. Doug Rokke, who served in the US Army for thirty-six years, became the director of the Depleted Uranium Project for the Department of Defense. It was his job to investigate the health impacts of DU for the army. Starting from the perspective that his job was simply to make sure that DU munitions could still be used, he was shocked by what he found; the information brought him “to one conclusion: uranium munitions must be banned from the planet, for eternity.”255 He has since become an outspoken critic of DU weapons.
The morality and legality of using DU weapons has been, of course, extremely controversial. The US military continues to insist, predictably, that DU is essentially harmless in terms of radioactivity. It has been argued that depleted uranium munitions, though clearly a type of nuclear weapon, do not qualify as such because radioactivity is not their primary effect. Of course, radioactivity is not the primary effect of an atomic bomb, either—the primary effect is an enormous explosion. But that didn’t stop thousands of people in Hiroshima and Nagasaki from dying of radiation poisoning.
One report from the former Yugoslavia concluded that the US could not be brought before a war crimes tribunal for using DU munitions in that country, because there is no treaty which specifically and explicitly bans the use of depleted uranium.256 This seems a little like saying you can’t convict someone of murder for beating another person to death with a ball-peen hammer, because there’s no specific law against beating someone to death with a ball-peen hammer. There have been more sensible interpretations of international law regarding such weapons. Humanitarian lawyer and UN delegate Karen Parker analyzed international laws that apply to DU and concluded that “DU weaponry cannot be used in military operations without violating [humanitarian law], and therefore must be considered illegal.” The use of such weapons “constitutes a violation of humanitarian law . . . its use constitutes a war crime or crime against humanity.”257 Parker writes that there are “four rules derived from the whole of humanitarian law regarding weapons,” and hence, four tests we can apply to any weapon to see whether it is at all compatible with international humanitarian law: 1) The Territorial Test: “Weapons may only be used in the legal field of battle, defined as legal military targets of the enemy in the war. Weapons may not have an adverse effect off the legal field of battle.” 2) The Temporal Test: “Can only be used for the duration of an armed conflict. A weapon that is used or continues to act after the war is over violates this criterion.” 3) The Humaneness Test: “Weapons may not be unduly inhumane.” 4) The Environmental Test: “Weapons may not have an unduly negative effect on the natural environment.”258
Parker concludes that the use of DU munitions fails all four of these tests. It cannot be contained to a given field of battle, since DU dust or DU-contaminated soils can easily blow great distances from the original battlefield. And DU, with a half-life of 4.5 billion years, will continue to irradiate and toxify long after everyone who even remembers the war is dead. DU also fails the humaneness test, because DU contamination kills by inhumane means (such as cancer and kidney disease) and can also cause birth defects in those not yet born. And clearly, DU fails the environmental test, since contamination is extremely long term, and any cleanup (which almost never takes place) is extremely expensive and limited.
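To see how little difference even vast stretches of time make to DU, apply the standard radioactive decay law:

```latex
\[
\frac{N(t)}{N_{0}} = \left(\tfrac{1}{2}\right)^{t/T_{1/2}},
\qquad T_{1/2} = 4.5 \times 10^{9}\ \text{yr}
\qquad\Longrightarrow\qquad
\frac{N(10^{4}\ \text{yr})}{N_{0}} \approx 0.9999985.
\]
```

After ten thousand years, longer than all of recorded history, more than 99.999 percent of the uranium remains, still emitting radiation at essentially its original rate.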
It’s estimated that each of us ingests about half a milligram of uranium each year in our food and water—that’s for people who aren’t living in or near a war zone where DU munitions are actually being used.259 But there’s another industrial by-product that emerged from the nuclear industry which we consume in far greater quantities.
137
One of the cities where I work as a paramedic has a factory complex which manufactures uranium fuel rods for nuclear power plants. Last year we got a call for a patient who “got uranium in his eye.” I was a bit worried, but it turned out that the uranium itself wasn’t the most pressing concern. When uranium is refined, it’s combined with hydrofluoric acid, an extremely dangerous compound of hydrogen and fluorine dissolved in water. Fluorine wasn’t produced commercially in the US until the dawn of the nuclear age, but you likely encounter it on a daily basis now.260 It’s put in most of North America’s municipal drinking water.
When I think of opposition to fluoride in drinking water, the first image that comes to mind is the scene in Dr. Strangelove where the insane General Ripper proclaims that water fluoridation is masterminded by an “international communist conspiracy to sap and impurify all of our precious bodily fluids.”261 It’s not an accident that this comes to mind. Although water fluoridation was a highly contentious issue in the United States during the 1950s, advocates of fluoridation eventually won in part by ridiculing opponents like members of the John Birch Society, who genuinely believed that fluoridation was part of a conspiracy to poison the American people.262
So I have to admit, when someone first suggested to me some time ago that the fluoride in drinking water was a toxic industrial by-product, I wasn’t sure whether I should take them seriously. In fact, for years I automatically relegated the idea to the status of urban myth. Sure, it wouldn’t be the first time public health organizations and other government agencies deliberately acted against the health of the public. I thought of the Tuskegee Study of Untreated Syphilis in the Negro Male, a forty-year-long study conducted by the US Public Health Service in which hundreds of black men were deliberately denied treatment for syphilis even when safe treatments had been readily available for decades. Or Operation Whitecoat, a secret US Army project lasting from 1954 to 1973, in which various biological weapons and vaccines were tested on thousands of conscientious objectors, mostly Seventh-day Adventists. Or perhaps the Cold War-era Project SHAD, in which the US military tested biological and chemical weapons on unknowing and unconsenting subjects. But deliberately dumping industrial by-products in the drinking water and calling it healthy? It just seemed too outlandish. Too much like the evil plan of a cackling supervillain in a James Bond film.
I read through a dozen articles and reports on water fluoridation and related controversies without finding a single mention of the sources of the fluoride used in municipal drinking water. And then I found a somewhat obscure statement by the Environmental Protection Agency, in which they blithely endorsed the practice. In their own words, the EPA “regards such use as an ideal environmental solution to a long-standing problem. By recovering by-product fluosilicic acid from fertilizer manufacturing, water and air pollution are minimized, and water utilities have a low-cost source of fluoride available to them.”263
What does this mean? Well, the fluoride used in municipal drinking water is from mineral processing plants.264 The smokestacks of phosphate fertilizer plants have scrubbers installed to reduce air pollution by removing pollutants from the outgoing smoke. In this case, however, the pollutants contain a sizeable amount of fluoride. The substances recovered by the scrubbers are repurposed and put into drinking water. A convenient way to minimize pollution, as the EPA put it. And it is, or at least was, a serious source of toxic pollution.
I’m typing this with my computer resting on a very old wooden milk crate. It reads “Dunnville Dairy, Dunnville, Ontario.” Dunnville is a small southern Ontario town where my grandfather worked as the milkman in the middle of the last century. In 1960, Dunnville became the home of a brand new monument to progress—the smokestack of their very own phosphate plant. As the Canadian Broadcasting Corporation reported several years later: “Farmers noticed it first. . . . Something mysterious burned the peppers, burned the fruit, dwarfed and shriveled the grains, damaged everything that grew. Something in the air destroyed the crops. Anyone could see it. . . . They noticed it first in 1961. Again in ’62. Worse each year. Plants that didn’t burn were dwarfed. Grain yields cut in half. . . . Finally, a greater disaster revealed the source of the trouble. A plume from a silver stack, once the symbol of Dunnville’s progress, spreading for miles around poison—fluorine. It was identified by veterinarians. There was no doubt. What happened to the cattle was unmistakable, and it broke the farmers’ hearts. Fluorosis—swollen joints, falling teeth, pain until cattle lie down and die. Hundreds of them. The cause—fluorine poisoning from the air.”265
The problem wasn’t just in Dunnville. Problems were commonly reported near phosphate processors. A 1972 US Department of Agriculture Handbook warned: “Airborne fluorides have caused more worldwide damage to domestic animals than any other air pollutant.”266 Of course, since the 1960s and 1970s, air pollution control measures on phosphate stacks have significantly improved. Scrubbers are now present to capture such hazards so that they can be more sensibly disposed of in our drinking water.
And current industrial sources of fluoride are hardly exceptional. Before phosphate fertilizer by-products were the source of choice, most fluoride for drinking water was a by-product of aluminum refining.267 And essentially no forms of fluorine were produced in the US before the Manhattan Project. But now two thirds of Americans ingest about a milligram of fluoride with every liter of water they drink, making it the most common drug administered in the US.
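A back-of-the-envelope comparison with the uranium figure from the previous section (the two-liters-a-day drinking rate is my assumption, for illustration only):

```latex
\[
1\ \tfrac{\text{mg}}{\text{L}} \times 2\ \tfrac{\text{L}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{yr}}
\approx 730\ \tfrac{\text{mg fluoride}}{\text{yr}}
\qquad\text{versus}\qquad
\approx 0.5\ \tfrac{\text{mg uranium}}{\text{yr}}.
\]
```

On that assumption, a tap-water drinker takes in over a thousand times more fluoride than uranium by mass each year.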
Examining the benefits and hazards of water fluoridation is beyond the scope of this book.268 But there are two lessons to be learned from all this. First, for the US government to deliberately scatter a known, toxic, industrial by-product across someone else’s land would hardly be unprecedented—they already do it with their own drinking water. And second, you don’t need a dark communist conspiracy to undertake such a venture: capitalist business as usual will more than suffice.
138
A few years ago, at the online Derrick Jensen discussion group, a new poster started saying things like “we don’t need to worry about environmental catastrophe. All of this pollution is just a new nutrient. At one point oxygen was a deadly pollutant to most creatures and caused a massive extinction, but look how well that turned out. Really, industrial civilization is just a totally natural step in our planet’s evolution.”
The poster did get one thing right. When life first appeared on Earth, there was very little oxygen gas in the air or dissolved in water. Photosynthesis had not evolved yet, so only limited sources of energy were available, mostly from digesting various chemicals like hydrogen sulfide.
When photosynthesis did evolve, about 2.7 billion years ago, it was the start of big changes. For anaerobic organisms, oxygen is toxic. And when oxygen became widespread in the atmosphere, it caused the death of those anaerobic species who couldn’t hide themselves underground or evolve to tolerate the gas. This event is called the Oxygen Catastrophe, or sometimes, more optimistically, the Oxygen Revolution. Many species did go extinct.
But the poster had a bunch of things wrong, too. There are some big differences between current events and the Oxygen Revolution. First, that event involved the production of really just one chemical. It’s pretty conceivable to me that life could evolve to tolerate one new chemical, but modern industry is producing thousands upon thousands of new toxins. And second, there is the issue of time. Evolution takes time. Between the evolution of photosynthesis and the oxygen-induced extinctions, around 300 million years passed. Most modern pollutants have been around for less than a century.
We could outline other differences, too, but I think the main distinction lies in the long-term impact on life on this planet. Photosynthesis allowed life on Earth to make use of energy from the sun, instead of relying on small amounts of chemical energy from (mostly) underground. Although some species did go extinct, because of the change the planet can now support a far larger and more diverse population of life than it could before. An oxygen-rich atmosphere also allowed the evolution of large, fast-moving, warm-blooded animals (ourselves included). All that sounds like a net improvement for life.
But industrial civilization isn’t offering that at all. In fact, it’s decreasing the population and diversity of living creatures in general. In addition, it’s shifted human society from relying on solar energy to relying on small (and short-lived) amounts of chemical energy from underground, now in the form of fossil fuels. And civilization is taking oxygen out of the air and replacing it with the gases carbon dioxide and methane, essentially moving the atmosphere slowly back towards what it was like before the Oxygen Revolution.
The poster on the discussion group was arguing that industrial pollution was the same as the Oxygen Revolution. But in every way that matters, it appears to be the opposite.
139
This attempt to naturalize industrialized ecocide by equating it with great ecological changes of the past is something I’ve seen many times. Of course, it’s now generally accepted that human beings—specifically industrial civilization—are causing one of the largest mass extinctions in our planet’s history. If you’re reading this book, you’ll probably agree that’s not a good thing. However, it’s also been argued that since mass extinctions have happened in the past they are natural, and therefore good, implying that it’s okay for industrial civilization to destroy the planet.
This is a bit like saying that because someone fell out of a tree and broke her leg, it’s then okay to run over her with a truck. Just because something happened in the past doesn’t make it right to replicate it. Especially since some mass extinctions were caused by things like asteroid impacts, which are essentially unpredictable and unplanned accidents in ecological terms.
Moreover, the current mass extinction—what’s called the Holocene mass extinction—is markedly different from mass extinctions of the past. Yes, something like an asteroid impact kills a lot of individual creatures, as well as many species, and it changes the planet’s ecology. Well, that’s happening now, you might say. The difference is that when those mass extinctions pruned the tree of life, they also allowed it to branch out again. Mass extinctions cleared habitat and ecological niches, leaving room for new species to evolve. These extinctions may be followed by an explosion of adaptive radiation, in which nature has room to try out new and experimental ways of living. Some evolutionary traits are lost, but many are gained.
However, for this to happen there must be room. The activities of industrial civilization now destroy or reduce habitat in the long term. Cities and roads are paved over and inhospitable. Farms wipe out ecosystems and replace them with a few domesticated species. This kind of habitat reduction and ecological imperialism means there’s no room for the kind of explosive evolution we’ve seen in the past.
That is, of course, unless civilization collapses, and farms and cities begin to return to wild habitat, a subject to which we’ll soon return.
In the technotopia we’re describing, of course, this does not happen at all. Instead, human impacts continue to grow, with farms for biodiesel and bioplastics expanding rather than contracting. A technotopia would make the Holocene extinction permanent, and leave the tree of life crippled forever.269
140
A couple of years ago I was sitting with a friend of mine, talking about peak oil, the collapse of civilizations, and related ideas. This friend worked on community gardens, helped people repair their bicycles, dumpster dove, and generally lived a pretty low-impact life. So it surprised me when partway through the conversation he said, “Well, we don’t need to worry about any of that, because we can just grow food in orbital space colonies and ship it down to Earth.”
I was dumbstruck. I didn’t even know how to answer. I must have missed the point where we stumbled into bizarro-land, I thought. He might as well have said that magical pixies were going to end world hunger, or that Xandraxis from the Fifth Dimension was going to teleport in and make new rainforests grow out of discarded coffee cups.
Partly I was surprised because this wasn’t a stupid guy. He was pretty smart, as, I presume, are a lot of people who are attracted to the technotopian solution. But a belief in technotopia is essentially a form of magical thinking. And not even magical thinking in that Starhawk-style, think-positive-but-still-do-activism kind of way. The problem is that since the idea of technotopia is disguised in high-tech terminology, people don’t think of it as magic.
Magical thinking, if you recall, is sometimes defined as, “The erroneous belief that one’s thoughts, words, or actions will cause or prevent a specific outcome in some way that defies commonly understood laws of cause and effect,” or more succinctly as “a conviction that thinking is equivalent to doing.” This seems to sum up many beliefs about industrial technology, and sometimes activism, especially pacifist activism that proposes things like “meditating for world peace.” The die-hard pacifist and the die-hard technotopian both believe that just thinking hard enough will solve any problem, either through cosmic vibrations or new technologies.
When I was writing this section, the amazing activist and writer Lierre Keith told me she couldn’t wait for the section to be finished. “Technotopia is where progressives have gone to die,” she told me, “and the idea is in serious need of debunking.” It’s true, and I think it’s fairly evident why that is. Anyone who cares about human welfare, social justice, or ecology can see that the planet is in a lot of trouble. There are more people every day, consuming more every day. Those in power are getting more powerful every day, and the gap between them and everyone else continues to widen. Ecological limits are being trampled. And these basic trends have been at work, with a few interruptions and collapses, essentially since the beginning of civilization. Environmentalism has failed to stop or reverse global destruction, and many of the social justice gains that have been made are now dependent on the goodwill of governments or on the surplus production of a system which is itself unsustainable and based on exploitation. In other words, it’s clear to intelligent people, and painful to sensitive people, that we’re in a lot of trouble and that our efforts to deal with that have so far been pretty ineffective.
141
So what do you do if the odds are against you, if the situation is incredibly complex, and you don’t see a way out that doesn’t involve a lot more pain and suffering for a lot of people? Sometimes you cope by pretending that everything is going to be all right in the end, because that makes it a bit easier to get through the day, and maybe that helps you to do work that is important and valuable. If you’ve been raised by a society of God-fearing men and women, maybe that belief is in heaven or the Second Coming. On the other hand, if you’ve been raised by a society of gadget-loving Star Trek watchers constantly shown a future where technology has ended poverty and where even the most dire problems can be solved by reversing the polarity on the deflector dish, maybe you have a different belief.
There’s no question that industrial technology is good at solving certain problems. But it has serious drawbacks: it depends on a large-scale and centralized society; those in power choose which problems it will address; and every problem it does solve creates a cascade of new problems.
In any case, those who believe that orbital space colonies or the like will feed us in the future won’t be moved by technical arguments to the contrary. Although the belief is ostensibly one of science, they won’t be impressed by discussion of energy return on energy invested, or the technicalities of carrying capacity or nutrient recycling. The belief that technology will solve all of our problems is a comforting article of faith.
In the end, technotopia offers a pacifying false hope. It used to be that the discontented masses were promised “pie in the sky when you die,” mediated by a class of theocrats. In the more “rational” modern age many people won’t believe religious promises of the afterlife. So a new promise, predicated on the same model, promises a future of technological bounty, mediated by a class of technocrats. In both cases, the promise serves to lull and distract potential dissidents, and prevent them from taking responsibility into their own hands.