TECHNOTOPIA: INDUSTRY
The machine is antisocial. It tends, by reason of its progressive character, to the most acute forms of human exploitation.
—LEWIS MUMFORD
So far from presenting its utopia as a beautiful dream, its effort terminates all too often in a kakotopia or realizable nightmare.
—LEWIS MUMFORD
THE MAIN REASON A TECHNOTOPIA won’t happen is that it is necessarily based on an industrial society. So if I want to critique this technotopia, or industrial society in general, I need to define what it means to be “industrial.” First and foremost, an industrial society is one that depends on machines for the essentials of daily human life.
Machine production maximizes labor efficiency: generally, one person operating a machine can produce much more than many people without machines. This is well illustrated by industrial agriculture. With the use of a tractor, one person can plow and plant hundreds of acres. Using only hand tools, one person would be hard-pressed to plow and plant even one acre.
But maximizing labor efficiency doesn’t mean maximizing other kinds of efficiency. Industrial agriculture produces an immense amount of food per farmer, but less food per acre than is possible under human-directed agricultural methods like permaculture.[284] Industrial agriculture also produces much less food per calorie of energy invested than other methods: it requires, on average, upwards of ten calories of fossil fuel energy per calorie of food energy produced. Of course, forms of agriculture and horticulture that predate industrial agriculture all yield more energy than must be invested in them, or there would be no point engaging in them.
These specific points about industrial agriculture illustrate a great deal about industrial society in general. An industrial society is based on access to cheap supplies of energy, which is why industrial agriculture can afford to be so energy-inefficient. It also depends on large areas of land into which it can expand for resource exploitation: it requires that there always be somewhere “beyond these current mountains” for it to take trees, fish, fur, all life. And the malignant expansion of industrial civilization has taken place in an era where both of those preconditions were made available (often by the use of a great deal of violence and genocide).
One reason industrial activities are so efficient in some ways and so incredibly inefficient in others is that intelligence is selectively employed. A tractor magnifies the effort of one farmer, but it does so in a way that is stupid and clumsy compared to the precision of a pair of hands. The same applies to the vast majority of machines. The result is that machines require uniform resources to work with in order to be more efficient than human beings.
Think about the massive Central American garden terraces that have nourished people for millennia. Pick up a tractor from a grain farm on the wide, flat prairie, drop it onto those terraces, and see how well it does with the not-very-uniform terrain. Next, take a piece of birch bark and try to put it through a printing press. Finally, try to mass-produce buildings using locally available timbers instead of machine-milled plywood and two-by-fours.
This is not to illustrate that humans are “better” than machines—although of course in a moral sense they are, as all living beings are—but to make a specific point. Humans are more adaptable and intelligent, meaning we can usually make more efficient use of available materials than a machine.
Industrial society has grown so much because people within it are systematically rewarded for increasing production. Within a society, that means more money and power. Between societies, it means that a society that produces more can outcompete or even conquer another. Labor efficiency is central to maximizing production, but land and energy matter only to the extent that they boost or interfere with production in the short term. All decisions are made for the short term, because in such a highly competitive society, any group that can’t compete in the short term will not exist to make decisions later on. This is strangely appropriate, seeing as a society that makes all its decisions for the short term will not exist in the long term.
The brilliant writer, thinker, and historian Lewis Mumford described differences between what he called “polytechnic” and “monotechnic” approaches. Polytechnic approaches involve using many different technologies to meet human needs. Monotechnic approaches, on the other hand, prioritize technology for the sake of technology, to the exclusion of other options, regardless of the impacts on human beings or the planet. Mumford’s favorite example of the monotechnic approach was the automobile, because automobile-based transportation systems thrive at great human and ecological cost, and grow at the expense of other modes of transport like walking or bicycling.
It would be fair to say that agriculture is the first example of a truly monotechnic approach, an approach that set a pattern for all of civilization’s future technologies. If that’s confusing, think about what agriculture is: you take a piece of land and destroy all visible plant and animal life on it; use plows to destroy the structure of the soil underneath; replace them all with one monocultural species; kill any competitors or predators; harvest that monocultural species; repeat. As Lierre Keith writes, “Agriculture is carnivorous: what it eats is ecosystems, and it swallows them whole.”[285]
Agriculture is monotechnic in the sense that it eliminates biological diversity, to be sure, but there’s much more to it. Indigenous societies are generally quite mobile, and can move to make use of the many different foods available in different seasons in healthy ecosystems. The same goes for other material gathered or hunted, such as firewood, furs, or medicinal plants. Though early agricultural societies certainly gathered food initially, it doesn’t take long to deplete what is available around a village, which would have made those societies even more dependent on agriculture. Agriculture also eliminated many of the birth control methods that were intrinsic to hunter-gatherer life.[286] That, along with other changes, like the ability to replace breast-feeding in young children with foods made from stored grains, led to a trend toward constant population growth that worsened local ecological destruction. Agriculture grew at the expense of technologies, skills, and social structures used by indigenous peoples.
For a technotopia to succeed, it needs to meet two main conditions. First, sustainable alternative technologies need to be possible—and they have to be developed—to replace essentially all current (unsustainable) industrial processes. And second, those sustainable technologies actually have to be implemented within that society’s social conditions, power structures, and economic system.
The first condition requires a profound leap of faith. The problem is that industrial civilization itself is based on unsustainable technologies, like annual grain agriculture and the use of fossil fuels. If you invent a new technology that depends on unsustainable infrastructure, your technology is simply not going to be sustainable. Proven sustainable technologies of the past—like beaver dams or cheese fermentation—start from the context of a healthy living biome. It’s safe to say that proven sustainable technologies of the future will start there as well.
The term “green technology” has been thrown around so much these days that it is essentially meaningless. Most supposedly green industrial technologies—like bioplastics—fall apart under scrutiny, and only look promising when compared to industrial society’s abysmal ecological track record. Industrial green technologies are more promise than substance, and in almost every case they remain prospective or theoretical. If we rely on them for our future, we are once again giving up our agency, once again making ourselves powerless, once again falling for a distraction when we should be looking for the way out of the cage.
But it’s the second condition that’s really the final nail in the coffin of technotopias. Belief in a technotopia requires not only that sustainable industrial technologies exist, but that those technologies be implemented at the expense of current unsustainable technologies. And the history of garbage we’ve discussed in this book has been the complete opposite. Over the past century, low waste systems have been deliberately dismantled by corporations and governments, specifically because they were low waste systems. We’ve discussed some of their motivations for doing this, but there are deeper and subtler reasons that we can tease out.
I think the idea that “technology is neutral” is one of the most dangerous myths of our time. If we fall for the myth, it blinds us to the many ways that various technologies determine social structures and influence power relationships. We’re often told that technology is “simply technology” and can be used for “good or for evil” but is fundamentally amoral. In other words, that whether a technology is harmful or beneficial depends on the intent of whoever is using it. Hence, we should ignore issues around technology and focus on more productive approaches to changing the world, like choosing which wealthy capitalist to vote for at the ballot box.
This belief probably has more to do with constant repetition than with deliberate misdirection. If we can see through the myth, however, we can see a whole new layer to the human world, and a whole new history. One of the first and most brilliant people to explore and document this history was Lewis Mumford. Through most of the twentieth century, Mumford wrote extensively, beautifully, and insightfully on the interplay between technology, history, hierarchy, social mythologies, and systems of control. And he didn’t buy into the “technology is neutral” myth. Mumford made a clear distinction between two classes of technologies: those which were democratic, and those which were authoritarian. Moreover, Mumford explicitly recognized that social structure and technology went hand in hand, and used the term “technics” to encompass both the cultural and industrial aspects of technologies.
According to Mumford, democratic technics comprise some of the earliest human technologies. These technics are human and community scale, “resting mainly on human skill and animal energy but always, even when employing machines, remaining under the active direction” of autonomous human communities.[287] Such technologies, Mumford notes, are widely available, require minimal equipment or resources, and are both robust and highly adaptive. These technologies, like gardening or pottery, “underpinned and firmly supported every historic culture until our own day.” In addition, they had something of a moderating effect on the impact of tyranny. “Even when paying tribute to the most oppressive authoritarian regimes, there yet remained within the workshop or the farmyard some degree of autonomy, selectivity, creativity.”
Authoritarian technics are another story. Rather than being ancient, Mumford identifies them as being more recent, specifically equating them with the origin of civilization. Authoritarian technics are based on centralized, top-down control, on a much larger scale. Mumford writes: “The new authoritarian technology was not limited by village custom or human sentiment: its herculean feats of mechanical organization rested on ruthless physical coercion, forced labor and slavery, which brought into existence machines that were capable of exerting thousands of horsepower centuries before horses were harnessed or wheels invented.”
He continued, “. . . above all, it created complex human machines composed of specialized, standardized, replaceable, interdependent parts—the work army, the military army, the bureaucracy. These work armies and military armies raised the ceiling of human achievement: the first in mass construction, the second in mass destruction, both on a scale hitherto inconceivable.”
The development of authoritarian technics climaxes in what Mumford called megamachines, massive centralized social machines with human beings as their components. For Mumford, these megamachines could certainly include industrial technologies, but that aspect was secondary, since megamachines like the Roman Empire predated the industrial revolution.
Obviously, Mumford’s concepts of authoritarian and democratic technics overlap with his ideas of polytechnic and monotechnic approaches.
Mumford’s categories are a useful starting point for debunking the myth of technological neutrality. There are also specific questions we can ask if we want to evaluate any particular technology for its neutrality.
First, there are questions about the prerequisites for the technology. How many people are required to make and run the infrastructure? Does the use of the technology presuppose the existence of other infrastructure like roads or energy distribution grids? Can the materials be obtained sustainably? Can human-scale communities with minimal hierarchy implement the technology? Or is a large, hierarchical society required to utilize it?
And second, there are questions about who can direct and make decisions about the technology. Is the infrastructure distributed so that it is under the control of many people, or is it centralized under the control of an elite? Does the technology allow people to become more autonomous and fulfilled, or do people surrender their autonomy to meet the needs of the technology? And who gets veto power? Can the technology be repealed or stopped?
And third, there are questions about the inherent effects of the technology. Are any benefits widely available? Are there barriers to accessing them? Is the technology useful to all, or only to an elite? Are the costs paid by the same people who get the benefits, or are the costs exported? And does the technology affect the degree of stratification and hierarchy in society?
Many of the technologies that we take for granted clearly fail the neutrality test and fall into the camp of authoritarian technics. Nuclear power and nuclear weapons are especially strong examples of this.
For nuclear technologies even to be developed requires the existence of a megamachine. Nothing less could marshal together the large numbers of people and raw materials required for an endeavor like the Manhattan Project. For example, as one of several massive complexes created for the Manhattan Project, the US government built an entire secret city called Oak Ridge to manufacture enriched uranium. Oak Ridge was home to what was then the largest building in the world (the K-25 uranium-enrichment plant), and by itself used one sixth of all electricity generated in the US.[288] Constructing the infrastructure for nuclear power is simply beyond the capability (and likely even further beyond the desires) of decentralized democratic communities.
Clearly, all decision-making about the technology is highly centralized. That certainly goes for today, when much of the nuclear energy in the world is under military control. And it definitely went for the Manhattan Project—anything that is a complete state secret can hardly be called democratic. Furthermore, nuclear infrastructure requires a regimented hierarchy to function, and requires the people involved to sacrifice their autonomy to meet the needs of the technology and the social structure implementing it. In one illuminating case, a World War II-era Oak Ridge employee wondered, “Why on earth did they have all these high-school girls running this machinery? We could have blown up the whole of Tennessee!” Visiting the still-active facility six decades after the bombs were dropped on Hiroshima and Nagasaki, she got her answer. “I was told that they wanted young women who would do what they were told and not ask questions. Really, we were just robots.”[289]
What about the effects of the technology, the distribution of benefits and costs? It’s clear that the purpose of nuclear weaponry, from the beginning, has been to cement and increase the power of the countries that created them—more accurately, to cement and increase the power of megamachines, a primary goal of all megamachines being to increase their own power and extent. In that regard, nuclear weaponry has so far been a success. So it certainly has benefits for those in power, though I think few people would say that the world is better off with nuclear weapons than it would be without them. And the costs, as usual, have been imposed on those with relatively little power. The Japanese civilians who were killed or injured at Hiroshima and Nagasaki. The Micronesian islanders (and other indigenous peoples) who were displaced or poisoned by nuclear testing on their territory. The many people in Iraq and other countries who have been poisoned by the use of radioactive munitions like depleted uranium. And, of course, all of the (human and nonhuman) people displaced, oppressed, exploited and killed by the megamachine so that it could create such weapons in the first place.
Many people would probably admit that nuclear weapons don’t fall into the category of neutral technology, but might insist that it’s an exception to the general rule. So let’s choose another, less obvious example.
In the interests of fairness, I’ll simply go to the Technology portal on Wikipedia and choose a random page. (Oddly enough, the current featured article is “Nuclear Weapon.”) And our next example? The steam locomotive.
All right, prerequisites. Metal is the single greatest infrastructural requirement for the steam locomotive. And metal means mining, both for ore and for fuels like coal to operate smelters and forges. In a number of very real ways, the steam locomotive—and all rail transit—is a product of mines. The metal that forms a locomotive, the steam engine that drives it, and the rails it travels on all originate from mines.
The first practical steam engines were invented only about three hundred years ago. Because of their large size and weight, they were stationary. The first and most important application of the steam engine was in mines. Since deep mineshafts are essentially gigantic wells, large machines were needed to continuously pump out water to allow miners to work. It was in this application that early steam engines were refined and developed. Steam engines weren’t used in self-propelled vehicles until some seventy years later.
Rails themselves, originally made of wood, also originate in mining. Because metals are usually extracted from much larger amounts of ore, miners must find a way to move the ore to a place where it can be melted and refined. Almost five hundred years ago, miners in Europe started to use rails for this, though carts were hauled by hand or draft animals rather than by machines. Again, it was for mining that the railway was gradually developed—to carry ever-heavier loads of ore—until it reached the solid metal tracks we see today.
These different mining technologies converged to create the steam locomotive only two hundred years ago.
The steam locomotive also has social prerequisites, especially because of the extensive systems of railroads put in place for locomotives. Railroad construction in the North American West in the late nineteenth century involved tens of thousands of laborers at a time, most of them poorly paid and poorly treated Chinese immigrants, who had horrendous injury and mortality rates because of the dangerous work. Those railroads also presupposed the dispossession, displacement, and genocide of the many indigenous peoples whose territories they were built on. It seems fairly clear that building railway networks is the job of a large, hierarchical, and expansionistic society.
Question two: decision-making. Who controls steam locomotives? As a massive machine requiring extensive mining, smelting, and transportation infrastructures to build and maintain, the steam locomotive is not a community-scale device. In North America a century ago, steam locomotive manufacturing was essentially limited to three specialized companies.[290] At the same time, it’s not as centralized as our previous nuclear example. Even though locomotives and trains are themselves under the control of an elite, the infrastructure is very spread out. Individual communities cannot use that infrastructure for themselves, except by buying passage on a train. However, they can use—and have used—the distributed nature of railroads as a means of political leverage in disputes with empire. It’s pretty easy to block the tracks. And when the going gets rough, railways—especially bridges—have been a favorite target of resistance and guerilla groups, since they are often unprotected and fairly easy to disable.
Category three, effects. Steam locomotives—and railroads—are the basis of an industrial empire. In a sense, they shrink the world, and allow governments and corporations to extend their influence, especially in military and industrial terms. They make possible and speed up the transport of large volumes of raw materials and other goods across long distances over land, for which railroads are still used today. And they allow the movement of heavy equipment and machinery over great distances, and to places that previously could only be reached by hauling over dirt roads or wagon trails.
Railways allow those in power to increase and concentrate their power ever more, both materially and socially. A railway brings distant resources within reach of the engines of industry, and can ship the engines of industry to those resources. By dramatically increasing the geographic scale and convenience of resource extraction, railways make possible the creation of much larger and more powerful megamachines.
Let’s talk briefly about two other consequences of steam locomotives, to show how deep and wide can—and will, and often must—be the consequences of a piece of technology. The invention and use of any piece of technology will by necessity affect many other parts of any culture, because for better or worse, technology and culture are intertwined: certain attitudes and social mores and reward systems will lead to certain technologies; and certain technologies will lead ineluctably to certain changes in perception, experience, behavior, and personal and social (cultural) mores and systems of rewards. The first of the two social consequences I want to mention here is that railroads led to the standardization of time zones. Prior to railroads, different communities kept their own time. Standardization wasn’t necessary because by the time someone walked three days from one town to another, whether the time was 8:15 or 8:34 in the evening was rarely of consequence. But with railroads came railroad timetables, and with timetables came the necessity for one community’s 8:15 to be another community’s 8:15. Superficially, this may not seem important, but the point is that the existence of railroad locomotives requires—as do so many technologies—standardization, even in parts of our lives where we might not expect it.
The second consequence to mention here is that railroads really were the prototypical modern corporation. As I wrote in The Culture of Make Believe, “Corporations are a legal device invented in the eighteenth and nineteenth centuries to deal with the myriad of limits exceeded by this culture’s social and economic system: the railroads and other early corporations were too big and too technological to be built or insured by the incorporators’ investments alone; when corporations failed or caused gross public damage, as they often did, the incorporators did not have the wealth to cover the damage. No one did. Thus, a limit was placed on the investors’ liability, on the amount of damage for which they could be held liable. Because of limited liability, corporations have allowed several generations of owners to economically, psychologically, and legally ignore the limits of toxics, fisheries depletion, debt, and so on that have been transgressed by the workings of the economic system.
“By now we should have learned. To expect corporations to function differently than they do is to engage in magical thinking. We may as well expect a clock to cook, a car to give birth, or a gun to do other than that for which it was created. The specific and explicit function of for-profit corporations is to amass wealth. The function is not to guarantee that children are raised in environments free of toxic chemicals, nor to respect the autonomy or existence of indigenous peoples, nor to protect the vocational or personal integrity of workers, nor to design safe modes of transportation, nor to support life on this planet. Nor is the function to serve communities. It never has been and never will be. To expect corporations to do other than to amass wealth at any (externalized) cost is to ignore the system of rewards that has been set up, to ignore everything we know about behavior modification: if you reward someone—those investing in or running corporations, in this case—for doing something, you can expect them to do it again. To expect corporations to do other than they do is at the very least poor judgment, and at the very worst delusional. Corporations are institutions created explicitly to separate humans from the effects of their actions, making them by definition inhuman and inhumane. To the degree that we desire to live in a human and humane world—and really, to the degree that we wish to survive—corporations need to be eliminated.”[291]
The invention of steam locomotives was tightly tied to the rise of corporations. Does anyone still want to argue that technology is inherently neutral?
A common thread in both the nuclear and locomotive examples is that both require a large, hierarchical society in order to develop. And in both cases, the effects of the technology are to benefit those at the top in their efforts to make megamachines even larger, to gain more power. Now, we can get into an argument about whether making bigger megamachines is good or bad (though if you’re still reading, that’s probably not going to be an issue), but it’s clear that such an act is far from amoral.
The issue is complicated slightly by the fact that the development of technology is an iterative process: new technologies are built upon old ones, often stacked so deep that the foundational technologies become invisible and unquestioned. When people talk about technology they almost invariably mean something from the past few decades, not the technological bases of industry or agriculture.
Because of this stacking, we’re sometimes put in confusing and morally complex situations—technics that could be somewhat democratic may be based on deeper authoritarian technics. While I like the idea that the Internet can be used for rapid, long-distance communication between equals, the Internet and all electronic communications are based on deeper, authoritarian technics like mining and industrial manufacturing. And despite the tendency of some academics to refer to this age as “postindustrial,” the character of modern civilization is determined much more by its clanking industrial foundation than its digital veneer. Not only did the manufacture of my computer require the ecological and social damage caused by mining and manufacturing, the social conditions required for its manufacture emerge from and lead to the dominance of authoritarian technics. Because of this, even the most theoretically liberatory technologies, like the Internet, end up being dominated by corporations and governments that use them to advertise to, propagandize, spy on, and track people. We should also never forget that although we may be able to use computers to quickly communicate with others who oppose the destruction of life on Earth, computers are used more efficiently by those who are killing the planet. Indeed, much modern commerce and war (and surveillance) would be impossible without computers.
None of this is to say we should shun potentially democratic—or even potentially useful—technics that are based on authoritarian ones. We can use them judiciously while keeping our end goals in mind. We can disillusion ourselves of myths of a magical and painless conversion to a nonauthoritarian system using identical technologies. We can rid ourselves of the narcissistic notion that our personal purity (spiritual or otherwise) is more important than conditions in the real, physical world, including life on earth. And we can use the most effective tools and techniques to which we have access in order to systemically dismantle the underlying authoritarian system.
A central story of the dominant culture is that the past few centuries have been characterized by constant technological and democratic progress. In fact, even the people who insist that technology is inherently amoral will often insist that technological and democratic progress are inextricably linked. Mumford didn’t buy it. Instead, he thought that recent technological progress was characterized primarily by a growth in authoritarian technics: “[W]hat we have interpreted as the new freedom now turns out to be a much more sophisticated version of the old slavery: for the rise of political democracy during the last few centuries has been increasingly nullified by the successful resurrection of a centralized authoritarian technics.”[292]
Lewis Mumford doesn’t deny the many struggles toward democracy, but asserts that overall, modern authoritarian systems are winning, and with less resistance than cruder authoritarian systems of the past: “Why has our age surrendered so easily to the controllers, the manipulators, the conditioners of an authoritarian technics? The answer to this question is both paradoxical and ironic. Present day technics differs from that of the overtly brutal, half-baked authoritarian systems of the past in one highly favorable particular: it has accepted the basic principle of democracy, that every member of society should have a share in its goods. By progressively fulfilling this part of the democratic promise, our system has achieved a hold over the whole community that threatens to wipe out every other vestige of democracy.
“The bargain we are being asked to ratify takes the form of a magnificent bribe.”[293]
Mumford, writing close to fifty years ago, couldn’t possibly have anticipated all of the new “magnificent bribes” that high-tech industrial culture has offered us. And technotopia is one of these bribes.
One of the earliest articulators of the technotopian ideal was Buckminster Fuller. A writer, lecturer, designer, and architect with a career spanning five decades, Fuller gained public prominence in the fifties and sixties, and left his mark on many different fields. Fuller’s renown has waned somewhat since his death in 1983, but even people who haven’t heard his name will recognize some of his iconic inventions, like the geodesic dome.
Although Fuller lacked a university degree and was not wealthy, he was a driven and prolific inventor and innovator from the 1920s onward. An early advocate for sustainability, Fuller was guided by this question: “Does humanity have a chance to survive lastingly and successfully on planet Earth, and if so, how?” Fuller had a genuine desire to improve conditions for humans using technology, a goal stemming in part from his young daughter’s death from polio and meningitis. He wrote that industry should be converted from manufacturing weaponry to making “livingry”: technology that benefits humans (Fuller was fond of inventing new words and terms). He was aware that the planet had finite resources, and popularized the term “Spaceship Earth” to emphasize the idea. Fuller was the prototypical technotopian—whose aim was to apply “the principles of science to solving the problems of humanity”—and his influence has been enduring, if not widely recognized.
Fuller’s goal of learning about the world and employing technology to benefit everyone—at least many humans—was certainly laudable and authentic. But his specific ideology, like the idea of technotopia more broadly, falls down in a few ways. His doctrine was profoundly anthropocentric. He preferred to call humans “earthians,” as though humans were the only species living on planet Earth rather than one of millions. His definition of “technology” is essentially the same as “industrial technology.” His inventions, though clever, were plagued by implementation problems and most never entered production, leading some to criticize him as a hopeless utopian dreamer. Furthermore, many of his ideas were themselves inconsistent with his stated philosophy.
Take his concept of “ephemeralization.” Fuller observed that in his lifetime the size of machines continued to shrink, even as their functionality continued to increase, becoming ever more “ephemeral.” Think of computers, getting smaller and smaller so that the cell phone in your pocket has more computing power than the early, room-sized computers. Or compare a 4000-pound 1950 Buick to a 1600-pound Smart Car. Or compare wood and stone buildings to those built with lightweight space-age materials. As knowledge and technology increased, Fuller believed, material items could become physically less substantial. Fuller presumed that since these smaller machines were constructed out of less material, production and population could continue to increase even with finite resources. In fact, Fuller went so far as to say that this could continue indefinitely, and that there was no upper bound on production. It’s surprising that he would say this, especially because of his stated recognition of Earth’s finite limits, but he isn’t the first technologist to fall prey to magical thinking. Fuller believed that because of ephemeralization, knowledge and information would continue to increase toward infinity, as material technology continued to shrink away to become more and more ephemeral.
Yes, some nanotechnologists are predicting the same thing, and we already discussed why the human population won’t continue to grow indefinitely. In addition, Fuller ignores the fact that if you really had the technology to supply an infinite number of humans, you wouldn’t need humans. I mean, if you were a government with the ability to synthesize infinite quantities of iPods, or cruise missiles, or robots out of next to nothing, why would you even keep humans around? You don’t need them to be laborers or manufacturers, and if you can manufacture anything for free, there is no profit to be made from having consumers. If your viewpoint is completely utilitarian, and your production near-infinite, then humans just get in the way. If the ultimate Fullerian future did exist, it wouldn’t include humans.
Furthermore, Fuller’s analysis of ephemeralization in the twentieth century isn’t even accurate. Yes, gadgets are smaller now than they were a year, or a decade, or a century ago. But just because devices themselves are smaller doesn’t mean they use fewer resources. As discussed earlier in the book, tiny devices like computer chips are incredibly wasteful. And when talking about technology, the infrastructure can be much more important than the manufactured product. The amount of industrial infrastructure required to produce that pocket-sized cell phone is immense, and that must be factored into an analysis of whether or not ephemeralization is actually taking place.
And of course, they don’t make things like they used to. Newer devices may be smaller and lighter, but they’re often more fragile and built to be disposable. A new plastic-built electric drill, for example, might be half the weight of an older, metal-built one. But if the older drill has a lifetime—there’s that conflation of the mechanical with the living, so we really should say, “the older drill will last”—three or four times longer than the new drill, then we aren’t seeing true ephemeralization, just a manifestation of a trend toward profitable disposability.
Fortunately, if you really want to believe in the feasibility of a more “ephemeral” society, you’re in luck. The technology has already been invented. Some clever and persistent people have already developed a way (many ways, actually, literally thousands of ways) to replace big, industrial infrastructure with knowledge. They’re called indigenous people. Ultimately, hunter-gatherers, with their portable lifestyle, lack of industrial infrastructure, minimal physical goods, and extensive knowledge of the land and its nonhuman inhabitants, have been far more successful at ephemeralization than industrial society ever will be.
Of course, such an approach requires a healthy landbase (and a sane people who value physical reality over social constructions; or perhaps saying this another way, a sane people whose social constructs facilitate and don’t hinder their perception of, engagement with, and love for physical reality). And technotopians, like other industrialists, treat all nonindustrial technology as useless, and healthy living communities as little more than impediments to resource extraction.
In short, technotopians are insane: out of touch with physical reality.
There’s a logical endpoint to this belief, in the minds of some futurists, and it goes far beyond turning silicon into computers. Here’s how the argument goes, according to some futurists who call themselves “extropians” (from the word “extropy,” the opposite of entropy). Some of the first digital computer parts were vacuum tubes, also called thermionic valves, which were used to amplify and switch electrical signals. They were about the size of lightbulbs, and were invented in the early twentieth century. They were replaced by the transistor, a solid-state device the size of your thumbnail, in the middle of the twentieth century. Standalone transistors were largely superseded by microchips, and a row of microchips fifty long would be smaller than the diameter of a human hair. According to extropians, there’s no reason this couldn’t go on forever (the “Law of Accelerating Returns”), with computation taking place in components the size of a single molecule, a single atom, even using hypothetical structures within atoms themselves. According to some, matter itself could eventually be reconfigured at a subatomic level to be optimized for computing, turned into a material dubbed “computronium.”
What would be the purpose of such a material? Some futurists have suggested you could build something called a “Jupiter brain,” a single planet-sized hunk of computronium (getting enough matter for such a project would likely require dismantling an existing planet and manipulating it at the atomic level). Not to be outdone, others have suggested building a much larger “matrioshka brain,” a set of concentric computronium spheres as large across as the earth’s orbit, which would monopolize all of the energy output of the sun and use it for computation.
And how would such vast computation potential be employed? It’s been suggested that a planet-sized computer could be used to run simulations of reality—kind of like in The Matrix—including simulations of human beings who had their brains scanned and “uploaded.” In this simulated environment, the argument goes, human intelligence would no longer be constrained by the biological limits of the brain, and would develop into God-like beings. Generally speaking, those who describe such a future don’t have a specific goal in mind for massive amounts of computation, except that sufficiently developed artificial intelligences would be able to develop more advanced artificial intelligences than themselves, and so on, in a runaway cycle dubbed “the Singularity.”
Computer scientist and science fiction writer Rudy Rucker casts doubt on the plausibility—and desirability—of computronium: “Although it’s a cute idea, I think computronium is a fundamentally spurious concept, an unnecessary detour. Matter, just as it is, carries out outlandishly complex chaotic quantum computations just by sitting around. Matter isn’t dumb. Every particle everywhere everywhen is computing at the maximum possible rate. I think we tend to very seriously undervalue quotidian reality.
“In an extreme vision . . . Earth is turned into a cloud of computronium which is supposedly going to compute a virtual Earth—a “Vearth”—even better than the one we started with.
“This would be like filling in wetlands to make a multiplex theater showing nature movies, clear-cutting a rainforest to make a destination eco-resort, or killing an elephant to whittle its teeth into religious icons of an elephant god.”294
Rucker goes on to point out that there are no shortcuts for the work nature is already doing: its complexity is irreducible.
Others have asked, if you were living in a poor simulation of reality, how would you know? What would you compare it to? Especially if your psyche were only a poor imitation of an actual psyche?
It sounds like science fiction—it is science fiction—and I hope it stays that way. But the extreme extropian future doesn’t interest me because I think it’s a literally probable outcome—it’s obviously not—but instead it interests me because it’s such a remarkably (and inadvertently) clear articulation of the pathology of civilization. It’s a digital manifest destiny. It encapsulates civilization’s willful denial of the laws of ecology and of physics (the very name is in contravention of thermodynamics), as well as its myths of inevitability and immortality. The dream of continual expansion and “progress.” It takes civilization’s drive for control to a new level—the vision of tearing apart the entire solar system and remaking it from the subatomic level. And all that to support an imaginary world where they can become as gods, the pantheon of their own mythology.
High-tech trappings aside, the story is strikingly familiar. For millennia, those in power have been dismantling the real world to shore up an imaginary one defined by control instead of diversity—a toxic mimic of the real world. Quarrying mountains to make artificial mountains—monuments to themselves—wrapping themselves in cocoons of civilized culture and infrastructure, attempting to project and imprint their thoughts and words and propaganda on the entire world.
But unlike the fantasy, their work is rough and incomplete. There are still wild places and wild people left; there are cracks in their toxic mythology; we are still born with feelings and knowledge that they (and then we) can’t completely suppress. So again, we can ask the question, how do you know if you are living in a poor simulation of reality, a toxic mimic of reality? Well, if you’re reading this, it’s safe to say you do know, that you’ve seen through the mythology.
The question is, what are you going to do about it?
Empires and civilizations—to whatever extent there’s a difference—are fundamentally expansionist. They’re driven to expand by their own internal competitiveness and a desperate drive to outrun their intrinsic unsustainability. Those who identify with civilization enjoy the benefits that come when you live beyond your means, when you take more than you give back. But people who are paying attention recognize the finite limits of a planet. So in order to try to reconcile the desire to keep benefiting from living beyond their means, and the finiteness of Earth, some people make up a fantasy of internal expansion. Like Buckminster Fuller, they try so hard to believe that civilization can expand indefinitely without actually taking up any more space.
Some technofix advocates, like James Lovelock of Gaia theory fame, argue that their goal is more a kind of damage control. They agree the Earth is a small place, that the ecological damage is extensive, that the social inertia is tremendous. And they argue that pursuing advanced industrial technology is necessary to reduce the damage.
I’m all for damage control. I’m all for harm reduction. Sure, do whatever can be done to reduce the destructiveness of industrial culture. But I think these advocates are often being disingenuous. For the most part, they don’t want to save the landbase if it means making real or substantial changes; they don’t want to save the landbase if it means inconveniencing themselves. What they want isn’t to kick the habit—to deal with their addiction—it’s just to find a superficially different way to get their fix. It’s Mumford’s slavery, all over again, and these slaves—these addicted slaves—are being bought off cheap.
I’m happy to have a vision and plan for the future. But a vision of shiny toys and glamorous tech that ignores the root issues does us a disservice. It offers a distraction instead of a path, and wastes our time as well as this planet.
The problem with the technotopia scenario isn’t really the goal of employing technology to make a better society. Of course a good society should be materially sustainable. Of course it should be more equitable. Of course it should run on solar energy, as all societies of the past have. No, the problem with technotopia is the false assumptions entangled with it, and the almost deliberate ignorance. The assumption that technology means industrial technology; and that industrial technology is neutral. The assumption that technology alone could make an unsustainable society sustainable. The ethnocentrism that assumes indigenous ways of living lack technology, social structure, or well-being. The presumption that progress is always good, and that the centralization of power and resources is progress. The assumption that those in power are simply misguided about ecology. A willful ignorance of the way power and coercion underlie the dominant culture. That it is possible for most industrial technologies to become sustainable. That governments and corporations are capable of going against their own underlying systems. That governments, corporations, scientists, and engineers are going to solve our problems for us. That it is possible to maintain this current way of living with only minor adjustments to our daily habits, even though we know that the root problems go to the core of civilization. That maintaining current power structures with green technologies would actually be a good idea. That maintaining this current way of living is desirable for humans, let alone the other inhabitants of this planet.
Even if you agree with these assumptions—even if your dearest wish is that in fifty years you could go to a sod-roofed Wal-Mart powered by windmills and buy cheap biodegradable soy-sneakers—that doesn’t make the scenario more plausible. Industrial civilization is still based on centralizing control and externalizing consequences. The ability to externalize consequences requires a constant flow of cheap energy, new raw materials, and new lands, all of which civilization is now exhausting. Which brings us to our final scenario: collapse.