By 1960, the U.S. Empire had visibly diminished. The Philippines was independent, Hawai‘i and Alaska were states, and Puerto Rico had the nebulous status of “commonwealth.” The remaining colonies were small: Guam, the U.S. Virgin Islands, American Samoa—total population 123,151—plus another 70,724 living in the United Nations’ “strategic trust territory” in Micronesia under U.S. supervision.
Yet the United States is a restless country, and it didn’t take long for new prospects to present themselves. In 1962 President John F. Kennedy called for a mission to the moon. It was, he said, a “new frontier.”
Talk of frontiers was a throwback to the nineteenth century, but it made a certain sense. The prospect of claiming the moon—huge, uninhabited, strategically useful, and rich in minerals—is precisely the sort of thing that would have made the world conquerors of old salivate. “I would annex the planets if I could,” the British arch-imperialist Cecil Rhodes once mused. “I often think of that.”
Lunar colonization was a distant dream in Rhodes’s day and even seems far-fetched now, but at the time, it appeared graspable. One has to keep in mind the wrenching technological innovations that the leaders of the United States had already witnessed in their lifetime. Dwight Eisenhower was born into a world containing only a countable handful of cars, a world where lightbulbs were still a novelty. Yet he lived to see computers, nuclear bombs, supersonic jets, and manned spacecraft. Who was to say that the science-fiction tales of settling distant planets were fantasies? A few years after the moon landing, NASA convened a study group on space colonization, which judged it to be both “technically feasible” and “desirable.”
And yet the United States didn’t annex the moon. It didn’t even try. Instead, it went to extraordinary lengths to assure the world that the Apollo program was not about expansion or empire. President Lyndon Johnson signed the Outer Space Treaty in 1967, agreeing that no nation could claim sovereignty in space. Then, once it seemed likely that the Apollo missions would succeed, NASA appointed a Committee on Symbolic Activities for the First Lunar Landing and tasked it with ensuring that no one would confuse the moon landing for a landgrab. The committee seriously considered planting the United Nations flag instead of the U.S. one, or perhaps small flags for every country.
In the end, Congress insisted on the U.S. flag. But it issued a declaration explaining that this was simply “a symbolic gesture of national pride” and “not to be construed as a declaration of national appropriation.”
The plaque the astronauts left captured that internationalist spirit. “Here men from the planet Earth first set foot upon the moon July 1969, A.D.” read the text, under pictures of the hemispheres of the globe. “We came in peace for all mankind.”
What had happened? How could a country that had once launched wars for foreign lands be so blasé about the largest clump of territory ever to become available? Where had its imperialist spirit gone?
Part of the answer, of course, is the fierce resistance put up by the colonized peoples of the world. They had turned empire into an exhausting and occasionally bloody affair. Whereas colonizers in the nineteenth century had annexed territory with pride, by the 1960s they understood that forthright imperialism risked infuriating the increasingly powerful Third World. By then, even taking the uninhabited moon seemed as if it might kick up trouble.
But the exhaustion of colonialism can’t be explained solely by the new balance of forces. Yes, opponents of empire grew stronger after World War II, but so did would-be imperialists. The United States ended that war with a formidable air force, atomic weaponry, and a globe-spanning network of military bases. Its defeat of Japan showed what this firepower could do. Had it truly wished, the United States could have visited the same fate upon its Cold War adversaries in Vietnam and Korea. But it didn’t, nor did it even try to annex those countries. The newfound power of the Third World peoples cannot alone account for that.
It may help to look at the decline of colonialism from a different angle, focusing not just on supply but on demand as well. The worldwide anti-imperialist revolt drove the cost of colonies up. Yet at the same time, new technologies gave powerful countries ways to enjoy the benefits of empire without claiming populated territories. In doing so, they drove the demand for colonies down.
The “empire-killing technologies” ranged from skywave radio to screw threads, and they worked in different ways. But, collectively, they weaned the United States off colonies. In so doing, they also helped to create the world we know today, where powerful countries project their influence through globalization rather than colonization.
In the nineteenth century, there were many reasons why major powers took colonies. Ideologies of “civilization,” the international competition for prestige, dark psychosexual urges—these were all present in the tangled business of empire. But by the mid-twentieth century, talk of uplifting savages or carrying Christ to heathen lands had subsided, and starker motives shone through more clearly. Colonies were useful for their produce, and they were useful strategically.
Often, those two motives blended together. Complex industrial societies depended on goods that they couldn’t mine or grow at home. But it wasn’t just that they needed those goods; they needed secure access to them, the kind that couldn’t be denied even if war broke out. And if they couldn’t get it? Germany had crashed headfirst into that problem during World War I, when its enemies locked it out of South American markets. South America was where the all-important nitrates came from, used to make fertilizer and explosives. Germany found itself in the extremely uncomfortable position of fighting a two-front war without access to either Peru’s guano or Chile’s sodium nitrates. It was only Fritz Haber’s timely invention of ammonia synthesis that kept Germany fighting for four years.
Haber had solved the nitrate problem, but there were many other raw materials that advanced economies required, including petroleum, iron, coal, indigo, tin, copper, sisal, cotton, kapok, silk, quinine, tungsten, bauxite, and palm oil. The United States, with its massive mainland stretching across multiple climatic zones, was blessed with an abundant crop of internal raw materials. But it, too, was dependent. It relied most visibly on rubber, which grew only five to ten degrees from the equator, and which it got mainly by dint of its friendly relations with European empires.
Rubber was a colonial product par excellence. In the late nineteenth century, King Leopold II of Belgium had claimed a vast colony in the Congo and established a brutal regime bent on rubber extraction, one that brought the population down by some ten million. The French, British, and Dutch, for their parts, had set up rubber plantations in their Southeast Asian colonies.
These were profitable ventures, especially as rubber insinuated itself into every nook and cranny of the industrial economy. Tires, tubes, hoses, insulation for electrical wires, raincoats, life rafts, gas masks, and a thousand little parts and bits were made from it. Between 1860 and 1920, world rubber consumption grew nearly two-hundred-fold.
In the auto-mad United States, rubber thirst was unslakable. By the eve of the Second World War, the country consumed some 70 percent of the world’s supply, bought mostly from Europe’s Asian colonies. If war came, the United States would need still more. A Sherman tank used half a ton of rubber, a heavy bomber used a full ton, and a battleship used more than twenty thousand rubber parts, totaling eighty tons. As the president of the tire manufacturer B. F. Goodrich warned, without rubber the United States “could offer only 1860 defenses against 1942 attacks.”
Without rubber—it wasn’t a hypothetical scenario. On December 7/8, 1941, Japan, worried about its own access to rubber and other critical raw materials, expanded its war beyond China and moved on to the resource-rich lands of Southeast Asia. Within months, it conquered the European colonies that accounted for 97 percent of the U.S. rubber supply. The United States and its allies were virtually cut off.
It is hard to convey how dire a threat this was. “If a survey were made to determine the most frequently asked question in America today, it would probably turn out to be: ‘When are we going to get rubber—and how much?’” wrote the secretary of the interior in mid-1942. “We must get rubber—lots of it—and get it rather quickly, or our whole manner of living will be sadly awry.”
A high-profile governmental report found the situation “so dangerous that unless corrective measures are taken immediately this country will face both a military and civilian collapse.” A military and civilian collapse? Franklin Delano Roosevelt agreed, adding that in the short time since the report had been issued, “the situation has become more acute.”
The government scrambled to plug the gap. FDR begged citizens to turn over to the government “every bit of rubber you can possibly spare”: old tires, raincoats, garden hoses, shoes, bathing caps, gloves. The president’s Scottish terrier, Fala, donated his rubber bones. Eventually nearly seven pounds of scrap rubber were collected for every man, woman, and child in the country.
It wasn’t nearly enough. The government pressed engineers to explore substitutes. Could cars roll on wooden wheels? Steel wheels? No, they couldn’t.
Foreign markets might yield some rubber, and the State Department negotiated agreements with some twenty countries, mostly in Latin America. Yet the wild rubber secured from these was scant, and newly planted rubber trees would take at least six years to start producing.
Could rubber be extracted from some other plant? Thousands of scientists and technicians were hastily recruited to try—it was like the Manhattan Project for botany—but without success.
To conserve what little rubber remained, the government forbade its use in many forms of manufacturing. A national speed limit of thirty-five miles per hour was imposed to reduce the wear on the mainland’s tires. In June 1942 Roosevelt warned that confiscating civilian tires was a real possibility, perhaps an inevitability. A high-ranking official confided to a journalist that soon there might not be enough rubber for baby bottles. Another proposed reducing the length of condoms by half. It took his colleagues a moment to realize he was joking.
There was another way out, a Fritz Haber–style solution. Perhaps the United States could find a way to manufacture rubber, to synthesize it from oil or grain alcohol. Yet this, too, seemed unpromising. On the eve of the war, an economist for the Council on Foreign Relations judged that replacing critical raw materials—rubber and others—with synthetic substitutes was simply “not in sight.”
Synthetic rubber was possible in theory, but it was more of a laboratory curiosity than a viable commodity. No U.S. author had ever published a book on rubber synthesis, and the small trickle of man-made rubber that chemists had produced before the war was useful only in highly specialized functions. The idea of conjuring up an entire industry, reliant on as yet unachieved technical breakthroughs, able to supply enough usable rubber to equip the United States and its allies in a global war—that remained far-fetched.
As the director of the War Production Board’s Civilian Supply Division told the Senate, producing the requisite six hundred thousand tons by 1944 would “require a miracle.”
The United States wasn’t the only country facing a rubber drought. Germany had the same problem. As a major industrial power whose colonies had been confiscated after the First World War, Germany depended profoundly on foreign markets for crucial raw materials. It held coal and wood in relative abundance, but when it came to rubber, oil, iron, and many other necessaries, it was, like Japan, a “have-not” nation.
Adolf Hitler was obsessed with this. He’d lived through the First World War, when the British blockade cut Germany off and pushed it to near starvation. Germans had been reduced to using ineffective tires made of metal springs. This must never happen again. “The definitive solution,” Hitler believed, lay in “an extension of our living space, that is, an extension of the raw materials and food basis of our nation.” It was this quest for “living space,” Lebensraum, that impelled Hitler to invade neighboring lands and incorporate them into Greater Germany.
War was a dangerous gamble. Yet Hitler had one important weapon in his arsenal: the most advanced chemical industry in the world. Germany’s perpetual dearth of raw materials had spurred its chemists to great heights over the years. It wasn’t an accident that Fritz Haber had been a German. In the late nineteenth century, Germans had devised synthetic dyes to replace natural plants such as indigo. In World War I they had invented synthetic nitrates and poison gases. In the Weimar period they’d come out with rayon, an artificial silk made from wood pulp that alleviated dependence on trade with Asia (Marlene Dietrich proclaimed proudly that she wore only rayon stockings). By the time Hitler came to power, the German chemical manufacturer IG Farben was Europe’s largest private corporation.
Hitler saw in IG Farben a way to bridge the resource gap just long enough to allow Germany to claim new territories. Not only could the firm make nitrates from air, it could turn coal into fuel and, Hitler hoped, rubber. The Reich’s Four Year Plan, inaugurated in 1936, plowed a substantial fraction of the economy into IG Farben and its development of synthetics. Hitler ordered that German tires be made exclusively of artificial rubber by 1939. At a rally at Nuremberg that year, he announced triumphantly that Germany had “definitely solved the rubber problem!” Soon after, he invaded Poland.
But Hitler had not solved the rubber problem. When the war started, Germany’s production and stockpiles sufficed for only two months of fighting. Throughout the war, the Wehrmacht was perpetually short of fuel and rubber. Hitler relied on risky blitzkrieg tactics—sudden all-or-nothing attacks—in part because he simply couldn’t confront his enemies in sustained combat. His troops moved largely using horses.
Desperate for more rubber, the Reich ordered IG Farben to build a new plant in the east, where it would be safe from Allied bombardment. Ultimately, this would be the single largest expenditure in the Four Year Plan. The company chose a promising site in Upper Silesia, a railway hub close to supplies of coal, lime, and water, just outside the town of Auschwitz. To build the plant, the Reich expanded a transit camp, previously used to hold Polish prisoners pending their deportation farther east, into a massive, lethal Arbeitslager.
The Jewish chemist Primo Levi, who would go on to write one of the most haunting survivors’ accounts of the Holocaust, was an inmate at Auschwitz. He remembered the “brightly illuminated” sign outside the plant: ARBEIT MACHT FREI, “work makes one free” (it “still strikes me in my dreams,” he wrote). Levi toiled in the unforgiving Polish mud to build IG Farben’s plant. As it started to produce methanol and other supplies, he was moved to the laboratory.
The new work assignment saved Levi’s life by protecting him from the worst of the bitter winter of 1944–45. Others weren’t so lucky. In all, at least thirty thousand inmates died building the plant. Yet this forced march did nothing to improve Hitler’s rubber prospects. By the end of the war, the plant still hadn’t squeezed out a single pound of synthetic rubber.
Things went quite differently in the United States. The director of the U.S. rubber program was instructed to “be a son-of-a-bitch,” but that meant standing up to oil executives, not driving tens of thousands of enslaved laborers to their deaths.
Difference two: the U.S. program worked. There was no “eureka” moment when the secret to rubber synthesis was revealed. It was the result of a thousand little discoveries made by a small army of well-funded industrial chemists. Those scientists remembered it as a golden age, when men who had formerly labored as rivals in different companies could collaborate with a shared sense of purpose. “I don’t think I have ever seen as congenial a group of people work together,” said one.
The industrial achievements were as impressive as the scientific ones. By the end of the war, the government had built fifty-one synthetic rubber plants (compared with Germany’s three), operating at the collective cost of $2 million a day. Just one such plant, which might employ 1,250 workers, made enough rubber to replace a rubber plantation that had twenty-four million trees and a workforce of at least 90,000. In mid-1944 the supply of rubber met the government’s requirements. By 1945, it overshot them. At that point, the plants, not even operating at capacity, were pumping out eight hundred thousand tons a year. That was one-third more than the amount that in 1942 had seemed as if it would require “a miracle.”
Jeeps rode on synthetic rubber tires. Tanks rolled on synthetic rubber treads, and they rolled much farther than German panzers, whose inferior treads grew brittle and cracked in the cold. (“The Germans apparently had not controlled the distribution of styrene,” one U.S. chemist clucked.) By the war’s end, nearly nine in ten pounds of U.S. rubber were factory-made, mostly from oil. This was, wrote an awed observer, “one of the most remarkable industrial achievements of all time.”
It was also a political achievement. After the war, the United States resumed buying natural rubber, which it used alongside man-made rubber, but never again would it depend on plantations. When the Korean War broke out in 1950, once again interfering with supply lines, rubber prices shot up, creating a minor shortage. Manufacturers simply opened their taps and flooded the market with synthetic rubber.
A worker at B. F. Goodrich showing that sheets of synthetic rubber (left) and natural rubber (right) are nearly identical except for their color
Rubber—once the cause of war, colonization, and mass death—became a commodity that Washington could be cavalier about. In 1952 a blue-ribbon commission convened to assess U.S. raw material needs concluded that rubber shortages could no longer pose a serious threat to national security.
Natural rubber, coming mainly from Indonesia, Thailand, and Malaysia, still makes up about 30 percent of the market. Yet it’s no longer a vital necessity, the sort worth conquering territory to secure. When the supply drops, synthetic rubber plants make up the difference with ease. One such factory is the one outside Auschwitz, which survived the war and is today the third-largest European source of synthetic rubber. That single plant in Poland has the capacity to satisfy 5 percent of the world demand for rubber.
The replacement of colonial rubber with synthetic rubber was a sort of magic. Yet it wasn’t the only rabbit that chemists yanked from their hats. What’s extraordinary is how many raw materials the United States weaned itself off during the war. Silk, hemp, jute, camphor, cotton, wool, pyrethrum, gutta-percha, tin, copper, tung oil—for one after another, the United States found synthetic substitutes. Throughout its economy, it replaced colonies with chemistry.
No synthetic illustrates this better than plastic. Today it has become so ubiquitous that it’s hard to imagine a world without it. A few years ago, the writer Susan Freinkel resolved to go a day without touching anything plastic. Yet, upon waking, she realized that her task was impossible. Her mattress, alarm clock, glasses, toilet seat, light switch, toothbrush, underwear, clothes, shoes, and refrigerator were all made with plastic. The composition book and pencil she’d planned to use to record her experiment were part plastic. She declared defeat and decided instead to write down every object she touched during the day.
Nearly two-thirds were plastic.
Plastic is a chemical cousin of synthetic rubber—the ontological line between them can get blurry. Their histories are similar, too, though unlike synthetic rubber, plastic had notable successes well before the Second World War. The first plastic, celluloid, was devised to replace ivory in billiard balls and then made its way into other household goods: combs, knife handles, dentures, and so on. Another, Bakelite, was proudly billed during the interwar period as the “material of a thousand uses.” DuPont caused a sensation with its debut of nylon stockings in 1939 (“Better Things for Better Living … Through Chemistry”). In 1940 Henry Ford unveiled a plastic car, made principally of soybean-based resin.
Ford’s car failed to stir the passions nylon stockings had, but it illustrated the boundless possibilities that entrepreneurs saw in plastic. In 1940, Fortune magazine hinted at the plastic future to come when it published a map of “Synthetica,” a “new continent of plastics,” with such countries as “Vinyl,” “Acrylic Styrene,” and “Nylon Island.” It was a further frontier, though chemical rather than colonial.
Much of this still lay in the realm of fantasy when that map was published. It took the war to make the plastic economy real. The calculus was the same as for rubber. The Axis powers, Japan in particular, had cut the United States off from vital supplies. So the military sought to use plastic, made mostly from oil, as a substitute for any “strategic” material that could no longer be easily got. As much as it could, the war effort should run on plastic.
As they had with rubber, chemists started sprinting. They pooled information, honed techniques, and experimented wildly. Synthetic rubber had substituted for one big thing. For plastic, they found countless little applications. As plexiglass, it could be the cockpit window of a plane. As cellophane, it could replace a tin can in food storage. Mixed with wood fiber as plywood, plastic could substitute for timber and steel in small boats, making them lighter, faster, and cheaper. Mixed with glass as fiberglass, it could be used to make planes.
Fortune magazine’s “Synthetica, a New Continent of Plastics” imagines the development of plastics as the colonization of a new world, an “illimitable world of the molecule.”
In a large battleship like the USS Missouri, plastics played more than a thousand roles.
By 1945, a GI could expect his canteen, his knife handle, and parts of his pistol belt to be plastic. His buttons would be olive drab plastic—a substitution that saved the army more than sixty thousand tons of brass a year. If he received a decoration ribbon, it would be of nylon, not silk. So would his parachute, his tent, and, if he had to do any climbing, his rope (formerly made of Manila hemp, but the Japanese had taken the Philippines). His razor handle, bugle, comb, toothbrush, gas mask, goggles, helmet liner, boot insoles, rifle cover, whistle, shoelaces, mosquito netting, breakfast tray, and—if he was a betting man—poker chips were all plastic.
A soldier wounded in battle might receive nylon surgical sutures covered with nylon or rayon gauze (and recuperate on a hospital bed with sheets made not of rubber but of plastic-impregnated rayon). One who lost an eye would get a new plastic one rather than one made of cryolite glass, which could no longer be imported from Germany.
In a vividly metaphorical development, toy soldiers, formerly made of lead or tin, started selling after the war as “little green men” made entirely of molded plastic.
Those little green men were just the start—shock troops in a full-scale economic invasion. At the war’s end, one plastics executive remarked, “virtually nothing” in the civilian economy was made of plastic, yet it was clear that “anything could be.” And so the military technologies flooded into society at large. Swords were beaten into plowshares but, as an ad in Modern Plastics noted, the new plows had plastic handles.
It is striking, in fact, how many of the icons of the postwar consumer culture were wrought of plastic: Tupperware, Velcro, hula hoops, Frisbees, Barbie dolls, GI Joes, Bic pens, credit cards, pink flamingo lawn ornaments, Styrofoam, Formica counters, Naugahyde chairs, Saran wrap, vinyl records, hi-fis, linoleum floors, Silly Putty, Lycra bras, and Wiffle balls.
“The whole world can be plasticized,” observed the French philosopher Roland Barthes with evident alarm.
Plastic seeped into the economy in less visible ways, too. Natural fibers such as cotton, wool, and silk were increasingly replaced by nylon or polyester. Midway through the Korean War, the military switched to synthetic blends for its uniforms. Around the same time, the government ordered that all flags flying over public buildings be made of nylon.
It went even deeper. Contact lenses, hearing aids, prosthetics, artificial joints, and intrauterine devices turned postwar consumers into part-plastic cyborgs. In 1952, surgeons started installing artificial aortic valves in patients, so their hearts beat with the help of plastic.
Between 1930 and 1950, the volume of plastics produced annually in the world grew fortyfold. By 2000, it had grown to nearly three thousand times its 1930 size.
This was the legacy of the Second World War. Take the world’s most advanced economy, cut it off from most tropical trade, and send it into overdrive—it was the perfect recipe for a synthetic revolution.
The replacements came regularly and rapidly. A writer in 1943 giddily described “a regiment of new man-made materials” that was “turning old industries topsy-turvy.” The antimalarial drug quinine could be replaced by a synthetic called chloroquine, the opium-derived painkiller morphine by one called methadone. Camphor, a key ingredient in medicines, photographic film, and explosives, came only from Japanese-controlled Taiwan. That is, it did until chemists figured out how to synthesize it from turpentine at one-eighth the cost. When the rubber shortage prevented making liquid incendiaries from rubber and gasoline, a chemist invented napalm.
Whatever the military wanted, remarked an employee of Union Carbide, it got “as simply as turning on the faucet to draw water.”
This was the start, the chemist Jacob Rosin predicted, of the “synthetic age.” It would bring “freedom from the plant” and “freedom from the mine.” In other words, as the laboratory replaced the land as the source of materials, the United States would liberate itself from natural resource constraints. In 1959 the physicist Richard Feynman bragged that the time was soon coming when scientists would know “how to synthesize absolutely anything.”
Synthesizing anything—that was a lot to ask for. But it wasn’t absurd. Two years before Feynman’s prediction, in 1957, the chemical company Monsanto had installed the “House of the Future,” made entirely from synthetics, at Tomorrowland in Disneyland. By that year, in the United States, synthetic rubber outsold natural rubber, plastic had displaced leather, and margarine was more common than butter. And Gregory Pincus had just begun his birth control experiments with synthetic hormones in Puerto Rico.
Synthetics visibly remade everyday life. They also, less visibly, remade geopolitics.
There was little sense, before the Second World War, that they might do this. Geopolitical treatises from the 1930s didn’t say much about synthetics. Instead, they moaned about shortages and predicted bloody wars for territory.
By that 1930s logic, the United States should have consolidated its victory in the Second World War by locking down resource-rich territories. In fact, there was some talk of this during the war. War planners recognized that the quest for resources had both triggered the war and deprived the United States of vital raw materials. As a result, they sought ways to prevent that from happening again.
The most popular plan within the State Department in the early years of the war was to place the world’s colonies under international management. This was a touch more enlightened than old-school conquest, but the end-state was much the same. Powerful countries would, through some international body, ensure their access to the tropics. It was colonization by committee.
But that vision was never realized. The United States neither claimed new colonies nor organized the joint colonization of the tropics. Instead, synthetics dulled its hunger pangs.
One can see the realization dawning in successive U.S. official reports. An important survey in 1952 warned that scarcities might loom in the future but noted that synthetics had thus far kept them at bay (“We can produce gasoline from coal, cattle feed from sawdust, and commercial power from atomic fission”). The reports that followed spoke even less of scarcity and more of synthetics. By the 1970s, a large survey concluded that resource exhaustion was simply “not a serious possibility.” Yes, there might be temporary shortages and the occasional uncomfortable price fluctuation, but the idea that the United States would actually run out of something it needed no longer seemed plausible.
U Thant, the Burmese politician who served as secretary-general of the United Nations in the 1960s, was stunned. “The truth, the central stupendous truth, about developed economies is that they can have—in anything but the shortest run—the kind and scale of resources they decide to have,” he marveled. “It is no longer resources that limit decisions. It is the decisions that make the resources. This is the fundamental, revolutionary change—perhaps the most revolutionary mankind has ever known.”
Thant was exactly right. The synthetic revolution that began in the 1940s had rewritten the rules of geopolitics. Secure access to raw materials—one of the chief benefits of colonization—no longer mattered that much. One could procure the necessary goods through trade, and if, as in the thirties and forties, the markets closed down, well, that wasn’t the end of the world. It was just time to fire up the synthetic rubber plants.
Industrial economies got so good at inventing substitutes that the suppliers of raw materials panicked. Places that had once been the objects of imperial lust now scrambled to find buyers. Such was the effect of synthetic rubber on the economies of Malaya and Borneo, synthetic antimalarials on the quinine-producing plantations of Latin America, synthetic cordage on the Philippines, Mylar film on the Indian mica industry, synthetic quartz on Brazil, and synthetic diamond bort on the diamond mines of the Congo, Brazil, and South Africa. After World War II, the United States government adopted a policy of buying more natural rubber than it needed, to prop up the imperiled plantations of Southeast Asia. Still, the relative cost of extractive goods fell year by year.
None of this is to say that raw materials became irrelevant. Minerals were harder to synthesize than plants, and military planners kept a wary eye on the global stocks of bauxite, uranium, and cobalt (essential now to smartphone batteries). But the sense of urgency had diminished enough for those commodities to be safely sourced through international trade rather than colonial extraction. That’s because national security no longer hung on raw materials. In fact, when Richard Nixon formed a commission to develop a “national materials policy” for the 1970s, the resulting report didn’t even mention security as a goal.
There was, of course, one exception: oil. Many of the chemistry-for-colonies exchanges the United States made, including synthetic rubber and plastic, involved substituting petroleum for other materials. In 1945, when 59 percent of the world’s proven oil reserves lay within U.S. borders, this gave the United States an extraordinary measure of self-sufficiency. But as those reserves were drawn down and large new fields opened in other countries, oil became increasingly foreign in provenance.
It is fitting, then, that oil is the one raw material that has most reliably tempted politicians back into the old logic of empire. When faced with an Arab oil embargo, Henry Kissinger suggested that the United States “may have to take some oil fields.” “I’m not saying we have to take over Saudi Arabia,” the secretary of state continued. “How about Abu Dhabi, or Libya?” It is hard to imagine Kissinger embarking on such unbounded flights of imperialist reverie on behalf of rubber, tin, or any other former colonial commodity.
Still, even when it comes to oil, flare-ups of naked imperialism have been rare and haven’t ultimately led to annexations. Kissinger’s idea of a U.S. overseas territory of Abu Dhabi was a daydream, not a plan (though it does appear that the Nixon administration was serious about seizing Middle Eastern oil fields if necessary). And, however painful the 1970s oil shock was for the U.S. economy, its danger was a matter of rising prices rather than of absolute, “we can’t fight a war” shortages. At no point in the twentieth century was there a serious possibility that oil would actually run out. Today, with new technologies enabling the exploitation of Canadian tar sands and the partial substitution of natural gas for oil, that danger seems as remote as ever.
In 1969 the United States achieved what was probably its most technically difficult goal since the Second World War: the moon landing. The most powerful rocket engines in history had to blast the spacecraft into the sky, where the whole thing would progressively dismantle itself mid-flight, shooting a smaller module safely into the lunar gravity well. There is a reason that “rocket science” became the proverbial way to refer to the hardest intellectual challenges out there.
Yet it wasn’t all jets and orbits. The moon landing was a triumph of chemical engineering, too. NASA needed light materials that could endure extreme temperatures and micrometeoroid strikes, yet keep pressurized air in. This meant synthetics. The moon suits that Neil Armstrong and Buzz Aldrin wore had twenty-one layers, and twenty either contained or were made entirely of materials manufactured by DuPont. There were familiar inventions—nylon, neoprene, Mylar, and Teflon—and new ones such as Kapton and Nomex. What was harder to find up in space was anything that might once have grown in a colony.
Raw materials just weren’t as important as they’d once been. The fifty-star flag that the astronauts planted, marking humankind’s highest ambition, was sewn of DuPont nylon.