In December 1856, a middle-aged Chicago engineer named Ellis Chesbrough traveled across the Atlantic to take in the monuments of the European continent. He visited London, Paris, Hamburg, Amsterdam, and a half dozen other towns—the classic Grand Tour. Only Chesbrough hadn’t made his pilgrimage to study the architecture of the Louvre or Big Ben. He was there, instead, to study the invisible achievements of European engineering. He was there to study the sewers.
Chicago, in the middle of the nineteenth century, was a city in dire need of expertise about waste removal. Thanks to its growing role as a transit hub bringing wheat and preserved pork from the Great Plains to the coastal cities, the city had gone from hamlet to metropolis in a matter of decades. But unlike other cities that had grown at prodigious rates during this period (such as New York and London), Chicago had one crippling attribute, the legacy of a glacier’s crawl thousands of years before the first humans settled there: it was unforgivingly flat. During the Pleistocene epoch, vast ice fields crept down from the north, covering present-day Chicago with ice more than a mile thick. As the ice melted, it formed a massive body of water that geologists now call Lake Chicago. As that lake slowly drained down to form Lake Michigan, it flattened the clay deposits left behind by the glacier. Most cities enjoy a reliable descending grade down to the rivers or harbors they evolved around. Chicago, by comparison, is an ironing board—appropriately enough, for the great city of the American plains.
Building a city on perfectly flat land would seem like a good problem to have; you would think hilly, mountainous terrain like that of San Francisco, Cape Town, or Rio would pose more engineering problems, for buildings and for transportation. But flat topographies don’t drain. And in the middle of the nineteenth century, gravity-based drainage was key to urban sewer systems. Chicago’s terrain also suffered from being unusually nonporous; with nowhere for the water to go, heavy summer rainstorms could turn the topsoil into a murky marshland in a matter of minutes. When William Butler Ogden, who would later become Chicago’s inaugural mayor, first waded through the rain-soaked town, he found himself “sinking knee deep in the mud.” He wrote to his brother-in-law, who had purchased land in the frontier town in a bold bet on its future potential: “You have been guilty of an act of great folly in making [this] purchase.” In the late 1840s, roadways made out of wood planks had been erected over the mud; one contemporary noted that every now and then one of the planks would give way, and “green and black slime [would] gush up between the cracks.” The primary system of waste removal was scavenging pigs roaming the streets, devouring the refuse that the humans left behind.
With its rail and shipping network expanding at extraordinary speed, Chicago more than tripled in size during the 1850s. That rate of growth posed challenges for the city’s housing and transportation resources, but the biggest strain of all came from something more scatological: when almost a hundred thousand new residents arrive in your city, they generate a lot of excrement. One local editorial declared: “The gutters are running with filth at which the very swine turn up their noses in supreme disgust.” We rarely think about it, but the growth and vitality of cities have always been dependent on our ability to manage the flow of human waste that emerges when people crowd together. From the very beginnings of human settlements, figuring out where to put all the excrement has been just as important as figuring out how to build shelter or town squares or marketplaces.
The problem is particularly acute in cities experiencing runaway growth, as we see today in the favelas and shantytowns of megacities. Nineteenth-century Chicago, of course, had both human and animal waste to deal with, the horses in the streets, the pigs and cattle awaiting slaughter in the stockyards. (“The river is positively red with blood under the Rush Street Bridge and past down our factory,” one industrialist wrote. “What pestilence may result from it I don’t know.”) The effects of all this filth were not just offensive to the senses; they were deadly. Epidemics of cholera and dysentery erupted regularly in the 1850s. Sixty people died a day during the outbreak of cholera in the summer of 1854. The authorities at the time didn’t fully understand the connection between waste and disease. Many of them subscribed to the then-prevailing “miasma” theory, contending that epidemic disease arose from poisonous vapors, sometimes called “death fogs,” that people inhaled in dense cities. The true transmission route—invisible bacteria carried in fecal matter polluting the water supply—would not become conventional wisdom for another decade.
But while their bacteriology wasn’t well developed, the Chicago authorities were right to make the essential connection between cleaning up the city and fighting disease. On February 14, 1855, a Chicago Board of Sewerage Commissioners was created to address the problem; their first act was to announce a search for “the most competent engineer of the time who was available for the position of chief engineer.” Within a few months, they had found their man: Ellis Chesbrough, the son of a railway officer, who had himself worked on canal and rail projects and was then employed as chief engineer of the Boston Water Works.
It was a wise choice: Chesbrough’s background in railway and canal engineering turned out to be decisive in solving the problem of Chicago’s flat, nonporous terrain. Creating an artificial grade by building sewers deep underground was deemed too expensive: tunneling that far below the surface was difficult work using nineteenth-century equipment, and the whole scheme required pumping the waste back to the surface at the end of the process. But here Chesbrough’s unusual history suggested an alternative, reminding him of a tool he had seen as a young man working on the railway: the jackscrew, a device used to lift multiton locomotives onto the tracks. If you couldn’t dig down to create a proper grade for drainage, why not use jackscrews to lift the city up?
Aided by the young George Pullman, who would later make a fortune building railway cars, Chesbrough launched one of the most ambitious engineering projects of the nineteenth century. Building by building, Chicago was lifted by an army of men with jackscrews. As the jackscrews raised the buildings inch by inch, workmen would dig holes under the building foundations and install thick timbers to support them, while masons scrambled to build a new footing under the structure. Sewer lines were laid beneath the buildings, with main lines running down the center of the streets; the lines were then buried in landfill that had been dredged out of the Chicago River, raising the entire city by almost ten feet on average. Tourists walking around downtown Chicago today regularly marvel at the engineering prowess on display in the city’s spectacular skyline; what they don’t realize is that the ground beneath their feet is also the product of brilliant engineering. (Not surprisingly, having participated in such a Herculean undertaking, when George Pullman set out to build his model factory town of Pullman, Illinois, several decades later, his first step was to install sewer and water lines before breaking ground on any of the buildings.)
Amazingly, life went on largely undisturbed as Chesbrough’s team raised the city’s buildings. One British visitor observed a 750-ton hotel being lifted, and described the surreal experience in a letter: “The people were in [the hotel] all the time coming and going, eating and sleeping—the whole business of the hotel proceeding without interruption.” As the project advanced, Chesbrough and his team became ever more daring in the structures they attempted to raise. In 1860, engineers raised half a city block: almost an acre of five-story buildings weighing an estimated thirty-five thousand tons was lifted by more than six thousand jackscrews. Other structures had to be moved as well as lifted to make way for the sewers: “Never a day passed during my stay in the city,” one visitor recalled, “that I did not meet one or more houses shifting their quarters. One day I met nine. Going out on Great Madison Street in the horse cars we had to stop twice to let houses get across.”
The result was the first comprehensive sewer system in any American city. Within three decades, more than twenty cities around the country followed Chicago’s lead, planning and installing their own underground networks of sewer tunnels. These massive underground engineering projects created a template that would come to define the twentieth-century metropolis: the idea of a city as a system supported by an invisible network of subterranean services. The first steam train traveled through underground tunnels beneath London in 1863. The Paris metro opened in 1900, followed shortly by the New York subway. Pedestrian walkways, automobile freeways, and electrical and fiber-optic cabling coiled their way beneath city streets. Today, entire parallel worlds exist underground, powering and supporting the cities that rise above them. We think of cities intuitively now in terms of skylines, that epic reach toward the heavens. But the grandeur of those urban cathedrals would be impossible without the hidden world below grade.
Raising the Briggs House, a brick hotel in Chicago, circa 1857.
—
OF ALL THOSE ACHIEVEMENTS, more than the underground trains and high-speed Internet cables, the most essential and the most easily overlooked is the small miracle that sewer systems in part make possible: enjoying a glass of clean drinking water from a tap. Just a hundred and fifty years ago, in cities around the world, drinking a glass of water was effectively a game of Russian roulette. When we think of the defining killers of nineteenth-century urbanism, our minds naturally turn to Jack the Ripper haunting the streets of London. But the real killers of the Victorian city were the diseases bred by contaminated water supplies.
This was the—literally—fatal flaw in Chesbrough’s plan for the sewers of Chicago. He had brilliantly conceived a strategy to get the waste away from the streets and the privies and the cellars of everyday life, but almost all of his sewer pipes drained into the Chicago River, which emptied directly into Lake Michigan, the primary source of the city’s drinking water. By the early 1870s, the city’s water supply was so appalling that a sink or tub would regularly be filled with dead fish, poisoned by the human filth and then sucked up into the city’s water pipes. In summer months, according to one observer, the fish “came out cooked and one’s bathtub was apt to be filled with what squeamish citizens called chowder.”
Workmen make progress on the Metropolitan Line underground railway works at King’s Cross, London.
Upton Sinclair’s novel The Jungle is generally considered to be the most influential literary work in the muckraking tradition of political activism. Part of the power of the book came from its literal muckraking, describing the filth of turn-of-the-century Chicago in excruciating detail, as in this description of the wonderfully named Bubbly Creek, an offshoot of the Chicago River:
The grease and chemicals that are poured into it undergo all sorts of strange transformations, which are the cause of its name; it is constantly in motion, as if huge fish were feeding in it, or great leviathans disporting themselves in its depths. Bubbles of carbonic gas will rise to the surface and burst, and make rings two or three feet wide. Here and there the grease and filth have caked solid, and the creek looks like a bed of lava; chickens walk about on it, feeding, and many times an unwary stranger has started to stroll across, and vanished temporarily.
Chicago’s experience was replicated around the world: sewers removed human waste from people’s basements and backyards, but more often than not they simply poured it into the drinking water supply, either directly, as in the case of Chicago, or indirectly during heavy rainstorms. Drawing plans for sewer lines and water pipes on the scale of the city itself would not be sufficient for the task of keeping the big city clean and healthy. We also needed to understand what was happening on the scale of microorganisms. We needed both a germ theory of disease—and a way to keep those germs from harming us.
—
WHEN YOU GO BACK to look at the initial reaction from the medical community to the germ theory, the response seems beyond comical; it simply doesn’t compute. It is a well-known story that the Hungarian physician Ignaz Semmelweis was roundly mocked and criticized by the medical establishment when he first proposed, in 1847, that doctors and surgeons wash their hands before attending to their patients. (It took almost half a century for basic antiseptic behaviors to take hold among the medical community, well after Semmelweis himself lost his job and died in an insane asylum.) Less commonly known is that Semmelweis based his initial argument on studies of puerperal (or “childbed”) fever, in which new mothers died shortly after childbirth. Working in Vienna’s General Hospital, Semmelweis stumbled across an alarming natural experiment: the hospital contained two maternity wards, one for the well-to-do, attended by physicians and medical students, the other for working-class women, who received their care from midwives. For some reason, the death rates from puerperal fever were much lower in the working-class ward. After investigating both environments, Semmelweis discovered that the elite physicians and students were switching back and forth between delivering babies and doing research with cadavers in the morgue. Clearly some kind of infectious agent was being transmitted from the corpses to the new mothers; with a simple application of a disinfectant such as chlorinated lime, the cycle of infection could be stopped in its tracks.
There may be no more startling example of how much things have changed in our understanding of cleanliness over the past century and a half: Semmelweis was derided and dismissed not just for daring to propose that doctors wash their hands; he was derided and dismissed for proposing that doctors wash their hands if they wanted to deliver babies and dissect corpses in the same afternoon.
This is one of those places where our basic sensibilities deviate from the sensibilities of our nineteenth-century ancestors. They look and act like modern people in many ways: they take trains and schedule meetings and eat in restaurants. But every now and then, strange gaps open between us and them, not just the obvious gaps in technological sophistication, but more subtle, conceptual gaps. In today’s world, we think of hygiene in fundamentally different ways. The concept of bathing, for instance, was alien to most nineteenth-century Europeans and Americans. You might naturally assume that taking a bath was a foreign concept simply because people didn’t have access to running water and indoor plumbing and showers the way most of us in the developed world do today. But, in fact, the story is much more complicated than that. In Europe, starting in the Middle Ages and running almost all the way to the twentieth century, the prevailing wisdom on hygiene maintained that submerging the body in water was a distinctly unhealthy, even dangerous thing. Clogging one’s pores with dirt and oil allegedly protected you from disease. “Bathing fills the head with vapors,” a French doctor advised in 1655. “It is the enemy of the nerves and ligaments, which it loosens, in such a way that many a man never suffers from gout except after bathing.”
You can see the force of this prejudice most clearly in the accounts of royalty during the 1600s and 1700s—in other words, the very people who could afford to have baths constructed and drawn for them without a second thought. Elizabeth I bothered to take a bath only once a month, and she was a veritable clean freak compared to her peers. As a child, Louis XIII was not bathed once until he was seven years old. Sitting naked in a pool of water was simply not something civilized Europeans did; it belonged to the barbaric traditions of Middle Eastern bathhouses, not the aristocracy of Paris or London.
Slowly, starting in the early nineteenth century, the attitudes began to shift, most notably in England and America. Charles Dickens built an elaborate cold-water shower in his London home, and was a great advocate for the energizing and hygienic virtues of a daily shower. A minor genre of self-help books and pamphlets emerged, teaching people how to take a bath, with detailed instructions that seem today as if they are training someone to land a 747. One of the first steps Professor Higgins takes in reforming Eliza Doolittle in George Bernard Shaw’s Pygmalion is getting her into a tub. (“You expect me to get into that and wet myself all over?” she protests. “Not me. I should catch my death.”) Harriet Beecher Stowe and her sister Catharine Beecher advocated a daily wash in their influential handbook, The American Woman’s Home, published in 1869. Reformers began building public baths and showers in urban slums around the country. “By the last decades of the century,” the historian Katherine Ashenburg writes, “cleanliness had become firmly linked not only to godliness but also to the American way.”
The virtues of washing oneself were not self-evident, the way we think of them today. They had to be discovered and promoted, largely through the vehicles of social reform and word of mouth. Interestingly, there is very little discussion of soap in the popular embrace of bathing in the nineteenth century. It was hard enough just to convince people that the water wasn’t going to kill them. (As we will see, when soap finally hit the mainstream in the twentieth century, it would be propelled by another new convention: advertising.) But the evangelists for bathing were supported by the convergence of several important scientific and technological developments. Advances of public infrastructure meant that people were much more likely to have running water in their homes to fill their bathtubs; that the water was cleaner than it had been a few decades earlier; and, most important, that the germ theory of disease had gone from fringe idea to scientific consensus.
This new paradigm had been achieved through two parallel investigations. First, there was the epidemiological detective work of John Snow in London, who first proved that cholera was caused by contaminated water and not miasmatic smells, by mapping the deaths of a Soho epidemic. Snow never managed to see the bacteria that caused cholera directly; the technology of microscopy at the time made it almost impossible to see organisms (Snow called them “animalcules”) that were so small. But he was able to detect the organisms indirectly, in the patterns of death on the streets of London. Snow’s waterborne theory of disease would ultimately deliver the first decisive blow to the miasma paradigm, though Snow himself didn’t live to see his theory triumph. After his untimely death in 1858, The Lancet ran a terse obituary that made no reference whatsoever to his groundbreaking epidemiological work. In 2014, the publication ran a somewhat belated “correction” to the obit, detailing the London doctor’s seminal contributions to public health.
John Snow’s cholera map of Soho
The modern synthesis that would come to replace the miasma hypothesis—that diseases such as cholera and typhoid are caused not by smells but by invisible organisms that thrive in contaminated water—was ultimately dependent, once again, on an innovation in glass. The German lens maker Zeiss Optical Works began producing new microscopes in the early 1870s—devices that for the first time had been constructed around mathematical formulas that described the behavior of light. These new lenses enabled the microbiological work of scientists such as Robert Koch, one of the first to identify the cholera bacterium. (After receiving the Nobel Prize for his work in 1905, Koch wrote to Carl Zeiss, “A large part of my success I owe to your excellent microscopes.”) Along with his great rival Louis Pasteur, Koch and his microscopes helped develop and evangelize the germ theory of disease. From a technological standpoint, the great nineteenth-century breakthrough in public health—the knowledge that invisible germs can kill—was a kind of team effort between maps and microscopes.
Today, Koch is rightly celebrated for the numerous microorganisms that he identified through those Zeiss lenses. But his research also led to a related breakthrough that was every bit as important, though less widely appreciated. Koch didn’t just see the bacteria; he also developed sophisticated tools to measure the density of bacteria in a given quantity of water. He mixed contaminated water with transparent gelatin and viewed the growing bacterial colonies on a glass plate. Koch established a unit of measure that could be applied to any quantity of water: water with fewer than 100 colonies per milliliter was considered safe to drink.
New ways of measuring create new ways of making. The ability to measure bacterial content allowed a completely new set of approaches to the challenges of public health. Before the adoption of these units of measurement, you had to test improvements to the water system the old-fashioned way: you built a new sewer or reservoir or pipe, and you sat around and waited to see if fewer people would die. But being able to take a sample of water and determine empirically whether it was free of contamination meant that cycles of experimentation could be tremendously accelerated.
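To make Koch’s measure concrete, here is a minimal sketch (my own illustration, not from the text) of how a plate count translates into colonies per milliliter and how a sample might be checked against the 100-per-milliliter guideline described above; the dilution factor and plated volume are assumed example values, not historical data.

```python
# Illustrative sketch: converting a plate count into a colonies-per-milliliter
# figure and checking it against the 100-per-mL rule of thumb mentioned above.
# The dilution factor and plated volume below are assumed example values.

def colonies_per_ml(colony_count, dilution_factor, volume_plated_ml):
    """Estimate bacterial colony density in the original water sample."""
    return colony_count * dilution_factor / volume_plated_ml

def is_safe(density, threshold=100.0):
    """Apply the guideline: fewer than 100 colonies/mL counts as safe."""
    return density < threshold

# Example: 42 colonies grown from 1 mL of undiluted sample.
density = colonies_per_ml(colony_count=42, dilution_factor=1, volume_plated_ml=1.0)
print(f"{density:.0f} colonies/mL -> {'safe' if is_safe(density) else 'unsafe'}")
```

The point is less the arithmetic than the workflow it enables: a sample of water could be declared safe or unsafe in a matter of days, rather than waiting to see whether fewer people died.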
Microscopes and measurement quickly opened a new front in the war on germs: instead of fighting them indirectly, by routing the waste away from the drinking water, new chemicals could be used to attack the germs directly. One of the key soldiers on this second front was a New Jersey doctor named John Leal. Like John Snow before him, Leal was a doctor who treated patients but who also had a passionate interest in wider issues of public health, particularly those concerning contaminated water supplies. It was an interest born of a personal tragedy: his father had suffered a slow and painful death from drinking bacteria-infested water during the Civil War. The record of his father’s regiment offers a compelling statistical portrait of the threat posed by contaminated water and other health risks during this period: nineteen men in the 144th Regiment died in combat, while 178 died of disease.
Leal experimented with many techniques for killing bacteria, but one poison in particular began to pique his interest as early as 1898: calcium hypochlorite, the potentially lethal chlorine compound known at the time as “chloride of lime.” The chemical had already been in wide circulation as a public health remedy: houses and neighborhoods that had suffered an outbreak of typhoid or cholera were routinely disinfected with the chemical, an intervention that did nothing to combat waterborne disease. But the idea of putting chlorine in water had not yet taken hold. The sharp, acrid smell of chloride of lime was indelibly associated with epidemic disease in the minds of city dwellers throughout the United States and Europe. It was certainly not a smell that one wanted to detect in one’s drinking water. Most doctors and public health authorities rejected the approach. One noted chemist protested: “The idea itself of chemical disinfection is repellent.” But armed with tools that enabled him to both see the pathogens behind diseases such as typhoid and dysentery and measure their overall presence in the water, Leal became convinced that chlorine—at the right dosage—could rid water of dangerous bacteria more effectively than any other means, without any threat to the humans drinking it.
Eventually, Leal landed a job with the Jersey City Water Supply Company, giving him oversight of seven billion gallons of drinking water in the Passaic River watershed. This new job set the stage for one of the most bizarre and daring interventions in the history of public health. In 1908, the company was immersed in a prolonged legal battle over contracts (worth hundreds of millions of dollars in today’s money) for reservoirs and water-supply pipes it had recently completed. The judge in the case had criticized the firm for not supplying water that was “pure and wholesome” and ordered them to construct expensive additional sewer lines designed to keep pathogens out of the city’s drinking water. But Leal knew the sewer lines would be limited in their effectiveness, particularly during big storms. And so he decided to put his recent experiments with chlorine to the ultimate test.
In almost complete secrecy, without any permission from government authorities (and no notice to the general public), Leal decided to add chlorine to the Jersey City reservoirs. With the help of engineer George Warren Fuller, Leal built and installed a “chloride of lime feed facility” at the Boonton Reservoir outside Jersey City. It was a staggering risk, given the popular opposition to chemical filtering at the time. But the court rulings had severely limited his timeline, and he knew that lab tests would be meaningless to a lay audience. “Leal did not have time for a pilot study. He certainly did not have time to build a demonstration-scale facility to test the new technology,” Michael J. McGuire writes in his account, The Chlorine Revolution. “If the chloride of lime feed system lost control of the amount of chemical being fed and a slug of high chlorine residual was delivered to Jersey City, Leal knew that would define the failure of the process.”
It was the first mass chlorination of a city’s water supply in history. Once word got out, however, it initially seemed as though Leal was a madman or some kind of terrorist. Drinking a few glasses of calcium hypochlorite could kill you, after all. But Leal had done enough experiments to know that very small quantities of the compound were harmless to humans but lethal to many forms of bacteria. Three months after his experiment, Leal was called to appear in court to defend his actions. Throughout his interrogation, he stood strong in defense of his public health innovation:
Q: Doctor, what other places in the world can you mention in which this experiment has been tried of putting this bleaching powder in the same way in the drinking water of a city of 200,000 inhabitants?
A: 200,000 inhabitants? There is no such place in the world, it has never been tried.
Q: It never has been.
A: Not under such conditions or under such circumstances but it will be used many times in the future, however.
Q: Jersey City is the first one?
A: The first to profit by it.
Q: Jersey City is the first one used to prove whether your experiment is good or bad?
A: No, sir, to profit by it. The experiment is over.
Q: Did you notify the city that you were going to try this experiment?
A: I did not.
Q: Do you drink this water?
A: Yes sir.
Q: Would you have any hesitation about giving it to your wife and family?
A: I believe it is the safest water in the world.
—
ULTIMATELY THE COURT CASE was settled with a near-complete victory for Leal. “I do there find and report,” the special master in the case wrote, “that this device is capable of rendering the water delivered to Jersey City, pure and wholesome . . . and is effective in removing from the water . . . dangerous germs.” Within a few years, the data supporting Leal’s daring move had become incontrovertible: communities such as Jersey City that enjoyed chlorinated drinking water saw dramatic decreases in waterborne diseases like typhoid fever.
At one point in Leal’s cross-examination during the Jersey City trial, the prosecuting attorney accused Leal of seeking vast financial rewards from his chlorine innovation. “And if the experiment turned out well,” he sneered, “why, you made a fortune.” Leal interrupted him from the witness box with a shrug, “I don’t know where the fortune comes in; it is all the same to me.” Unlike many inventors of his era, Leal made no attempt to patent the chlorination technique that he had pioneered at the Boonton Reservoir. His idea was free to be adopted by any water company that wished to provide its customers with “pure and wholesome” water. Unencumbered by patent restrictions and licensing fees, municipalities quickly adopted chlorination as a standard practice, across the United States and eventually around the world.
About a decade ago, two Harvard professors, David Cutler and Grant Miller, set out to ascertain the impact of chlorination (and other water treatment techniques) between 1900 and 1930, the period when these technologies were implemented across the United States. Because extensive data existed for rates of disease and particularly infant mortality in different communities around the country, and because chlorination systems were rolled out in a staggered fashion, Cutler and Miller were able to get an extremely accurate portrait of chlorine’s effect on public health. They found that clean drinking water led to a 43 percent reduction in total mortality in the average American city. Even more impressive, chlorine and filtration systems reduced infant mortality by 74 percent, and child mortality by almost as much.
It is important to pause for a second to reflect on the significance of those numbers, to take them out of the dry domain of public health statistics and into the realm of lived experience. Until the twentieth century, one of the givens of being a parent was that you faced a very high likelihood that at least one of your children would die at an early age. What may well be the most excruciating experience that we can confront—the loss of a child—was simply a routine fact of existence. Today, in the developed world at least, that routine fact has been turned into a rarity. One of the most fundamental challenges of being alive—keeping your children safe from harm—was dramatically lessened, in part through massive engineering projects, and in part through the invisible collision between compounds of calcium hypochlorite and microscopic bacteria. The people behind that revolution didn’t become rich, and very few of them became famous. But they left an imprint on our lives that is in many ways more profound than the legacy of Edison or Rockefeller or Ford.
Chlorination wasn’t just about saving lives, though. It was also about having fun. After World War I, ten thousand chlorinated public baths and pools opened across America; learning how to swim became a rite of passage. These new aquatic public spaces were the leading edge in challenges to the old rules of public decency during the period between the wars. Before the rise of municipal pools, women bathers generally dressed as though they were bundled up for a sleigh ride. By the mid-1920s, women began exposing their legs below the knee; one-piece suits with lower necklines emerged a few years later. Open-backed suits, followed by two-piece outfits, followed quickly in the 1930s. “In total, a woman’s thighs, hip line, shoulders, stomach, back and breast line all become publicly exposed between 1920 and 1940,” the historian Jeff Wiltse writes in his social history of swimming, Contested Waters. We can measure the transformation in terms of simple material: at the turn of the century, the average woman’s bathing suit required ten yards of fabric; by the end of the 1930s, one yard was sufficient. We tend to think of the 1960s as the period when shifting cultural attitudes led to the most dramatic change in everyday fashion, but it is hard to rival the rapid-fire unveiling of the female body that occurred between the wars. Of course, it is likely that women’s fashion would have found another route to exposure without the rise of swimming pools, but it seems unlikely that it would have happened as quickly as it did. No doubt exposing the thighs of female bathers was not in the forefront of John Leal’s mind as he dumped his chlorine into the Jersey City reservoir, but like the hummingbird’s wing, a change in one field triggers a seemingly unrelated change at a different order of existence: a trillion bacteria die at the hands of calcium hypochlorite, and somehow, twenty years later, basic attitudes toward exposing the female body are reinvented. As with so many cultural changes, it’s not that the practice of chlorination single-handedly transformed women’s fashion; many social and technological forces converged to make those bathing suits smaller: various strands of early feminism, the fetishizing gaze of the Hollywood camera, not to mention individual stars who wore those more revealing suits. But without the mass adoption of swimming as a leisure activity, those fashions would have been deprived of one of their key showcases. What’s more, those other explanations—as valid as they are—usually get all the press. Ask your average person on the street what factors drive women’s fashion, and they’ll inevitably point to Hollywood or glossy magazines. But they won’t often mention calcium hypochlorite.
—
THROUGH THE NINETEENTH CENTURY, the march of clean technologies had largely unfolded on the terrain of public health: big engineering projects, mass filtration systems. But the story of hygiene in the twentieth century is a much more intimate affair. Just a few years after Leal’s bold experiment, five San Francisco entrepreneurs invested a hundred dollars each to launch a chlorine-based product. In hindsight it seems like a good idea, but their bleach was aimed at big industry, and sales didn’t develop as quickly as they had hoped. But the wife of one of the investors, Annie Murray, a shop owner in Oakland, California, had an idea: that chlorine bleach could be a revolutionary product for people’s homes as well as factories. At Murray’s insistence, the company created a weaker version of the chemical and packaged it in smaller bottles. Murray was so convinced of the product’s promise that she gave out free samples to all her shop customers. Within months, bottles were selling like crazy. Murray didn’t realize it at the time, but she was helping to invent an entirely new industry. Annie Murray had created America’s first commercial bleach for the home, and the first in a wave of cleaning brands that would become ubiquitous in the new century: Clorox.
Clorox bottles became so commonplace that the remnants our grandparents left behind are used by archaeologists to date dig sites today. (The pint-glass chlorine bleach bottle is to the early twentieth century what spear tips are to the Iron Age or colonial pottery is to the eighteenth century.) Clorox was accompanied by other bestselling hygiene products for the home: Palmolive soap, Listerine, a popular antiperspirant named Odorono. Hygiene products like these were among the first to be promoted in full-page advertisements in magazines and newspapers. By the 1920s, Americans were being bombarded by commercial messages convincing them that they were facing certain humiliation if they didn’t do something about the germs on their bodies or in their homes. (The phrase “often a bridesmaid, never a bride” originated with a 1925 Listerine advertisement.) When radio and television began experimenting with storytelling, it was the personal-hygiene companies that once again led the way in pioneering new forms of advertising, a brilliant marketing move that still lingers with us today in the phrase “soap opera.” This is one of the stranger hummingbird effects of contemporary culture: the germ theory of disease may have reduced infant mortality to a fraction of its nineteenth-century levels, and made surgery and childbirth far safer than they had been in Semmelweis’s day. But it also played a crucial role in inventing the modern advertising business.
Today the cleaning business is worth an estimated $80 billion. Walk into a big-box supermarket or drugstore, and you will find hundreds, if not thousands, of products devoted to ridding our households of dangerous germs: cleaning our sinks and our toilets and floors and silverware, our teeth and our feet. These stores are effectively giant munitions depots for the war on bacteria. Naturally, there are some who feel that our obsession with cleanliness may now have gone too far. Some research suggests that our ever-cleaner world may actually be linked to increasing rates of asthma and allergies, as children’s immune systems now develop without being exposed to the full diversity of germs.
Clorox advertisement
—
THE CONFLICT BETWEEN MAN and bacteria that played out over the past two centuries has had far-reaching consequences: from the trivial pursuits of swimwear fashion all the way to the existential improvements of lowered infant mortality rates. Our growing understanding of the microbial routes of disease enabled cities to burst through the population ceilings that had constrained them for the entirety of human civilization. As of 1800, no society had successfully built and sustained a city of more than two million people. The first cities to challenge that barrier (London and Paris, followed shortly by New York) had suffered mightily from the diseases that erupted when that many people shared such a small amount of real estate. Many reasonable observers of urban life in the middle of the nineteenth century were convinced that cities were not meant to be built on this scale, and that London would inevitably collapse back to a more manageable size, as Rome had done almost two thousand years before. But solving the problems of clean drinking water and reliable waste removal changed all of that. A hundred and fifty years after Ellis Chesbrough first took his grand tour of European sewage, cities such as London and New York were approaching ten million residents, with far longer life expectancies and far lower rates of infectious disease than their Victorian antecedents.
Of course, the problem now is not cities of two million or ten million; it’s megacities such as Mumbai or São Paulo that will soon embrace thirty million human beings or more, many of them living in improvised communities—shantytowns, favelas—that are closer to the Chicago that Chesbrough had to raise than to a contemporary city in the developed world. If you look only at today’s Chicago or London, the story of the past century and a half seems to be one of incontrovertible progress: the water is cleaner; the mortality rates are much lower; epidemic disease is effectively nonexistent. And yet today there are more than three billion people around the world who lack access to clean drinking water and basic sanitation systems. In absolute numbers, we have gone backward as a species. (There were only a billion people alive in 1850.) So the question before us now is how we bring the clean revolution to the favelas, and not just Michigan Avenue. The conventional assumption has been that these communities need to follow the same path charted by Snow, Chesbrough, Leal, and all the other unsung heroes of our public health infrastructure: they need toilets connected to massive sewer systems that dispose of waste without contaminating reservoirs that pump out filtered water, delivered through an equally elaborate system direct to the home. But increasingly, these new megacities’ citizens—and other global development innovators—have begun to think that history need not repeat itself.
However bold and determined John Leal was, if he had been born just a generation earlier, he would have never had the opportunity to chlorinate the Jersey City water, because the science and the technology that made chlorination possible simply hadn’t been invented yet. The maps and lenses and chemistry and units of measure that converged in the second half of the nineteenth century gave him a platform for the experiment, so much so that it is probably fair to assume that if Leal hadn’t brought chlorination to the mainstream, someone else would have done it within a decade, if not sooner. All of which leads to the question: If new ideas and new technology can make a new solution imaginable, the way the germ theory and the microscope triggered the idea of chemically treating water, then has there not been a sufficient supply of new ideas since Leal’s day that might trigger a new paradigm for keeping our cities clean, one that would bypass the big-engineering phase altogether? And perhaps that paradigm might be a leading indicator of a future that we’re all destined to share. The developing world has famously bypassed some of the laborious infrastructure of wired telephone lines, jumping ahead of more “advanced” economies by basing their communications around wireless connections instead. Could the same pattern play out with sewers?
In 2011, the Bill and Melinda Gates Foundation announced a competition to help spur a paradigm shift in the way we think about basic sanitation services. Memorably called the “Reinvent the Toilet Challenge,” the competition solicited designs for toilets that do not require a sewer connection or electricity and cost less than five cents per user per day. The winning entry was a toilet system from Caltech that uses photovoltaic cells to power an electrochemical reactor that treats human waste, producing clean water for flushing or irrigation and hydrogen that can be stored in fuel cells. The system is entirely self-contained; it has no need for an electrical grid, a sewer line, or a treatment facility. The only input the toilet requires, beyond sunlight and human waste, is simple table salt, which is oxidized to make chlorine to disinfect the water.
Those chlorine molecules might well be the only part of the toilet that John Leal would recognize, were he around to see it today. And that’s because the toilet depends on new ideas and technology that have become part of the adjacent possible in the twentieth century, tools that hopefully can allow us to bypass the costly, labor-intensive work of building giant infrastructure projects. Leal needed microscopes and chemistry and the germ theory to clean the water supply in Jersey City; the Caltech toilet needs hydrogen fuel cells, solar panels, and even lightweight, inexpensive computer chips to monitor and regulate the system.
Ironically, those microprocessors are themselves, in part, the by-product of the clean revolution. Computer chips are fantastically intricate creations—despite the fact that they are ultimately the product of human intelligence, their microscopic detail is almost impossible for us to comprehend. To measure them, we need to zoom down to the scale of micrometers, or microns: one-millionth of a meter. The width of a human hair is about a hundred microns. A single cell of your skin is about thirty microns. A cholera bacterium is about three microns across. The pathways and transistors through which electricity flows on a microchip—carrying those signals that represent the zeroes and ones of binary code—can be as small as one-tenth of a single micron. Manufacturing at this scale requires extraordinary robotics and laser tools; there’s no such thing as a hand-crafted microprocessor. But chip factories also require another kind of technology, one we don’t normally associate with the high-tech world: they need to be preposterously clean. A speck of household dust landing on one of these delicate silicon wafers would be comparable to Mount Everest landing in the streets of Manhattan.
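Those scale comparisons are easier to feel as ratios. Here is a small sketch, using the approximate figures just quoted and treating them as rough illustrations rather than precise measurements, that works out how many times larger each object is than the smallest feature on a chip.

```python
# Rough scale comparison using the approximate sizes quoted above,
# expressed in micrometers (microns). These are round illustrative figures.
sizes_um = {
    "width of a human hair": 100.0,
    "single skin cell": 30.0,
    "cholera bacterium": 3.0,
    "smallest chip feature": 0.1,
}

feature = sizes_um["smallest chip feature"]
for name, size in sizes_um.items():
    ratio = size / feature
    print(f"{name:24s} {size:7.1f} microns  ~{ratio:,.0f}x the smallest feature")
```

A dust particle tens of microns across is hundreds of times wider than the finest features it might land on, which is the mismatch the Everest-in-Manhattan image is reaching for.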
Bill Gates inspects the winning entry in the 2011 “Reinvent the Toilet Challenge.”
Environments such as the Texas Instruments microchip plant outside Austin, Texas, are some of the cleanest places on the planet. Even to enter the space, you have to don a full clean suit, your body covered head to toe in sterile materials that don’t shed. There’s something strangely inverted about the process. Normally when you find yourself dressing in such extreme protective outfits, you’re guarding yourself against some kind of hostile environment: severe cold, pathogens, the vacuum of space. But in the clean room, the suit is designed to protect the space from you. You are the pathogen, threatening the valuable resources of computer chips waiting to be born: your hair follicles and your epidermal layers and the mucus swarming around you. From the microchip’s point of view, every human being is Pig-Pen, a dust cloud of filth. Washing up before entering the clean room, you’re not even allowed to use soap, because most soaps have fragrances that give off potential contaminants. Even soap is too dirty for the clean room.
There is a strange symmetry to the clean room as well, one that brings us back to those first pioneers struggling to purify the drinking water of their cities: to Ellis Chesbrough, John Snow, John Leal. Producing microchips also requires large quantities of water, only this water is radically different from the water you drink from the tap. To avoid impurities, chip plants create pure H2O, water that has been filtered not only of any bacterial contaminants but also of all the minerals, salts, and random ions that make up normal filtered water. Stripped of all those extra “contaminants,” ultrapure water, as it is called, is the ideal solvent for microchips. But those missing elements also make ultrapure water undrinkable for humans; chug a glass of the stuff and it will start leaching minerals out of your body. This is the full circle of clean: some of the most brilliant ideas in science and engineering in the nineteenth century helped us purify water that was too dirty to drink. And now, a hundred and fifty years later, we’ve created water that’s too clean to drink.
The interior of Texas Instruments
Standing in the clean room, the mind naturally drifts back to the sewers that lie beneath our city streets, the two polar extremes of the history of clean. To build the modern world, we had to create an unimaginably repellent space, an underground river of filth, and cordon it off from everyday life. And at the same time, to make the digital revolution, we had to create a hyper-clean environment, and once again, cordon it off from everyday life. We never get to visit these environments, and so they retreat from our consciousness. We celebrate the things they make possible—towering skyscrapers and ever-more-powerful computers—but we don’t celebrate the sewers and the clean rooms themselves. Yet their achievements are everywhere around us.