CHAPTER 12 The Chicken Little Problem: Distant and Improbable Threats

An ominous video, originally released in 1999 as a VHS tape, features an all-black-clad Leonard Nimoy (of Spock fame) speaking portentously about the future:

“There is an ancient myth of what may have been the most highly advanced civilization ever to dwell on the planet Earth.… But the legend also ends suddenly, with the revelation that this entire ancient civilization vanished. That their great island sank into the sea because their technological innovations were too far ahead of their human judgments, human foresight, and simple human frailties. This legendary civilization was, of course, Atlantis.

“Yet the problem for us, in the year 1999, is that… we are now facing very real global issues related to power supply, satellite communications, water, health care, transportation, distribution of food, and other items vital to everyday human survival. These global issues are the direct result of an equally real human oversight many people now refer to as the Y2K or Year 2000 problem, which derives from the fact that billions of lines of computer code and embedded microchips that now run the very technologies we all depend upon may fail in the briefest moment between December 31, 1999, and January 1, 2000.

“So we recall the fate of Atlantis. The primary question for our civilization as we approach the year 2000 is this: Have we allowed our own highly advanced technological innovations to far outpace our human abilities to control those innovations, and most importantly, to foresee their ultimate consequences?”

As it turns out—spoiler coming—the Y2K bug did not end civilization on January 1, 2000. What did happen, though? Was civilization saved, or did it never need saving at all?

In this chapter, we’ll veer away from the kinds of problems we’ve spent most of our time studying—primarily recurring ones such as the dropout rate, homelessness, disease, and more. These problems aren’t mysterious: We can observe them directly, and we can measure their incidence. But now we’ll examine upstream efforts to address problems that are unpreventable (like hurricanes) or uncommon (like an IT network being hacked) or downright far-fetched (humanity being extinguished by new technologies).

Y2K was a one-off problem—a new kind of computer bug that humanity had never faced before and wouldn’t face again. John Koskinen was the man tasked with preventing the worst from happening. Koskinen had worked in the private sector turning around failed companies and, from 1994 to 1997, had been a senior leader at the Office of Management and Budget. Twenty-two months before the new millennium, in February 1998, Koskinen had accepted President Bill Clinton’s invitation to be the nation’s Y2K czar.
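The bug itself was mundane. To conserve what was once scarce memory, countless older programs stored years as just two digits, so any date arithmetic that spanned the rollover from “99” to “00” could silently go haywire. A minimal sketch of the failure mode, purely illustrative and not drawn from any actual system:

```python
def years_between(start_yy: int, end_yy: int) -> int:
    """Naive two-digit-year subtraction, the way many legacy programs did it."""
    return end_yy - start_yy

# An account opened in 1995, evaluated on December 31, 1999: works as intended.
print(years_between(95, 99))  # 4

# The same calculation one day later, when the stored year reads "00":
print(years_between(95, 0))   # -95 -- interest accruals, expiration checks, and
                              # scheduling logic built on this arithmetic all misfire
```

Multiply that defect across billions of lines of code and embedded chips, and the scale of the remediation job comes into focus.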

The Y2K czar role was a classic no-win job, as Koskinen knew. “If everything went smoothly, people would say: ‘What was that all about? What a waste of time and money.’ On the other hand, if everything were to go poorly, if the power went out, the stoplights didn’t work, the phones were dead, the financial systems quit functioning and the communications systems went dark, everyone would want to know: ‘What was the name of that guy who was in charge of preventing this?’ ”

With less than two years to go, and a small staff, Koskinen knew that he had no hope of fixing the government’s systems directly. All he could do was convene the right people, get them talking, and encourage them to share information. Early in his tenure, he organized 25 working groups, each reflecting a different sector of the economy: power companies, telecommunications, state and local governments, health care, and more. Each working group was led by a federal agency—the Department of Transportation, for instance, worked with airlines, railroads, truckers, and shipping companies.

A colleague had objected to this approach: Our job is to fix the Y2K bug in the federal government—not the entire American economy. Koskinen’s reply was “But you know if the federal systems all work and, come January first, the electrical grid fails, the first question everybody’s going to ask is: ‘What did you do to keep that from happening?’ And the answer can’t be, ‘It wasn’t my job.’ ”

The working groups had an inauspicious start. Many of the companies’ lawyers were concerned that, if their firms collaborated closely, they could be at risk of antitrust or liability suits. Koskinen’s team actually had to rush a law through Congress to address these concerns. Eventually, though, the groups began working effectively, sharing information freely.I Meanwhile, Koskinen had begun to appreciate that he was actually addressing not just a technical problem but a psychological one. Public panic was as much of a threat as the technical bugs.

Consider that, according to Koskinen, at any given time about 2% of ATMs aren’t working. They’re broken or out of money. But on January 1, 2000, a nonfunctioning ATM might be interpreted as a Y2K problem, fueling fear. One of the biggest concerns was the possibility of a bank run. If customers worried about not being able to get money, or if they worried about banks failing, they might start pulling out money in the weeks before the millennium. And if other customers saw that, they might worry, in turn: Those people are probably being paranoid, but I don’t want them taking all the money before I can get some, so I better make some withdrawals myself.

Given the fractional-reserve banking system in the US, in which a bank might keep only a small percentage of its assets available in cash, it wouldn’t take many paranoia-fueled withdrawals to exhaust a local bank’s supply. Just imagine the panic that would ensue once rumors started swirling that the bank was out of money. In this way, irrational fears of a bank failure could produce an actual bank failure. How seriously did the government take these fears? The Federal Reserve ordered $50 billion in new currency printed and added into circulation nationwide. That’s about $500 for every household in the United States.
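Here’s a back-of-the-envelope sketch of both the bank-run dynamic and that per-household figure; the reserve ratio, the deposit total, and the household count below are assumptions for illustration, not figures from Koskinen’s team:

```python
# Back-of-the-envelope sketch (assumed figures, for illustration only).

# Why a run feeds on itself: under fractional-reserve banking, only a small
# share of deposits sits in the vault as cash.
deposits = 100_000_000        # total deposits at a hypothetical local bank ($)
reserve_ratio = 0.10          # assumed share held as ready cash
cash_on_hand = deposits * reserve_ratio
print(cash_on_hand)           # 10,000,000 -- withdrawals totaling just 10% of
                              # deposits would empty the vault

# The Fed's hedge against that panic: $50 billion in fresh currency.
new_currency = 50_000_000_000
us_households_1999 = 100_000_000           # rough household count, assumed
print(new_currency / us_households_1999)   # 500.0 dollars per household
```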

In the months leading up to the new millennium, Koskinen grew increasingly certain that the Y2K bug would not cause major disruptions. His public communications and interviews were calm and confident. Still, on December 31, 1999, he was not anxiety-free. He worried about the situation globally—every country with IT systems was theoretically at risk from the Y2K bug, and the United States had become the de facto leader of the work internationally. Would some foreign country that had neglected its Y2K work see a critical system collapse? Such a visible failure—amplified into hysteria by the media—could be enough to spark panic-driven problems in the United States.

As the first day of the new millennium began, the earliest reports came in from New Zealand. One US journalist had flown there to report live on the air about whether his ATM card worked. It did. (Must have been a long flight back.) Koskinen’s team breathed a sigh of relief.

Koskinen held press conferences every four hours, and it was an uneventful day. Mostly. The Japanese experienced glitches in monitoring the safety of their nuclear plants. Later, the Defense Department lost touch for several hours with some intelligence satellites. The other issues were more minor: delayed paychecks, stalled payments, repeated charges on credit cards, and so on.

This example, from the final report of the Y2K team a few months later, captures well the day’s lack of drama: “Low-level Windshear Alert Systems (LLWAS) failed at New York, Tampa, Denver, Atlanta, Orlando, Chicago O’Hare, and St. Louis airports during the date rollover. The systems displayed an error message. Air transportation system specialists at each site were forced to reboot LLWAS computers to clear the error.” (Later, the screenplay about this incident, Forced to Reboot, was sold for zero dollars.)

The new millennium arrived. Civilization endured. People returned sheepishly to cities from their rental cabins in the woods.

As Koskinen had predicted, his team’s work went uncelebrated. “It was probably no later than forty-eight hours later that people were saying, ‘Well, that went pretty smoothly. Must not have been a problem,’ ” he said.

But might it be possible those skeptics were right—that the Y2K bug was never really much of a threat at all? Some observers, such as the Canadian computer-systems analyst David Robert Loblaw, had been saying that all along: “Planes will not fall out of the sky, elevators will not drop, governments will not collapse. The Year 2000 is going to arrive with a yawn.”

When his prediction was proven right, Loblaw took his victory lap. On January 6, 2000, he wrote a piece for the Globe and Mail headlined “You Got Conned and I Told You So.” “In fact, few systems actually depend on the calendar year, including some of those that were the source of so much hysteria, such as hydro and air-traffic control,” he wrote.

Many of the IT leaders who handled the Y2K preparation still get incensed when they hear it called a hoax. “The reason nothing happened is that a huge amount of work was done because people had made a huge amount of fuss,” said Martyn Thomas, who worked on Y2K-related issues from within the UK as a consultant and an international partner at (what was then) Deloitte & Touche. He considers the Y2K bug a near-miss—a catastrophe narrowly avoided thanks to a successful global mobilization of talent and energy.

Who’s right? It’s hard to know, though my own impression is that it was more of a near-miss than a hoax. This uncertainty is a frustrating aspect of upstream work, especially when you’re addressing a novel problem. With recurring problems, there’s less ambiguity. If there were 500 high school dropouts for 5 years in a row, and then you started a new program, and this year there were only 400 dropouts, then you can have some confidence that your work made an impact. But with Y2K, there’s just one data point: January 1, 2000. And, whether by virtue of fortune or preparation or both, it turned out to be no big deal.


Y2K was a situation where we prepared for disaster, and when disaster didn’t come, we questioned whether the preparations had been necessary. Think of the opposite scenario: You prepare for a disaster—and it’s incredibly destructive anyway. Do you conclude afterward that you blew the preparations, or do you decide that things could have been even worse if you hadn’t tried?

A real-world version of that scenario began in early 2004, when two disaster experts met in Washington, DC, for a discussion: Madhu Beriwal, the founder and CEO of Innovative Emergency Management (IEM), a private contractor that helps governments prepare for and respond to disasters, and Eric Tolbert, the director in charge of emergency response for the Federal Emergency Management Agency (FEMA).

Beriwal asked Tolbert, Out of all the disasters you’re considering, which one keeps you up at night? Tolbert replied: A catastrophic hurricane striking New Orleans.

It was the geography of New Orleans that spooked experts. The city rests below sea level and is situated between levees that keep at bay the waters of the Mississippi River and Lake Pontchartrain. Picture the city as sitting at the bottom of a bowl. If the levees were breached, water would rush into the city and stay there.

In the years after 9/11, FEMA’s primary focus had been on acts of terrorism, but Tolbert had been lobbying for money to develop plans for natural disasters. When a few million dollars was approved for that purpose in 2004, Beriwal’s company, IEM, got a contract for $800,000. The assignment: Create hurricane response plans for New Orleans and the surrounding region.

IEM created a planning exercise at breakneck speed, taking 53 days to complete a process that would ordinarily take much longer. Hurricane season was looming. For a week in July 2004 in Baton Rouge, IEM convened approximately 300 critical players, including representatives from FEMA, over 20 Louisiana state agencies, 13 parishes, the National Weather Service, over 15 federal agencies, volunteer groups, and state agencies from Mississippi and Alabama. (Surround the problem.) They were brought together to face Hurricane Pam, a simulation dreamed up by the IEM team.

“Born in the Atlantic Ocean, [Hurricane Pam] hits Puerto Rico and Hispaniola and Cuba, and it grows bigger as it moves through the warm waters of the Gulf of Mexico,” wrote Christopher Cooper and Robert Block of the Hurricane Pam simulation in Disaster: Hurricane Katrina and the Failure of Homeland Security, an indispensable account of how Katrina was handled. They continue:

Though there is plenty of time to flee, many residents along the Gulf Coast stay put. And just as predicted, this storm makes a straight track for the tiny camp town of Grand Isle, Louisiana, obliterates it, and moves north toward New Orleans. The hurricane moves upriver for nearly sixty miles, leaving catastrophe in its wake. It passes right over New Orleans, and as it does, the storm tilts nearby Lake Pontchartrain like a teacup and dumps it into the city. A quick rush of brackish water drenches New Orleans and leaves it sitting in as much as twenty feet of water. And then the hurricane is gone, and everything lies in ruins.

During the simulation in Baton Rouge, the participants formulated their responses in real time, breaking into subgroups according to their specialties: search and rescue, water drainage, temporary housing, triage centers, and more.

One of Hurricane Pam’s key organizers, Colonel Michael L. Brown,II had decreed that, in making their plans, there would be “no fairy dust,” as Cooper and Block wrote:

If a job called for 300 boats, participants would have to find those boats and not just wish them to exist. If planners needed fifteen semitrucks to haul generators to New Orleans, they had to identify where they would get them, or at least make a realistic guess at the source. “They were supposed to plan with the resources that were available or that could presumably be brought in,” said Beriwal. “They were not supposed to be thinking that magically 1,000 helicopters would show up and do this.”

After an intense and dramatic week of grappling with Hurricane Pam, the group had cobbled together a set of emergency-response plans: some richly detailed, some barely fleshed out. It was a start.

Thirteen months after the Hurricane Pam simulation, in late August 2005, Hurricane Katrina hit New Orleans. In her Senate testimony roughly five months after Katrina, Beriwal showed a chart comparing the simulation to the reality:

“HURRICANE PAM” DATA | ACTUAL RESULTS FROM HURRICANE KATRINA
20 inches of rain | 18 inches of rain
City of New Orleans under 10 to 20 feet of water | Up to 20 feet of flooding in some areas of New Orleans
Overtopping of levees | Levees breached
Over 55,000 in public shelters prior to landfall | Approximately 60,000 people in public shelters prior to landfall
Over 1.1 million Louisiana residents displaced | 1 million Gulf Coast residents displaced for the long-term; majority are Louisiana residents
786,359 people in Louisiana lose electricity at initial impact | 881,400 people in Louisiana reported to be without electricity the day after impact

The similarities are uncanny. So the obvious question: What in the world happened? How could you gather together exactly the right people, for the sake of rehearsing exactly the right scenario, and then, when the real thing happens a year later, the response is a failure?

“Failure” is understating it—the Katrina response was a national disgrace. Here’s an account by journalist Scott Gold of the scene at the Superdome, the stadium used as a shelter:

A 2-year-old girl slept in a pool of urine. Crack vials littered a restroom. Blood stained the walls next to vending machines smashed by teenagers. The Louisiana Superdome, once a mighty testament to architecture and ingenuity, became the biggest storm shelter in New Orleans the day before Katrina’s arrival Monday. About 16,000 people eventually settled in. By Wednesday, it had degenerated into horror.… “We pee on the floor. We are like animals,” said Taffany Smith, 25, as she cradled her 3-week-old son, Terry. In her right hand she carried a half-full bottle of formula provided by rescuers. Baby supplies are running low; one mother said she was given two diapers and told to scrape them off when they got dirty and use them again.

Here is where I am going to test your patience by asking you to consider how two dissonant ideas might both be true: First, that the disaster response for the people stranded in New Orleans was unspeakably bad, and second, that many thousands of lives were saved because of the planning that was sparked by Hurricane Pam. In short: Hurricane Katrina’s effects were terrible, and they could have been much worse.

Because there were two final rows in the chart that Beriwal showed the Senate—two rows that show the biggest points of difference between Hurricane Pam and Hurricane Katrina:

“HURRICANE PAM” DATA | ACTUAL RESULTS FROM HURRICANE KATRINA
Over 60,000 deaths | 1,100 deaths reported to date in Louisiana; over 3,000 still missing
36% evacuated prior to landfall | 80% to 90% evacuated prior to landfall

In 2019, Beriwal said of Hurricane Pam, “We predicted the consequences almost to the scientific bull’s-eye. One thing we got completely wrong was the number of deaths. Our projection was that somewhere over 60,000 people will die. And horrible as it is, the number of deaths was 1,700. So the difference between the two is contraflow.”III

“Contraflow” is an emergency traffic procedure in which all the lanes of a highway are temporarily switched to flow in the same direction. This sounds logical in theory: All traffic should flow out of a disaster area, after all. But imagine the complexity of reversing the direction of an interstate highway! Every entrance ramp headed the wrong way has to be blocked and monitored; the public has to be informed what’s happening; emergency crews have to be on hand to respond quickly to stranded vehicles so that they don’t create logjams. And what happens when the contraflow interstate hits the state border and must transition back to a regular-flow interstate? These issues may sound like logistical minutiae, but keep in mind: Beriwal is arguing that contraflow is the main reason that 1,700 people died in Katrina, not 60,000. The details were vital.

New Orleans had experimented with contraflow the prior year during Hurricane Ivan, a less powerful hurricane that hit the Gulf less than two months after the Hurricane Pam simulation. The process had been a fiasco. The highways clogged quickly, leaving some drivers stranded on elevated roadways for up to 12 hours. And then Ivan veered east, missing New Orleans. If it hadn’t, thousands of drivers—facing an interstate that had turned into a giant parking lot—might have had to leave their cars behind and seek shelter.

In response to the Hurricane Pam simulation—and the real-world failure with Ivan—the state had overhauled its contraflow plans. Some of the key lessons included tighter collaboration with officials from neighboring states and better communication with the public. For Katrina, the American Red Cross printed up 1.5 million maps explaining the contraflow process. Other improvements were more subtle: During Ivan, drivers were stopping frequently to ask cops questions, and the cops thought that they were helping by giving good answers. But those conversations were actually creating bottlenecks and contributing significantly to the traffic jam. For Katrina, the lesson was clear: no talking, wave ’em forward.

On Saturday, August 27, 2005, with Hurricane Katrina in the Gulf threatening New Orleans, Louisiana governor Kathleen Blanco ordered contraflow to begin at 4:00 p.m., and it continued nonstop for 25 hours before it was suspended. The traffic flows were far better than with Hurricane Ivan—the trip to Baton Rouge, usually a 1-hour drive, didn’t take longer than 3 hours throughout the contraflow period. The flow rate of cars—the number of vehicles per hour—was almost 70% higher than in rush-hour traffic, yet the cars moved steadily. In total, more than 1.2 million people were evacuated, with no significant delays.
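A rough sense of what that discipline bought: the 25-hour duration and the “almost 70% higher” flow rate come from the accounts above, while the baseline flow and vehicle occupancy below are assumed numbers, so treat this as nothing more than a back-of-the-envelope sketch:

```python
# Back-of-the-envelope evacuation throughput (baseline flow and occupancy are
# assumed; the 25 hours and ~70% improvement are from the accounts above).

rush_hour_flow = 3_600                    # assumed vehicles/hour leaving the city
contraflow_flow = rush_hour_flow * 1.7    # "almost 70% higher" during Katrina
hours = 25                                # contraflow ran nonstop for 25 hours

vehicles_out = contraflow_flow * hours
people_out = vehicles_out * 2.5           # assumed average occupancy per vehicle
print(int(vehicles_out), int(people_out)) # 153000 382500 on this one assumed
                                          # corridor -- repeat across the region's
                                          # routes and the 1.2 million figure is
                                          # within reach
```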

The Hurricane Pam simulation is a model example of upstream effort: convening the right people to discuss the right issue in advance of a problem. “The good thing is we know we made a difference,” said Ivor van Heerden, the former deputy director of Louisiana State University’s Hurricane Center and a participant in the Hurricane Pam simulation. “We know that we saved thousands of lives.”

It was the right idea, but unfortunately it was the only time all the major players came together. No single training, no matter how ingenious, is sufficient to prepare for a catastrophe. IEM, the contractor that invented Hurricane Pam, had planned multiple additional exercises in 2005 to push the work forward. “But in a breathtaking display of penny-wise planning,” the authors of Disaster wrote, “FEMA canceled most of the follow-up sessions scheduled for the first half of 2005, claiming it was unable to come up with money for the modest travel expenses its own employees would incur to attend. FEMA officials have since said that the shortfall amounted to less than $15,000.”

FEMA said no to $15,000. Congress ultimately approved more than $62 billion in supplemental spending for rebuilding the Gulf Coast areas demolished by Katrina. It’s the perfect illustration of our collective bias for downstream action. To be fair, no amount of preparation was going to stop the Gulf Coast from being damaged by a Category 5 hurricane. But the proportions are so out of whack: We micromanage thousands or millions in funds in situations where billions are at stake. Preparing for a major problem requires practice. In theory, that’s not complicated. What makes it complicated in reality is that this kind of practice runs contrary to the tunneling instinct discussed earlier in the book. Organizations are constantly dealing with urgent short-term problems. Planning for speculative future ones is, by definition, not urgent. As a result, it’s hard to convene people. It’s hard to get funds authorized. It’s hard to convince people to collaborate when hardship hasn’t forced them to.

Building a habit is one way to counteract this downstream bias. IT leaders, for instance, have learned that, when it comes to network security, the weakest links are often their colleagues. Phishing schemes—in which people are sent fraudulent emails that trick them into supplying personal information such as credit card numbers or passwords—have become common, implicated in 32% of the security breaches examined by the 2019 Verizon Data Breach Investigations Report. A cottage industry has sprung up to send fake phishing emails to employees in hopes of training them not to fall for the real attacks. (Sign of the times: There’s an industry for fraudulent fraudulence.)

Don Ringelestein, director of technology for West Aurora School District No. 129, in Illinois, was concerned about phishing attacks, so he accepted a free trial from a vendor called KnowBe4. In January 2017, he sent his first phishing test to the district’s staffers from a weird email address they’d never seen before. The email announced that a suspected security breach had happened earlier in the week and encouraged them to click a link to change their passwords. Ringelestein had frequently warned his staff about such schemes and figured most people would see through the scam. No: 29% of his colleagues clicked it.

“Surprised is one word. Panic was another,” he said of his reaction. Phishing is a particular concern in school districts because—beyond the value of the district’s financial data—the students’ personal data can be “pure” for identity theft purposes. A thief might use a student’s information for years to open up accounts before the student ever realizes there’s a problem, according to the FBI and others.

“There’s no way we can block all this email with hardware—there’s no hardware that will do it,” said Ringelestein. “So really the best way for us to close that last door—that last opportunity for phishing schemes to work—is to train our people.”

He began crafting emails that tempted his colleagues to click. A free Amazon Prime subscription, just for you—click here! A free drink from Starbucks—download this coupon! You’re way overdue on your E-ZPass toll charges—click to pay now! The click rate on that latter one was 27%, which was particularly discouraging, since Illinois doesn’t have E-ZPass. It’s I-Pass. (Had Ringelestein offered “free interns to grade student papers,” the click rate might have cleared 90%.…)

When someone clicks on one of these links, the system diverts the person to a screen where he or she is schooled about internet safety practices. Meanwhile, Ringelestein could monitor which staffers were clicking, and it soon became clear that there were some people on staff who were almost infinitely gullible. Even his least-creative efforts were sufficient to draw their clicks. Ringelestein would drop by their schools to discreetly offer a tutorial.
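KnowBe4’s actual platform is proprietary; what follows is only a minimal sketch of the bookkeeping such a system has to do: log who clicked which test, send clickers to a training page, compute the campaign’s click rate, and flag the repeat clickers who get an in-person tutorial. All names, URLs, and numbers here are invented:

```python
from collections import defaultdict

recipients = 1_000                 # assumed number of staffers who got the test email
clicks = defaultdict(list)         # staffer -> campaigns they clicked

def record_click(staffer: str, campaign: str) -> str:
    """Log the click, then divert the clicker to a (hypothetical) training page."""
    clicks[staffer].append(campaign)
    return "https://training.example.org/why-that-was-a-phish"

# Simulated results from one campaign.
for staffer in ["a.jones", "b.smith", "c.lee"]:
    record_click(staffer, "overdue-toll-notice")

click_rate = sum("overdue-toll-notice" in c for c in clicks.values()) / recipients
print(f"{click_rate:.1%}")         # 0.3% in this toy run

# Repeat clickers across campaigns -- the candidates for a desk-side tutorial.
repeat_clickers = [s for s, c in clicks.items() if len(c) >= 3]
print(repeat_clickers)             # [] so far
```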

For more than two years, Ringelestein has been testing and educating his colleagues, and they have slowly raised their guard. The disastrous 29% click rate on the first email has declined to averages of more like 5% in recent attempts.

It’s progress. And it’s intended to be generalized progress—the goal, in other words, is not to arm employees only against fake Starbucks promotions, but to boost their defenses against scams of many colors. If a West Aurora teacher got a suspicious phone call asking for sensitive information, they’d be on guard, Ringelestein hopes, even though the medium was different.

That’s the vision, too, for disaster preparedness. Emergency simulations aren’t supposed to be perfect predictions, just credible ones, and ideally the parties involved get multiple opportunities to practice, because they’re building knowledge and skills they will need in any emergency. When disaster strikes, they will already know the players. They’ll understand the linkages in the system. They’ll know where to go for resources. One person I interviewed, who’d been part of a community-wide preparedness event, said it well: “You don’t want to be exchanging business cards in the middle of an emergency.”

In these efforts to prepare for uncertain or unpredictable problems—like Y2K or hurricanes—we’re seeing familiar themes. An authority convenes the right players and aligns their focus. They escape their tunnels and surround the problem. And they try to make tweaks to the system—like improvements to contraflow—that will boost their readiness for the next disaster.

But now a much more difficult question: What if, for certain kinds of problems, being “prepared” isn’t good enough? What if avoiding a problem requires perfection?

Think again about Ringelestein’s colleagues, who started with a fool-me rate of 29% and improved through education to 5%. That’s a big change by behavioral standards. But is it enough? “Education doesn’t work when security depends on your weakest link,” said Bruce Schneier, a computer security expert, commenting generally on defense against hacking. In other words, if a hacker was dead set on breaking into West Aurora School District No. 129—or any other specific institution, for that matter—then the difference between 29% and 5% is immaterial. For many hacking purposes, you just need one open door. Just that one gullible person who will click on anything.
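Schneier’s point can be put in numbers. If an attacker targets every employee and needs only one click to get in, the chance of at least one success is 1 - (1 - p)^n, where p is the per-person click rate and n is the number of people targeted. A quick check, assuming clicks are independent and a district staff of 1,000 (both are assumptions):

```python
# Chance that at least one of n staffers clicks, given per-person click rate p.
# Assumes clicks are independent; the staff count of 1,000 is also an assumption.

def at_least_one_click(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for p in (0.29, 0.05):
    print(p, at_least_one_click(p, 1_000))
# 0.29 -> 1.0 (to within floating-point precision)
# 0.05 -> 1.0 (to within floating-point precision)
# Even at 5%, an attacker who targets everyone is all but guaranteed an open door.
```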

Nick Bostrom, a Swedish philosopher at the University of Oxford, contemplates whether technological innovation has left modern society on the verge of a similar kind of vulnerability—a situation in which the fate of everyone could hinge on a single bad break or bad actor. The context of his comments is mankind’s tendency to keep pushing for new innovations almost without regard for the consequences. Scientists and technologists rarely cross a formal threshold where they ask themselves, Should this thing be invented? If it can be invented, it will be. Curiosity and ambition and competitiveness push them forward, forward, forward. When it comes to innovation, there’s an accelerator but no brake.

Sometimes their discoveries are of immense value: antibiotics, say, or the smallpox vaccine. Other times, the inventions are a mixed bag: guns, the automobile, air conditioning, Twitter. We never really know in advance what these technologies will yield, whether they will be mostly good or mostly bad. We just fumble our way forward and deal with the consequences.

Bostrom conjured a metaphor for this fumbling-forward habit: Imagine that humanity is pulling balls out of a giant urn, where the balls represent inventions or technologies. The urn contains some white balls, which represent beneficial technologies like antibiotics, and some gray balls, which represent the mixed-blessing types. The point is: When we reach into the urn, we don’t know which color we’re going to draw. We just keep reaching in; it’s our compulsion. But what if one of those balls turns out to be catastrophic? In his paper “The Vulnerable World Hypothesis,” Bostrom considers whether there might be a black ball in the urn, representing a technology that will destroy the civilization that invents it.

Bostrom notes that we haven’t drawn a black ball so far, but “The reason is not that we have been particularly careful or wise in our technology policy. We have just been lucky.… Our civilization has a considerable ability to pick up balls, but no ability to put them back into the urn. We can invent but we cannot un-invent. Our strategy is to hope that there is no black ball.”
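The asymmetry in that passage, that we keep drawing and can never put a ball back, can be made concrete with a toy calculation. If each new technology carries even a tiny probability q of being a black ball, the chance of having drawn one creeps toward certainty as the draws pile up. The value of q below is purely illustrative, not an estimate:

```python
# Toy model of Bostrom's urn: each draw is a new technology; q is a purely
# illustrative chance that a given draw is a civilization-ending "black ball."
# Draws accumulate and nothing goes back into the urn.

q = 0.001   # 0.1% per technology -- an assumption chosen only to show the shape

for draws in (10, 100, 1_000, 5_000):
    p_black_so_far = 1 - (1 - q) ** draws
    print(draws, round(p_black_so_far, 3))
# 10 -> 0.010, 100 -> 0.095, 1000 -> 0.632, 5000 -> 0.993
# Luck on the early draws says nothing about the later ones, which is why
# "hope there is no black ball" is not much of a strategy.
```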

This black ball idea may sound absurdly sci-fi: a technology that can destroy civilization. But it’s hardly far-fetched: Bostrom contends that our civilization could be put at risk if we ever draw a ball from the urn that puts mass destruction in the hands of small groups. This is, in essence, the “ISIS with a nuclear weapon” scenario. It requires only two conditions: first, a set of actors who would welcome mass destruction, and second, a technology that makes mass destruction available to the masses. Does anyone doubt that the first condition holds? The presence of countless terrorist groups and school shooters and mass murderers provides convincing proof.

As for the second condition—mass destruction being available to the masses—Bostrom asks us to consider an alternate history in which nuclear weapons had not required the sophistication and resources of nation-states to develop. What if, instead, there had been “some really easy way to unleash the energy of the atom—say, by sending an electric current through a metal object placed between two sheets of glass.” If people could assemble a nuclear bomb with materials acquired from Home Depot, who doubts the disastrous consequences? Could it be one of our species’ luckiest breaks that nuclear weapons turned out to require a lot of money/expertise/resources to harness?

Bostrom’s point is that there’s no guarantee that we will continue to get lucky in the same way. Already, at this moment, there are DNA “printers” that allow companies to produce stretches of DNA quickly and cheaply for research purposes. Imagine if, someday, those DNA printers could be brought into the home—perhaps in the spirit of offering genetically tailored medicine—and someone could home-cook a copy of the 1918 Spanish flu. One human being could trigger the end for all of us.

We began the chapter with this quote from Leonard Nimoy: “So we recall the fate of Atlantis. The primary question for our civilization as we approach the year 2000 is this: Have we allowed our own highly advanced technological innovations to far outpace our human abilities to control those innovations, and most importantly, to foresee their ultimate consequences?” I’ll admit, when I first saw this video, in all of its cheesy synthesizered glory, there was nothing but mockery in my heart. Now, though, the smirk is gone. Spock might be right.

There’s a concept called “the prophet’s dilemma”: a prediction that prevents what it predicts from happening. A self-defeating prediction. What if Chicken Little’s warnings actually stopped the sky from falling? The Y2K bug was an example of the prophet’s dilemma. The warnings that the sky would fall triggered the very actions that kept the sky from falling. Maybe what society needs is a new generation of enlightened Chicken Littles. Not the conspiracy theorists who use hate to sell gold and vitamins. Not the fear-entrepreneurs using hysteria to hawk consulting services. But people like Bostrom, who founded the Future of Humanity Institute to attract interest in research about existential risks and humanity’s long-range future. Or writers like the computer security guru Bruce Schneier—quoted earlier about the “weakest link” problem in network security—whose book Click Here to Kill Everybody is essential reading for anyone involved in setting the policy or norms for networked technology.

And maybe we need to start building a system that can act on the warnings of these enlightened Chicken Littles. Does every inhabitant of Earth need access to a DNA printer? And should it be the companies that produce DNA printers that get to make that choice—and if not, whose should it be?

Believe it or not, we have a historical model that can provide some inspiration: an effort in which parties around the globe came together in the 1950s and 1960s to address an ambiguous scientific threat. The threat? The possibility of bringing back destructive alien life from a mission to the Moon. “Thousands of concerned citizens wrote NASA letters, worried that they were at risk from Moon germs,” wrote Michael Meltzer in his fascinating book When Biospheres Collide.

It might be tempting to mock these fears now, with the infallibility of hindsight, but this concern was no joke. We simply did not know what was on the Moon. And existential risk was in the air. It was the era of the Cold War, nuclear fallout shelters, biological warfare agents, the Cuban missile crisis, “duck and cover” exercises in schools. (Feeding the fears was a 1969 bestseller by Michael Crichton, The Andromeda Strain—released about two months before the Moon landing—which concerned a deadly alien organism brought back to Earth by a fallen satellite.)

In the 1950s, just before the launch of the USSR’s Sputnik program, a group of scientists began to warn of the dangers of contamination from space exploration. The scientists, including the biologist J. B. S. Haldane and the Nobel laureates Melvin Calvin and Joshua Lederberg, warned of two types of contamination: backward and forward. “Backward contamination” is the contamination of Earth by a returning spaceship—aka the Andromeda scenario—and “forward contamination” is the contamination of another planet with organisms from Earth. (We are in far upstream territory here.)

The interest in these issues sparked a new scientific field that Lederberg labeled “exobiology.” (It’s now called astrobiology.) “Exobiology profoundly influenced the way space exploration was conducted,” wrote the astronomer Caleb Scharf in Nautilus. “Strict protocols were developed for the sterilization of spacecraft, and for quarantines to restrict what they might bring back. NASA built clean rooms, and technicians swabbed and baked equipment before sealing it up for launch. Scientists got to work and hurriedly computed the acceptable risks for biological contamination of other worlds.”

When the Apollo astronauts came back from the Moon, they were immediately put into quarantine. To be clear, most scientists did not think the Moon was capable of supporting life. They weren’t unduly worried that the astronauts would bring back deadly Moon bugs. But, to their credit, they worried about what they didn’t know. Why take life-and-death chances in a domain (space travel) we barely understand? They put in place a number of obsessive protocols to try to protect against an improbable risk. Humanity wasn’t forced to do this; we did it voluntarily. Perhaps these were our first baby steps upstream to work collectively on the civilization-threatening problems we may face in the years ahead.

The person in charge of these efforts was a NASA employee called the Planetary Protection Officer (originally the Planetary Quarantine Officer). The office still exists; the Planetary Protection Officer in 2019 was Lisa Pratt. One of her predecessors, Catharine Conley, said something striking about the office’s history: “So far as I can tell, planetary protection is the first time in human history that humans as a global species decided to prevent damage before we were capable of doing something.”

May there be a second time.

I. Upstream game plan: First, soothe lawyers’ concerns about potential lawsuits. Second, save civilization.

II. This is not the Mike Brown of “Brownie, you’re doing a heckuva job” fame. Different Mike Brown. This Mike Brown’s wife is named Pam, and the simulation was named after her.

III. She’s citing 1,700 deaths, rather than the 1,100 deaths in her Senate testimony, because the toll grew as some of the people who’d been missing were confirmed dead.