Chapter 3

I Think Security Is Very Tight, but I’m Still Concerned

Protect this house.

—slogan of athletic apparel company Under Armour, introduced in 2003

It was not enough for America to go abroad on a mission of revenge. The homeland, as people now called it for the first time in the country’s history, had to prepare itself for war as well. Part of the humiliation of September 11 was the nagging feeling that the country should have seen it coming, that it should have done more ahead of time to prepare for the worst. How could the country’s intelligence agencies have missed the signs of impending disaster? How could airport security have allowed the hijackers to carry knives and box cutters onto the planes? Why didn’t the cockpits have reinforced doors, and why didn’t New York have a plan to rescue people from the rooftops of its two tallest buildings? There were certainly real intelligence failures to criticize. To mention just one, in July 2001 an FBI agent working out of Phoenix had alerted the bureau’s headquarters that an unusual number of possible al-Qaeda affiliates were attending commercial aviation schools in Arizona. No FBI managers saw the memo until after September 11, and nobody acted on the agent’s recommendations.[1] These failures should be kept in perspective, however: No terrorist had ever hijacked a commercial jet in order to crash it into a building, and no skyscraper in human history had ever collapsed for any reason. But the 9/11 Commission Report, the government’s official account and analysis of the attacks, dissected America’s security failings in obsessive, self-flagellating detail. It noted every failure of communication among the various air traffic control centers that knew of the multiple hijackings.[2] It anatomized the FBI’s and the CIA’s failures to share information on impending threats.[3] It quoted memos that were ignored, suspects who slipped out of the government’s view, and airport security screeners whose work was “marginal at best.”[4]

Congress declined to hold any government officials responsible for these failures, but the 9/11 Commission Report ended with a flurry of recommendations for securing the country against the follow-up attacks everyone was sure would be attempted in the future. These recommendations included using biometric technology to screen travelers trying to enter the country, making it harder to obtain birth certificates and driver’s licenses, instructing private-sector companies to make substantial investments in securing their own facilities, and mandating that airlines supply the government with the passenger information required to maintain national no-fly lists.[5] It was to be a comprehensive effort to harden the country from the inside. At the press conference introducing him as the first head of the brand-new Homeland Security Office (which would eventually become a cabinet-level department), the former Pennsylvania governor Tom Ridge said, “We will find something for every American to do.”[6]

Ridge was true to his word. Americans soon found themselves learning how to navigate complicated security procedures in all kinds of public spaces. The first really prominent exhibition of the country’s new lockdown mindset took place in New Orleans, where Super Bowl thirty-six was played on February 3, 2002. This was the first post-9/11 Super Bowl, and inside the stadium, the NFL put on an exuberant celebration of America at war. The pregame festivities began with Barry Manilow singing “Let Freedom Ring” from a stage at the fifty-yard line. As Patti LaBelle, Wynonna Judd, and James Ingram joined Manilow onstage, a pageant unfolded around them on the field. Police officers, firefighters, and members of the armed forces stood in a line holding dozens of American flags. A couple hundred women dressed in red, white, and blue Statue of Liberty costumes walked around on the Superdome’s proprietary blend of artificial turf (officially named Mardi Grass) in tight formations. Teenagers in football pads wheeled an enormous replica of the Liberty Bell onto the field, with dozens of children dressed as soldiers, doctors, cheerleaders, and first responders trailing behind them. They were followed, in turn, by a gospel choir. At the song’s climax, a little boy in head-to-toe camouflage hit the fake Liberty Bell with a mallet, turned to the camera, and waved.[7]

That was the first of four performances that preceded the game. Mary J. Blige and Marc Anthony sang “America the Beautiful.” Paul McCartney performed an original number called “Freedom,” which began with the words, “This is my right / A right given by God.” He sang in front of a ninety-foot-tall mural by Keith Haring depicting the Statue of Liberty. “Anyone tries to take it away,” he sang, “they’ll have to answer.” (One columnist was pleased to see that McCartney displayed “no hint of the naive peacenik of yore.”)[8] Finally, Mariah Carey sang the national anthem to the accompaniment of the Boston Pops Orchestra. Three American flags made prominent appearances during her performance. One was presented by a color guard comprising a Marine, a Port Authority police officer, one representative each of the NYPD and FDNY, and a sailor assigned to the USS Cole, a destroyer that had been attacked by al-Qaeda in October 2000. The second was held by soldiers reenacting the famous photograph of Marines raising a flag over Iwo Jima during World War II. The third was the tattered flag that had survived the attacks and been recovered from Ground Zero.

There was more. Before kickoff, Fox aired a video package in which players from the two participating teams, the St. Louis Rams and the New England Patriots, read famous quotations by former presidents. Kurt Warner, the Rams’ star quarterback, delivered one from John F. Kennedy: “A nation reveals itself not only by the men it produces but also by the men it honors, and the men it remembers.” Troy Brown, a wide receiver for the Patriots, was assigned Teddy Roosevelt: “We admire the man who embodies victorious effort.” These quotations were laid over video clips of Americana. A sepia-toned classroom of children reciting the Pledge of Allegiance gave way to a shot of a cheerful boy delivering the morning paper over a white picket fence. These scenes led into clips of soldiers, pilots, and lumberjacks, which alternated with images of Patriots and Rams finding glory on the field. As the play-by-play announcer Pat Summerall praised “the indelible spirit of Americans everywhere,” the video ended with a shot of a soldier returning home and sweeping his daughter into his arms.

The network maintained this militaristic bombast and apple-pie sentimentality throughout the actual game. The broadcast’s cutaway graphics and animations were studded with soldiers. Between plays during the game’s opening drive, Fox put up partially computer-generated clips in which real members of the armed forces, identified by name and rank and looking dead serious, pushed buttons on futuristic displays to bring up each team’s lineup. There was also the halftime show in which the Irish band U2 played through a few of its hits while the names of everyone killed on September 11 scrolled up a huge video board behind them, as though ascending to heaven. Even better, the on-field action lived up to the larger spectacle’s nationalistic packaging. When the underdog New England Patriots, who were led by the clean-cut quarterback Tom Brady and had insisted on being introduced as a team rather than as individuals, upset the Rams, American sports had its first post-9/11 fairy-tale ending, something it had been pining for ever since the New York Yankees had been denied a World Series win by just a single game in November 2001.

But the only reason those inside the Superdome and the millions watching at home could spend the evening taking in this celebration of freedom was that the neighborhood surrounding the stadium had been turned into one of the least free areas in the United States. The year 2002 was by no means the first time that the Super Bowl had been turned into a militaristic spectacle. In 1991, just over a week into America’s first Gulf War, tens of thousands of spectators had waved little flags and F-16s had flown over the stadium while Whitney Houston belted out a memorable rendition of the national anthem. But never before had the big game been protected by such an enormous, complex, and intrusive security apparatus. The 2002 Super Bowl was the first sporting event ever to be declared a National Special Security Event (NSSE) by the White House. This designation had previously been handed out to presidential inaugurations, national political conventions, general assemblies of the U.N., and, in 1999, the celebration of the fiftieth anniversary of NATO. NSSE status put the Secret Service in charge of the whole operation, which involved thousands of personnel from the Louisiana National Guard, the Louisiana State Police, and the New Orleans Police Department, among other agencies. The NFL consulted with the FBI, the Environmental Protection Agency, and the Centers for Disease Control and Prevention in the run-up to the game. The Federal Emergency Management Agency (FEMA), which became notorious a few years later for leaving thousands of New Orleanians stranded after Hurricane Katrina, oversaw the evacuation plan in case terrorists did manage to strike.[9] The mayor of New Orleans estimated that forty-eight different agencies were involved.[10]

If the inside of the Superdome was a football- and America-themed amusement park, the outside was fortified like a military base in enemy territory. The stadium was surrounded with concrete barriers and an eight-foot-tall fence, and National Guard soldiers carrying M-16 rifles patrolled the grounds along with bomb-sniffing dogs. Government agencies imposed a no-fly zone over the entire city, which meant there were no aerial shots of the stadium during the game—the skies were reserved for patrolling fighter jets.[11] The section of Interstate 10 that runs past the stadium was closed to trucks, and streets in the surrounding neighborhood were blocked off by Army vehicles. The teams’ players, coaches, and staff were all subjected to metal detectors, pat downs, and identity checks. For the thousands of vendors, reporters, and other workers who make an event like the Super Bowl tick, the security gauntlet also included background checks beforehand and photo identification tags that had to be worn around the neck at all times.[12] Despite the NFL’s assurances that spectators would enjoy a “seamless” game day experience,[13] the security process was expected to be so onerous that the stadium’s gates were opened at 12:30, five hours before kickoff, and fans were encouraged to show up as close to 12:30 as they could.[14] (The expectation that tens of thousands of people would spend the afternoon trapped in the Superdome may explain why the NFL staged so many pregame musical performances; those fans had paid extravagant sums for their tickets, and they expected to be entertained the whole time.) Fans were also instructed to arrive with as few personal possessions as possible. Small cameras and binoculars were allowed, but the cases in which they might be carried were prohibited, as were any cameras with lenses measuring larger than six inches. Electronic devices were strongly discouraged, as inspecting them would take a while and slow down the line. Large bags, novelty footballs, foam “We’re No. 1” hands, and banners and noisemakers of any kind were all confiscated at the gates.[15]

These security measures were impossible to miss. They were covered extensively by the press in the weeks leading up to the game, and they defined the experiences of those who attended in person. Yet government officials went to some lengths to suggest that there was a second network of invisible security measures in place as well. “There are some very interesting things being discussed,” a spokesperson for the governor of Louisiana said two months before the game, “but I cannot get into specifics.” The Secret Service echoed the sentiment. “Security at this Super Bowl will be like no other,” a spokesperson said. “I cannot comment on any specifics, but most of the measures we deploy, fans will not see.” Defense analysts working in the private sector helped reporters fill in these tantalizing blanks. “It can be as simple as Army guys in ninja suits, lurking on tall buildings,” one said. Others raised the possibility that the military would deploy a ring of portable anti-aircraft missile launchers on rooftops around the stadium. They warned, however, that while “such missiles could easily destroy smaller aircraft like a single-engine Cessna or a helicopter,” they might be insufficient to take down “a wide-body commercial jet whose pilots were on a suicide mission.” The problem with deploying larger arrays of more powerful weapons that could shoot down a passenger jet was that they would have to be stationed on city streets, where they would “stick out like a sore thumb.”[16]

There was a prominent element of fantasy to the whole exercise. None of the articles outlining these security procedures suggested that the government’s plans had been crafted in response to specific threats that actually existed. Instead, the Secret Service seemed to have made a list of every kind of attack that could occur and then deployed its personnel and technology with the intention of foiling every possible plot at once. The government “[had] not ruled out any possibility,” according to the NFL’s senior director of security. “We’re looking in 360 degrees,” one Secret Service chief said. “We’re not protecting against a threat; we’re protecting against all potential threats, whether it’s Genghis Khan and his horde, Islamic terrorists, or Bill Jim Bob.”[17] By game day, the government’s list of potential threats apparently included attacks that could have been lifted from a James Bond movie. One Patriots fan said the men she was with had been made to change the time on their watches to prove that they weren’t cleverly disguised bomb detonators. The only problem she had with this was that the same level of scrutiny had not been applied to women. “The women walked right in,” she said. “So if there are women terrorists out there, we could walk right in.” The same security that was supposed to make people feel safer about attending the game also emphasized all of the things that could go wrong. “I think security is very tight,” another fan said, “but I’m still concerned.”[18]

The woman’s concerns about female suicide bombers highlighted an insoluble contradiction at the core of America’s post-9/11 militarism: Every effort to make people feel safer by ramping up security also makes people more conscious of both real and imagined potential threats. Metal detectors, fenced-off city blocks, surveillance cameras mounted on lampposts, and camo-clad soldiers patrolling train stations with assault rifles all communicate a simple and unsettling message: Something bad might happen here. During the first decade of the war on terror, the United States built up internal fortifications the likes of which the country had never seen, including in many places where the realistic chances of a terrorist attack were essentially zero. These fortifications transformed many aspects of daily life, from commercial air travel and live entertainment to the experience of walking on a city street and the kinds of cars people bought. From a citizen’s perspective, the results of this security buildup were wasteful in every sense. Not only did it fail to make people safer—it made them feel less safe as well. The result was a built environment in which Americans increasingly viewed the people around them as potential threats.


Given the ticket prices, most people were never going to experience Super Bowl security in person. But around 40 percent of Americans took at least one commercial flight each year, and for millions of white-collar workers, air travel was one of the basic parts of doing the job.[19] For the airline industry, more than for any other sector of the economy, September 11 had been a disaster. The attacks had begun the moment the terrorists set foot on the curb outside the departures area. Airline employees had checked the terrorists into their flights and handed over their boarding passes, and private contractors working for the airports had waved them through security. The hijackers had used airline property to kill nearly three thousand people and destroy one of the most densely populated neighborhoods in the country.

In the months after September 11, passengers and crews alike were terrified to fly. Flight demand dropped by around a third and didn’t recover to pre-9/11 levels for two years, prompting a wave of bankruptcies, layoffs, and mergers that ultimately left around three-quarters of domestic flights in the hands of just four carriers.[20] One writer recalled taking his first post-9/11 flight into Reagan National Airport in Washington: “It was in the evening, already dark, and the grim voice of the pilot announced over the loudspeaker that no one was allowed to stand up or leave their seats for the last 30 minutes of the flight. Not even for the bathroom. No exception, he said repeatedly….Looking out the window at the lights of the city at night felt like a transgression.”[21] A flight attendant remembered that when she returned to work in late September, she and a colleague placed their own hands around their necks during takeoff, so that a hijacker sneaking up from behind wouldn’t be able to slit their throats.[22] If air travel was ever going to recover—and by the time the federal government approved a $15 billion bailout for the industry less than two weeks after the attacks, there were plenty who thought it never would—the whole experience, all the way from check-in to security, takeoff, and landing, would have to be transformed. Airports became theaters in which the government staged performances demonstrating its new mindset. For passengers, arriving at the intended destination now depended on how well they could read their new lines and play their new parts.

The immediate effects of September 11 on the country’s travel infrastructure were chaotic. Ticket prices plunged as airlines cut flights from their schedules and laid off workers.[23] Business associations canceled annual conventions because of the uncertainty surrounding whether attendees would be able to make it. Car rental companies were in disarray, their fleets scattered all over the country by travelers who drove home after the attacks stranded them at airports.[24] Security was managed by a scattershot network of private firms that varied from airport to airport, which meant there was no unified guidance to heed or standards to adopt as the industry tried to get back on its feet. Passengers found themselves at the mercy of screeners who didn’t know what they were supposed to be doing now, only that it was not whatever they had been doing before. Often the screeners just telegraphed the newfound gravity of their jobs by spending a lot of time scrutinizing every object in everyone’s bags. Half-baked suggestions came in from all directions. The secretary-general of Interpol wrote that airlines might need to consider “building a safe zone between the cockpit and the passengers,” failing to understand that for a struggling industry that needed to squeeze ticket revenue out of every available square foot of cabin space, this was a total nonstarter.[25]

With wait times skyrocketing and public confidence in the industry at historic lows, Congress passed the Aviation and Transportation Security Act in November 2001. The bill ended the state of affairs whereby individual airports and airlines hired private security firms to handle passenger screening. Going forward, airport security would be federalized, with the newly created Transportation Security Administration (TSA) overseeing the whole process (as well as security procedures for trains and intercity bus travel). Airports would now be divided into “sterile” and “non-sterile” zones. Passage into the sterile zone would require a trip through metal detectors, inspection of carry-on items by TSA agents, and X-ray examinations of all checked bags. Ticket counter agents were also required to pose two questions to anyone who wanted to check a bag: (1) “Have any of the items you’re traveling with been out of your immediate control since the time you packed them?” and (2) “Has anyone unknown to you asked you to carry an item on board the aircraft?” Standards for employment as a screener were minimal, requiring only that applicants speak English and “possess basic aptitudes and physical abilities,” including the ability to perceive colors.[26]

Over the following years, the TSA built a large and complex security apparatus on top of this foundation. Protective measures were removed from the larger program only on rare occasions; the normal procedure was to keep layering new safeguards on top of the ones that were already in place. In December 2001, a man named Richard Reid tried and failed to detonate explosives packed into the shoes he’d worn on a flight from Paris to Miami. The TSA responded by asking passengers to send their shoes through the X-ray machines along with their carry-on bags. By April 2003, pilots who volunteered were allowed to carry handguns on their flights, the TSA had begun training crew members on how to use firearms mid-flight, and cockpit doors had been reinforced. The government also deployed increased numbers of plainclothes air marshals who would sit on commercial flights with handguns concealed under their sport coats and watch for hijackers. In the summer of 2006, following the disruption of a plot to bomb North America–bound flights with liquid explosives concealed in brand-name plastic beverage containers, the TSA banned liquids, gels, and aerosols from carry-on luggage. When this new restriction prompted an outcry, the TSA modified its requirements, allowing non-solid hygiene products so long as they were held in containers of 3.4 ounces or less, and so long as those containers were held in a one-quart clear plastic bag with a zip top. The TSA added canine teams to its airport security program in 2008. In 2009, an al-Qaeda operative named Umar Farouk Abdulmutallab tried to bomb a Christmas Day flight to Detroit with explosives concealed in his underwear. From that point on, the TSA began subjecting passengers to full-body scans, which allowed screeners to perform a visual pat down on everyone looking to enter the gate area. Anyone unwilling to step inside the full-body scanner machines was allowed to request a physical pat down instead, but this did not always prevent tensions from boiling over. In 2010, a thirty-one-year-old libertarian blogger made the national news when he recorded himself telling a TSA agent, “If you touch my junk, I am going to have you arrested.” The TSA agent paused for a few seconds to consider this. “Actually,” he said, “we are going to have a supervisor here because of your statement.”[27] (There were also any number of reports that women had been inappropriately touched by TSA agents in the security line, but it was the male blogger’s story that got the most attention.)

The post-9/11 security program became so complex that some people worried about whether Americans had the brainpower to comply even if they wanted to. “The message is a pretty sophisticated one,” the chairman of the Business Travel Coalition said, “and it’s a lot for the average person who is traveling for the first time in months to remember.”[28] The TSA made some attempts to simplify the process, mounting a public information campaign around its “3-1-1” policy to help travelers understand the rules (three-ounce bottles in one clear plastic bag, with one bag allowed per person). But the rules changed so frequently, and were enforced in such a scattershot way, that airports became very tense places.

The airports themselves, as physical structures, were somewhat to blame for this. Airports take a very long time to build. They require huge amounts of land, which can be hard to come by in a metropolitan area large enough to justify having an airport in the first place. Complicated negotiations among city governments, the airlines whose planes will use the airport, and the vendors that will fill the retail space can drag on for years, even decades. The result is that new airports are often out of date by the time they open. When the 700,000-pound Boeing 747 jet was introduced in 1969, people had only just come to grips with the needs of the generation of jets that preceded it, and airports like Atlanta’s found themselves having to expand the runways and parking aprons of their brand-new facilities in order to accommodate the behemoths.[29] Similarly, America’s airports were not ready for post-9/11 security. In the fall of 2001, the newest airport in the country was Denver International, which opened in 1995. It was the country’s first completely new large airport in more than two decades, and as of this writing it remains America’s newest. With no space allocated for the crowds that built up as wait times increased, lines could stretch all the way back to the ticket counters. Body scanner machines took up so much more space than their metal-detector predecessors that some airports had to decrease the number of lines moving through the security area. Someone who had trouble taking off their shoes while standing could cause a bottleneck. All of this meant that passengers who were already more frightened of flying than they had been before September 11 were now also frustrated and angry when they got to the gate.

This was a problem for the TSA, because crowds of agitated passengers made the other elements of its security program more difficult to implement. The TSA wasn’t satisfied with searching people’s luggage, shoes, and toiletries. It also wanted to scrutinize passengers themselves—their body language, their gestures, the way they did or did not make eye contact when buying a ticket at the counter—in the hope of sussing out who might try to mount an attack even if they didn’t have a box cutter in the side pocket of their roller bag. For years before September 11, Massachusetts State Police had used various behavior detection techniques to identify people who might be transporting drugs through Logan International Airport in Boston. So in 2002, the department hired Rafi Ron, who had run security at Ben Gurion International Airport in Tel Aviv, and Paul Ekman, a social scientist who claimed to have developed a list of facial cues that indicated someone was lying. The TSA liked the results of the trial run, and so in 2005 it expanded the program to twelve other airports. The program was called SPOT, for Screening of Passengers by Observation Techniques. TSA agents selected for the program received four days of classroom instruction and three days of field training.[30]

The idea that criminals might reveal their intentions in advance of any wrongdoing began to take its place in pop culture with the 2002 film Minority Report, and by the end of the Bush administration it was inspiring TV shows. In the opening scene of Lie to Me, which premiered on Fox in 2009, Dr. Cal Lightman, an expert on body language and “microexpressions” who contracts out to various law enforcement agencies, interrogates a white supremacist and divines the location of a pipe bomb in less than two minutes, much to the chagrin of the skeptical FBI agents who watch the interview from behind a one-way mirror. “Emotion looks the same whether you’re a suburban housewife or a suicide bomber,” Lightman says. “The truth is written on all our faces.”[31] That’s almost exactly how the head of the TSA, Kip Hawley, justified the agency’s use of behavioral detection: “It doesn’t matter what race, ethnicity, age or whatever a person is. It’s got to do with the human condition, that humans express certain emotions unknown to them that you can detect.”[32]

But if everyone at the airport was stressed out, agitated, and angered by the difficulties of navigating the security line, then it wouldn’t be possible for behavioral detection agents to spot the one passenger who was agitated because he was on a suicide mission. As Hawley described it, the SPOT program picked out suspicious passengers on a relative basis rather than an absolute one. “We know what normal is,” he said. “We know what a hassled business traveler looks like. We know what somebody who’s just had a fight with their girlfriend or boyfriend or whatever looks like. We know what the normal experience is. So you build on top of that.” That meant the passenger who merited more focused scrutiny would not be the one whose behavior could be judged in a vacuum; it would be the one who behaved differently from everyone else, who demonstrated “deviations from baseline behavior,” as the TSA put it.[33] In this sense, the TSA had decided that behavioral detection aimed at large crowds was analogous, more or less, to a lie detector test, in which the interview subject is asked to provide truthful answers to ordinary questions about their life and job in order to establish a baseline against which to judge the answers to questions where the subject would be more likely to lie. This meant that the TSA needed to lower the emotional temperature of the airport as a whole. “We need to calm down the checkpoint,” Hawley told a group of aviation reporters. “If we can calm down the process so the baseline data is lower, then it makes it easier for the other to pop out.”[34] This need brought new changes to airport design. Terminals built a few years or more after September 11 allocated more space to the security area. Post-security retail stores went upscale, in an acknowledgment that shopping is a common way for Americans to relieve stress (if the shops could be seen from the security line itself, all the better—passengers might find it easier to remain calm if they could see the treat they would get once they made it through the line). The TSA installed blue lighting at a security checkpoint in Indianapolis International Airport, believing it would have a “calming effect.”[35]

Combined with the technological security measures, the SPOT program meant that for passengers hoping to make it through the airport with a minimum of hassle, it was not enough to simply not be someone who intended to commit terrorism. One also had to look and sound and act like someone who did not intend to commit terrorism. Airports became stages. Passengers did their best to follow the government’s script for how an innocent traveler behaved, and the TSA pounced on performances that weren’t up to the correct standard. The TSA was explicit about this. “You already know you’re not a threat,” the narrator said in an instructional video shown near airport checkpoints during the 2007 holiday season. “Show us by packing smart.” Packing “smart” meant folding the clothes in your suitcase and separating clothes and electronics into separate layers so that TSA agents could rifle through your things more quickly and efficiently. It also meant having your boarding pass and identification card already out and ready for inspection when you approached the checkpoint. The TSA video included a scene where an attractive white woman takes forever digging through her purse, annoying the agent at the kiosk as well as all of the delayed passengers stuck behind her in line.[36] Passengers were expected to make themselves easy to inspect, to be as transparent as possible, to help the government’s agents find out who they were and where they were going and what they were carrying with them. It wasn’t just that the TSA was going to invade your privacy—you were going to behave as though you liked it, too.

Passengers were also expected to take the process seriously. The slightest criticism of a TSA agent, the smallest hint of irritation at having to go through the body scanner rather than the metal detector, could be enough to have a supervisor called over to provide extra scrutiny. Jokes were totally out of the question, as a matter of official policy. In 2004, a young British woman joked to a TSA agent in Miami that she had “three bombs” in her luggage. When asked to repeat herself, she repeated the joke two more times, at which point she was arrested. “After what happened on 9/11, you cannot just come on flights and make these types of jokes,” a Miami police detective said. “They will not be tolerated and we have to take enforcement action.”[37] After another incident in which a French pilot joked about having explosives in his shoes, a TSA spokesperson emphasized to the media that “we have zero tolerance for these kinds of comments.”[38] For passengers who missed that particular press conference, signs were posted at airports around the country. A placard at LAX read, “ATTENTION! Making any jokes or statements during the screening process may be grounds for both criminal and civil penalties. All such matters will be taken seriously. We thank you for your restraint in this matter.”[39]

As you might expect, the No Joking signs and other security measures were terrific fodder for jokes. A theater group in Kansas City took to a popular outdoor shopping mall, posted signs reading No Joking Zone on one side and Thank You for Not Laughing on the other, silently stood guard, and watched people react. The New Yorker writer Calvin Trillin went on The Daily Show after the attempted shoe bombing and said, “There’s one Arab terrorist with a sense of humor, and he said, ‘I bet I can get them all to take their shoes off in airports.’ If the next one is called, because of his M.O., ‘the underwear bomber,’ you’ll know I’m on to something.”[40] Years later, when a real underwear bomber appeared in the form of Umar Farouk Abdulmutallab, Trillin wrote an article titled “Crystal Ball.” “Nobody has mentioned that I predicted this turn of events,” he wrote. “How many dead-on predictions does a person have to make to get a little credit around here?” The terrorists, he wrote, were “engaged in an elaborate scheme to embarrass us to death.”[41]

For those who didn’t stick to the script, however, the results could be quite serious. The British woman who joked about bombs in her luggage at the Miami airport was threatened with a fifteen-year prison sentence. A woman was charged with harassment after complaining about a lengthy interrogation at Honolulu International Airport,[42] a place where screeners referred to some of their co-workers as “Mexicutioners” because of their habit of flagging Hispanic passengers for questioning.[43] A mother trying to travel out of Reagan National Airport in Washington was detained and threatened with arrest after she spilled water from her son’s sippy cup. Air marshals shot an unmedicated bipolar man at Miami International Airport. An “agitated” passenger in Vancouver was cuffed and tased multiple times, and he died lying on the terminal floor. A woman in Phoenix on her way to a rehab center was late for a connecting flight, became angry with the gate attendant, and was then arrested and put in a holding cell, where she died.[44]

A ruthless utilitarian might argue that these deaths were worth it given the hundreds or thousands of other deaths that were prevented by thwarting terrorism, but there is little to no evidence that any of the post-9/11 security measures made air travel safer. In a 2010 report by the U.S. Government Accountability Office, the TSA said it was “not known if the SPOT program has ever resulted in the arrest of anyone who is a terrorist, or who was planning to engage in terrorist-related activity.”[45] Given the government’s eagerness to publicize every instance in which a terrorist plot was stopped in its tracks, it is safe to infer from that statement that the SPOT program never caught anybody. A Department of Homeland Security project manager testified before Congress that he wasn’t even sure that behavioral detection worked at all. “The research in this area is fairly immature,” he said. “We’re trying to establish whether there is something to detect.”[46] Passenger and baggage screeners have a similarly thin record of success. In 2003, a college student named Nathaniel Heatwole decided he would try to help the TSA identify the weaknesses in its security program. He repeatedly bought plane tickets and then packed box cutters, knives, and liquid bleach in his luggage, which he successfully smuggled through the checkpoint on each occasion. When he emailed the TSA to let them know, he was fined $500 and put on probation.[47] Not wanting to rely entirely on the public to find out how ineffective their screeners were, the TSA established its own unit of undercover testing agents, which it called the Red Team. In operations carried out at LAX and O’Hare, the Red Team tried to smuggle fake bombs through security. Seventy-five percent of them made it through unnoticed in Los Angeles, and 60 percent slipped through in Chicago.[48] In 2015, the TSA did even worse: In sixty-seven out of seventy cases, Red Team agents were able to get weapons through the checkpoints, a failure rate of 96 percent. That same report also found that despite spending more than half a billion dollars in six years to buy the latest technology for screening checked luggage, the TSA had failed to make any improvements in that area, either.[49]

In this light, people’s frustrations with airport security seemed more or less justified. If the TSA wasn’t using its scanners and X-ray machines to actually find weapons, and if its behavior detection agents weren’t spotting any terrorists—in short, if the government hadn’t managed to stop a single terrorist plot at its airport security checkpoints—then what was the point? Why not just let passengers’ relatives back into the gate area to greet or say goodbye to their loved ones? Why not let people keep their shoes on? Who cares if someone forgot and left a pair of scissors in their toiletry bag? After all, there was nothing to stop a terrorist from detonating a suicide vest while standing in the security line. Checkpoints were now the most crowded parts of the airport, and such an attack would cause almost as much terror as another midair hijacking.

Yet nothing like that had happened, which made one begin to suspect that the terrorist threat as a whole had been overhyped. People started to use the phrase “security theater” to describe the situation, the idea being that successfully staging the appearance of safety was a higher priority than actually making the airports secure. But this rationale doesn’t hold up under scrutiny, either. While public opinion polling is an inexact science at best, the evidence suggests that the post-9/11 security program did not make a meaningful difference in how people perceive the safety of air travel, a fact that might be explained by noting that security and safety are not the same thing.[50] Safety is the feeling that you are not threatened. Security is the feeling that you are threatened but that you are protected from the threat. That makes security a double-edged sword; it emphasizes the dangers people face as much as the steps taken to protect against them. Safety means living in a place where you feel that you can leave your doors unlocked when you leave the house or go to sleep. Security means you shell out for the best locks money can buy and spend a lot of time peering at the street through your blinds.


Along with the halftime show and the game itself, there is a third crucial element to the Super Bowl’s status as America’s biggest media spectacle: the commercials. A thirty-second slot during the big game is the most valuable advertising real estate in the world. For large corporations like Anheuser-Busch, Coca-Cola, and the Ford Motor Company, there is no better opportunity to remind everyone in the country that you are still on top. Upstart firms, meanwhile, can blow a whole year’s media budget on a single ad and go from total unknown to household name overnight. The Super Bowl of football is also the Super Bowl of American consumerism. In February 2002, thirty-second ad slots sold at an average price of $1.9 million.[51]

Advertisers found themselves in a complicated spot after September 11. On the day itself, the major news networks had all decided to forgo commercials entirely. Nobody wanted to hear about the stain-fighting power of Tide laundry detergent (or whatever) while they watched Manhattan burn. On the other hand, consumerism occupies an exalted place in American culture. No other country spends more on consumer goods, and nowhere else do people derive so much meaning from the things they buy. By the time news networks returned to regular commercial programming on September 15, 2001, it felt as if a welcome sense of normalcy was being reestablished. When the Super Bowl came around five months later, advertisers had their first big chance to grapple with how to sell things to people who were now at war.

The commercials that aired during Super Bowl thirty-six took a cautious approach, with only three directly addressing terrorism. The first saw New York City Mayor Rudy Giuliani thanking America for its support, after which the name of Monster.com, a job-seeking website, flashed briefly on the screen. The second, an ad for Budweiser beer, saw a group of beautiful horses pulling a wagon through the American landscape, traversing snow-covered fields and making their way through small towns before finally arriving in New York and respectfully bowing to the Statue of Liberty. The third was part of a government antidrug campaign and listed the amounts of money the September 11 hijackers spent on various parts of their plan: $3,000 for a fake ID, $2 for box cutters, $100 for phones, and so on. “Where do terrorists get their money?” the ad asked. “If you buy drugs, some of it might come from you.” That commercial was laughed out of the room, and the campaign was short-lived. It is indicative of the expectations Americans have for each year’s batch of ads that one national newspaper led its coverage of the commercials with a headline remarking on how few of them were explicitly pro-America: “Patriotism Barely Gets off the Bench.”[52] But this was something of a misreading. Post-9/11 anxieties could be felt in several of the ads, even those that didn’t wave a flag in your face and then slap a logo on it. The frontier myths that organized the country’s understanding of what the attacks meant were all over three commercials for Cadillac’s new lineup of cars and SUVs. In one, a man driving a vintage Cadillac convertible sits in gridlocked city traffic. Steam billows out of manhole covers, office drones trudge by in identical trench coats and fedoras, and except for the lipstick-red body of his car, everything in sight is the same shade of dull gray-brown. He looks around, fed up with the monotony of the grind, and drives out of the city as Led Zeppelin fades in on the soundtrack. Suddenly he is in the American Southwest, speeding through a wild landscape of cacti, mesas, and dusty two-lane highways. In the place where America’s pioneers found new lands in which to settle, abundant natural resources, and natives who could be dispossessed with a minimum of fuss, the driver finds brand-new Cadillacs. A blond woman speeds by him in a silver sedan, a guy at a roadside gas station fuels up his black SUV, and as he waits for a freight train to thunder past at a railroad crossing, a miraculous kind of transposition takes place, lifting him out of his old ride and into a 2003 convertible. By leaving the filthy, soul-destroying city behind and returning to the place where America forged its national identity, he is able to redeem his individuality, cast off the shackles of the past, and drive into the future behind the wheel of an XLR. The tagline at the end sat uneasily with the fact that every person in the commercial was white: “The legendary bloodline is about to boil.”

Consumerism served two purposes after September 11, one of them general and the other more specific. In general, the Bush administration just wanted Americans to spend as much money as they could. The dot-com crash had sent the country into a recession, and the attacks had thrown the stock market, along with America’s travel and tourism industries, into chaos. The tech-heavy Nasdaq stock exchange didn’t bottom out until late 2002, and one research institute estimated that the attacks cost the United States more than 650,000 jobs, with those losses concentrated in tourism, travel, and entertainment.[53] The government needed to stoke consumer demand by any means, and with the country experiencing its biggest surge of patriotic feeling since the first Gulf War, why not connect the two? The issue was so pressing that President Bush brought it up as early as the evening of September 11, 2001, trying to reassure Americans that the country remained “open for business.” Dick Cheney, in his inimitably unfeeling way, expressed the hope that Americans wouldn’t let the attacks “in any way throw off their normal level of [economic] activity.” Businesses themselves were also quick to yoke their efforts to the war everyone knew was coming. The Ad Council argued that patriotic consumption was well established in American history, reminding people that the council “was originally founded as the War Advertising Council during World War II in the aftermath of the bombings of Pearl Harbor.”[54] The Federal Reserve dropped interest rates by more than a full percentage point by the end of the year in the hope of spurring investment and spending, and rates wouldn’t return to their pre-9/11 levels until 2005. The Steve Madden shoe company started selling a line of sneakers called the Bravest. They were white trainers with a chunky sole and a rhinestone American flag on the side. They retailed for $49.95, and the company pledged to donate all proceeds to the families of dead firefighters.[55]

The results of these efforts were mixed. On the one hand, media commentators derided Bush and Cheney’s pleas for Americans to fight terrorism by opening their wallets as insensitive and crass, a bit of moneygrubbing when what people needed were appeals to the country’s loftiest democratic ideals. On the other hand, Bush and Cheney might have been more clear-eyed about what America stood for than the pundits. There could be no doubt, after all, that al-Qaeda hoped to strike at the America-led global economic order. Why else would they have attacked the two buildings that most symbolized the country’s financial power? In that light, why not ask Americans to do their part by consuming a little more?

The various charity initiatives that businesses supposedly undertook to support victims and their families also sometimes fell apart under scrutiny. In February 2002, it came out that although Steve Madden had generated more than half a million dollars in profit from its star-spangled sneakers, none of that money had actually found its way to the families of any dead firefighters. When journalists called up the company to ask where the money was going, Steve Madden pledged that 10 percent of those profits would go to the fire department. And when asked whether 10 percent wasn’t a rather stingy amount, the company’s CEO was defiant. “We have stockholders, so we walk the line between what is good for the stockholder and the company and doing these good deeds,” he said. “The most patriotic thing we can do is make money.”[56] Despite the Federal Reserve’s interest rate cuts, investment remained very sluggish for at least a year. By the end of 2002, the U.S. economy’s growth rate was an anemic 1.7 percent, and it remained below 3 percent through 2003.[57]

On the other hand, there is some evidence that while corporations failed to keep up their end of the bargain after September 11, American consumers went above and beyond in opening their wallets, which might have cushioned the economy against some of the worst effects of the attacks. Consumer spending from October through December 2001 was up 6 percent over the same period a year earlier, a massive increase.[58] To hear retailers tell the story, this could be at least partially explained by the fact that Americans just found it comforting to shop. A Walmart manager spoke to the media and said that “the day of the attacks, we had many people who were alone come into the store because they wanted to be around other people and have someone to talk to.”[59] And in addition to the camaraderie, there were deals. People jumped at the chance to finance new car purchases with no interest. General Motors made the first move, launching a patriotic advertising campaign offering 0 percent interest to “keep America rolling,” the slogan echoing the famous “Let’s roll” rallying cry that was thought to have launched the passengers’ effort to fight the hijackers on Flight 93. Ford soon followed suit, promising to “do their part to keep America moving forward” by offering the same interest-free financing. Thirty-year mortgage rates also fell sharply, reaching an all-time low of 5.26 percent by the middle of 2003, helping to fuel a boom in new home purchases and construction (we’ll discuss some of the consequences of this housing boom in a later chapter).[60] During World War II, Americans were asked to help the war effort by saving their money, restricting personal consumption, and forgoing certain luxuries they’d become accustomed to regularly buying. During the war on terror, they were asked to spend up to and then beyond their available means. In both cases, they answered the call.

Consumerism’s second purpose in the post-9/11 period was to encourage Americans to open their wallets and buy things that made them feel more prepared to confront the dangerous, violent world in which they now lived. Gun sales increased, with the FBI announcing that background checks for gun purchases had increased by nearly half a million over the prior year. Alighting on the angle that was the most likely to frighten his readers, a New York Times columnist reported that even northeastern college feminists were scooping up firearms. “A generation ago, women here at Mount Holyoke College defied convention by burning bras and moving in with boyfriends,” Nicholas Kristof wrote in March 2002. “These days, some women here are shocking the campus by embracing something even more dangerous than men—guns.”[61] Gun manufacturers knew full well, however, that the majority of their clientele were not university radicals. Beretta advertised a new nine-millimeter pistol called United We Stand that included an attractive wooden finish on the grip and an American flag etched onto the side, and Tromix won some notoriety for itself by previewing a new .50-caliber rifle called the Turban Chaser (it’s unclear whether the gun was ever actually sold).[62] The media reported on this increase with trepidation, claiming that the sales increase couldn’t be explained away as the country’s long-standing gun owners rounding out their arsenals with an AR-15. “This is different,” one Los Angeles shooting range worker said. “The mindset of people has changed.” Gun store owners around Southern California said that the majority of first-time permit seekers were women, and that these women were also buying up gas masks so as to protect their families from anthrax attacks.[63]

Changing mindsets among women were also thought to be a driving force behind the most dramatic shift in post-9/11 consumer habits. By 2003, for the first time, data indicated that women were the main decision makers behind more than half of all car purchases in the United States, and market research had demonstrated for years that for women the ability to sit high above the road was an important factor in deciding what to buy. Auto manufacturers began to cater more to women’s preferences than they had in the past, with Jeep designing a new SUV specifically to keep women happy. “The real key for women was to sit high,” a Jeep executive said. “The new Jeep Liberty feels like I’m driving around on stilts, and there’s a very important marketing reason for that.”[64]

The notion that women were gravitating toward SUVs because they made them feel safe dovetailed nicely with the rise of another figure in the run-up to the 2004 presidential election between George Bush and Senator John Kerry: the so-called security mom. Her vague outline first appeared as the United States invaded Iraq, with The Washington Post, Washington Monthly, Philadelphia Inquirer, Time magazine, and other publications all running articles claiming that women were more concerned than men about the possibility of future terrorist attacks.[65] And she officially stepped out in June 2003, when Time ran a cover story headlined “Goodbye, Soccer Mom. Hello, Security Mom”:

She used to say she would never allow a gun in her house, but now she feels better if her airline pilot has one. She wanted a nuclear freeze in the 1980s and was a deficit hawk in the 1990s, but she now believes the Pentagon should have whatever it wants. Her civil liberties seem less important than they used to, especially compared with keeping her children safe.

One of the security moms interviewed for the article had voted for Bill Clinton twice, prioritizing liberal policies on abortion and welfare spending in choosing her favored candidates. That was all over. “Since 9/11,” she said, “all I want in a President is a person who is strong.”[66] There’s evidence that the security mom was more of a media invention than a real phenomenon, but she exerted a strong cultural influence during the Bush administration. Picture the security mom in your mind, and you are likely to see an affluent suburban woman. She is competent, trim, and well put together, and she is sitting behind the wheel of the biggest passenger vehicle you’ve ever seen. Between September 11 and the election of Donald Trump, SUVs and pickup trucks became the most popular cars in the country.

The shift from station wagons and minivans to the SUV as the car of choice for suburban families was already under way before September 11, but the war on terror accelerated that shift and made it stick. The SUV got its start in World War II, when the Army decided it wanted a small truck that could transport troops plus a heavy machine gun. The “jeep” (as soldiers eventually nicknamed the vehicle) fit their needs, and the Army ordered half a million of them. Attempts to market jeeps to civilians after the war ended were unsuccessful, but in the mid-1980s a group of researchers at Ford decided the time was right for a four-door SUV that would be marketed to families rather than tradesmen, people who could get by just fine with a minivan or station wagon but would be happier if their car didn’t immediately identify them as suburban drones with yammering children hanging from their shirtsleeves. The first Explorer rolled off the factory line in 1990. Because these cars would be marketed primarily to people living in urban and suburban areas, most of the features that distinguished SUVs from minivans were totally unnecessary. As one marketing executive at Jeep said, “All of the SUV market was psychological, there was no actual customer need for four-wheel drive.”[67] But that wasn’t the point. By catering directly to the collective midlife crisis of the country’s affluent boomer parents, the Ford Explorer launched a revolution in how Americans got themselves from place to place. The rest of the country’s automakers spent the rest of the decade trying to catch up.

September 11 put the SUV into a new, even more profitable context. Its appeal no longer had to rely on perpetually unfulfilled fantasies of using your vacation days to tame the wilderness instead of visiting the in-laws. Now the world was a dangerous place where terrorists could strike without warning, and SUVs, because they were taller and heavier than anything else on the road, made people feel secure. The vehicles’ military origins returned to the foreground—this was the car to drive when the homeland was a potential war zone. An arms race began, with SUVs and then pickup trucks growing in size, putting on weight, and sporting ever more intimidating and aggressive designs with each passing year. In 2001, a Ford Explorer weighed between thirty-eight hundred and forty-one hundred pounds, depending on which features were included. By 2010, the lightest Explorer available for purchase weighed almost forty-five hundred pounds, and by 2015 the heaviest came in at nearly forty-nine hundred. Meanwhile, the Ford F-150, the bestselling truck in the country, ballooned to around fifty-five hundred pounds by 2010. A glance at how the design of these vehicles changed over that period makes the appeal obvious: They look more like combat vehicles with each passing year. Their grilles got taller and more imposing, and what had been sinuous curves at the end of the 1990s hardened into sharp angles.

The advertising used to sell these behemoths only amplified the impression given by the vehicles themselves: that America’s streets and parking lots were battlefields. In 2014, Ford described the body of its new F-150 model as “military grade” and guaranteed its “toughness” based on the “torture-testing” to which it had been subjected.[68] Market research repeatedly confirmed that people liked SUVs because they thought they needed protection from the outside world. “The world is becoming a harder and more violent place to live, so we wrap ourselves with the big vehicles,” said one SUV owner in California. Another said, “It gives you a barrier, makes you feel less threatened.”[69] Another study even found that drivers liked how high SUVs sat up off the ground because “it’s easier to see if someone is hiding underneath or lurking behind it.”[70] SUVs and pickups were not just family automobiles but urban assault vehicles, fortresses, survival tools. Their model names often recalled the glory days of the militarized frontier: Trailblazer, Defender, Pathfinder, Warrior, Cherokee, Navajo, Tahoe, Yukon, and so on. The vehicle that went the furthest in embracing the SUV’s military styling was the Hummer H2, which weighed around sixty-five hundred pounds and used up a gallon of gas every twelve miles. With ads touting its “military-derived DNA,” it became the bestselling large luxury SUV in the country.[71]

SUVs were controversial throughout their ascendancy, pitting those who found their appeal to be self-evident against those who thought they symbolized an unappealing and specifically American mix of arrogance, wastefulness, and insecurity. Perceptions of increased safety were central to SUVs’ appeal as the category took off in the 1990s, but just as the purported benefits of tightened airport security tended to evaporate under scrutiny, it soon became clear that SUVs were significantly less safe than ordinary cars. The fact that they sat so high off the ground made it easy for drivers to see the roads around them, but it also gave the vehicles such high centers of gravity that they sometimes rolled over while making turns and killed their occupants. That problem was eventually addressed, but lower rollover rates could not change the fact that while SUVs might have offered safety to their occupants, they were extremely dangerous to anyone who wasn’t sitting inside them. Because women tend to drive smaller cars than men, they are now seriously injured in car wrecks at higher rates than the men piloting the equivalents of small tanks from their homes to the golf course.[72] The tall, squared-off grilles that have become de rigueur in SUV design also produce enormous blind spots directly in front. One local TV station conducted an investigation and discovered that the blind spot in front of a 2019 Cadillac Escalade was more than ten feet long. To translate that into the number of people who might be killed because of a design choice, the reporter put twelve seated children in a line directly in front of the car before the driver could see the top of the thirteenth child’s head.[73] SUVs also nudged the people sitting inside them toward more dangerous driving behavior. A 2017 study found that men and women were both more likely to skip their seat belts, run red lights, and look at their phones while behind the wheel of an SUV than while driving a smaller car.[74] For some people, the only way to feel safe with so many of these monsters careening around the streets was to give in and buy a Suburban or F-150 of their own. One social worker from Maine told a reporter that while he’d prefer to drive a “compact car,” he has post-traumatic stress disorder from a prior crash, so he drives a big truck, too. “Everyone else in my town does,” he said, “and it’s the only way I can get around without feeling like I’m gonna die.”[75]

The most controversial thing about SUVs and pickups during the early years of the war on terror, however, was their fuel efficiency. Poor gas mileage has been a signature feature of SUVs since the 1970s, when their designation as light trucks instead of passenger vehicles allowed them to skirt congressional fuel efficiency standards. By the early twenty-first century, that regulatory regime had been significantly weakened, and SUVs and pickups could use up all the fuel they wanted. In 2005, a Ford Explorer got fifteen miles to the gallon, a Chevrolet Tahoe got sixteen, and the SUV/pickup hybrid version of the Cadillac Escalade managed just thirteen.[76]

As the United States launched an aggressive war in Iraq that many of its critics believed was motivated by the Bush administration’s desire to secure American dominance of some of the Middle East’s largest oil reserves, the country’s growing embrace of SUVs came to look like the manifestation of some collective death wish. Everyone knew that America’s dependence on foreign oil made it vulnerable to the political vicissitudes of the Middle East, and Al Gore’s 2006 documentary film and book, both titled An Inconvenient Truth, had put the dangers of climate change in the national spotlight for the first time in years. Why, then, was America doubling down on its love affair with these extravagantly wasteful machines?

It didn’t make sense, and protests broke out at various points, both online and on the streets. The Hummer was the favored target, giving rise to websites such as StopSUVs.org and FUH2.com. In August 2003, someone from the radical environmentalist group Earth Liberation Front went further, breaking into a Hummer dealership in the middle of the night and torching twenty new H2s, spray-painting “gross polluter” and “fat, lazy Americans” on many of the burned-out wrecks for good measure. That was a step too far for some of the more prominent critics of SUVs. “What these people are doing isn’t activism—it is vandalism, and I strongly oppose it,” said the media mogul Arianna Huffington, who was running for governor of California at the time. The dealership owner, a man named Ziad Alhassen, said he would not be deterred by the “terrorist acts.” “They burn gas—so what?” Alhassen’s lawyer added. “That’s not the way to look at it. A lot of people buy these cars because they’re safe. A lot of women buy them for that reason. They’ve got a patriotic feel to them, especially after 9/11.”[77]

The key sentence there is “They burn gas—so what?” SUVs appealed to some consumers because they were wasteful, not in spite of it. Americans had made a collective promise that September 11 wouldn’t be allowed to change their “way of life,” and part of that way of life was uninhibited, unembarrassed consumption of material goods. SUVs were not just expensive and eye-catching toys guaranteed to turn heads as you drove down the street. They were consumers in their own right, chugging down gasoline in utter defiance of the geopolitical vulnerabilities that America’s dependence on foreign oil created. Anyone who had a problem with that could take it up with the U.S. military, whose own Hummers had taken up defensive positions around the oil fields of Iraq. SUV advertisers knew full well that excess was part of the appeal. “OK, it’s massively over-engineered for the school run,” one Jeep ad conceded. “And the problem with that is what, precisely?”[78] An ad for the H2 read, “Excessive. In a Rome at the height of its power sort of way.” That text accompanied a close-up picture of the vehicle from directly in front of its left headlight, as though the cameraman had just managed to get the shot off before being run over.[79]


Super Bowl security wasn’t content to remain inside and around the arena. According to the logic of the war on terror, victory required more than hunkering down and sealing the borders; the country also needed to project force out into the world. In a similar way, the security apparatus that protected the big game each year slowly expanded out beyond the turnstiles and parking lot and into the neighborhoods of the country’s cities.

It started somewhat modestly. For Super Bowl thirty-six in New Orleans, the government established an eight-mile no-fly zone over the city, shut down a stretch of the interstate that ran by the stadium, and barricaded the streets in the surrounding neighborhoods. This was a major inconvenience to New Orleanians, albeit a temporary one; once the Patriots finished celebrating on Bourbon Street and flew back to Boston for their parade, the skies and the highways reopened. With each passing year, however, the security program expanded. By the beginning of the following season, local authorities in Baltimore were running evacuation and triage drills in case someone released chemical weapons into the air at the Ravens game. “We have never done anything on this scale before,” a hospital spokesperson said. “After September 11 this has all taken on more of a sense of importance.”[80] In 2005, Coast Guard ships patrolled the waters off Jacksonville, Florida, in preparation for Super Bowl thirty-nine. Jet Skis were banned from the St. Johns River, and divers inspected the undersides of five cruise ships docked at the city’s port.[81] The year after that, authorities imposed a thirty-mile no-fly zone over Detroit, which stretched across the border into Canada, and brought in some ten thousand people to protect Ford Field. In 2010, thirty-three federal agencies were involved in security preparations for Super Bowl forty-four in Miami, which was “eight more than the previous Super Bowl.”[82] In 2011, Indianapolis installed locking manhole covers throughout the city, at a cost of $2,000 each, in advance of Super Bowl forty-six.[83]

City governments compete to host the Super Bowl just as hard as NFL teams compete to play in it. It is an unjustified but persistent article of faith among municipal officials and “pro-growth” lobbyists that hosting a Super Bowl will generate jobs, bolster people’s sense of civic pride, and make a city more attractive to tourists picking out a destination for next year’s vacation. The NFL has a notoriously long and detailed list of requirements for what cities must provide to the league in exchange for the privilege of hosting the year’s biggest game. Some of these requirements are nothing more than demands for handouts. For Super Bowl fifty-two in 2018, for example, the NFL’s list of demands included “the reservation of three (3) top quality 18-hole golf courses, at the same site or in close proximity to one another, for use by the NFL Foundation Golf Classic”; “the reservation of up to two (2) top quality bowling venues at no rental cost for use by NFL Foundation”; the reservation of “all convention centers, arenas, and concert sites in the Host Community with one thousand (1,000) or more seats”; and a commitment from the city’s fire and building departments that any requests for help from the NFL would take “top priority.”[84]

This orgy of taxpayer-funded party favors comes to an end once the game clock hits zero each year, but requirements pertaining to security can be longer lasting. The NFL now requires that new stadiums built for its teams meet the league’s standards for hosting a Super Bowl at some point down the road, and the security measures demanded by the league are extensive. Cities must be able to establish and enforce “clean zones” around the stadium and other key downtown areas in which only officially licensed NFL merchandise will be sold. In Jacksonville, city officials hired private security firms to install “approximately 100 VPN encrypted video cameras throughout the city,” a system “designed to ‘expand,’ ‘stay for decades,’ and ‘go beyond the Super Bowl for other needs.’ ”[85] Facial-recognition cameras and surveillance programs that can track specific license plates as they move around a city have also been installed at the NFL’s behest. In order to host the Super Bowl in 2018, Minneapolis had to confirm that its police department had the resources and equipment to establish a “hardened security perimeter,” including concrete barriers, around the entirety of the stadium.[86] And by requiring cities to have all of this equipment ready and on hand for the Super Bowl, the NFL tacitly encourages the cities where its teams play to keep using it for years once the big game has pulled up stakes and left town. By militarizing the biggest game on its annual calendar, the NFL has helped to militarize public space around the country.


For a decade after September 11, no public space carried more symbolic weight than Ground Zero. After the Alfred P. Murrah Federal Building in Oklahoma City was bombed by two white supremacists in 1995, the government demolished the building’s remains and built a memorial in its place. What had been destroyed would be remembered but not rebuilt. That was never an option for New York. A sixteen-acre plot of land in one of the densest neighborhoods of the country’s largest city could not just be left empty—the humiliation would be too great. The rebuilding effort turned the World Trade Center complex into the most publicized and most hotly debated construction project of the twenty-first century. People believed, in the words of one anthropologist who wrote a book about the process, “that whatever we decided to build here would reveal nothing less than what makes us American.”[87]

There seemed to be no limit to the number of interest groups with strong views on what should be built and how. There was Larry Silverstein, the developer who had acquired a ninety-nine-year lease on the Twin Towers in July 2001 for $3.2 billion, which was then the largest real estate deal in the history of New York. He needed whatever replaced the towers to make him money. He agreed that a memorial should be part of the rebuilt site, but that wasn’t his focus. Less than two weeks after the attacks, he gave a press conference at which he announced that the Twin Towers would be replaced by quadruplets: four towers, fifty stories each. Thus would all of the lost office space be replaced without creating attractive new targets for al-Qaeda to attack.[88] There were also the families of the firefighters and police officers who had died in the failed rescue attempts on September 11. The buildings collapsed with such force that many people’s remains had never been recovered. In the eyes of some families, that meant the complex needed to be recognized as a grave site in perpetuity—they did not want the final resting place of their loved ones to be redeveloped as a hotbed of commercial activity. City and state officials hoped to mediate these conflicts as quickly and efficiently as possible. The last thing they needed was for the rebuilding process to get swallowed up in bureaucratic labyrinths of red tape, zoning laws, and political bickering. And there was also everyone else in America. They’d been attacked by al-Qaeda as well, albeit not physically, and they wanted a say in how the country responded. When the Lower Manhattan Development Corporation held an open competition for 9/11 memorial designs, it received more than five thousand entries, including submissions from forty-nine states and sixty-three nations.

New York’s governor, George Pataki, eventually forged an uneasy truce among the project’s various power brokers. David Childs, chairman of the architectural firm Skidmore, Owings & Merrill, would be in charge of designing the “Freedom Tower,” a 1,776-foot-tall skyscraper that would replace the Twin Towers as the climax of lower Manhattan’s skyline. Larry Silverstein would retain the rights to build several office towers on the site, but the Freedom Tower itself—which later took the official name One World Trade Center—would belong to the Port Authority of New York and New Jersey, which had a long-standing presence at the WTC complex. Daniel Libeskind, the architect who had won the competition to design the master plan for the sixteen-acre site, would “meaningfully collaborate” with Silverstein and Childs, and he would also continue to be one of the public faces of the project, the man who lent the whole endeavor the artistic credibility it needed to avoid being seen as totally dominated by commercial concerns.[89] The Israeli American architect Michael Arad would design the memorial, two enormous pools outlining the Twin Towers’ vanished footprints, with water cascading down their vertical gray walls into an abyss. The first responders’ families won a victory as well. Since their loved ones had run into the buildings while everyone else had tried to run out, they thought that firefighters, police officers, and medical personnel should receive special recognition. The media treated them with a mixture of sympathy and irritation; one day they were victims whom the city was duty-bound to support in perpetuity, the next, people who stubbornly refused to let any other considerations intrude upon the wild intensity of their grieving. Arad’s memorial design eventually wound up with all of the victims’ names inscribed in the bronze parapets surrounding the pools, but he agreed that first responders would be grouped and identified by their unit numbers.

This plan was vastly scaled down from Libeskind’s original ambitions. In the design he’d submitted to the competition, the Freedom Tower would be topped by an enormous, twisting spire that shot up out of the place where the building’s west side joined the roof, a pointed symbolic echo of Lady Liberty holding her torch aloft out in New York Harbor. That design was scrapped due to the cost, and Childs’s reworked tower was a glass-sheathed, corporate banality, unremarkable in every way except for its size. But it was a plan that everybody could live with, and it even made an attempt to solve some of the biggest problems of the original World Trade Center complex. While the Twin Towers now symbolize the greatest loss in New York’s history, many architects hated them while they stood. The problem wasn’t entirely with the towers themselves. They had been built around a vast marble plaza, which, though very impressive in photographs, was alienating in person. Loudspeakers pumped canned music over its expanse all day, and the winds that accelerated as they funneled between the towers were so bad that people sometimes had to use ropes just to walk across it. The new design restored the neighborhood’s older street grid, which would hopefully steer people and life toward a part of lower Manhattan that many people worried would languish for decades under a cloud of grief and fear. Even One World Trade Center wasn’t so bad if you squinted hard enough at the renderings. No, it wasn’t going to win any architectural awards, but it could have been worse, and at least it sent the clear message that New York still wanted to be home to the tallest buildings in the world.

But people couldn’t shake the anxiety that as soon as the tower’s final steel girder was hoisted into the air and welded into place, al-Qaeda would bring the whole thing down again. This fear had been expressed for years by pundits and city officials worrying that even if the city did build a monumental skyscraper to replace the Twin Towers, it wouldn’t be able to find anyone to fill the office space. The economy was anemic, firms had been relocating to midtown Manhattan, and were companies really going to ask their employees to go to work in what would surely be the most attractive target for terror attacks in the country? It took nearly ten years for the magazine publisher Condé Nast to announce that it would be moving its operations from Times Square down to 1 WTC, a major coup for the project. With Vogue’s editor, Anna Wintour, stalking the halls and The New Yorker’s soft-spoken, elegant eccentrics tending to their drafts while gazing down on the Hudson River, the Port Authority could feel much better about the prospects of its investment paying off. The tower itself would be a focal point for the glamour of the city’s media industry, and the larger complex would teem with workers and tourists.

In reality, though, this vision had died well before Condé Nast signed its lease. The earliest iterations of the Freedom Tower’s design planned for a building that would meet the security standards for a federal courthouse. Shortly after Governor Pataki laid the tower’s cornerstone at a press conference, however, the NYPD let it be known that federal courthouse standards weren’t going to cut it. They thought the building was too close to West Street, a busy, six-lane thoroughfare that turned into the West Side Highway as it wound up Manhattan along the Hudson. Setting buildings closer to streets makes them more a part of the public life that surrounds them, and the Freedom Tower’s original design also called for wide staircases to cascade down to street level from the entirety of the building’s west side, which would have invited people in from the sidewalk. But police thought that the tower’s proximity to the street, along with the gently sloping stairs, would make the building a more inviting target for car and truck bombs like the one terrorists had detonated in the parking garage under the Twin Towers in 1993. Childs and Libeskind had initially taken that worry into account by planning to reduce West Street from six lanes to four, with an express road tunneled underneath that would shuttle cargo trucks up to the West Side Highway. But the investment bank Goldman Sachs objected to that plan. It felt the tunnel entrance would cause traffic buildups in front of the new headquarters it was planning to build on the other side of West Street, and it threatened to pull out of lower Manhattan entirely if the tunnel wasn’t killed. That wasn’t acceptable. Goldman’s presence was seen as essential to the economic revitalization of the area. So the Pataki administration killed the tunnel and provided Goldman Sachs with a fifth of the $8 billion in tax-free bonds that New York had received from Congress for rebuilding. West Street would keep all six of its lanes.[90]

That satisfied Goldman but worried the police. In addition to pushing the tower back from street life, the NYPD wanted some of the architectural features of the tower’s base to be changed. The original plan was for the first twenty floors to be wrapped in prismatic glass, which would allow people to see into the building from the street, make parts of its steel structure visible, and cast attractive flecks of rainbow light into the lobby. The rest of the tower would then emerge gracefully out of a base of shimmering iridescence. The department’s deputy commissioner for counterterrorism thought the glass had to go; in the event of an explosion, it would shatter into thousands of deadly shards, maiming and killing anyone unfortunate enough to be in the lobby or on the adjoining sidewalks when the bomb went off. If Silverstein, Childs, and Libeskind wanted their building to survive September 11 part two, then they would have to redesign the tower in accordance with different, stricter standards: those applied to American embassies overseas. In other words, the tower needed to be built to survive in a war zone.

From an architectural standpoint, the results were catastrophic. Instead of a light-filled base, the bottom twenty floors were wrapped in blast-resistant concrete, making the building look as though it were sitting on top of a granite pedestal. The architects tried to justify the redesign on artistic grounds by claiming modernist sculpture as a source of inspiration, but the architecture critic for The New York Times accurately dismissed that argument as sophistry: The redesigned tower looked like a bunker. “The darkness at ground zero just got a little darker,” he wrote.

Somber, oppressive and clumsily conceived, the project suggests a monument to a society that has turned its back on any notion of cultural openness. It is exactly the kind of nightmare that government officials repeatedly asserted would never happen here: an impregnable tower braced against the outside world.[91]

His judgment was harsh but correct. The diaphanous quality that was supposed to evoke the transparency of democratic pluralism was gone. The concrete base was intimidating, and even when the designers tried to improve things by covering it in glass panels that were louvered in the manner of window blinds, the panels still repelled more than they attracted, flashing aggressively in the sunlight without allowing anyone to see what was happening inside. Even at the two main entrances, one couldn’t see what was going on in the lobby; tall partitions inside the doors blocked the view of anyone passing by. The steps that were supposed to welcome people in from the sidewalk running along West Street were also gone, replaced by an elevated plaza that could be accessed only by walking around to either end of the block. The tower stood away from and above the street, as though it were scared of the urban bustle it was supposed to anchor. Looking at it today, one wonders why it doesn’t just move to a suburb, where it would at least be able to relax.

Things didn’t get any better as you walked through the rest of the rebuilt complex, either. The streets that Libeskind’s master plan had so carefully restored in order to undo the mistake of the original Trade Center’s windswept plaza were rebuilt, but not as actual streets that the public could use. They looked like streets, with asphalt surfaces and lane markings and traffic lights, but all of them were closed off to everyday traffic. Booths housing security guards were placed on their corners, with surveillance cameras mounted on poles nearby. The streets were also equipped with giant toothed barricades that emerged out of the concrete to block off access. In theory, those barricades can retract down into the ground. In practice, they are almost always up.

People who lived in the area were dismayed by these developments. Even before September 11, the Trade Center’s vast plaza had made them feel cut off from the rest of lower Manhattan; stitching their neighborhood back into the larger fabric of the island was going to be crucial to any real community recovery. “We can examine innovative ways to manage streets and traffic downtown, reinforcing the feeling that this is one place,” New York City’s mayor, Michael Bloomberg, had said in December 2002. “Getting around easily means community, and that’s what we’re trying to create.” Now residents saw that expansive vision slipping away. The former vice president of the Lower Manhattan Development Corporation was particularly acerbic regarding the police department’s plan, saying the security measures made “a complete mockery” of the work people had done over the preceding few years. The police themselves were quite touchy about these criticisms. Richard Daddario, the department’s deputy commissioner for counterterrorism, told a New York Times journalist that “pedestrians and bicyclists will be able to enter the site and travel along its streets and sidewalks just as they can everywhere else in the city. Vehicles and tour buses having business at the site will have access after screening to guard against the threat of a car bomb.”[92] These remarks were printed in full in the Times article, but a different deputy commissioner then felt moved to send a letter to the editor in which he repeated everything his colleague had said the first time around, nearly word for word. “Pedestrians and bicyclists will be able to enter the site and travel its streets and sidewalks as never before,” he wrote. “Taxis, cars and tour buses having business at the site will have access after screening to guard against the threat of a vehicle bomb.” He then chided the Times for making “only passing reference to the trade center’s destruction on 9/11,” as though the people who lived in the neighborhood (or anyone else, for that matter) might have somehow forgotten. He did not address the fact that his description of the situation, which he presented as a self-evident improvement on the previous state of affairs, was exactly what residents were angry about.[93]

Time demonstrated that the residents’ concerns were not misplaced. As late as 2019, the Port Authority’s official stance was that “the streets of the World Trade Center campus are all managed streets with controlled access. There is no intention to ever re-open the streets to public vehicle traffic.”[94]

The Trade Center remains almost entirely vehicle-free today, and in a certain light that might appear to be a good thing. New York’s streets have been choked by traffic for years, and with research showing that decreasing the number of cars improves quality of life for a city’s residents, large municipalities are trying out various schemes to give their roads back to pedestrians and cyclists. But the new World Trade Center is not a pedestrian’s paradise or exemplar of twenty-first-century urbanism. Mazes of police barricades that seem to shift around at random from day to day make it difficult to navigate, and in the middle of a city that otherwise continues to give cars free rein, the impression given by the complex is ominous, not welcoming. Surveillance cameras become visible everywhere as soon as you start to look for them, and the heavy police presence never seems to flag. It is a simulacrum of a public space rather than the real thing. The memorial and adjacent September 11 Museum are usually thronged with tourists, and white-collar workers disappear into One World Trade Center in their thousands each morning, but people who live in New York do not spend time there. As a public space, the rebuilt World Trade Center is a dead zone.

Or, to be more precise, it is a security zone. That is the term for an urban area in which governments have decided that security concerns take precedence over street-level commerce, cultural expression, eating lunch on the steps of a building, the right to free assembly, hanging out, and the hundreds of other uses to which city residents might want to put their public spaces. Security zones have proliferated throughout the United States in the years since September 11, and their existence constitutes a direct assault on the public’s ability to encounter, understand, and express itself as a public. Just as airports are now environments in which your physical surroundings let you know that it’s unsafe to act like anything other than the stereotypical business traveler, so are security zones places that tell their inhabitants that strange, odd, or flamboyant behavior will not be tolerated (if you are nonwhite and poor, it may be best to avoid these places entirely, no matter how you behave). They are characterized by the constant presence of law enforcement, intensive use of CCTV cameras and other surveillance technologies, and physical barriers that restrict when, where, and how people are allowed to move through and occupy public space.

In 2010, a pair of academics studied two prominent neighborhoods in New York City—the Financial District, home to the New York Stock Exchange, and Civic Center, where city hall and other government buildings are located—and quantified exactly how much of the available public space had been given over to security. Their findings were astonishing. In Civic Center, more than 20 percent of the district’s available public space had been closed, and when the researchers included the areas in which access had been limited if not entirely closed off, the figure rose to more than a third. Even more surprising to the study’s authors was the fact that more than 17 percent of the public space in the Financial District had been converted into security zones, even though there are few government buildings in the area. This led them to the insight that corporations and other private firms have been just as enthusiastic as governments about the post-9/11 security regime. While security zones popped up around all buildings that might be considered obvious targets of an attack, such as the Stock Exchange, they also appeared around “high-value private buildings” such as the headquarters of Goldman Sachs, built thanks to the State of New York’s panicked largesse. “The diminution of the public realm was palpable in both neighborhoods,” they wrote. “Few open public spaces remain for the hundreds of thousands of workers and residents who live in or near Civic Center and the Financial District.” The authors argued that security zones were now so ubiquitous that they should be considered an entirely new category of land use.[95]

New York and Washington, the two cities attacked on September 11, have been among the most enthusiastic proponents of destroying public spaces by militarizing them. New York now has cameras that can read license plate numbers installed on all bridges and tunnels in and out of Manhattan, and navigating the fenced-off streets of D.C. on foot, especially in the vicinity of the National Mall, requires uncommon amounts of patience, agility, problem solving, and stamina. But the securitization of public space is a national phenomenon. After September 11, Chicago proposed a budget including $76 million in funds for terrorism prevention and security. Nationally, a majority of American cities with more than 100,000 residents—there are more than three hundred such cities—have tightened security for infrastructure, government buildings, airports, and schools.[96] Neighborhood Watch, a volunteer crime prevention program that has deputized suburbanites to be on the lookout for suspicious people and activity since 1972, has long been notorious for encouraging hostility toward people of color who wind up walking down a sidewalk in the “wrong” neighborhood. The program was partially rebranded as a terrorism prevention effort after September 11, and between 2002 and 2005 the number of local watch groups in the United States increased by 85 percent.[97]

In cities like San Francisco, the combination of security-minded local governance and rapidly expanding tech companies looking to cordon off more of the city for themselves brought the erosion of public space to crisis levels. In 2015, an activist group spent six months putting together a map of all the public spaces that had disappeared from San Francisco over the previous several years. Increased police presence at BART stations made them less safe for homeless people—who are also part of the public, however much some people would like to believe otherwise—looking for somewhere to spend the night. Large stretches of downtown San Francisco were closed each year so that tech companies could pitch huge tents for their conferences and fairs. A private developer used a city grant to install surveillance cameras along the historic Mission Miracle Mile. Recycling centers, an important source of income for the city’s homeless population, were closed down to make way for private development, and public bus stops and parking spaces were commandeered by tech companies that wanted to transport affluent, city-based workers to their suburban office buildings.[98] The result was a city in which non-wealthy residents got squeezed on both sides. Spiking rents and real estate prices made homes more and more difficult to afford, while the securitization and privatization of public space also made it harder to gather outside the home. This is an environment in which the public can barely function at all.

The twenty-first century saw “new urbanism,” with its goals of revitalized cities centered on mixed use, walking, and public transportation, become a kind of gospel among urban planners and liberal politicians. But under the unlucky star of the terrorist threat, the results have frequently reproduced the forms of new urbanism without any of their content, resulting in parks and plazas that gleam appealingly from a distance but are unpleasant, difficult, or impossible for the public—which is to say, everyone, not just the white and/or affluent—to actually use. The armed guards and surveillance cameras that now ring so many of the country’s public spaces let their inhabitants know that use of those spaces is a privilege, not a right, and that the privilege can and will be revoked at any moment. Government officials responsible for imposing these security measures have always justified them as necessary to protect the public, but the experience of the past twenty years suggests that they really do the opposite, that militarism and a robust public life are at odds with each other by definition.

One of the paradoxes of post-9/11 security zones is that everyone secretly knows they don’t work. The bollards, surveillance booths, and sally ports surrounding the new World Trade Center don’t matter, because anyone with explosives strapped to their body could still walk (or bicycle) down one of the brand-new, car-free streets, give a wave to some of the cops as they pass by, take up a position in a crowd of tourists by the memorial, and hit the detonator. This is true all over the country. The militarism that suffused public life in the United States after September 11 did nothing to make people safer, and it didn’t make people feel safer, either. If anything, by emphasizing and reemphasizing the potential dangers of being out in public, it did the opposite. In 2006, a researcher writing for the Journal of Homeland Security and Emergency Management noticed that “many antiterrorism measures may actually intensify and reinforce public perceptions of vulnerability and fear.” He found that the more visible the security elements in a public area were, the worse his study participants felt about being there. The researcher seemed to be puzzled by these findings, speculating that “such responses may be caused by a comparative lack of understanding of the nature and predictability of terrorism.”[99] But it’s more likely that people understood the situation just fine. When the terrorist threat has been so wildly exaggerated, the only remaining explanation for the security zones is that their purpose is to monitor the public itself.