7
After the Revolution |
“What’s Edison? Is that a city? I just never hear of it,” responded New York’s former mayor Edward Koch on learning in the summer of 2003 that the federal Office of Management and Budget had renamed the nation’s largest concentration of population the New York–Newark–Edison metropolitan area. The name change was yet another sign of the revolution that had occurred since 1945. A New Jersey township of strip malls, subdivisions, and office parks was now officially deemed worthy to be joined with New York City as a metropolitan presence. Yet unlike the hubs of the past, it was a relatively invisible presence, unknown to a leading public official who had spent his entire life within fifty miles of the township. “For all too long, the major so-called suburbs or edge cities have been lost in the shadow of New York,” Edison’s mayor commented. “We’re not lost anymore. We’ve wandered in from the wilderness.”1 But to Koch and many others, Edison was still lost, an obscure place they had to search for on a road map. By 2003, metropolitan America was no longer a world of easily identifiable places known locally and nationwide but an agglomeration of population and business with some historically famous old centers and many anonymous municipalities. The self-proclaimed “crossroads of New Jersey,” Edison won its metropolitan distinction because it was at the intersection of expressways, and expressway interchanges had largely supplanted cities as metropolitan foci. Like other metropolitan residents, Koch had heard of the highways but not the place, for the place-names of cities and towns counted for much less in the post-urban world of the twenty-first century.
Trapped by past conceptions of the city and of metropolitan centers, the Office of Management and Budget felt compelled to insert the name Edison. But the linking of Edison and New York City as seeming equals simply highlighted the outmoded thinking of the federal agency. Edison and New York City were two different phenomena, apples and oranges, one the product of the pre-1945 age of cities and the other an example of the post-urban era of the late twentieth century. Metropolitan America was no longer organized around single dominant centers, as it had been in 1945; neither was it truly polycentered, with a few readily identifiable hubs. To identify metropolitan regions by the names of supposed centers was an anachronism, for metropolitan America was increasingly centerless. Yet it was not a featureless sprawl of indistinguishable elements or a uniform expanse of low-density settlement reminiscent of Frank Lloyd Wright’s Broadacres. Instead, it was rich in diversity, a historical accretion of settlement patterns and lifestyles that reflected the felt needs of millions of Americans of the past and present. Metropolitan America included the remnants of traditional cities like Boston, New York, Pittsburgh, and San Francisco, as well as pre–World War II posh enclaves like Scarsdale, post–World War II residential behemoths such as Levittown, edge cities of the 1980s, and an array of malls, big-box stores, office complexes, and chain restaurants and hotels built at the close of the twentieth century.
Traveling through a metropolitan region, one did not pass from one hub to another but instead through layers or patches of settlement, each different and catering to different residents and workers and different lifestyles. Metropolitan America was edgeless and centerless; its place-names denoted governmental units like Edison Township rather than cohesive, clearly defined communities. It defied the traditional logic regarding cities, yet it suited the diverse lifestyles of the post-urban era. Freed from past strictures on sexual behavior, gays could congregate in some central-city neighborhoods, and childless professionals could likewise enjoy what they deemed a desirable urban lifestyle in the historic centers. Families still gravitated to the good schools of suburbia, and, subsidized by Social Security, retirees found happiness in walled communities of their own. There were Hispanic enclaves and suburban Chinatowns with signage incomprehensible to English speakers but welcoming to thousands of newcomers seeking to transplant Taipei to southern California.
Rather than a city with a readily mapped core and edge, metropolitan America was a mélange reflecting the social and cultural diversity of the nation. Liberated from reliance on a centripetal public transit system that funneled everyone and everything to a common core, auto-borne Americans of the late twentieth century escaped to separate spheres suited to their needs. Some headed for Edison; others, for Manhattan. A continuous zone of dense population stretched along the mid-Atlantic, the Florida peninsula, and the southern California coast. Yet there was no longer a coherent city, simply a mass of settlement accommodating a variety of lifestyles and people whose paths no longer intersected at a shared center.
The Edgeless City
In 2003 urban scholar Robert Lang issued a new communiqué from the nation’s little-understood metropolitan expanse. Challenging Joel Garreau’s decade-old prediction that edge cities were reestablishing dense, mixed-use, identifiable centers in the metropolitan mass, Lang claimed that instead the prevailing pattern was the edgeless city, “a form of sprawling office development that does not have the density or cohesiveness of edge cities” but accounted “for the bulk of the office space found outside downtowns.” According to Lang, “Sprawl is back—or, more accurately, it never went away.” “Isolated office buildings or small clusters of buildings” were spread over “vast swaths of metropolitan space,” and as a prime example Lang offered Edison’s central New Jersey, where “edgeless cities stretch over a thousand square miles of metropolitan area.”2 In other words, the multicentered metropolis was seemingly as passé as its single-centered ancestor. Metropolitan America was continuing its relentless advance across the countryside, eschewing concentration for sprawl.
Anyone driving the highways of New Jersey, Georgia, Florida, Illinois, Texas, or California would have strongly seconded Lang’s findings. Commercial outlets spread in all directions and small office buildings with large parking lots proliferated at a faster pace than suburban high-rises served by multilevel garages. Strip centers skirting highways and giant discount stores convenient to motorists proved more attractive to time-conscious shoppers than many of the older malls that had dominated retailing for the past four decades. For the many customers who did not want to linger or stroll, a dense concentration of businesses had little appeal. If commerce was spread out along the highways, motorists could move as rapidly as possible along these asphalt conveyor belts, collecting goods and services as they passed. After all, one did not go to the dentist, grocery, or video store to experience some city planner’s notion of diverse urbanity or to partake of some uplifting ambience. The idea was to get in, do one’s business, and get out as quickly and conveniently as possible. Taking advantage of drive-through windows at drugstores, banks, and fast-food restaurants, customers might not even need to leave the comfort of their cars but instead could experience to the fullest the automobility of the edgeless city.
Meanwhile, housing subdivisions sprouted in barren fields, and the rate of residential sprawl seemed to accelerate. Once-quiet suburbs emerged among the ranks of the nation’s most populous cities, leading urban commentators to dub them boomburbs. By 2000, Virginia Beach could boast of 425,000 residents, up from 5,000 in 1950 and almost twice the population of Norfolk, the historic hub of tidewater Virginia. The Phoenix area included seven “suburban” cities with populations over 100,000, led by Mesa, with almost 400,000 inhabitants. Moreover, the growth of the Arizona boomburbs was not abating; in the mid-1990s, houses were reportedly consuming the Arizona desert at the rate of an acre per hour.3 Both Virginia Beach and Mesa were already more populous than such traditional hubs as Minneapolis, Pittsburgh, Saint Louis, and Cincinnati. In California, there were twenty-five boomburbs with populations of more than 100,000, and the Denver metropolitan area was the site of three of these outlying giants, the largest being Aurora, with 276,000 people. Dallas was ringed by seven boomburbs that topped the 100,000 mark, headed by Arlington, whose population grew from 8,000 in 1950 to 333,000 in 2000. In these cities, growth was a way of life during the late twentieth century. Between 1986 and 1989, a breakneck annexation campaign more than doubled Aurora’s area to 140 square miles. A Denver newspaper accused Aurora of being “bent on annexing Kansas and beyond.”4 “Arlington is a pro-growth town,” observed a former planning official of the Texas city at the close of the 1980s. “Always has been, always will be.”5 Many observers felt that the physical evidence of unthinking, sprawling growth was all too obvious in many of the boomtowns. An early-twenty-first-century visitor to Aurora wrote of the Colorado giant: “It has no discernible downtown, no town center, just mile after mile of strip malls, small mom-and-pops, ethnic restaurants, and ranch-style housing developments.”6
The boomburbs were not only large, and growing larger, but most were becoming more diverse, defying long-standing notions of ethnic and lifestyle homogeneity in suburbia. A University of Michigan study completed in 1999 judged Aurora “the most integrated city in the United States,” and Aurora’s school system claimed that sixty-eight languages were spoken in the city’s households. In 2000, 20 percent of the population was Hispanic, and 12 percent was African American. The Aurora community services directory was printed in English, Spanish, Korean, Vietnamese, and Russian. “Everywhere you go, it’s like you’re in a different country, with all the cultures and people,” observed an enthusiastic Aurora resident. “I feel sorry for people who live in all-black or all-white neighborhoods because they don’t know what they’re missing.”7
Not only were there booming young cities welcoming people from around the world, but small towns and unincorporated areas in outlying counties were exploding with newcomers who had few or no ties to the region’s historic hub city. By 2000, three suburban counties in the Atlanta area had over 500,000 residents, with Gwinnett County’s population having soared from 32,000 in 1950 to 588,000 fifty years later. In 2004 Gwinnett County’s school system was the largest in Georgia, with a more diverse student population than that of the predominantly black city of Atlanta. Twenty-three percent of the students were African American, 17 percent were Hispanic, and 10 percent were Asian American. Moreover, at the turn of the century, there seemed no prospect that Gwinnett’s growth or that of surrounding counties would soon cease. In 1999 Time magazine ran a picture of new houses in Gwinnett County with the caption “Spread Alert” and questioned whether this was “part of the fastest widening human settlement ever.”8
Many residents of the ever more populous political units of the edgeless city maintained a carefully controlled lifestyle and a sense of grassroots rule by resorting to the private governments of homeowner associations. Large, diverse counties or boomburbs could not provide government tailored to a single subdivision, but the associations could, thus preserving the local control traditionally valued in suburbia even while populations soared. In the crime-ridden, threatening metropolitan world of the late twentieth century, the proliferating homeowner associations also offered the sense of security associated with small towns and traditional residential suburbs. This was especially evident in the growing number of gated communities, walled subdivisions that were off-limits to nonresidents. Homeowner associations maintained the surrounding walls as well as the community’s private streets and the recreation areas open only to subdivision residents and their guests. Many such communities hired guards to staff gatehouses or imposed a system of key cards or entry codes that ensured only authorized persons could enter the subdivision. Originally popular among Sun Belt retirees, gated communities attracted a growing portion of the population during the 1990s, especially in the boomburb regions of California, Arizona, Texas, and Florida. In 1994 approximately one-third of southern California’s new communities were gated, and according to one estimate the nation’s gated community population soared from 4 million in 1995 to 16 million in 1998. A study based on the 2001 American Housing Survey found that over 7 million American households were within walled communities.9
Various factors encouraged this retreat behind walls and the resort to private homeowner governments. Fear of outsiders and the crimes they might commit motivated some to seek a life behind gates and security guards. A South Florida developer observed: “People are a little neurotic. [Those] who have suffered from crime or know someone who has are sitting there all day like Chicken Little waiting for the sky to fall in.” A resident of a southern California subdivision that installed gates in the 1990s explained: “Before it was gated I had to keep everything locked. There were transients coming through, walking up and down the street.”10 Yet prestige was an added advantage. “People like to live within walls because they give the illusion of security,” remarked a Dallas security consultant. “And it has acquired a certain social connotation as well. It’s become the thing to do, like having a doorman or a chauffeur.”11 Moreover, many residents believed that the walls and subdivision associations created a more neighborly community, a small-town feeling. “In any homeowner association I think you’d have more community spirit than just on a block,” commented a California resident. “I guess the gates make it a family.” And a Dallas developer asserted: “The number one issue as I see it is that people want a sense of community. I think that is more what the gate is about, more so than security.” He believed that “the main thing is ‘I want a small town atmosphere in my big city. I want to be part of a community where I can be friends with all these people who are similar to my background.’”12
In the seemingly limitless, centerless expanse of metropolitan America, walled subdivisions created identifiable, defined communities. For their residents, the walls provided needed edges in the edgeless city, boundaries that distinguished neighbors from intruders, the privileged from the poor, and the protected from the vulnerable. Implied in the concept of community was the notion that some belonged and some did not. In the amorphous sprawl of turn-of-the-century America, walled subdivisions made residents feel they were part of a community, fenced off from the dangerous and the undesirable. Many observers deplored the gated community phenomenon. A leading urban planner warned: “These walls and gates are leading to more segregation and more isolation, and the outcome is going to be tragic for all of us.” But a resident of a gated community in the Dallas-area boomburb of Irving thought otherwise: “It seems like a secure, established neighborhood where our kids can run around without having to worry about traffic.” Claiming that in a city neighborhood “you never know what’s going to happen,” he concluded that “in a gated community you can control some of that.”13
Another emerging element of the landscape of the edgeless city was the giant windowless retail outlet known as the big-box store. By the close of the twentieth century, fewer large, enclosed malls were being built, and some were standing derelict as shoppers turned from the mall’s department store anchors and headed instead for the big-box emporiums of discounters that lined suburban highways. With low prices and huge inventories, these discount chains lured millions of bargain-hungry shoppers. In 2000 Wal-Mart, with 4,190 stores, was the world’s largest retailer, reaping four times the revenues of the nation’s second-largest chain and ten times the sales of Federated Department Stores, the owner of Macy’s. Number three in the United States and number four in the world was the home improvement giant Home Depot, and the upscale discounter Target ranked number six in the nation (figure 7.1).14 Target was founded in 1962 as the discount outlet for Minneapolis’s Dayton’s Department Store. By the close of the century, Dayton’s had purchased Detroit’s premier department store, Hudson’s, as well as Chicago’s Marshall Field’s. Yet in 2000, 83 percent of the company’s pretax profits came from its discounter. Acknowledging the triumph of its big-box discount offspring over its aging department stores, the parent company changed its name from Dayton Hudson to Target.15
The triumph of big-box retailers was yet another stage in the shift of shopping from a centripetal pursuit to a centrifugal one. For the largest selection of merchandise at the best prices, the shopper of the early twenty-first century headed not toward the city center but outward from the historic hub to suburban highways. At the beginning of the century, there was no Target store in Manhattan, but Manhattanites commuted outward to suburban outlets. “I’m beyond obsessed with Target,” confessed one Manhattan business owner, who traveled to suburban New York or New Jersey each weekend to satisfy her obsession. Another well-heeled Manhattanite told of her discovery of Target in suburban Long Island: “I came out with two shopping carts full of stuff. They had to help me out the door. It’s so cheap! It’s amazing!”16
FIGURE 7.1 Early Target store in Minnesota, with the sprawling low-rise structure and expansive parking lot characteristic of big-box stores. (Norton & Peel, Minnesota Historical Society)
Wal-Mart best exemplified this reversal of the traditional pattern prevailing before 1945. Founded by Arkansas’s Sam Walton, it first thrived by catering to underserved small-town customers and then expanded into suburbia. Finally, by the turn of the twenty-first century, it was entering the central cities. In 2003 it had outlets in seven of the top ten urban markets; the exceptions were Chicago, Detroit, and New York City. A retail analyst observed: “Urban areas are the last frontier for Wal-Mart other than international markets.”17 The same year, the giant discounter announced plans to open its first stores in the city of Chicago. Local labor unions responded with bitter attacks on the chain, which relied exclusively on nonunion labor. Yet many African Americans living near the proposed Wal-Mart sites favored the prospect of a big-box store in their neighborhood. A local pastor claimed that 99 percent of his congregants already shopped at suburban Wal-Marts, driving miles to satisfy their retailing needs. Expressing the opinion of many of her constituents, the city council representative for a black West Side neighborhood asserted: “If our money is good to spend in the suburbs, then it’s good to spend here.” “We need a Wal-Mart around here,” remarked a West Side shopper. “I can’t find any place in this neighborhood that sells decent clothes or furniture I can afford.”18
Both well-heeled Manhattanites and poor Chicagoans were headed outward from the underserved central city to the great edgeless-city emporiums of the twenty-first century. In 1945 Macy’s, in the heart of Manhattan, was the world’s biggest retailer; in 2000 Bentonville, Arkansas, was the headquarters of the world’s preeminent retailer. In 1945 shoppers traveled downtown to find the most merchandise and seek the best prices; in 2000 the treasure troves of shoppers were along New Jersey interstates and on Illinois acreage that had produced corn only a few years earlier. At the beginning of the twenty-first century, New York City and Chicago were the last frontiers of the world’s largest retailer, the last places it chose to locate. The metropolitan revolution had turned the retailing world inside out.
Although millions of Americans were flocking to big-box stores, gated communities, and boomburbs, a cadre of highly vocal critics was keeping alive the tradition of antisuburban diatribes. “The United States has become a predominantly suburban nation, but not a very happy one,” pronounced critic Philip Langdon in 1994. According to him, suburbs were “fostering an unhealthy way of life,” and, echoing the screeds of the 1950s, he believed that suburbanization had produced “a bitter harvest of individual trauma, family distress, and civic decay.”19 Among the most vehement of the turn-of-the-century foes of suburban sprawl was James Howard Kunstler, who in a series of books leveled an unrelenting barrage of rhetoric on the edgeless city. It was “depressing, brutal, ugly, unhealthy, and spiritually degrading,” a landscape of “jive-plastic commuter tract home wastelands, … Potemkin village shopping plazas with … vast parking lagoons,” and “Orwellian office parks featuring buildings sheathed in the same reflective glass as the sunglasses worn by chain-gang guards.” It was “destructive, wasteful, toxic,” a blight on the nation and a plague on its people.20 At the close of the century, Time magazine concluded with some exaggeration: “Everybody hates the drive time, the scuffed and dented banality, of overextended suburbs.”21
Whereas many of the earlier diatribes of the 1950s, 1960s, and 1970s had focused on the suburbs’ destructive impact on the central city, the attacks of the 1990s emphasized the edgeless city’s toll on the natural environment. Influenced by the widespread embrace of environmentalism, critics in the late twentieth century claimed that suburban developers were at war with nature itself. The advance of suburban sprawl seemed to be accelerating, consuming the nation’s fields and forests at an ever-increasing rate. In a 1999 article on suburban “hypergrowth,” Newsweek reported that in the Denver area farmland was “falling to sprawl at a rate of 90,000 acres per year”; in Austin from 1982 to 1992, there was a “35 percent increase in open space lost to development”; and between 1990 and 1996, metropolitan Akron experienced a “37 percent decrease in population density and [a] land area increase of 65 percent.”22 According to an account of the sprawl problem published in 2000, America was “presently experiencing an unprecedented loss of ‘open space’—productive crop and pasture lands, along with forest woodlands, fragile wetlands, and other natural wildlife habitats.”23
Exacerbating the problem were the poisonous fumes and debilitating traffic snarls produced by the mounting number of automobiles transporting edgeless-city commuters. Kunstler described the Atlanta metropolitan area as “one big-ass parking lot under a toxic pall from Hartsfield [Airport] clear up to the brand-new completely absurd Mall of Georgia.”24 Everywhere the traffic seemed to be getting worse, polluting the air and damaging the lives of the auto dependent. The federal government found that between 1990 and 1995, the amount of time mothers spent behind the wheels of their cars rose by 11 percent. U.S. News & World Report concluded: “Moms spend more time driving than they spend dressing, bathing, and feeding a child.” A California psychologist reported that about half the married couples he counseled suffered from commuter-related stress. “They come in having only a dim awareness that commuting is the problem,” he noted. “Instead, they say we’re quarreling too much, and the affection’s gone, and so is the sex.”25
Sexual intercourse, connubial affection, motherly devotion, atmospheric purity, flora and fauna, civic loyalty, and individual happiness all seemed to be victims of the relentless sprawl of the edgeless city. The indictment was as powerful in the 1990s as in the 1950s. Gwinnett County, however, was the new offender par excellence, replacing Levittown as public enemy number one. “They ran the environmental people out of here a long time ago,” reported a Gwinnett County developer in 2000. “You’ve got no trees. You’ve got no streams. You’ve got no mountains. It’s a developer’s paradise.”26 But for Kunstler, Langdon, and a growing number of environment-conscious Americans, Gwinnett was hell.
A few foes of edgeless-city sprawl were willing to take violent action to curb development. Between January 2000 and August 2001, ecoterrorists set fire to more than a dozen new houses in New York, Indiana, Colorado, and Arizona.27 On Long Island, they also spray-painted “Stop Urban Sprawl!” on a new home. “The Earth isn’t dying, it’s being killed,” read a threatening communiqué from the pro-sabotage Earth Liberation Front, “and those who are killing it have names and addresses.”28
More often, however, foes of sprawl resorted to the ballot box in repeated attempts to impose restrictions on development and preserve open spaces. In November 1998, 240 state and local measures to preserve or purchase open space were on the ballot in states across the nation, with voters approving 72 percent of them. Eight of the ten statewide measures to set aside open land won voter endorsement.29 For example, New Jersey’s electorate agreed to spend $1 billion over a ten-year period to preserve half the remaining undeveloped land in the state as open space. “Americans are finally realizing that once you lose land, you can’t get it back,” remarked New Jersey’s governor.30 Meanwhile, in 1997 Maryland adopted “Smart Growth” legislation that permitted state subsidies for new roads, sewers, and schools only in “priority funding areas,” zones deemed suited for development. Developers seeking to plant new subdivisions in rural areas, thus perpetuating sprawl, could not count on state support for the infrastructure necessary to accommodate new homeowners. Similarly, in 1998 Tennessee approved legislation requiring municipalities and counties to draft comprehensive plans designating urban growth boundaries; state funding would be restricted to the development zones within these limits.31
Oregon, however, was the preeminent antisprawl state and a much-admired model for foes of the edgeless city. In the 1970s, it had adopted a policy of urban growth boundaries, designed especially to keep the Portland metropolitan area from sprawling southward into the rich farmland of the Willamette Valley. Basically, beyond these boundaries developers could not build. The consequence was denser development within the limits. Whereas in the late 1970s the average lot size in the Portland area was 13,000 square feet, by the late 1990s the figure was down to 6,700 square feet. In 1999 Time magazine reported: “Outside [the growth boundary], where open land is strictly protected, there’s mostly just the uninterrupted flight of greenery we call nature. Unspoiled stretches of the Willamette River Valley start 15 miles from city hall.”32 James Kunstler also approved, lauding Portland as the exception to the dismal American rule. “Because people live there at a high density,” he asserted, “the city can support a variety of eating places, bars, cafes, clubs…. The texture of life is mixed, complex, and dense, as a city ought to be.” In his opinion, Oregonians were having “to find new ways of doing things: of making a living without destroying land, building real towns and city neighborhoods instead of tract housing pods and commercial strip smarm, [and] eliminating unnecessary car trips and commutes.”33
The Portland experience was also welcome news to an emerging planning movement known as the New Urbanism. Led by architects Andres Duany and Elizabeth Plater-Zyberk, New Urbanism was the planning arm of the antisprawl crusade, dedicated to creating traditional-style neighborhoods with smaller lots, narrower streets, front porches, and corner groceries. Density and walkability were to replace the sprawl and automobile dependence of the edgeless city. Basically, New Urbanists sought to recreate the neighborhoods of the pre-1945 era before Levittown, Southdale Center, McDonalds, and the interstate highway system had corrupted American life. In their manifesto on New Urbanism, Duany and Plater-Zyberk urged their followers to remember the refrain: “No more housing subdivisions! No more shopping centers! No more office parks! No more highways! Neighborhoods or nothing!”34
A scattering of New Urbanist communities attracted considerable attention at the close of the twentieth century. The initial New Urbanist offering was Seaside, Florida, a small resort community designed by Duany and Plater-Zyberk in the 1980s. They followed this with Kentlands in suburban Maryland. The Disney Corporation signed on to the movement, constructing the New Urbanist community of Celebration adjacent to its Florida Disney World. In each of these communities, the planners eschewed the cul-de-sacs, expansive lawns, and front-facing garages of standard edgeless-city subdivisions, building instead houses with small yards and garages on back alleys within walking distance of stores and schools. Rejecting stark modernist architecture and opting instead for houses outwardly reminiscent of the eighteenth and nineteenth centuries, the designers of these communities created an environment that some found antiseptically cute. Kentlands and Celebration were carefully contrived refuges from the edgeless city that seemed about as real as Disney World’s Main Street. Yet as the chorus of complaints about sprawl grew louder, the New Urbanist devotion to density and reduced reliance on the automobile attracted many adherents among planners and architects.
Some, however, questioned the propaganda regarding sprawl and rebelled against the arrogance of planners and architects who claimed to know what was best for seemingly dim-witted Americans who for decades had preferred a big yard, a big car, and the ample private space afforded by suburbia rather than the density of the central city. Randal O’Toole denounced the planning tyranny of his home state of Oregon, claiming that all urban lifestyles from high density to low density were “valid lifestyle choices and they work for the people that live there.” Yet, according to O’Toole, “if smart-growth planners had their way, almost everyone except a few rural workers and their families would be confined to high-density, mixed-use urban neighborhoods…. The arrogant notion that a small elite can and should make important lifestyle choices for everyone else is at the heart of the war on the suburbs.”35 An economist with the Reason Public Policy Institute seconded this notion when he observed: “You can’t develop a public policy around stopping people from moving to the communities and homes they want to live in, at least not in the United States. Not yet.”36 In the pages of the New Republic, Gregg Easterbrook made much the same point when he noted: “The reason Americans keep buying more housing, more SUVs, more swimming pools, and other space-consuming items is that they can afford those things…. If prosperity puts the four-bedroom house within reach for the typical person, it’s hard to see why public policy should look askance at that.”37
Not only did the antisprawl, smart-growth crusade smack of arrogant planning dictation, but it seemed motivated by the selfishness of those who already enjoyed open space and wanted to keep their less fortunate brothers and sisters out. Easterbrook noted that “if communities take the kind of steps that would really stop sprawl, they would confer a windfall on those already entrenched while damaging the prospect of those who long to attain the detached-home lifestyle.”38 Another observed: “Suburbanization by other people is what’s unpopular; people love living in the suburbs, they just don’t want anyone else out there with them.”39 Too often it seemed as if wealthy estate owners, gentlemen farmers, and those already established in rambling manses on two-acre lots were trying to keep house-hungry newcomers from wreaking “environmental” damage on their zones of privilege. Moreover, too often it appeared that urban growth boundaries would simply inflate the cost of land open to development, dooming middle-class purchasers and upwardly mobile immigrants from Mexico, China, and India to a life in an attached townhouse with a yard just large enough for a flower bed and walls that failed to keep out the sound of wailing babies and blaring radios from the unit next door.
In any case, the New Urbanists and their ilk seemed as yet not to represent the American norm. Despite all the rhetoric about banal subdivisions and the soulless highway culture, new home sales in edgeless-city developments were not abating and business at Wal-Mart and Target was booming. Gas-guzzling sport-utility vehicles were big sellers for the auto industry, and the line at the drive-up window at McDonalds was not dwindling. Americans did not welcome the prospect of new houses blocking their scenic suburban views or additional cars on the highways slowing their journeys to the big-box store. Yet they themselves did not want to give up life in the edgeless city. If Americans had wanted to live in a traditional neighborhood, they could have moved to one. Plenty of homes were for sale in pre-1945 neighborhoods where one could stroll the sidewalks to public transit lines that would carry one downtown. Some were selecting this lifestyle option, but, the New Urbanists and sprawl busters to the contrary, more Americans seemed to prefer living in the edgeless city.
The Perpetual Renaissance
While the edgeless city sprawled outward with new Wal-Marts, subdivisions, and office parks, the older central cities survived and continued their seemingly perpetual search for renaissance. In fact, at the close of the century, the news from the historic urban core was unusually upbeat. The 2000 census showed that the population decline in the hub cities had slowed, leading a Brookings Institution study to write of an “urban turnaround.”40 In both Chicago and Minneapolis, the population increased 4 percent during the 1990s, the first rise in either city since the 1940s. During the 1950s and 1960s, Boston, Providence, and Worcester had been counted out as dying centers of a region whose heyday had long passed, but in the 1990s each of these cities posted population gains, demonstrating that they could not be dismissed as has-beens. Even the cities that continued to lose population did so at a reduced rate. In the 1970s, the number of residents in Cleveland had dropped 24 percent; in the 1990s, the decline was only 5 percent. Similarly, Detroit’s population plummeted 20 percent in the 1970s but only 8 percent in the 1990s. Most of the decline in the 1990s seemed to be the result of smaller households rather than wholesale abandonment of structures such as plagued the older centers in the 1970s. In 2000 there was nothing to compare with the devastation of the South Bronx a quarter century earlier.
Some of the supposed turnaround might have been owing to improved census coverage in 2000, as compared with 1990. It was widely believed that the Census Bureau had failed to count many urban dwellers in 1990, thus shortchanging older hubs. In 2000, however, Census Bureau efforts seemed to have improved, producing a more accurate count. Yet the influx of immigrants from Latin America, Asia, and Europe seeking inexpensive housing in the inner cities also appeared to explain some of the change. New York City’s population rise of 9 percent clearly was owing to newcomers from abroad who were reinforcing the city’s role as a multicultural mecca. Moreover, gentrification probably accounted for a share of the good news. The city had not lost its allure for at least some of the young and childless. Popular television programs such as Seinfeld and Friends broadcast an appealing picture of Manhattan apartment life that partially erased the adverse images of mugging, rioting, and decay inherited from the late 1960s and the 1970s. For millions of television viewers in the 1990s, the city was a place of laughs and romance where attractive young people struggled only with the tribulations of situation comedy and not with the traditional urban conflicts arising from ethnicity, poverty, crime, or class strife.
In addition, the older hubs retained their long-standing grip on the American psyche. Millions of suburban Americans still referred to themselves as being from New York, Philadelphia, Chicago, Atlanta, and San Francisco even though they had never lived within those cities and rarely set foot in them. Although they did not contribute to the tax base, census data, or retail sales figures of the central city, in some way it was still their city, part of their identity when they defined where they were from. They departed from and arrived at airports bearing the names of the historic hubs, watched television newscasters who reported what the local big-city mayor said and did, and read the metropolitan daily newspapers published in the old downtown. The historic hubs were not what they used to be, but they were not forgotten and could not be wholly ignored.
Despite the persistent significance of the core municipalities and the supposed urban turnaround, the role of the older centers in American life remained insecure, and mayors and downtown business leaders continued to seek an elixir that would provide lasting revitalization. Downtown was no longer the focus of retailing and no longer the hub of office employment. The so-called central cities had to remain the center of something if they were to continue to merit the label “central.” Moreover, they had to fashion a lucrative role for themselves if they were to generate enough tax revenues to support public services and create enough jobs to support their residents. Thus city leaders remained dedicated to making their cities destinations, places where people would come and spend money. The center had to have some magnetic attraction for those with cash, and in the 1990s, as in earlier decades, the search for this attraction preoccupied urban policy makers.
A favorite element in efforts to recenter metropolitan America was the sports team. By the end of the century, little united the residents of a metropolitan area other than a common allegiance to the local professional sports team. Affluent white residents in Oakland County, Michigan, did not mix with poor black inhabitants immediately to the south in the city of Detroit, nor did they have many links with blue-collar whites in adjacent Macomb County. Oakland and Macomb countians did not frequent downtown Detroit; they no longer shopped or worked there. The various components of the metropolitan population of southeastern Michigan feared, resented, or ignored one another. But they were all fans of the Detroit Tigers baseball team, the Detroit Lions football team, and the Detroit Pistons basketball team. The one uniting bond that identified them as Detroiters, even though most did not live in the city of Detroit, was a loyalty to sports teams bearing the city’s name. It was this uniting loyalty that many urban leaders believed could refocus the edgeless city, diverting some of its wealth and many of its people to sports events in the historic hub. The common allegiance to the local team was one of the few good cards that central cities had left in their hands, and in the 1990s urban leaders sought to trump suburban successes by playing it.
One city after another embarked on expansive programs to build sports facilities that would draw people and money to the aging core and boost its fortunes and morale. With noteworthy consistency, scholarly studies demonstrated that the millions of public dollars spent for downtown stadiums and arenas would not yield sufficient tangible benefits to warrant the state, county, or city financial commitment. Central-city boosters claimed, however, that the sports facilities were worth the public-sector investment. In 1996 a Saint Louis economic development director contended: “There are intangible benefits … corporate recruiting, community attitude and reintroducing people to a city…. As people come downtown and get comfortable there, they are more likely to come down again, hang out and spend time.”41 In 1997 the Minnesota Twins’ financial adviser agreed when he told a legislative committee, “No one can tell me that it’s not better to have three million people a year coming to downtown Minneapolis to watch baseball than having none.”42 Backers of a ballot proposal for a tax levy to finance a stadium in downtown Cleveland emphasized the positive economic impact on the city’s core when they adopted the slogan, “More Than a Stadium.”43
In Cleveland, Minneapolis, and Saint Louis, it was more than just a question of sports facilities; it was also one more attempt at recentering metropolitan America and reasserting the historic hub’s once unquestioned claim to be central to at least one aspect of American life. Identifying the crux of the issue, one student of the impact of sports on cities asked: “In a city with a full set of urban challenges, is the new image created by these public investments worth the commitments if there is no direct economic impact? Is the myth or illusion of activity created by the glamour from sports and downtown crowds worth what the public sector spent?”44 Most urban leaders answered with a definite yes.
This was evident across the country. For example, in Baltimore a state sports authority built Camden Yards ballpark for baseball’s Orioles as yet another step in the city’s long-term campaign to attract visitors. Chicago’s mayor arranged a complex financing package to pay for the renovation of downtown Soldier Field, ensuring the continuing presence of the football Bears in the city’s core.45 In 1996 the public sector agreed to contribute 48 percent of the estimated $505 million necessary to build a new downtown ballpark for the Detroit Tigers and a football stadium for the Detroit Lions. The Detroit Free Press announced this deal with the euphoric headline “Detroit Comeback” and claimed that the city’s investment would pay off because of new “development expected near the project and a new image for downtown.”46 Perhaps no city relied more heavily on sports to recenter the metropolitan population than long-troubled Cleveland. As part of a downtown revitalization scheme, city leaders arranged for the construction of a new baseball park, an adjacent professional basketball arena, and a nearby football stadium. By doing so, they ensured that the baseball team would remain in town, they lured the basketball team from its previous home in suburban Richfield, and they secured a new football franchise after losing their team to Baltimore a few years earlier.47 Thus Cleveland firmly reinforced its big-league status, despite its long economic and population decline, and it attracted millions of sports fans to the downtown area. Although no longer as significant a retailing destination or as prominent a business center, downtown Cleveland was at least the unquestioned professional sports hub of northeastern Ohio.
Cleveland and its urban ilk were also resorting to other ploys to lure people back to the historic metropolitan center. During the 1990s, both the $92 million Rock and Roll Hall of Fame and the $55 million Great Lakes Science Center opened in downtown Cleveland to serve as magnets drawing additional visitors to the core.48 In 1992 the New Jersey Sports and Exposition Authority opened the New Jersey State Aquarium in downtown Camden, a gritty community perennially on the list of the nation’s most troubled cities.49 Throughout America, ever-larger convention centers were constructed to draw out-of-town spenders to the city core, with some succeeding and others struggling to attract the bookings necessary to survive.50 A new panacea was casino gambling. No city grew at a faster pace in the late twentieth century than Las Vegas, the world’s gaming capital. Given its success, some urban leaders turned to gambling as an untapped attraction that would make their cities destinations for millions of Americans. In the 1990s, casinos opened in both East Saint Louis and Gary, cities that vied with Camden for the distinction of worst in the nation.51 Although they did not spur a revolution in the economies of the two cities, the casinos did provide much needed revenues for municipal treasuries and enable local officials to finance public services. “From an economic development perspective, it has been a boon for us,” reported Gary’s economic development director, though he admitted that “Gary was in pretty dire straits when they came in.” According to the director of Gary’s chamber of commerce, the casinos at least gave the city “something else to be seen as besides another buckle in the Rust Belt.”52
Another revitalization initiative of the 1990s aimed at attracting housing to the downtown area. By boosting the residential population of the central business district, cities would supposedly ensure twenty-four-hour vitality in the core and secure new customers for the remaining downtown businesses. There were reports of new downtown residents not only in Manhattan or such traditional bastions of gentrification as Boston, Philadelphia, Chicago, or San Francisco. The centripetal movement seemed to be occurring in a broad variety of cities across the country. By December 1998, fourteen buildings in downtown Birmingham, Alabama, housed apartment dwellers, and six more conversions of commercial structures to dwellings were under way. For example, floors eight through nineteen of a twenty-one-story former bank building were being transformed into condominiums, and eleven floors of a seventeen-story office tower were being renovated as rental apartments. “There is no tradition of living in the city like you find in older, East Coast cities,” commented a Birmingham leader. “We’ve found it’s a niche market, but a bigger niche than most people anticipated, and a growing one.”53 Developers in Denver were likewise finding customers for downtown living among the niche market of young professionals. Chic LoDo was the focus of the hottest real-estate action. “LoDo has legitimacy, it has currency, it can’t be cloned in the suburbs,” remarked one real-estate broker. “When someone can live anywhere they want, and they choose to live in Lower Downtown, it’s a statement about how the city is changing,” observed an architect.54 Similarly, Cleveland’s downtown warehouse district was becoming a desirable place to live, boasting over one thousand apartments as well as trendy restaurants and bars by 2001.55 In 2004 the Columbus (Ohio) Dispatch carried the headline “Demand for Downtown Living on Rise.” Reporting on the opening of the sales office for his in-town project, a Columbus developer said: “We were inundated with people interested in urban living.”56
In the 1990s, as in the 1980s, the young, childless, bohemian, artistic, and gay were the principal newcomers to the core, enjoying a lifestyle not available in Gwinnett County, Georgia, or Mesa, Arizona. One developer who had retrofitted former office space in Houston commented: “Sex is what sells the city. It’s where the single people are, where you go to a bar to meet them…. In Houston there are few places for people to walk around and promenade—downtown is the exception.”57 A new condominium complex in midtown Atlanta appealed to prospective buyers by advertising: “Extremely hip shopping and dining is only an elevator ride away. Just press G—very cool stuff awaits you on the ground floor.”58 The New York Times reported that Denver’s LoDo had “emerged as the city’s gallery and restaurant center, with 18 galleries and about 100 bars and restaurants.” A Coloradoan reinforced this image when he noted that LoDo was “going back to the old idea of Denver as the drinking capital for the hinterland.”59 Life in the core was for those who yearned for the sexy, the hip, and the bar scene, for a lifestyle far removed from Wal-Mart and McDonalds.
At the beginning of the twenty-first century, the rise in the core population brought hope to many central-city advocates. A Brookings Institution study titled “Downtown Rebound” found that in eighteen of twenty-four sample cities the downtown population had increased during the 1990s. Houston’s downtown population soared 69 percent, Seattle’s was up 67 percent, Denver’s increased 51 percent, and Cleveland’s rose 32 percent.60 Responding to this good news from the 2000 census, USA Today carried the headline “Downtowns Make Cities Winners.”61
Yet in absolute numbers, the downtown populations generally remained small, and the dramatic percentage increases reflected the very low base populations in 1990. In 2000 fewer than 12,000 Houston residents lived downtown out of a total city population of nearly 2 million. Cleveland’s central business district had fewer than 10,000 inhabitants, and downtown Denver was home to 4,230 people, about enough to keep one small supermarket in business. Moreover, the redevelopment of downtown commercial structures for housing use was a sign as much of failure as of success. Traditionally, few people had lived downtown because real-estate values were so high in the core as to price housing out of the market. Only big department stores, major banks, and well-heeled corporations could afford core real estate. By the 1990s, however, the big spenders no longer wanted many downtown properties, making it economically feasible to rent them as housing. Birmingham’s older skyscrapers were becoming dwelling units because they could no longer attract sufficient commercial tenants. This was true throughout the nation as older office space no longer commanded a market. In New York City, 45 Wall Street, an aging tower at the hub of the nation’s premier financial district, became residences, and in Philadelphia the same fate awaited the old stock exchange building. Cincinnati’s Shillito’s Department Store building, once the center of the city’s retailing, became ninety-eight dwelling units, and in Cleveland there were plans to renovate the grand old Statler Hotel as apartments after the converted hostelry had failed as an office structure.62 Many cities, then, were making the best of a bad situation. Structures designed for more lucrative economic uses were being salvaged to play a modest role as housing. In 1945 the idea of wasting space on Wall Street for residences or using a city’s largest department store for apartments was unthinkable. Wall Street and Shillito’s were too valuable to hand over to apartment dwellers.
Despite all the rebound hype so commonplace among urban boosters since the 1950s, the news from the older central cities remained decidedly mixed. In many aging cities, the signs of decentralization were unavoidable, but nowhere were they more apparent than in Detroit. Detroit had created the automobiles responsible for the transformation of the city, and poetic justice seemed to dictate that it suffer the most from the consequences of the auto-borne flight from the core. In 1998 the giant J. L. Hudson Department Store building was finally imploded after fifteen years of standing vacant. For those seeking additional relics of the urban past, downtown Detroit offered a number of vacant office buildings, most notably the thirty-two-story David Broderick Tower. In the mid-1990s, an architectural historian pronounced the city’s abandoned office structures “gray haunting monuments” and “the most depressing sight in urban America.”63 At the close of the century, the Motor City’s thirty-three-story Book Cadillac Hotel, with over one thousand rooms and five floors of ballrooms, also survived as an empty hulk, having welcomed its final visitors in 1984. In February 2001, Detroit’s last downtown movie theater closed, although the grand Fox Theater had been saved and renovated as a performing arts center. Meanwhile, many of the inner-city neighborhoods surrounding downtown had vanished or were in the process of disappearing. In 1995 there were 66,000 vacant lots in the city, and an urban observer noted that “vast parts of the city have reverted to prairie so lush that state game wardens export Detroit pheasants to the countryside to improve the rural gene pool.”64 “Detroit is reverting to a farm,” concluded a former planning director of the city.65
Perhaps most insulting to the once grand Motor City was the proposal by the photographer and urban commentator Camilo Vergara that “a dozen city blocks of pre-Depression skyscrapers should be left standing as ruins: an American Acropolis.” No longer the hub of southeastern Michigan, downtown Detroit should survive as a derelict relic of the American lifestyle destroyed by the metropolitan revolution of the second half of the twentieth century. For tourists, the Motor City downtown could evoke a lost past, an almost forgotten era of urban glory and grandeur. In 1995 Vergara found little but ruins left. “On the streets, wanderers and madmen sit on the sidewalks or push shopping carts,” he reported. “Large numbers of skyscrapers that were planned to last for centuries are becoming derelict; a cluster of semi-abandoned structures rises like a vertical no-man’s-land behind empty lots.”66
Most central cities could take pride in the fact that at least they were not as bad off as Detroit. Yet the adverse signs of decentralization marked many other inner cities and downtowns. By the late 1990s, the urban renewal shopping mall that was intended to revive central New Haven was largely empty, Macy’s having vacated its anchor store in 1993. In 2004 the downtown Lazarus Department Store in Columbus, Ohio, closed. Traditionally the city’s preeminent emporium and the unchallenged mecca of Central Ohio shoppers in 1945, the downtown outlet could not survive in the decentralized world of the early twenty-first century. Along Nicollet Mall in downtown Minneapolis, a five-story retail center named the Conservatory opened with fanfare in 1987 but was demolished a decade later. In 2000 the retail space in downtown Minneapolis’s giant mixed-use City Center was 25 percent vacant. At the beginning of the twenty-first century, government offices occupied much of the retail space in Town Square mall in downtown Saint Paul, and the city’s one remaining downtown department store was remodeling two of its five floors for offices.67 In 1996 Mary Tyler Moore came to the Twin Cities for a book-signing appearance that took place not in downtown Minneapolis, the destination for her television journey in the 1970s, but in the outlying Mall of America, the nation’s largest shopping mall. Television’s Mary Richards might have made it after all in downtown Minneapolis, but if the real-life Mary Tyler Moore wanted to market her book she needed to head for the edgeless city along Interstate 494, where the Mall of America and twenty-three hotels with 5,500 rooms beckoned to the dollars of consuming Minnesotans and out-of-state travelers.68
Moreover, Detroit was not the only city with tracts of empty land and thousands of abandoned structures. In 2000 there were about twenty thousand vacant lots in Philadelphia and more than thirty thousand abandoned dwellings. The Philadelphia Inquirer claimed that “there is no market demand for most of [Philadelphia’s] vacant land and buildings.” A visitor to Saint Louis would find expanses of unused land equal to those in Philadelphia or Detroit. Much of central Saint Louis consisted of nodes of development separated by fields standing idle. The Baltimore Sun observed that forty thousand empty row houses in the Maryland metropolis had “spread blight, crime, and despair across wide swathes” of the city. Similarly, the New Orleans planning director reported thirty-seven thousand vacant dwelling units in her city.69 Though abandonment was generally not proceeding at the devastating pace of the 1970s, an inventory of unused and unwanted land and structures testified to the continuing plight of many older hubs.
Even in the lotus land of southern California, social commentators were painting a grim picture of the urban future. In a number of widely read works, critic Mike Davis most notably presented an apocalyptic vision of Los Angeles. “We live in ‘fortress cities’ brutally divided between ‘fortified cells’ of affluent society and ‘places of terror’ where the police battle the criminalized poor,” Davis wrote. Focusing on the Los Angeles of gang conflict, police repression, social and economic inequities, and public warfare on the poor, Davis depicted the end-of-the-century city as a place of fear and seething tension, an environment ready to explode and worthy of destruction. Davis posited the existence of a “new class war … at the level of the built environment,” with Los Angeles as “an especially disquieting catalogue of the emergent liaisons between architecture and the American police state.”70 In other words, the glittering high-rise monuments to global capitalism so admired by many observers were actually stakes driven into the heart of the poor. The landmarks of wealth were also symbols of shame.
The social schisms of Los Angeles, the looming empty hulks of downtown Detroit, the fields of central Saint Louis, the young bar hoppers of LoDo, the Mall of America, the ubiquitous Wal-Marts and McDonalds, the gated communities, and the sprawling boomburbs all testified to the revolution that had swept across metropolitan America since 1945. The old notion of the city with a single dominant hub and a readily identifiable edge was as obsolete as the downtown department store. Americans had used their automobiles to escape the centripetal pull of the historic hub and had spread across the countryside. Throughout the latter half of the twentieth century, critics deplored this decentralization, identifying Levittown, shopping malls, big-box stores, and gated communities as symptoms and sources of societal decay, suburban neuroses, and environmental disaster. Yet most Americans seemed to disagree with these naysayers, investing in millions of suburban tract homes, filling outlying malls, and making Wal-Mart the world’s largest retailer. Despite all the paeans to traditional neighborhoods, urbanity, and the enriching diversity of the core, Americans left the central city so admired by Jane Jacobs and her ilk and bought into the lifestyle sold by William Levitt and Sam Walton. Many of those who remained in the central cities retrofitted the old buildings and old neighborhoods to suit their own needs. They created gay ghettos and Yuppie havens, displacing working-class taverns with restaurants and bars deemed “hip” or trendy. Thus in both the historic core and the emerging edgeless city, diverse groups mapped out sectors of the metropolitan turf and made them their own.
The result was not the uniform, banal sprawl that unperceptive critics deplored. Instead, metropolitan America became a vast, centerless, edgeless expanse with diverse zones adapted to various uses and lifestyles. Golden prewar suburbs such as Beverly Hills and Scarsdale survived, as did more modest postwar developments such as Levittown and Park Forest. Suburban Chinatowns were a short drive from Latino barrios, and southern California could boast of both homosexual West Hollywood and Leisure Worlds for retirees. Gentrification created communities of historic but renovated townhouses, art galleries, and expensive eateries. The jarring mix, the cacophony of communities, might not have appealed to planners and pundits, who seemed to believe that the good life required a set formula of front porches and busy sidewalks. But it accommodated the varied American population, and there seemed to be little desire among most Americans to return to the automobile-less, one-bathroom, un-air-conditioned, central-city apartment lifestyle that so many had endured in 1945. They had exploited unprecedented mobility, prosperity, and freedom to spread outward and fragment. Taking advantage of the ever-growing mass of automobiles, Social Security and government-guaranteed mortgages, liberation from conventions of age, gender, and sexual preference, and reduced barriers to immigration, they created a new metropolitan world far different from the constrained single-center city of the past. In response to changing lifestyle preferences, attitudes, and technology, a revolution had transformed metropolitan America. The city as conceived in 1945 no longer existed.