Chapter 2
A Fusion of Faith, Culture, and Politics

The Exceptional Fifties

We Americans can never quite let go of the 1950s. Time and again we return to that bygone era as if it represents some kind of benchmark by which we can measure how far society has progressed (race relations, women’s rights, gay rights are the usual yardsticks) or, alternatively, how far we have declined (religion, family stability, sexual mores are most often cited). Either way, we continue to revisit the Fifties as if through that exercise we might locate where we are today.

According to the genealogy of generations by which Americans tend to identify themselves, I was—and in some ways remain—a Fifties person. My preferences in music, literature, food, and drink—even the cut and fabric of my clothes—as much as in religion, politics, and sports, to name only the basic staples of a civilized life, all betray my roots in the cultural ancien régime. Which is to say that whenever I encounter the trivializing or, what is often worse, the patronizing of the Fifties, I instinctively rise to their defense.

The Fifties, as I think back on them, persisted well into the Sixties: dicing history into decades never made much sense. Minimally, they began with the inauguration of Dwight D. Eisenhower as president in 1953 and lasted through the assassination of President John F. Kennedy ten years later. As a distinct moment in American culture, one could argue, the Fifties embraced the entire twenty years between the end of World War II and 1965, when the first American combat troops arrived in Vietnam. Although Kennedy at his inauguration in 1961 spoke of “the torch” passing to a “new generation of Americans,” it was in fact an exchange of leadership between an old hero of World War II and a younger hero of the same war. In foreign affairs, the two political parties were more alike than different.1 Americans were in a new “cold war” with communism: Russian Sputniks orbited the earth and the threat of nuclear holocaust hung over America’s otherwise bright horizon. But at home, Americans were enjoying a period of high employment and low inflation after the recession of the late 1940s, ease of upward mobility, and a level of prosperity they hadn’t experienced since the 1920s. It was, as David Halberstam would later put it, an era of “general good will and expanding affluence.”2 In that the Fifties were exceptional.

The Fifties were exceptional for other reasons as well. In literature, the Fifties saw the arrival of novelists John Updike, Philip Roth, and Saul Bellow, now American classics, as well as the blossoming of the slightly older John Cheever, Norman Mailer, Flannery O’Connor, Ralph Ellison (Invisible Man), Vladimir Nabokov (Lolita), and J. D. Salinger, to cite just a few. And toward the end of the decade the Beat poets and writers emerged with the rebellious instincts of literary outsiders. Altogether, this literary outpouring remains unmatched by any subsequent generation.

The Fifties were also a golden age for American art. First-generation abstract painters like Jackson Pollock, Franz Kline, Willem de Kooning, Jack Tworkov, Mark Rothko, Barnett Newman, Clyfford Still, and Joan Mitchell emerged right after the war along with sculptors Louise Bourgeois and Louise Nevelson. Those who first showed in the decade of the Fifties included Robert Rauschenberg, Jasper Johns, Larry Rivers, and Helen Frankenthaler in New York and Richard Diebenkorn and Sam Francis on the West Coast. No longer was the avant-garde something that originated only in Europe.

In music, newcomers like Dave Brubeck, Sonny Rollins, Chet Baker, and vocalists June Christy and Chris Connor joined John Coltrane, Miles Davis, and numerous others in a flourishing of jazz, cool as well as hot, bop as well as Dixieland revival. Serious country music arrived in the artistry of Hank Williams. Folk music erupted in the later Fifties and on the pop front rock and roll was born. And of course it was in the Fifties that the nation was introduced to a phenomenon named Elvis. Classical music? In 1955, some 35 million Americans paid to attend classical music concerts—20 million more than the paid attendance at major-league baseball games. And on Saturday afternoons, 15 million Americans tuned in to New York’s Metropolitan Opera on the radio, out of a total population of 165 million.3

Social commentary was far from bland. I think of the put-down humor of stand-up comics like the wild and profane Lenny Bruce, the in-your-face ethnic barbs of Don Rickles, and the hot-wired monologues of Mort Sahl at the “hungry i” club in San Francisco. By comparison, Jay Leno and David Letterman were genial late-night jokesters. In the Fifties, too, there was a wealth of public intellectuals who commanded attention and a spread of journals from the Partisan Review to The New Yorker, where their work regularly appeared. We read them because we felt we had to.

In politics, the early Fifties were dominated by the Korean War and by the search for communists at home, with its attendant blacklistings and congressional hearings. The rise and precipitous fall of Senator Joseph McCarthy was the central drama, although only 10 percent of Americans tuned in to the Army-McCarthy hearings on television. But the more consequential domestic drama was the nation’s confrontation with American apartheid.

That drama unfolded episodically: in 1948, President Harry Truman desegregated the armed forces; in 1954, the U.S. Supreme Court outlawed segregated schools, and three years later President Dwight D. Eisenhower would send federal troops to Little Rock, Arkansas, to enforce that decision. Meanwhile, a pair of Negro nobodies, Rosa Parks and Martin Luther King Jr., ignited the civil rights movement with the yearlong bus boycott in Montgomery, Alabama. That was in 1955. My intention here is not to claim the civil rights movement solely for the Fifties, though that is when it began, but to remind the Sixties generation that the burden of confronting their own racism—and the moral challenge to change both laws and hearts—fell disproportionately on their Fifties parents and grandparents, the more so if they were working-class whites. That is the other, often-neglected side of “the struggle,” a subject I will address more fully in Chapter 4.

In short, a Fifties person has good reasons to regard the postwar period as a time of cultural achievement and social transformation. But this is not how the Fifties are usually remembered. According to the now-conventional wisdom, the Fifties was a bland interlude in American history, a period of social complacency and lemming-like conformity.4 So far as I can see, there are two reasons for the persistence of this cultural stereotype.

The Bogey of Conformity

First, this is how the Sixties generation has chosen to remember the Fifties, and its memory long ago congealed into accepted fact. That generation of American children was the first to be reared on television, and what they learned there about adult life was shaped by idealized family sitcoms like Ozzie and Harriet, Leave It to Beaver, The Donna Reed Show, and Father Knows Best. Once the Sixties kids came of age—in unprecedented numbers—this tele-mediated impression of the Fifties as conformist, politically silent, and complacent became the accepted anodyne to the social and political turbulence that marked their own entry into young adulthood. But whatever the precise mechanism of their transference and the reasons for it, the boomer generation has marched into retirement still convinced that everything that happened prior to their arrival on the scene was merely prologue to themselves as history’s main event.

Second, the postwar economic boom really did produce something new for thinkers to think about and critics to criticize. Beginning in 1946, the number of Americans who achieved middle-class status rose spectacularly: by 1973, the middle class had doubled in size and in real household income—a rare moment in the last half century when income grew more quickly at the bottom than at the top levels of society. The more Americans achieved middle-class status, the more they came to resemble one another in achievement, lifestyle, values, outlook, and—as I will describe—religion as well. In other words, the Fifties witnessed the flowering of a genuinely bourgeois society—and with it the bogey of conformity.

At least this much of the stereotype is true: Americans in the Fifties, especially those who took their cultural cues from popular magazines and books, were obsessed with the problem of conformity, much as later generations would obsess over diversity and multiculturalism. The loss of individuality—more often, the fear of losing it—is a long-standing trope in American letters, of course. But the engine of anxiety this time around was a series of influential and shrewdly titled analyses of postwar American society that set the terms of what we would now call the “cultural conversation.” Most of the writers were left-leaning sociologists who shared the conviction that America, like Europe (though for different reasons), was creating an undifferentiated society of “mass man.” And, as we will see, the postwar boom in church attendance was seen by some critics as proof of this mass-mindedness.

The conversation starter was David Riesman’s The Lonely Crowd, first published in 1950, and later transmitted to the wider public via a cover story in Time magazine, itself a reliable mirror of enlightened middle-class opinion and concerns. Riesman’s superb work, repeatedly revised and much argued over, was followed by two volumes from C. Wright Mills, White Collar: The American Middle Classes (1951) and The Power Elite (1956). Also in 1956, William H. Whyte Jr. published his widely discussed analysis, The Organization Man. In turn, many of these themes found literary expression in novels like Sloan Wilson’s hugely popular The Man in the Gray Flannel Suit (1955), John McPartland’s No Down Payment (1957), Cheever’s first novel, The Wapshot Chronicle (National Book Award, 1958), Richard Yates’s Revolutionary Road (1961), and, in the Orwellian mode, Ray Bradbury’s Fahrenheit 451 (1953), a fantasy about mass society induced by television and the suppression of books themselves.

Like many other university students in the Fifties, I read these books as they made their way into paperback and their cumulative impact struck me for a time as gospel truth. Collectively, they offered a narrative of decline in what Riesman called “the American character”: from a nation of “inner-directed” individualists to “other-directed” conformists. This was particularly evident in the army of white-collar workers who, according to Mills, could no longer find work that gave them personal satisfaction. The reason (Mills again) was that power in postwar America had become concentrated in the hands of an interlocking directorate of military, corporate, and political elites who ran the country. All this led to the emergence of an ignorant and apathetic population (not unlike the “mass society” of the Soviet Union, Mills ventured) that had ceded control over their lives to bloated bureaucracies that, in return, provided them with financial security and a higher standard of living—Whyte’s Organization Man.

Thus framed, the world beyond the campus seemed fraught with peril to those of us who were still in college. After all, we wanted to be individualists. Much of what I read seemed refracted in the difference I saw between my father and his father. My grandfather was a self-taught man who understood machinery the way a painter knows pigment. In midlife he was hired away from one manufacturing plant in Youngstown, Ohio, to be foreman of another in Erie, Pennsylvania. His job was to keep a square city block of machines in working order, and everyone from the president of the company on down deferred to the judgment of Will Woodward. My father left Penn State University after two years to become a salesman for the same company’s industrial rubber products—a decision that allowed him to ride out the Depression in comparative comfort. My grandfather, I figured, was an “inner-directed” craftsman: at home he made his own tools rather than buy them, and I still have an electric tool-sharpener he fashioned from a discarded washing machine motor. My father, by contrast, was at the beck of customers and could never have enough of them. A fiercely anti-Roosevelt Republican, he saw himself as the personable salesman who carried the entire fortunes of the company on his back. But to me his constant need to please his customers (“Before you can sell anything,” he once told me, “you first must sell yourself”) epitomized Riesman’s “other-directed” man.

The “Greatest Generation” Comes Home to Nest

Of course, history never feels like history when we are living it. Looking back, I want to ask, who exactly were these postwar “conformists” whom writers worried about and intellectuals fretted over? How might their generational experiences have shaped their outlook?

The oldest of them had lived through the Depression, when many families had to pool resources in order to survive. Later, they became beneficiaries of Social Security and other government programs of Franklin Roosevelt’s New Deal that were intended as a safety net for the jobless. Many were union members who owed their livelihood to “big labor’s” hard-won right to collective bargaining. (In the Fifties, one in three American workers was unionized, compared to one in seven at the century’s end.) Younger workers served in World War II, where they were taught to act as a unit if they hoped to survive. A lot of them didn’t: some three hundred thousand Americans died fighting in Europe and the Pacific. Of those who made it back home, huge numbers went on to get college degrees only because the government paid their expenses through the GI Bill. When IBM and other postwar corporate giants demanded loyalty, prescribing even what color shirts and suits their representatives should wear, these men saluted. In short, most Americans felt they had good reasons to look outside themselves to the federal government, the labor unions, and especially postwar corporate America for identity and support.

By the Fifties, these young veterans (Tom Brokaw would later hail them as “the Greatest Generation”) had settled down to nest—still another reason why later generations think of them as conformist. Between 1946 and 1956, the average number of births per woman rose to 3.8, producing the boomer generation that came of age beginning in the second half of the Sixties—and a housing boom to match. The map of America changed as developers bought up huge tracts of land and laid out planned suburban communities—though only white folks could move in. Commercial jets shrank the distance between the coasts and on the ground supermarkets stocked the aisles with fresh and frozen abundance. As average incomes rose and America’s war machine retooled for domestic consumption, advertising, with its arsenal of “hidden persuaders,” stimulated common consumer tastes. Automobiles grew lower, wider, and longer than boxy earlier models, and many suburban garages sheltered two of them. Television sets appeared in living rooms, though there wasn’t all that much to watch, and in more affluent homes summers were cooled by central air-conditioning. Kitchens sprouted amazing new “work-saving” appliances—electric toasters, whirling food mixers, automatic dishwashers, and eventually ovens that cleaned themselves, like cats.

It was amid such American domestic wonders, on proud display at an exhibit in Moscow of a model American home, that Vice President Richard Nixon and Soviet premier Nikita Khrushchev staged their famous “kitchen debate” in 1959. The Soviets were ahead of the United States in the “space race” but well behind—as Nixon knew—in housing and feeding their people. Domestic affluence was our proudest boast. Two months later, Khrushchev traveled to Coon Rapids, Iowa, to inspect the Garst family farm, which the Eisenhower administration had chosen as a prime example of American agricultural know-how. (As it happened, Khrushchev and his entourage were housed in my father-in-law’s hotel in Des Moines.) It was a theatrical setup, just like the kitchen debate, and the featured actors played their parts. “You know,” Roswell Garst told Khrushchev in a burst of Cold War bonhomie, “we two farmers could settle the world’s problems faster than the diplomats.”5

It is hard to remember now how deeply the cold war between the world’s two superpowers contoured the culture and politics of the Fifties. Both sides called it, euphemistically, “peaceful coexistence,” but in fact it was a fierce competition between two ideologies, two very different ways of organizing society, two military powers circling each other like scorpions in a bottle. Above all, political leaders on both sides understood that they represented two radically different belief systems, one officially atheist and the other manifestly religious. And we all knew which side God was on.

America as “God’s Footstool”

The Fifties, I want to argue now, were exceptional for yet another important reason, one that David Halberstam’s otherwise observant eyes completely overlooked: the decade was awash in a culture of belief.

No one at the end of World War II anticipated this outbreak of religious belief and belonging. After other wars, and especially among other peoples, victory and affluence often cooled religious fervor. Who needs God when we can provide for ourselves? But here, in the Fifties, the piety that sustained the war effort (and was celebrated in countless postwar combat movies) carried over to the cold war against “godless” communism. Faith and freedom, God and country, were joined at the hip. The motto of my own university neatly captured the nation’s mood: “For God, Country, and Notre Dame.”

In the course of the Fifties, membership in churches and synagogues reached higher levels than at any time before or since. So did new construction of houses of worship. For Gallup and other hunter-gatherers of the public’s opinion, the Fifties became the statistical benchmark against which the religious commitments of every subsequent generation would be measured—and of course found wanting.

In sum, during the postwar era, religion was not only embedded in the landscape—in small towns, urban ethnic communities, and the various religious subcultures I’ve described. It was also woven into the national ethos like the figure in a carpet. Rare was the religious sanctuary, Protestant, Catholic, or Jewish, that did not display the Stars and Stripes. As President Eisenhower famously put it, “Our government makes no sense unless it is founded in a deeply felt religious faith—and I don’t care what it is.”

In many ways, Eisenhower epitomized the Fifties’ fusion of faith, culture, and politics. A nominal Christian from a pietist Protestant background, Eisenhower never bothered joining a church during his long military career. But he did so after meeting the pious millionaires who funded Republican politics and deciding to run for president. Once elected, he assumed a priestly role in the White House by appealing to religion in building a national consensus. It was during his administration that “under God” was added to the Pledge of Allegiance and “In God We Trust” stamped on the nation’s currency. “America is the mightiest power which God has yet seen fit to put upon his footstool,” Eisenhower declared to a gathering of Protestant church leaders.6 Seizing the cultural moment, a young Billy Graham called for religious revival, as evangelists always do, but he and Eisenhower were preaching to a national choir. Although some religious leaders objected that the president was endorsing religious indifferentism, and others denounced his fusion of piety and patriotism, no one accused him, as critics later would President George W. Bush, of trying to create an “American theocracy.” On the contrary, most Americans barely noticed when, in 1955, the Republican National Committee went so far as to declare that Eisenhower “in every sense of the word, is not only the political leader, but the spiritual leader of our times.”7

In retrospect, it is obvious that Eisenhower’s appeals to religion were part of his effort to build a national consensus against the threat of communism—in China and Korea, as well as in the Soviet Union and its “captive nations” in Eastern Europe. To be American was to believe in God, seemed to be the message. But how was this generalized religious piety related to the specific faiths and boundaried belonging of the nation’s diverse religious communities?

Faith in Faith

This was the question Will Herberg set out to answer in Protestant, Catholic, Jew, a classic study that, like Eisenhower himself, has shaped our perceptions of midcentury American religion from the moment it was first published in 1955. Although Herberg subtitled his book An Essay in American Religious Sociology, he was not a trained sociologist. Rather, he was a theologically inspired social critic who aimed to explain the postwar religious revival in sociological terms that seemed, in effect, to explain it away.

Herberg recognized in the American landscape the footprints of the nation’s immigrant communities (he was, after all, ethnically a Jew), each bound by language, customs, and religion. As children from immigrant families assimilated, he argued, their inherited languages and customs gradually eroded so that religious identity became the last surviving expression of the nation’s “originating pluralism.” Where other intellectuals saw Americans dissolving into a single melting pot, Herberg discerned three kettles of convenience: Protestant, Catholic, and Jew. Under these generic and socially acceptable labels, he believed, the grandchildren of immigrants had found a way of retaining their forebears’ sheltering sense of group cohesion even as they merged in a common identity as Americans. In short, the postwar revival of religion could be explained as the way Americans chose to differentiate themselves within the mass of Riesman’s “lonely crowd.”

Reactions to Herberg’s book were swift and various. Jews recognized their own social experience in Herberg’s narrative and generally welcomed the parity he accorded them with the nation’s Christian majority. Catholics found in his book proof that they had at last overcome the nativist tradition of anti-Catholicism. Protestants, though, worried about the decline of denominational distinctiveness. As The Christian Century, the voice of the liberal Protestant Establishment, put it in an editorial, the “domesticated” religion Herberg described threatened to “let us all disappear into the gray-flannel uniformity of the conforming culture.”8

Of the three, only the Protestants caught the real drift of Herberg’s book. To become fully American, he was saying, was to embrace a bogus “civic faith” in freedom, democracy, and prosperity among other “supreme values” of what he called “the American Way of Life.” Worthwhile as these values may be, Herberg insisted, they were a pallid substitute for an authentic faith in God. On this point, he waxed passionately prophetic:

Yet it is only too evident that the religiousness characteristic of America today is very often a religiousness without religion, a religiousness with almost any kind of content or none, a way of sociability or “belonging” rather than a way of reorienting life to God. It is thus frequently religiousness without serious commitment, without real inner conviction, without genuine existential decision. What should reach down to the core of existence, shattering and renewing, merely skims the surface of life, and yet succeeds in generating the sincere feeling of being religious.9

“Secularization of religion,” he concluded, “could hardly go further.”10

Herberg was an intriguing foil to Eisenhower and his conflation of piety with patriotism. As a former Marxist theoretician and Communist Party member, Herberg understood that communism was a secular faith with specific beliefs and a demanding personal as well as communal discipline. He expected as much from those who believed in God and belonged to a community of faith. To Herberg, authentic religion was the kind that shook and sustained the biblical prophets, as his reference to “shattering and renewing” makes clear. By evoking the prophetic tradition, he set the bar for authentic religion very high—so high that I doubt that many believers in any period of American history could reach it.

Even so, Herberg’s lofty perch allowed those of us who read him to see just how much of the Fifties revival was really a “faith in faith” that fostered an instrumental use of religion. For example, one of the signature bestsellers of the Fifties (and still a hardy spiritual perennial) was Norman Vincent Peale’s The Power of Positive Thinking, which Herberg rightly criticized as a manipulation of Christian language in the service of psychological well-being. The genre is today more robust than ever. Had Herberg included pop music in his inventory of faith in faith, he might well have cited Frankie Laine’s lachrymose ballad of 1953, “I Believe,” which topped the national hit parade for five months. In this litany of praise for belief in belief itself, the only affirmation of God is “I believe that Someone in the Great Somewhere / Hears every word.” You can’t be more inclusive or noncommittal than that.

Revisiting Herberg is worthwhile because he was among the first to notice the importance of religious identity as an independent category worthy of sociological analysis. He also helps us see how Americans, even now, can simultaneously be both pervasively secular and persistently religious—still a puzzle to Europeans. And he clearly anticipated the public brooding over the American “civil religion” that erupted in the late Sixties and waxed again during the first term of President George W. Bush. Indeed, when Bush declared in defense of the war in Iraq that “liberty is God’s gift to man” he explicitly evoked one of the values that Herberg identified in “the American Way of Life.”

But it would be a mistake, I think, to view Fifties religion solely through Herberg’s eyes or those of any other contemporary commentator. Certainly he caught the Fifties’ drift toward a vaporous kind of religiosity that is with us as much as ever. But, lacking an interest in the empirical, he overlooked the enormous shifts in population created by the postwar economy and the impact these had on American religion. These mass migrations, as much as anything else, disrupted the older, embedded patterns of religious belief, behavior, and belonging, and enlarged the space for more generalized personal religious needs and expression.

Location, Location, Location

Beginning in the decade of the Forties, eight million Americans moved to the West Coast, many of them from small and religiously homogeneous towns in the South and Middle West. It was the largest migration in American history, involving nearly 20 percent of the population. Most of these migrants either toted their religion with them, like household furniture, or found new affiliations that better fit their taste. Either way, religion remained one important way in which they identified who they were, but increasingly that identity was a matter of personal choice.

Another internal migration saw a flow of impoverished rural Negroes from the segregated South move north for jobs in booming industrial cities like Cleveland, Chicago, Milwaukee, and Detroit. In these new locations, many Negroes gradually attained working-class status and, for the first time, enjoyed the right to vote. By 1950, for example, the black population of Michigan had already increased by 112 percent. These migrants, too, brought their religion with them, but found the doors of most northern churches closed to them. So were the doors to inner-city neighborhoods, which often amounted to the same thing. In the Fifties, the urban neighborhoods of the North were mostly Catholic enclaves, divided along ethnic parish lines, each with its own church, parochial school, convent, rectory, and, often as well, its own gym and auditorium—occasionally, even, its own high school and football stadium. These Catholics were not about to abandon these institutions that sustained their dense social networks. Nor were they about to welcome the black folk in. In this way did the maintenance of social distance and communal boundaries take on new and flagrantly racist meanings. Because they were not similarly invested in staying put, inner-city Protestants and Jews more easily could and did split for the suburbs, taking their congregations with them. In Cleveland, which was then the sixth most Jewish urban area, virtually the entire Jewish population relocated to the city’s eastern suburbs. For different reasons, those white Catholics and Protestants who stayed behind looked disapprovingly on what one Protestant moralist deplored as “the Suburban Captivity of the Church.”

If there really was a domestic “New Frontier” when John F. Kennedy took office in 1961, then surely that frontier was the burgeoning suburbs. Because of the Depression and World War II, there had been virtually no new housing built in the United States for nearly two decades before the war ended. Led by returning veterans, who needed no down payment and enjoyed home loans guaranteed by the government, new housing starts rose almost overnight from “only one per thousand people in 1944 to a record-high twelve per thousand in 1950, a number not equaled since.”11 Nearly all these new houses were single-family suburban homes, mostly ranch or split-level, each set on its own patch of lawn with a length of sidewalk along the street. Private backyard patios replaced the front porches typical of more communal, less nuclear older neighborhoods. From a nation of mostly city and small-town dwellers, America became by the late 1970s a nation where a majority of Americans called “suburbia” home.

As that word implies, the suburbs were seen by many Americans as utopia for those who never owned a home or thought they never could afford one—an anticipated state of happiness as well as a place to raise a family. In 1950, Time took notice of this utopian promise with a cover story, “For Sale: A New Kind of Life,” that featured super-builder Bill Levitt, developer of several mass-produced “Levittowns.” Four years later, both Life and McCall’s discerned in the child-centered suburbs what Life’s editors called the “domestication of the American Male.” And at decade’s end, The Saturday Evening Post suggested that the migration to the new suburban frontier was “motivated by emotions as strong and deep as those which sent pioneer wagons rolling westward a century ago.” By this time, idealized images of suburban family life were available every week on television via those sitcoms featuring benignly bumbling dads, forbearing moms, and their cute but devilish offspring. In them, no one (except Mom, in the kitchen) ever seemed to work.

It was myth, of course, as are all utopian dreams. But so, I want to argue, were longer-lasting images of the suburbs as dystopia. According to this counter-myth, the suburbs were cultureless wastelands of personal alienation and social conformity—Riesman’s lonely crowd mousetrapped in spiritual and emotional cul-de-sacs. This theme birthed an entire genre of films, from The Man in the Gray Flannel Suit (1956) through The Stepford Wives (1975) to Pleasantville (1998), a satire that has two teenagers transported to a fictive Fifties family sitcom, and American Beauty (1999). But none of these was more hateful than John Keats’s 1956 book, The Crack in the Picture Window, which offered a view of the suburban development as “a jail of the soul, a wasteland of look-alike boxes stuffed with look-alike neighbors…a place that lacks the advantages of both city and country but retains the disadvantages of both.” Not surprisingly, Keats named his fictive suburban family “the Drones.”

In short, the new suburbs of the Fifties became for critics tangible proof that postwar America was becoming hopelessly conformist. It never occurred to them that those who flocked to the suburbs wanted what they got. In the middle of the decade, William Whyte Jr. spent several months examining the inhabitants of Park Forest, Illinois, a “packaged community” of tract homes hewn out of cornfields thirty miles south of Chicago. They were, he felt, perfect specimens of “the Organization Man at home.” Most of the families were young and transient: for them, Park Forest was the first of what they expected to be a series of domestic way stops as the husband moved up the organizational ladder. It was also, Whyte reported in The Organization Man, “a hotbed of Participation. With sixty-six adult organizations and a population turnover that makes each one of them insatiable for new members, Park Forest probably swallows up more civic energy per hundred people than any other community in the country.”12

But there are other, more positive ways of describing life in the new suburbs of the Fifties. They offered not only new addresses but also fresh opportunities to create communities where none before existed. This meant forming churches as well as schools, Sunday schools and PTAs, fire departments, Rotary clubs, the Jaycees, and other volunteer civic organizations. In the cities and older suburbs, as well as in small towns, these associations had the benefit of established local habitations and habits—not to mention authority figures like longtime pastors and elders of the congregation. On the new suburban frontier, these human fixtures had to be produced from scratch.

But where, fifty years later, social scientists would have found virtue—namely, the creation of political skills and other “social capital”—in the necessity of Park Foresters to develop civic and communal organizations, Whyte found enforced “belonging” and social pressure to conform. Moreover, where Whyte saw a homogeneous white community of child-centered Organization Men and their homemaking wives, others might have marveled at the religious and (white) ethnic diversity of these migrants, and at their willingness to cross old social boundaries for the sake of building communal institutions.

Even so, Whyte did stumble onto one trend along the new suburban frontier that presaged a long-term transformation in American religion: growing indifference to doctrinal and denominational particularity among American Protestants. If it was not quite the “faith in faith” that Herberg reviled, it was a step in that direction. In his chapter on “The Church in Suburbia,” to cite one much-discussed example, Whyte focused on the establishment of the United Protestant Church in Park Forest, a cooperative venture among twenty-two large and small denominations. Since there wasn’t enough land to build a church for each denomination, the community was surveyed door-to-door to find out what neighbors wanted in a church. Denominational identity was fourth after minister, Sunday school, and location. This did not surprise Park Forest’s “village chaplain.” A former chaplain in the Navy, he was experienced in conducting generic Christian services aboard ships. What the survey showed him was a yearning for a useful church that would not let fine points of theology get in the way of forming “a sense of community.” “We pick out the more useful parts of doctrine to that end,” he said. “We try not to offend anybody.”13 Billy Graham could not have described the effect of his own nondenominational mass evangelism in better terms.

As it turned out, Lutherans and Episcopalians, as well as Catholics, Jews, and Unitarians, all built separate houses of worship in Park Forest. But the idea of creating a Protestant congregation based solely on the members’ ascertained wants and needs would eventually become a prized technique among Evangelical Christians in the Eighties and Nineties. It was part of what historian Martin Marty perceived in 1956 as “the New Shape of American Religion,” a give-the-customer-what-he-wants attitude that would eventually undercut the denominational diversity and loyalty that Protestants at the nation’s founding had created.14

In the Fifties, then, religion was embedded in the national culture as well as in the landscape—though, like minerals in the soil, particular religious traditions were deposited at different depths and levels of concentration. Understandably, most historians of American religion have focused on the generalized culture of belief under Eisenhower because that was a feature unique to the postwar era. Historians have also been greatly influenced by Herberg’s gathering of religious Americans into the tripartite paradigm of Protestant, Catholic, Jew, which contained a lot of truth. Most Americans did identify themselves that way.

At the same time, however, millions of Americans—often the same Americans—continued to identify with particular denominational traditions, living out their lives in communities boundaried by particular beliefs and behaviors that gave them a powerful sense of “place.” They rarely prayed with those outside their own fold, and except for weddings and funerals of friends and neighbors, they seldom entered churches not their own. In fact, well into the Sixties, we Catholics were forbidden by canon law to attend Protestant worship services. And even in a town like Grand Rapids, Michigan, where most of the inhabitants were both Calvinist and of Dutch descent (and where many last names began with “Van”), the marriage of a woman from the Reformed Church in America to a man from the Christian Reformed Church was still considered a risky “interfaith” union.

I have stressed the persistence of religion’s social distancing not because I miss it—I don’t—but because it contradicts the image of the Fifties as exceptionally homogenized and conformist. On the contrary, I would argue, the age of Riesman’s “lonely crowd” was yet to come. In the Fifties, the real engines of conformity—mass media, mass advertising, mass markets and consumption, and, yes, mass evangelism—were up and running. But compared to subsequent decades there were still spaces where Americans could be resolutely different. Religion defined one of those spaces. So did region and ethnicity of the originating immigrant European variety to which religion was structurally tethered. On the other hand—and in any era there always is an “other hand”—the erosion of these boundaried spaces was already well advanced. Here I will only point to one event, often overlooked, that as both fact and metaphor divides the America of the Fifties from the America of ever after.

On July 16, 1956, Congress passed a bill that mandated the taxes (four cents on every gallon of gasoline) necessary to build the Interstate Highway System. Like the building of the transcontinental railroad a century earlier, the interstate system was intended to bind the country together for economic, military, and social purposes. Truckers could reach markets faster with producers’ goods. The military could move convoys of equipment and troops, just as the Germans had over the autobahn during World War II. And for the increasingly mobile Americans of the postwar era, the interstate promised greater freedom to “see the USA in your Chevrolet,” as Dinah Shore urged again and again on that other emergent transcontinental highway of the Fifties: television.

Like other technological innovations, the creation of the interstate—now nearly forty-seven thousand miles long—birthed unintended social consequences. When I was young and a reluctant passenger in my father’s big blue Buick on his salesman’s trips south from Cleveland, we would travel two-lane roads that ran straight through every small central Ohio town, linking them like beads on a rosary. We’d stop on Main Streets where my father knew exactly where he could get a savory stew for lunch, or—his favorite—a pig’s knuckle sandwich. Most of those restaurants have disappeared, and so have some of the towns.

Today you can travel from east coast to west and never leave the interstate to eat or rest. The food along the way is fast and much the same, and so are the motel rooms. Venture off the highway and you find towns with abandoned centers. The hulking old churches still stand, but there are fewer people in them. The old hotels (the kind my father-in-law ran), the local shops and restaurants that once made each town seem like another country, are mostly gone now, and what little commerce remains has moved to franchised outlets in anonymous strip malls that long since sprouted up like concrete weeds along the exits from the interstate. Unlike the transcontinental railroad, which created new communities, the interstate system has gutted most of those it has not destroyed. And when, at last, you arrive at wherever you are going, you have the feeling that you’ve ended up where you began. Only the weather is different.

What I mean to suggest is this: just as the Interstate Highway System bound the nation together by overriding much that once made local communities and geographical regions deliciously different, so did the postwar massification and standardization of consumer products and appetites, of television entertainment and its audiences, of higher education, its curricula and students—even of the way Americans talked and cooked and raised their children—gradually erode the ecologies of localized habitations and habits that had once characterized American diversity. It was inevitable.

But as one set of cultural and social boundaries faded, others—chiefly race—came into sharper focus. Here I want to describe where and how I, a white kid from an all-white suburb, first awoke to the evils of American apartheid. It was a long way from the South. But as Martin Luther King Jr. wrote in his “Letter from Birmingham Jail” in the year of my awakening, “Injustice anywhere is a threat to justice everywhere.”

American Apartheid and Me

Before I became a journalist, I had met few Negroes, as they were called then (and will be called here, to preserve the tenor of those times). Growing up, the only black man I ever met was Fletcher, a burly, soft-speaking handyman my mother engaged whenever trees needed trimming, flowers replanting, or any other yard work that we kids were too young to do or my notoriously unhandy father couldn’t manage. No Negroes lived in Rocky River or, so far as I knew, anywhere else in Cleveland’s western suburbs. Even the “cleaning ladies” my mother periodically employed were white.

Were we racists?

Years later, when I reviewed Norman Podhoretz’s Doings and Undoings, a collection of his early essays, I was struck by his bravely candid “My Negro Problem—And Ours.” In that essay, written in response to the Negro writer James Baldwin, Podhoretz recounted the turf wars between the Jews and the goyim (mostly Sicilian Catholics) and the schwarzes that made just getting to school and back a fearful experience. Podhoretz was a poor, Depression-era Brooklyn kid and terror was the name he gave to his youthful experience of ethnic “diversity,” though there were moments of racial amity as well.

We suburban kids had none of that—no racial taunts in schoolyards to remember, no gang fights in the parks. In our experience of American apartheid, racism was a prejudice that you had to develop from a distance, but for real hatred we had no moving targets. You couldn’t grow up hating people with whom you never shared space or time. But you couldn’t get to know them, either.

Omaha in Black and White

There were only a handful of Negroes in my class at Notre Dame, and only one on the basketball team, though he was the standout player. There were none (that I can recall) in my law school class at Michigan or among the graduate students in English at Iowa. I did tutor a few black football players while at Iowa, but it wasn’t until I went to work with the Sun Newspapers, a string of weeklies in Omaha, that I got to know any Negroes. It was also where I learned the costs of northern segregation.

In 1962, Omaha was a city of 301,000, an ideal size for learning how economic and social structures function. Railroading (Union Pacific), meatpacking (Swift, Armour, Cudahy), insurance (Mutual of Omaha), utilities and construction (Peter Kiewit & Sons) were the major industries. Their leaders served on one another’s boards—a perfect local illustration of C. Wright Mills’s “interlocking directorates.” The social hierarchy was rooted in the Knights of Ak-Sar-Ben (Nebraska spelled backward), a faux medieval brotherhood of civic leaders and boosters that even Sinclair Lewis could not have conjured. At their annual ball, the board of governors crowned a member to reign as “King of Ak-Sar-Ben” alongside a Queen (usually a young woman of debutante age) plus a retinue of Princesses, Countesses, and Escorts. The whole apparatus was designed to incorporate ambitious upstarts into its clubby and conformist business circle, and to discourage the sons and daughters of local gentry from migrating after college. There were no Negro Knights, and only a few token Jews. To refuse to join Ak-Sar-Ben, as billionaire Warren Buffett famously did, was to thumb your nose at the Omaha establishment.

In those days Omaha billed itself as “the Gateway to the West”—never the other way around. Politically, it belonged to the western “cowboy” tradition of conservative individualism: personal autonomy, small government, states’ rights, free-enterprise capitalism, fierce anticommunism, and a wariness of Washington were its hallmarks. In 1963, Nebraska was Barry Goldwater territory and Republicans Roman Hruska and Carl Curtis were among the most consistently conservative (and intellectually challenged) members of the U.S. Senate. Two of the major events I covered freelance for New York magazines while in Omaha were the Christian Anti-Communism Crusade of Dr. Fred Schwarz, and an investigation into local live television broadcasting by the Federal Communications Commission. Typically, Omaha’s daily newspaper, the World-Herald, welcomed the first and feared the second as an ambush by the feds.

The World-Herald, which was sold throughout the state, was our outsize competition. Its outlook was insular, staunchly Republican, and so conservative that not until 1962 did the editors allow a liberal columnist—James “Scotty” Reston, syndicated by the New York Times—to appear on its op-ed page. Other than the Lincoln Journal Star, published in the state capital, the Sun was Nebraska’s only liberal voice. For those of us who arrived from east of the Gateway to the West, it was hard not to be liberal in a state as buttoned-up as Nebraska.

As the Sun’s newest recruit I spent nine months as editor of its North Omaha edition. North Omaha was the one quadrant of the city with no real commercial center or social hub. Breaking news—indeed news of any kind—was hard to come by in my territory, which included the poor white communities of East Omaha and Carter Lake. I wanted the Sun to be their voice.

A few blocks away, in a two-mile corridor along North Twenty-Fourth Street, was the Near North Side—the city’s Negro ghetto. There were twenty-six thousand Negroes in Nebraska at that time, and that’s where most of them lived. Malcolm Little—later Malcolm X—was born there in 1925, and in the early Fifties civil rights leader Whitney Young had apprenticed there as director of the Omaha Urban League. The Negro community had its own newspaper, the Omaha Star, but the paper’s lone reporter, Charlie Washington, graciously introduced me to the clergy and other leaders of the community. I wanted them to be my readers, too.

Some weeks, as I worked the streets, lugging my 4×5 Speed Graphic in search of pictures, it was almost impossible to find a story or picture worthy of front-page display. Eventually, I coaxed white and black shopkeepers to create Pioneer Days, a weekend festival of parades and music and sidewalk booths selling local arts and crafts. It gave me months of stories to print. A decade later, well after I’d moved on, it had expanded into a ten-day community celebration. Ak-Sar-Ben no longer exists, but a version of Pioneer Days is still celebrated every spring.

For the first time in my short life I was mixing regularly with people who were not at all like me. Now and then one of the Negro leaders I got to know would suggest that we and our wives meet for dinner at the Blackstone or one of the other downtown hotels. It never occurred to me then that these social occasions might also be a test: I didn’t know that five years earlier some of these establishments had refused to seat Negroes at their tables.

Our favorite couple was Dr. Claude Organ and his fetching wife, Betty. Claude was president of the Urban League and, as a surgeon on the faculty of Creighton University’s medical school, something of a pioneer himself. Tall and as physically imposing as Paul Silas, the all-American center on Creighton’s basketball team in those years, Claude was at age thirty-four the intellectual match of any man in the city. On one occasion he suggested we drive back for a nightcap at his home, his Betty with me in my car, my Betty alongside him in his. That night, standing in the Organs’ splendid house on an isolated bluff above the ghetto, at last it hit me: even this accomplished couple would not be welcome in most white sections of the city, even in our own middle-class neighborhood of modest houses on small lots. Two months later, in fact, when the pastor of our church announced that he had hired a Negro woman to teach in the parish school, he assured the congregation that she would not be looking to buy a house within the parish boundaries.

Omaha was like that, but so was most of the country.

In the late spring of 1963, I was pulled out of North Omaha to write the Sun Special, a weekly feature of several thousand words that allowed me to research and report whatever social issue editor Paul Williams deemed worthy of in-depth analysis. Like most of the Omahans I came to admire, Paul was from elsewhere (Kansas) and he examined the city as if he held it under a microscope. Under his direction, I investigated overlapping tax districts, rising health-care costs, children of the affluent, failing schools, and, on occasion, religion. “Lay your statistical base” was one of his mantras, and if that meant spending a week in civic archives, that is what I did. No one then used the phrase “civic journalism,” but that is what we practiced.

In the summer of 1963, the civil rights movement that had for so long seemed so far away erupted in Omaha. The city fathers thought it couldn’t happen here in the Gateway to the West. There were no Negro members of the City Council to alert them, no black Knights of Ak-Sar-Ben to offer advice, not even any white clergy close enough to the all-black Ministerial Association to warn of incipient militancy. By July a group of Negro clergymen had formed the “Citizens Civic Committee for Civil Liberties” (4CL), promising demonstrations, protests, pray-ins, and other forms of nonviolent direct action advocated by Dr. King. From then until I left for Newsweek exactly a year later, civil rights was my primary beat.

Nothing I had read in books about racism taught me more than what I learned by tracking this story. The Near North Side, it soon became apparent, was rather like the black townships I later visited in South Africa: only those who lived there knew much about it. The mayor immediately formed a Biracial Committee in response to the threat of civil disturbances, but its members had no data on Negro employment, education, or housing, the chief concerns of the 4CL. Since neither the World-Herald nor the local television stations were into investigative reporting, we had the field to ourselves.

It wasn’t easy laying the usual statistical base, but over the next few months, here is what we found. There were very few Negroes in white-collar jobs and almost none in the craft and trade unions, where membership was passed from fathers to sons. An examination of the city’s public school system revealed only seventy-seven Negroes on its payroll, more than half of them as building custodians and none as senior high school teachers. Part of the problem was lack of education: most Negro high school students never graduated and few of those who did enrolled in college. Creighton University, a Jesuit school with a solid reputation for color-blindness, reported that only twelve Negro students had graduated in the previous five years, and that eight of them had moved elsewhere to find jobs.

Housing was the most flammable issue. In a monthlong study of ghetto real estate transactions, I found that, compared to residents of white communities, Negroes were paying half again to twice as much for housing relative to value. Many of the houses were barely more than shanties, and two-thirds of the buildings, records showed, needed major repair. Most Negroes rented and were at the mercy of slumlords—black as well as white—who thrived on residential segregation. Population density was more than double that of the rest of Omaha. Those who could afford to move couldn’t. Where else were they to live? On the ghetto’s fringes, where I did interviews door-to-door, neighbors described how real estate agents pressured white homeowners to sell their houses below market value once a Negro family broke the color barrier—a practice called “block-busting.” I was a suburban kid getting an urban education.

It was also a moral education. The mayor’s committee proposed building public housing on the Near North Side as the answer to ghetto overcrowding. What the Negro clergy wanted, though, was an open-housing law that would ban discrimination by owners as well as by real estate agents. Omaha’s Realtors defended what they called a “Property Owners’ Bill of Rights,” which preserved an owner’s freedom to decide whom to sell or rent to. In practice, said the president of the Omaha Real Estate Board, “I would show a house in a white neighborhood to a Negro but I would divorce myself from a sale to a Negro in order to stay in business.”

I didn’t have to go far to know that wasn’t true. As it happened, the Negro teacher in our parish school did decide to move closer to her job: she wanted a house for herself and her sister, one with a separate suite for their aged mother. There was just such a house for sale across the street from mine and after sounding out the neighbors we invited the teacher to look it over. But the real estate agent refused to let her even cross the threshold.

Open housing was the one issue on which both the mayor’s committee and the black ministers looked to the city’s white clergy for moral leadership. Several Jewish spokesmen, who knew all about residential restrictions and gentlemen’s agreements, were vocal in support. Among Protestants, however, there was moral dithering.

The most prominent Protestant leader was Dr. Edward Stimson, who had once studied theology in Germany with the great Karl Barth, a boast no other minister in Omaha could make. Stimson was head of the city’s Human Relations Board and pastor of Dundee Presbyterian Church, where many of the city’s business leaders worshipped. Asked to lead an open-housing initiative, Stimson insisted that “as a Christian minister” he could not in conscience do so—not, he said, without “some means of screening out the morally undesirable.” Too many Negro families, he explained, tolerate “a permissiveness in matters we consider moral which would make them unacceptable in most white communities.” As an alternative, Dr. Stimson suggested that “we salt a few Negroes who would be excellent neighbors throughout white neighborhoods.”

Most clergymen I interviewed said the issue of residential segregation had never come up in congregational conversations. “I try to gear my sermons to the needs of our members,” a Methodist minister explained, “and that need is not pressing enough to supersede other needs.” Lutheran minister William Youngdahl, son of a former governor of Minnesota, made a modest proposal: he asked for ten couples willing to meet socially with couples from another Lutheran church in the ghetto. His church debated the proposal for months. The exchanges never happened and in the end Youngdahl was asked to resign. The general pattern was clear: most white Protestant pastors were not going to support an open-housing law for fear of alienating their own flocks.

The Catholic response was more nuanced. The archbishop of Omaha, who labored under no such fear of the laity, thought there ought to be “laws to give equal opportunity for decent housing to every member of the human race.” But any concrete action had to come from Father James Stewart, a young, overworked priest who ran the Archdiocesan Council on Social Action. Father Stewart drew up a tough open occupancy law, with possible jail sentences for violators, and presented it to the City Council. It never passed. In the view of the City Council president, the proposed law “bordered on atheism and sovietism.”

My last week in Omaha was exactly one year after the civil rights protests began. My last story for the Sun summarized the modest progress the mayor’s Biracial Committee had achieved: 22 new Negro teachers in the public schools, 116 jobs elsewhere, and no progress at all on open housing. In 1966 and again in 1968 there were riots in the streets. The black clergy were no longer calling the shots. Omaha was now mentioned along with Chicago and other cities as a site of urban unrest. Federal funds were propping up the local economy and the Gateway to the West was now a two-way exit on the interstate.

Father Stewart left Omaha about the same time I did. He went to Notre Dame to get a doctorate, married, and moved on to teach in Minnesota. Before he left, he asked me if I would sell my home to his replacement on the Archdiocesan Council on Social Action, a Negro married to a white woman. My agent refused to let him in the door, nor would he answer my complaints.

In May 1973, the Sun won the Pulitzer Prize for investigative reporting, the first weekly newspaper to be so honored. Paul Williams sent me the write-up on page two of the North Omaha edition. But the lead story on page one was about Pioneer Days. And scrawled across the top in red crayon Williams wrote, “You should have stayed.” The Sun folded for want of advertising three years later. Even the Pulitzer Prize and an infusion of Warren Buffett’s money couldn’t save it.

I wasn’t sorry to leave Nebraska, but I was glad I’d gone there. Omaha was where I learned what journalism is, though I remain a two-fingered typist. It was also a microcosm of all the social tensions, moral challenges, and ambivalent responses occurring all across the country. Never again would my work run so close to any one community’s grain. At Newsweek, our audience was a nation, our reporting from around the world.