CHAPTER SIX

THE WEB OF NUMBERS: A MESSAGE FROM THE CENSUS TO POLITICS

THE sixties had yet to fade off into the seventies when I realized that somehow, subtly, “the numbers” were changing my way of thinking—I was trapped in a web of numbers.

The realization came to me when I recognized that I was waiting for Thursday—“Bloody Thursday.” Every Thursday evening my television screen would strike out at me with the numbers on Vietnam: 168 killed, 212 wounded, 13 missing. Or: 207 killed, 400 wounded, 19 missing. Or: 45 killed, 102 wounded, 7 missing. I would stay home Thursdays to catch the numbers on death, then turn off; and my mood, my political thinking would be warped, one way or another, for the weekend.

My emotions, I found, were being shaped by statistics. Numbers were doing it to me. At the beginning of each month would come the unemployment figures—up to 5.8 percent, down to 5.4 percent, up to 6.0 percent, all seasonally adjusted to affect my mood, like air-conditioning. A week later would come the price-index figures—these were always worse, and my mood went up or down depending on whether things were getting worse faster or slower. Prices were up 0.7 percent in a bad month, up 0.4 in a good month. Then would come crime figures; and housing figures; and export-import figures; trade-balance figures; school figures; divorce figures; finally Gross National Product figures, and these always baffled me. What were they? What did they measure? In whose web was I caught?

Statistics had once been a clearly marked area of scholarship, where economists, sociologists and planners held intellectual squatter’s rights. Now the numbers were a new staple of journalism. The Bloody Thursday figures fitted into the middle pages of the newspapers, as did the numbers on traffic, schools and tobacco use. But the high-impact figures—unemployment, prices, crime—were front-page news everywhere, as well as natural stories for the television evening news.

Slowly, one tried to explore the numbers, for they had become the fashionable way for politicians to demonstrate a grip on reality. And one learned that there are real numbers and phony numbers.

The FBI numbers on crime, for example, were phony. No one could tell from them how grave the menace of domestic violence was to American civilization. FBI figures were bad not because the FBI was lying, but simply because it did not control its figures. National figures were put together by adding local figures collected by corrupt or efficient, slothful or diligent local police forces. If a fairly effective police force, like New York City’s, collected its figures honestly, it made New York seem like Death Capital of the nation. If a city like Dallas slapped its figures together haphazardly, Dallas glowed by contrast. And the reputations of both cities affected their politics. But national figures on crime were meaningless. Statisticians had an acronym for such figures: GIGO—Garbage In, Garbage Out.

On the other hand, wherever the Census of the United States certified data, the numbers had to be taken seriously, for the Census made a formidable effort to be accurate. Each month, for example, in whatever week falls the 19th day of the month, the Bureau of Census sends out 1,500 canvassers who in the five days of that week must conduct interviews in 50,000 households in 450 specific areas, asking who is at work in the family, who is seeking but cannot find work. Daily, their sheets are mailed to twelve regional data centers for coding. The tapes are then sent to the national processing center at Jeffersonville, Indiana, which by the weekend is ready to fly processed data to Washington. The Census Bureau meets the plane at the airport and rushes the data to the only four men in the Bureau permitted to work on them. By Monday, some time between nine and eleven, the Census telephones the Bureau of Labor Statistics, which sends two couriers, who must be personally known to the men delivering the data, and the couriers carry the information to the BLS analysts, who work in secret for another week. Up to this point, all unemployment data are as secret as CIA messages—fortunes can be made in the market with advance information; not even the President or the Director of the Census has access to the data. Then, finally, the following Monday, the BLS announces what has happened—how many are working, how many are jobless, whether things are getting better or worse. And with the news, if there is a sharp break up or down, the national mood changes. The stock market soars or plunges, bureaucrats and Congressmen gloat or shudder; Governors, Mayors, labor leaders, academics weave the figures through their speeches; perspectives change.

There is history behind the unemployment figures as there is behind all the major series of numbers the Federal, state and local governments put out. In the Depression of the early 1930’s no one knew even vaguely how many Americans were unemployed—was it 5,000,000, 10,000,000 or 15,000,000? And without the measure of the problem, the government could only fumble for solutions. It was the WPA which, in the Depression, first assigned unemployed scholars to work on a realistic facsimile of this information; and from their research came the concept of the Employment Act of 1946; and then the further collection and refinement of figures until the statistocracy of the United States became the world’s leading producer of social numbers.

By 1972, numbers had become part of the political culture, picked over by officials, scholars, businessmen, public-relations experts, all of them brothers in a new Jesuitry, skilled—in mimicry of the old—at making the worse appear the better numbers. Were 12 killed in Vietnam in a week 12 too many, and hence evidence of a failure of policy? Or was that number—down from an average 300 a week four years earlier—the evidence of progress and disengagement? Which of two equally accurate numerical statements should a politician use: that more than 100 people receiving over $200,000 a year in income paid absolutely no taxes at all? Or that of the 15,300 people making that much or more, 15,200 of them paid 44 percent, or an average of $177,000 each, to the Federal tax system?

Or, when talking about Black Progress, do you point out that the average black family income rose almost 50 percent in the previous decade, faster than white family income? Or that white families still average 39 percent more a year than black? Or how does one handle figures that strike at the belly—is the alarmist correct because, factually, beef costs more in the United States than ever before in history? Or are the complementary figures on consumption more relevant—in 1972, Americans ate nearly 88 pounds of beef and veal a year; in 1962, only 71 pounds; and in 1929, at the peak of the then-prosperity, only 50 pounds. Are the demand-and-consumption figures more operative, or the price-rise figures more indicative of what is happening?

Behind every table of figures lies a pattern, a concept, an idea of what someone sought to measure. But do old mind patterns, by which numbers are customarily arranged, still measure today’s America?

For example, in the largest pattern of all, the Gross National Product, whose numbers are estimated quarterly—is it a valid over-all measure of national well-being? How can numbers measure well-being if the concept of well-being itself changes from decade to decade? The concept of a national measurement of the Gross National Product was imposed in February, 1946, when the thinking of the postwar world impelled Congress to pass the Employment Act of 1946, creating a Council of Economic Advisers that would, in turn, decree the Gross National Product as a measuring figure.

“I was one of the key figures pressing for it then,” says Bertram Gross, professor in the Urban Affairs Department of Hunter College, New York. “Who knew then that pushing for growth would distort all human values and priorities? I’m against growthmanship now…. The concept of national accounting, this measuring technique, came out of mercantilism; you can trace it from Colbert, through Condorcet, through Keynes…. It leaves out the measuring of national resources entirely. Progress came in the old thinking by conquest of national resources, exploiting nature—the more you took out of the ground, the richer you were. But, really, are you richer if you have less coal, or oil, or copper in the ground at the end of the year? … Do you value the environment as an asset to be preserved, or something you cannibalize for accounting purposes? … And how do you measure household work, the time the woman puts in her house? Or household investment in equipment and appliances—it’s larger than industrial capital in our country by a margin of seven to five. GNP is a concept that worked for a certain time period. It doesn’t work in a pre-industrial society, because it can’t measure the large amount of non-market activities. And again, in this society of ours, this super-industrial society—we’re in a period where growth can’t be measured by GNP, because our growth is already largely in the non-market references.”

In short—was our way of looking at ourselves obsolete? What were we measuring when we measured America? And how much of the heat of political debate came from measures that were misread? Or measures that no longer fitted? Was politics trapped by numbers that came from the thinking of the postwar world, which thus imprisoned the thinking for tomorrow? Or did the numbers themselves describe the new American post-industrial world, providing American data for the pioneer of new social theory—as the data of the British census, a century before, had provided the data for the metaphysics of Karl Marx?

Whatever the philosophic answer, there was no other way to begin an understanding of the campaign of 1972 except to explore the data of the national census which underlay the campaign—the Census of 1970.

   One enormous fact dominated the Census figures, pushing through all its tables and indices—the collective decline of the American city, its anguish and turbulence. The postwar world had left the cities behind and gutted their industry and vitality. This story was to be almost entirely ignored in the Presidential campaign of 1972, giving it a quality of domestic irrelevance unmatched since the 1920’s.

To make the Census tell what happened to the city, one had to go first to the gross data.

On April 1, 1970, so reported the Census, the population of the United States was 203,000,000. It had grown from 179,000,000 people in 1960 by 24,000,000 citizens (almost half the population of Great Britain), and would grow further at 3,000,000 a year to reach 210,000,000 people by voting day of 1972. This growth was the second largest growth in numbers in American history—but, contrariwise, in percentages the numbers had grown by only 13.3 percent, the smallest percentage growth in American history except for the Depression decade. The United States was growing, yet its growth was shrinking.

To sift the story of the city out of these figures, one had to start with the past, the countryside and the farm. At the beginning of the Republic, only 5 percent of Americans had lived in cities. Now in 1970 the proportion was reversed—only 5 percent lived on farms. As recently as fifty years before, in 1920, 40 percent of all Americans had still actually lived and worked on farms. The old song of World War I had been more accurate in predicting the future than it knew: “How Ya Gonna Keep ’Em Down on the Farm, After They’ve Seen Paree?” The answer since then, decade after decade, had come in continuing flight; and in the decade of the 1960’s more people fled the farm to the city than ever before. In 1960, 15,635,000 Americans had still lived and worked on farms. But, said the Census of 1970, 40 percent of those had fled in the next ten years, leaving behind only 9,700,000!

With the shift, however, had come only the vaguest effort of political imagination to catch up with what this movement of men and women had done to America’s heritage of cultural values. The ethos of rural America had held that a man’s individual effort made all the difference—his trying hard controlled his rewards. If you plowed deep and sowed with care, you harvested richly; if you worked diligently, you could clear timber and enjoy lush fields, or clear rock and have good pasture. But in the city this ethos no longer necessarily works. Individual effort in the city is webbed with other people’s efforts in giant organizations; in the American city, as in the Communist world, your position of leverage in your organization can determine your reward as much as your effort; and the leverage of your group in politics determines what it gets as much as the group’s needs. Yet the old cultural values of farm and countryside, no matter what the Census told of reality, still persisted in 1972—not only the ethos of individual striving, but its kindlier memories of neighborliness, friendship, charity. Of all the Presidential candidates, only one, John Lindsay of New York, seemed to grasp that American cities were ungovernable by old cultural values.

The figures of the 1970 Census seemed to indicate that the movement of Americans from countryside to metropolitan center was irreversible. Of all America’s 3,124 counties in 1970, 2,169 had witnessed a net out-migration of their people. Of these, 1,367 showed not only net out-movement but an actual net loss in over-all population; of these, again, some two thirds had been losing population for thirty years, and some for half a century; and three states—North and South Dakota and West Virginia—had net over-all losses in population.

A map published by the Census Bureau in varying shades of red and blue—red for loss, blue for gain—showed the movement graphically. In terms of mileage and space, red dominated the map. Like an inverted triangle, the red of the emptying interior spaces of the country ran south from a broad base on the Canadian border, narrowing as it pushed down through the Plains states west of the Mississippi, throwing off red spurs east in Appalachia and the old Black Belt of the South, then reaching a pointed wedge at the Gulf in Texas.

The growth counties in America were equally clear. The 955 counties that gained by internal migration were found mostly in the city clusters that the Census lists as the 243 Standard Metropolitan Statistical Areas of the United States (SMSA’s)—those vast belts of touching cities, towns and suburbs which now create the dominant form of American civilization. And the new figures said:

§ 85 percent of the nation’s population growth had come in these metropolitan areas—and within them, 80 percent of the gain had come in their suburban rings.

§ Almost three quarters (73.5 percent) of all living Americans lived in these urban clusters; the clusters fringed the shores of the lakes and the oceans, and the figures said that more than half of all Americans now lived within fifty miles of the oceans or the Great Lakes. The clusters merged into larger megalopolises; of these the largest was the 450-mile belt from Boston to Washington, where lived 36,200,000 people, one sixth of the nation.

§ Put another way: Three quarters of all Americans lived on 1.5 percent of the nation’s land. Only 53,000,000 out of the 203,000,000 Americans could be described as living in the rural areas that covered the other 98.5 percent of America’s stretch.

§ In this changing pattern of American life, the West had replaced the Northeast as the most urbanized area of the country (by 82.9 percent to 80.4 percent); California was the most urbanized state in the Union, with 93 percent of all its people living in Standard Metropolitan Statistical Areas. By contrast, only three states were still unspotted by such an SMSA—Wyoming, Alaska, Vermont.

As for life within these great clusters, even the serious analysts and scholars of the Census Bureau worried about the definitions underlying their collections of numbers. “What is a suburb now?” asked one of the Census Bureau’s demographers when I pressed him for a definition. “We used to define a suburb as a place where people live who commute to work in the city. But now more people work in the suburbs than work in the city, jobs are growing faster there than in the city, the factories are leaving the central cities, but they want to stay in a metropolitan area.”

No one could define a suburb any longer by function except to say that it was a satellite community in a ring of other such communities around a desperate central-core city which provided the key services for the entire metropolitan area, delivered its culture, entertainment and thinking, and fed and housed the area’s poor on welfare. But the center city no longer provided the jobs for the poor, the illiterate, the unfortunate, the untrained by which, historically, it had grown great. The cities were left with the glories and the debris of a civilization ending.

Neither the Census nor any other figures could describe that macabre phenomenon of decay and fear which spread like cancer spots in half a dozen great American cities. The South Bronx in New York, for example, had become a two-by-four-mile human cesspool comparable only to Calcutta—wild dogs roamed it, drug addicts haunted it; by day and by night it was a place of peril, or, in the words of Dr. Harold Wise, founder of the Martin Luther King, Jr. Health Center there, it was “a necropolis—a city of death.” An abandoned hospital, its windows broken, its doors unhinged, reported by its silence that the city had simply been unable to provide police protection necessary for health care in the most underprivileged health area in any big city in the country. Tenements in street after street were boarded up; open lots glittered with broken glass and stank of refuse. Here lived 400,000 people, 65 percent Puerto Ricans and other Hispanics, the rest mostly black—and the whites withdrew farther and farther north as the spreading tentacles of decay and violence urged them out. There seemed to be no way for urban civilization to make the life of these people better; and the phenomenon was repeated, on a lesser scale, in Boston, in Philadelphia, in Detroit, in St. Louis.

The Census of 1970 could only deliver numbers which people might interpret as they wished. For example: Of the twenty largest cities in the country, nine had gained population in the decade of the sixties, eleven had lost population. But if one sorted out such numbers, two gainers among the nine winners were special cases: New York, which had gone up by 1.1 percent, from a population of 7,781,984 to 7,867,760 by the special magic of its unquenchable vitality; and Indianapolis, which had risen, statistically, by an incredible 56 percent, from 476,258 to 744,624, by a technical change of jurisdictional count. When one eliminated these two special cases from the winning column, the figures on cities slowly took on a strange clarity: Every single major city of the North and East had lost population—Chicago, Philadelphia, Detroit, Cleveland, Boston, Washington, Milwaukee, etc. And every single winner—Los Angeles, Houston, Dallas, San Antonio, Phoenix—was in the Southwest.

These numbers represented gross movement; but within them was the most important political story in America: White people, in millions, were leaving the big cities of the North and East; black people and Spanish-speaking people were replacing them. No national politician could examine such figures candidly or openly without instantly exposing himself to the corrosive charge of racism and prejudice. The civil-rights programs of the sixties had delivered much of honor and vast achievement; but they had placed the burden of progress on the people of the big cities; and wherever white people, caught in the clutch of such inexorable programs, could find a way out, they were fleeing the solutions imposed on them.

   Politics phrased the race confrontation in America in 1972 as “busing.” But “busing” was only a gingerly way of talking about the largest emotional and social problem of domestic life: How would the two races of the country live together in peace? Would blacks eventually dominate the big cities of the North, whites surrounding them in the suburbs to make of the SMSA’s huge bull’s-eyes of black and white? Should the civil-rights theories of the sixties be pressed further, even if those theories required constitutional reorganization of all the cities and metropolitan areas of the country? Mr. Nixon said, quite clearly, no. Mr. McGovern’s position was obscure. And neither could derive any guidance for the future from the figures of the 1970 Census.

The Census noted that black people numbered 22,600,000 of America’s 203,000,000, 11 percent of the national total.

Then it added neutrally: “In the Central Cities of the 12 largest SMSA’s, the black population increased 37 percent, while that of the whites dropped 13 percent.” It provided other material for thought, but entirely avoided opinion or projection.

If one started examining Census numbers of black and white, one had to begin, historically, in the South—the onetime land of slavery.

The new South of 1970, for the first time in a century, was gaining population by net in-migration—not only such traditional gainers as Florida and Texas, but also such states as Georgia, Alabama, North Carolina, Virginia. The net in-migration to those states, however, was a migration of whites—whites passing on their way south the counterflow of blacks to the cities of the North, the whites surpassing blacks by substantial numbers in their contrary flights.

The dimensions of the internal migration of the black people in the United States have been equaled in American history only by the migration here of the Irish in the depopulation of Ireland in the middle nineteenth century, or by the migration of the Jews out of Eastern Europe at the turn of the century. In 1940, 77 percent of all the black people in America had still lived in the South and were, in the eyes of Northerners, a rural Southern problem. By 1970, a generation later, 65 percent of American blacks lived in the industrial states of the North and West and had become in the eyes of most of the country a city problem. Moving at a rate of approximately 150,000 a year in the decade of the sixties, the 1,500,000 black migrants who left the South had pooled chiefly in five large states—New York, California, Michigan, Illinois and New Jersey; and, joining with the larger internal multiplication of the black communities already there, had increased black communities in a city like New York by 53 percent, Boston by 66 percent. Whether North or South, three out of five of all black people now clotted in the ghettoes of the central-core cities of a major metropolitan area. Indeed, the larger the city, the more densely blacks clustered there—by 1970, so the Census said, Negroes averaged 28 percent of the population of the central cities in metropolitan areas of 2,000,000 or larger.

Such figures, however, told little of the political or emotional impact on life in the big cities as the black movement proceeded. In 1960 only one major city had a black majority—Washington, the nation’s capital. Now there were four such cities—Washington, Gary, Atlanta, Newark. And in metropolitan areas over half a million large, seven more core cities had black populations greater than 40 percent of their total. Of these seven (Detroit, Baltimore, St. Louis, New Orleans, Richmond, Savannah and Birmingham), only Birmingham was expected to keep its white majority through the next decade.

The blackening of the cities was rarely talked about in public political dialogue; but it was obsessive where mothers gathered in neighborhood parks, where men gathered at bars, where young couples talked with each other about apartment-hunting. On the common tongue, the whole phenomenon was styled “tipping”: one block would go black; then another; then the neighborhood. In some inner cities, a major factory or mill would tip black, and young whites would look for work elsewhere. In this unspoken drama, it was the school system always that set off anxieties; if a local school tipped, the neighborhood would tip; if a city school system tipped, then ten or fifteen years later one could see the entire city beginning to tip. When politicians talked of “busing,” they were obliquely talking, as everyone knew, of tipping. And tipping was a phenomenon limited not just to big-city neighborhoods. When tipping began, even in small cities, it developed its own acceleration. Of the ten cities in the U.S.A. with the highest percentage of black population, no less than four were small California communities of less than 50,000 population which had had white majorities in 1960 and tipped by 1970—Compton (71 percent), Westmont (80.6 percent), Willowbrook (82.3 percent), Florence-Graham (56 percent).

Conscience, violence and determined government action had in the sixties finally begun to open opportunities for some black people, and numbers reflected that, too. Median Negro family income had risen by 50 percent in terms of constant dollars in the course of the sixties—to $6,520 a year per family in 1970. Only 9 percent of black families had earned more than $10,000 a year in 1960—by 1970, 24 percent earned more than that. And young black families (those under thirty-five) were now averaging $8,900 a year, or 91 percent of white income in the same age group. Education of sorts was finally being delivered to American blacks in the big cities—56 percent of all young black adults (between twenty-five and twenty-nine years old) had completed high school, as contrasted with 38 percent ten years earlier. By the fall of 1972, the 727,000 young blacks in college were more than double the number in college in 1964—and they were 9 percent of all American college students; black illiteracy, counting those over fourteen years old, had dropped in a decade from 7.5 percent to 3.6 percent.

Only imagination could bridge the gap between such numbers and another more morbid set of numbers offered by the decade’s change—those on the break-up of black family life. Decade by decade, for twenty years, the strain of life in central cities, and the alternative options offered there had incubated the dissolution of the older disciplines of marriage and family. In 1950, only 17 percent of black families had been headed by mothers without husbands; by 1960 the figure had jumped to 22 percent; by 1970, despite the most intense efforts by the Federal government to grip the problem, the figure had jumped again to 26.8 percent (and to 28.9 percent in 1971). White families in such condition—whether by abandonment or tragedy—remained stable at 9 percent. Stated otherwise—in 1960 the Census counted only 900,000 fatherless black families; by 1970 the number had risen to 1,600,000. And most of these lived in the cities, on welfare. To be specific, in New York City the number had grown from 81,000 to 127,000.

It was these broken black families as much as anything else that set the politics of big cities in motion—for where they were thrust by poverty, or pooled by public housing, safety and tranquility broke down, causing stable black families to flee from the danger areas of abandonment, crowding white families farther to the fringes of the central city. Deprived by history of any opportunity to exercise discipline of their own over their own community, the American blacks, who suffered most from urban anarchy, could only turn via their newly educated leaders and newly independent black political figures to challenge all white politics. Government, they insisted, must do something to change the nature of the society they lived in since they could not do so themselves—and in 1970 such black leaders were at hand, men who proposed to use their leverage in conventional politics, as every other group had done before them, to advance what they considered the interests of their people.

By 1970 the social progress of the previous decade and the black concentration in the cities had opened political advancement on a realistic base to American Negroes for the first time in their history. The 5 black Congressmen of 1960 had multiplied to 12 black Congressmen in 1970 (and were to rise to 15 black Congressmen in 1972). A black Senator, the first since Reconstruction, had been elected in Massachusetts. Eighty-one black mayors had been elected (later, in 1972: eighty-six), among those the mayors of Cleveland, Newark, Gary (and Washington). The number of black state legislators had grown from 52 to 198 in 1970, local officeholders had multiplied to 1,567 (half of them in the South). And in such critical states as New York, Illinois, Pennsylvania, Michigan and California, caucuses of black officeholders had formed to get the maximum benefit from the explosive emotional issues for which they spoke.

It was this pressure—of growth, of migration, of family break-up—on a government and court system unable to find a new solution for the black condition which, second only to foreign policy and war, colored the politics of 1972. Within their own resources, the cities could not contain the blacks and other minority groups, or meet their problems except at the expense of older white communities which existed within the city.

The Census lumped all whites together. But these whites, under the pressure of black expansion within the city, had begun to cleave, too. Fancying themselves the victims of government which sacrificed their interests to blacks, they had begun to examine themselves as communities, as they had not since the days of their fathers’ arrival. Once, as recently as 1960 with the election of John F. Kennedy as the first Catholic to reach the Presidency, it had been hoped that the final melting of the melting pot was under way. By 1972 that hope had turned out to be obviously illusory. Italians, Poles, Jews, Irish, Orientals, Puerto Ricans, Mexicans, even some Scandinavians were beginning to think of themselves as groups with identities and heritages of their own, and restlessly began to wonder how long their communities and heritages could persist in the meat-grinders of the metropolitan areas.

In the shorthand of the politics of 1972, all were lumped together as “ethnics.”

* * *

Of the ethnics, the Census spoke obscurely, because Americans are less candid about their origins than about most matters.

The Census offered a bare-bones figure in 1970: Despite the growth of American population in the decade of the sixties, the number of Americans of “foreign stock” had apparently fallen: from 34,000,000 to 33,600,000. “Foreign stock” technically meant Americans who had themselves recently migrated here, or who had at least one immigrant parent. Seventy percent of this “foreign stock” was still European—but 30 percent of the new foreign stock was of other origin. Within the decade Mexican foreign stock had jumped by 34.7 percent, Chinese by 63 percent, other Orientals by 68 percent and Cuban stock by 352 percent.

These figures were, however, only the top layer of the ethnic strata of America—the identifiable layer of recent newcomers, with a special twist given such figures by the new immigration laws of 1965 inviting in hundreds of thousands of Orientals, Caribbeans, Colombians, Venezuelans. Beneath this layer of figures lay those of the mass migration of the turn of the century, choked off by the 1924 immigration act, which had by now created second-, third-, fourth-generation Americans whose origins the Census could pick up only by special surveys.

One such special survey had been made in 1969, as “ethnics” began to enter the common dialogue of politics. It was a disappointment. Americans are embarrassed to be asked at their doorstep “What are you?”—they like to think of themselves as Americans. Thus, the Special Survey on Ethnic Origin of November, 1969, could identify only those willing to talk about their origins. Of the then 200,000,000 Americans, the Survey could find only 75,000,000 willing to identify themselves by heritage. Germans and English led the list, with 19,000,000 each. Next came Irish, with 13,000,000. Then Spanish-speaking, with 9,000,000, then Italian, with 7,000,000. Then Polish, 4,000,000; and Russian, 2,000,000. The other 125,000,000 Americans (including 22,000,000 blacks) could not or did not want to be identified by heritage.

Yet the others were there: There were pockets of Swedes and Scandinavians somewhere in that mix, of Jews and French-Canadians, of Greeks, Swiss, Dutch, Czechs, Scots-Irish and half a dozen variants of the old white Protestants—as well as various kinds of blacks, each with a subtle, vital internal culture of its own. Only the antennae of politicians could pick up these varied cultural patterns, with their prides and fears; yet the demographic drama as these real, yet undefined communities mixed, joined, intermarried, had created the unique nature of American politics.

One should linger over the ethnic mix of Americans before going back to the numbers to understand how dramatically different Americans are from other nations—and how swiftly they are changing.

Until the Civil War, Americans were overwhelmingly Protestants of British stock, with a slight admixture of Germans and Irish—their values, customs, sports, laws, education all descended from Britain. Decade by decade since then, the old Colonial-stock Protestants have been shrinking in percentage. Their culture is still the matrix into which all other cultures fit, but this mortar, which cements the other cultures into the political tradition, grows always thinner. To give one striking example: Connecticut, the Nutmeg State, is considered a typically New England state, and this correspondent had so accepted it, until the dialogues of 1972. Stimulated by the new ethnic colloquy and by the Democratic debate over quotas, I asked Irwin J. Harrison of the Becker Poll in the spring of 1972 to run a shirt-tail question in Connecticut to one of his regular political polls: How many citizens of Connecticut could count as many as two of their four great-grandfathers as being born in Connecticut? The answer, I hoped, would tell me how many of the Nutmeg State’s Yankees, who fought the Civil War, had left descendants behind in today’s Connecticut. The survey came back reporting that only 2 percent (!) of the state’s citizens could be sure that two of their four great-grandfathers had been born there a century earlier. In short, in Connecticut, once the most rigidly theological Protestant state of the Union (the Congregational Church in Connecticut was not disestablished until well into the nineteenth century), the Protestant Colonial stock had all but vanished. Here was the most decided minority in the state.

The English-descended Protestants had not vanished in anywhere near such striking proportions elsewhere in the Union—they had simply moved away from the big cities, moved away from the Northeast, held their own in the South and West, and from such areas and the suburbs they watched, with either cultural distaste or anxiety, what “others” were doing in the cities they had abandoned. As to who the “others” were, the Census gave an over-all unpublished figure when pressed to do so—insisting all the while that any such figures must be guesses. In the discussions of the new immigration law of 1965, the Bureau of the Census had been required to prepare estimates of the origins of the 179,000,000 people in the United States in 1960. By 1972 their analysis of the population of 1960 was quite old—but it was the closest guess that one had to work with for the campaign of ’72. The Census guessed that of the approximately 180,000,000 Americans of that time, people of British-Scots-Irish stock could be counted as 61,000,000, or one third of the total—still the largest ethnic component of American life, but now, definitely, only the largest minority among many minorities. Next, by their guess, followed the Germans, with an estimated 26,000,000; then the blacks, at that time 19,000,000; then the Irish with 17,000,000; then a mixture of people from the Slav countries—Poles, Russians, Yugoslavs, Czechs—totaling 12,600,000; then the Scandinavian family—Swedes, Norwegians, Finns, Danes—for 7,400,000; then Italians, for 7,400,000; then the French and Belgians for 4,000,000; and Dutch 3,000,000. To which must be added the loosely defined Spanish-speaking—9,000,000. And as a final yeast in the leaven: the American Jews, for whom the Census finds no category but who are estimated by the American Jewish Committee as 6,060,000—but their numbers are scattered indiscriminately through the other categories.

Since no real data exist on the geography or community involvement of ethnic groups in America,2 no serious description can be made by anyone. What follows, therefore, is an impressionistic portrait of how ethnic groups fit into the American political jigsaw of the seventies, as might be described to a visitor from a distant land.

One must start with the largest minority, the old-stock Protestants. In the big cities of the North and East, such Protestants have vanished almost completely as a substantial voting group; their power in the cities of Chicago, Boston, Philadelphia, New York is more akin to that of the Manchu mandarins who governed the mass of Chinese for three centuries from enclaves of administrative residence. What splits exist in the Protestant mandarinate of the big cities are those of policy, of morality, of administration, of leadership rights or of economic interest. Moving west across the map from the Atlantic Seaboard, one begins to find the old-stock Protestants as a major voting force in upstate New York and Pennsylvania, and then, on the far side of the Appalachians, a continuing, sometimes overwhelming power group in small towns and suburbs. Old-stock Protestants vote, as most people do, their economic interests first; the general interest next; their ethnic interest last. They are registered predominantly as Republicans except in the South, where until recently they have registered overwhelmingly Democratic. Germans and Scandinavians are also generally believed to vote in the same pattern—by economic interest first, general interest next, lastly by ethnic interest or inheritance.

The Irish lie between the old-stock German-Scandinavian general-interest voters and the five major special-interest ethnic voting blocs. Suburban and assimilated Irish voters vote with the Protestants of the suburbs; the big-city Irish, ever diminishing, vote by ethnic inheritance.

The Big Five among the lesser ethnic blocs—the blacks, the Italians, the Chicanos, the Slavs and the Jews—vote by ethnic interest beyond all other interest. They are the weak; they are vulnerable; they vote for protection more than for hope. Any abbreviated description of these five voting ethnic blocs has to be a stereotyped distortion, for each one is split within. Yet, over the run of the sixties, one can see patterns in their voting. The most solidly Democratic voting bloc among this Big Five is the Spanish-speaking bloc; Spanish-speaking voters vote lightly—but they vote spectacularly Democratic, perhaps up to 90 percent. The next most solid Democratic voting bloc is the blacks, who, in the big cities, also vote 90 percent and up Democratic. (Since these two groups are at the bottom of the pecking order in American life, they sense their rivalry at the bottom and can, by deft political manipulation, sometimes be set against each other.) Next most Democratic among voting blocs are the Jews, although the Jews are going through an internal diffusion of politics of their own, separating rich and poor Jews traumatically. Then follow the Italians and the Slavs.

The pattern of evolution of politics among these groups over the sixties runs like this: John F. Kennedy in 1960 did best with Mexican-Americans—getting 85 percent of their vote; next best with the blacks; then with the Jews, Italians and Slavs; he barely held his own Irish with him. Lyndon Johnson, running against Barry Goldwater, caught them all: Jews, Negroes, Slavs, Italians, Spanish-speaking, to make the landslide of 1964. In 1968 came a break in the patterns. Humphrey swept Jews, Chicanos and blacks—he had served them all in both substance and eloquence; but the drift away from the Democrats among Irish, Italians and Slavs was marked.

By 1972 each of these vulnerable ethnic communities was in turmoil. They seek, above all, protection—whether it be protection for jobs or seniority in the factories, for Italy or Israel, or for the tranquility of their neighborhoods. Among them, when stirred, the Minority Five ethnic blocs may cast as much as 30 percent of the national vote. In key states like New York, their vote may reach as high as 40 percent of the total.

These people of the Minority Five are metropolitan people; they live in the great cities and the suburban ring. There is no adequate map of these communities, and there can be none, not even a leopard-spot approximation, for most of them are mixed in texture. One finds a Parma, Ohio, just outside Cleveland, an overwhelmingly Slav satellite city; a Mount Vernon, New York, heavily Italian; a Beverly Hills, California, heavily Jewish. But most are vari-colored intertwinings of racial strands, whose lawns and gardens and shopping centers all look alike, but for whom the thrusting menace is the growing black population of the inner city and the violence of the deprived. On the keyboard of their fears and hopes, politicians have always played campaign melodies. In 1972, more sensitive than ever before, the Americans of such communities were listening to what the politicians had to say. McGovern, in fact as well as in metaphor, read the numbers as black and white—and the whites, ethnic or not, gathered from his message that they must pay the price of the injustice and oppression which America’s past delivered to America’s present. Nixon, with longer experience in politics, read the ethnics as a threatened group—and promised protection.

   If the story of the city and its suburbs was the chief drama traced in the 1970 Census’ web of numbers, other substantial dramas in the numbers were useful in understanding the campaign of 1972.

§ After the city story came the youth story. There, on the age profile of Americans, like an orange passing down the throat of an ostrich, was the bulge made by the postwar baby boom. Americans between the ages of 14 and 24 had risen to 20 percent of the population, a larger percentage than in any decade since 1910. In 1960 there had been only 27,000,000 such young Americans; by 1970 there were 40,000,000 of them; and by 1972, so estimated the Census, 25,000,000 of them would be eligible to vote.

In examining the characteristics of these 25,000,000 new voters, one came across the education story.

Each decade in American life has a Sacred Issue to which all politicians must pay lip service. In the 1950’s, the Sacred Issue had been Defense and Anti-Communism. In the 1970’s, it seems certain that it will be the cause of Environment. In the 1960’s, however, the Sacred Issue was Education—and the Census of 1970, reporting on youth, measured the mania for education which had swept American society in the previous decade.

It was clear that what most people spoke of as the generation gap was rather an education gap. For example: The Census said that 61 percent of all white college students came from homes where neither parent had ever been to college! And there were millions of such students. Of the 14,300,000 Americans between the ages of 18 and 21 (up 52 percent in absolute numbers from 1960), approximately a third, or 4,500,000, were in college. Of all those in the 20-to-24 age bracket, 23 percent were still in college, against only 13 percent a decade earlier. Education had not neglected the others—of all youngsters of high-school age, 94 percent were still enrolled in high school. But there were differences among youth—those who left high school to go to work went to work at the hard trades; of the 16-to-19-year-old young men who had begun work, 56 percent were blue-collar workers; and an unidentified proportion more had been taken into the armed services, whose duties college youth largely escaped. Somewhere there would have to come a clash, or several clashes. Those in college, by the very numbers of the postwar baby boom, would have to jostle each other cruelly for the limited number of executive posts at the top; they had been trained, in the modern idiom of college education, for leadership, and they itched to command. But beneath them lay the proletarian youths of the big city, ethnics, blacks and majority-stock youths alike—and these proles were not quite the same thing as the leadership youths. The one group wanted to lead; the other group resented this command.

It was impossible to draw their profile from the numbers, for education was more than schooling. The life-style of the young, their dress, their mobility, their dreams, their rhetoric, were entirely different from those of their parents. Television had changed their vision of right and wrong, of war and glory. They had grown up in the longest war of American history, the only war America was destined not to win. They were mobile people, more than any other identifiable group in history except the desert nomads. (Of those people 22 to 24 years old who were counted in 1970, no less than 45 percent had changed their address in the previous year.) Moreover, the pill had changed their attitude to family life (in the single decade of the 1960’s, for example, the number of children under 5 who had been born to college-educated women had dropped by 55 percent).

§ As for women, the Census had only such rudimentary measures as fertility, income and childbearing to measure the feminine dynamic of 1972 politics. The abstract concepts had not yet been developed to measure the role of women in the post-industrial society as individuals apart from their relationship to men.

What little numerical information there was, however, buttressed the eyeball observation of the liberation of women as individuals. Women were learning to live by themselves; for example—in 1972 the number of divorced women living alone was 66 for every 1,000 women living with husbands, up by 57 percent since 1960. (Indeed, singles, both male and female, were increasing at a phenomenal rate—one-person households in the United States had reached 11,100,000 out of a total of 66,700,000 households.) Women worked more. They accounted for no less than 30,000,000 jobholders (38 percent of the total work force) as against 22,000,000 only ten years earlier. Women waited longer to get married—in the decade of the sixties, the number of women under 24 who chose to remain unmarried had grown by one third. They wanted fewer children, said the Census: in a questionnaire of young women between the ages of 18 and 24, 70 percent responded that they expected to have two children at the most. And they were getting their way. Of all the numbers on women, the most critically important figure was the birth rate. In 1970, the birth rate had been 18.2 births per 1,000 population, a postwar low. The next year, in 1971, however, the birth rate was to drop to 17.2 per 1,000, the lowest in all American history.3

There were many other measures that graphed the emerging post-industrial society and the backdrop for the politics of 1972. But two more quick measures will have to suffice:

§ America was becoming a society where the definition of work was changing. The proportion of people who made things, who dug minerals, spun textiles, drove trucks, forged metal, assembled parts, was diminishing. Only 35 percent, or 27,791,000, of America’s 78,600,000 workers actually made or moved things with their hands and muscle in 1970. The Census called these people, officially, “blue-collars.” Most other American workers were called “white-collars,” and the white-collars were rapidly increasing because America required services more than goods or food. “Take care of me, Daddy,” was no longer a child’s request—it was a social need in the complicated post-industrial world. Thus, the Census reported that in the decade of the 1960’s, white-collar workers had grown from 43 percent of all jobholders in 1960 to 48 percent of the total job force, or 37,997,000, in 1970. If to this figure were added the figures of strictly old-fashioned service workers, like barbers, beauty-parlor operators, psychiatrists and housemaids4 who came to an independent total of 9,712,000, then the proportion of American workers in service to those workers who hewed the wood, poured the metals, drilled for oil and harvested food and fiber had risen in the years of the sixties from 55 percent to 60 percent.

§ There was a last number to be separated out of the general employment numbers on how Americans made their living. It was a number overbearing in politics—the number of Americans who now worked for government.

The 1960’s had been a decade of prosperity, but for none had it been a more prosperous decade than for those who worked in governments—Federal, state and local. Not only had their salaries, pension rights and fringe benefits increased far faster than for those who worked in private employment. So, too, had their numbers. In the decade of the sixties, they had risen from 7,859,000 to 12,320,637, by nearly 4,500,000 or somewhat more than 55 percent, in a population that had risen by only 13.3 percent. Government workers lived better than most average Americans and they weighed heavily on the budgets of others; at the county, village and municipal level, their demand on taxes had risen by more than 100 percent—from $22.6 billion in 1962 to $56.7 billion in 1971. George Wallace had a visceral sense of what was happening when he denounced the “pointy-headed bureaucrats” eating up the taxpayers’ dollars; so did all the primary candidates in Wisconsin when they discovered what a burden local services were putting on property taxes.

Government employees were now, like the academic staff men of the campuses, a constituency bloc, and when one compared government employees to the self-employed, the contrast glared. Decade by decade, the number of Americans who worked for themselves had dwindled. But the drop in the self-employed in the decade of the 1960’s was almost as spectacular as the drop in the farming population. In 1960, 15.7 percent of all American men had been self-employed, doing their own work, paying themselves out of their own enterprises, substantially more numerous than government employees. By 1970, however, that figure had dropped to 10.2 percent of American men. Meanwhile the figure of those employed by government had risen from one eighth of all working people to one sixth.

Government, by the end of the decade of the Great Society, had become obtrusive. In the post-industrial world, Americans needed government more than ever—to clean their air and water, to preserve natural beauty, to balance the economy and provide jobs, to build roads and protect the streets, to educate the young and heal the sick. But how far should government go? Was government to be judged, like other services, on cost and performance? Or was it to be judged by feel, by what it did or did not do that made daily life easier and more pleasant?

And with these questions about government, one had to leave the numbers provided by the Census and explore other numbers more relevant to the moods that were to underlie the campaign of 1972.

   The campaign of 1972 was often to be compared with that of 1964, and with good reason. It was in the Goldwater-Johnson contest that the issue of government was first and most clearly raised. “Leaders of the present administration,” said Goldwater, “conceive of government as master, not servant. Responsibility has shifted from the family to the bureaucrat; from the neighborhood to the arbitrary and distant agencies.” To which Johnson had responded, “Government is not an enemy of the people. Government is the people themselves.”

And the people, in 1964, had voted for Johnson and more government by a landslide.

Goldwater might have phrased what came next as, “You ain’t seen nothing yet”—for after the landslide of 1964, the appetite, vision and reach of government expanded in a manner unprecedented in American history. Its visions were grandiose, its morality genuine, its goodwill robust.5 But, for the social engineers of the Great Society, visions, morality, goodwill were all ultimately measurable by numbers and statistics—both results and goals could be “quantified.” Quantification was one of those legacies left to American thinking by the Second World War. In that war, American intellects performed with stunning success by imposing the logic of science on the whirlwind of combat, defining by the most sophisticated digital and numerical analyses the way combat energies should be managed. In the postwar world, social scientists, too, became intrigued with numbers—numbers on crime, numbers on black/white classroom ratios, numbers on suburban change, numbers on housing square-footage, numbers on unemployment and manpower. Such numbers defined shortfalls of achievement or morality; and dollars could provide solutions. The underlying assumption of the best postwar American thinking was that with enough dollars, and enough goodwill, and quantifiable goals, domestic problems could be solved with steady forward movement and a minimum of political discontent.6

But they could not. Education, for example, was a problem which did indeed require dollars, but which dollars alone could not solve in the political context of American communities. In no area had American government made a greater effort than in education. Federal funds had gone up from $1.7 billion in 1960 to $9.7 billion in 1972; local funds from $9.7 to $27.6 billion; over-all national spending for education from $24.7 to $86.1 billion. In no area had the approval and applause of intellectual leadership been greater. Left and right agreed that education was a Good Thing. Trying to describe the change in mood of American life in the years since the Depression, Frederick Lewis Allen had written in 1952, in The Big Change, that now, in the postwar world, “even the most conservative citizens wanted … bigger and better schools.”

But exactly twenty years later, a parent in the East Flatbush section of Brooklyn, an orthodox Democratic-Liberal community, was quoted in The New York Times as saying, “For a long time it was hard to oppose a school per se. Schools were in the same category as motherhood. I grew up here. There’s nothing special about this place, but it’s good. And the school threatens that way of life.” The school in question—Intermediate School 387—was to be built in a neighboring district at a site where it was certain to draw white students out of the East Flatbush community and tilt the precarious racial balance of the neighborhood. The morality of numbers imposed such transfers of children by color; but to the inhabitants of East Flatbush the numbers meant nothing; the school system had become a leverage by which a remorseless government, in the name of public interest, imposed its distant will on their home community. Said a Mrs. Elfie Haupt, executive secretary of the local school board, in February of 1971, “This community has been battered by their experiments. First there was open enrollment, then enclave zoning, then rezoning …, then pairing…. They never follow up with services, so the experiments fail…. They frighten people.”

Across the country, whether in Flatbush, or South Boston, or Pontiac, or Dade County, or Richmond, or Los Angeles, this morality of numbers terrified people.

Public housing was another face of menace. By the broad measure, one could state simply that public-housing and public-road programs in the decade of the sixties tore down more houses of poor people, chiefly black, than they built. But where public housing was physically installed, it all too frequently brought disaster to the communities of reception. One of the national flagships of public-housing disaster was the famous Pruitt-Igoe project of thirty-three high-rise buildings in St. Louis, built by cost accountants, quantifying goodwill in maximum square-footage allocated to poor people—without ever inquiring what kind of poor people were going to live there. By early 1972 the $36,000,000 project, a showplace of the Great Society, had been so vandalized, so transformed into a place of private danger and random violence, that two of its eleven-story units had to be systematically demolished. But it was easier to erase such a public-housing project physically than to erase from the folk attitude the reputation of what public housing might bring to a quiet community in the way of social disorganization. By 1971 some twenty-five public-housing authorities in the country were described as being on the edge of bankruptcy. Opposition to public-housing projects had become nationwide. From Blackjack, Missouri, to Suffolk County, New York, citizens organized to keep planned housing out of their own communities. Appalled liberals called such opposition “racist”; but in community after community—Greensboro, North Carolina; Philadelphia; Flint, Michigan—middle-class blacks joined middle-class whites in court suits or public opposition to Federally funded projects that would implant in their own stable communities the social debris of the inner city.

If the mood in the early seventies was to reject, everywhere, housing planned or funded by public authority,7 the hostility did not extend to the more colorful transformation of metropolitan America by private enterprise.

The shopping mall, for example. It was a phenomenon whose impact on American culture and commerce begged for the attention of social historians. No set of American artifacts will baffle future archeologists more than these impermanent temples of commerce. Archeologists will be unable to reconstruct adequately the sense of vibrant life they stimulated—the balloon-tagged cars in the huge parking lots, the lost children, the overspilling bins of plenty, the shopping lust which they were designed to celebrate—any more than they will be able to reconstruct the life and sound of the South Bronx. This correspondent, an inner-city man, found them in 1972 in every primary state, California, Wisconsin, Florida, Massachusetts, in a phase of development that made the early postwar supermarkets of Long Island and California a memory of pre-modern times. They were developing across the country like fantasy-land—the bulging warehouses of groceries, produce, clothing, appliances, surrounded by plazas and fountains, arcades of gourmet and cheese shops tucked away between the enclaves of giant corporate distributors; jewelry stores, fashion boutiques, gardening nooks, baby centers, movie houses, sometimes playhouses, banking booths, night clubs, restaurants, and even chapels and churches, accreting at these marketplaces where, as in medieval times, the suburban citizens found it most convenient to congregate. It could be argued and, indeed, was argued by liberal economists analyzing the tax system that such shopping centers or shopping malls were as much an expression of government intent as was public housing—investment in shopping centers, under the tax laws, was among the safest tax shelters that government permitted to people with high incomes. The government probably lost more money in remission of taxes to such investors than it paid out for public housing with tax dollars.
But these temples of the merchandisers worked, and were embraced by suburbia, while public housing was abhorrent to them.

By 1971 there were over 13,000 shopping centers in the United States; in the next fifteen years their number, it was estimated, would more than double. And the inner cities might well tremble at what the numbers of the past decade forecast for the next. In 1958, the year before the first shopping center appeared outside Portland, Maine, its downtown businessmen had grossed $140,000,000; ten years later, with ten peripheral shopping centers in business, downtown Portland’s business had fallen to $40,000,000. Other cities had less accurate measures—yet whether it was Janesville, Wisconsin, or Rochester, Minnesota, or Selma, Alabama (the last of whose three downtown department stores closed in 1972), the shopping centers of the suburban belt were destroying the central city, by draining it of its commercial vitality.

The flight from the city, from its laws, its taxes, its numerical definition of morality, expressed itself in many ways. For total security from the reach of government, however, nothing could beat the mobile home—with the house on wheels, one could escape not only the city, but the past. If a roving family did not like its trailer park, or the group life of the community it found in passage, it hitched up and moved on. From an almost unknown category of American life in 1960, the aging Wandervögel of the mobile-home communities had grown to 6,000,000 in 1970. In 1960, only 103,700 mobile homes had been built in America; but mobile housing developed through the next decade into a golden growth industry that produced 415,000 in 1970 (and went on to an estimated 550,000 in 1972). Mobile-home owners—free from real-estate taxes, generally aging (average age: approximately fifty), fleeing violence, given the choice of new neighbors at every move until they found the right neighbors—had become a subculture in America.

The theme of flight could be explored in many other collections of data. For example, in the development of amusement parks. Many big-city amusement parks—the Luna Park at Coney Island, the Palisades Park of New Jersey—had begun to fold; they had become too edgy with tensions and frictions. But across the country, amusement centers which controlled and dominated their communities as their major industry were offering packaged escape and fantasy with such exuberant profit results that they had become investment blue chips. In 1960 there had been one Disneyland, in Anaheim, California; by 1972 it had received over 100,000,000 visitors—and its cheerful young public-relations officer, Ronald Ziegler, had graduated to become the stern public spokesman of Richard Nixon. Southern California by 1972 boasted the Lion Country Safari, Marineland, the Japanese Village and Deer Park, as well as the original Knott’s Berry Farm. In Florida there was now a Disneyland East at Orlando. There was a vast new amusement park in Houston (Astroworld). Entrepreneurs in Chicago, Atlanta and Dallas and, doubtless, a dozen other cities were developing plans for even more glittering escape centers.

Two contrary impulses seemed to be vying in American life. Escape from government and constraint was one—escape from war, from draft, from cities, from taxes, from pressure. Sports were flourishing in America as never before, the great journalistic success of the magazine world being Sports Illustrated, reaching its peak of profitability in the year when its great sister magazine Life died for having rubbed America’s nose too hard in the reality of the times. Yet with the impulse of escape from government came also the impulse of demand on government—for cleaner air, for purer waters, for better services, for mass transportation, for protection in the street from muggers and at the check-out counter from commercial scoundrels.

   Whatever the rhetoric of politics in 1972, it rang against a background of change. There could be no doubt that the candidates of 1972 were addressing a country different from that addressed by John F. Kennedy at his inaugural in 1961. “Ask not,” John F. Kennedy had said, “what your country can do for you—ask what you can do for your country.” Public spirit and social conscience had run low by 1972—a war had worn out the spirit, and random experimentation had worn out conscience. Few, except for the blacks and deprived, asked what the country could do for them, and fewer still asked what they could do for their country. Most, apparently, by mood and numbers, wanted their country to leave them alone—and leave the rest of the world alone, too. They wanted out of Vietnam, out of world affairs, out of the cities, out of the web of numbers. Whether the mood was deep or transitory, accidental or historic, reversible or permanent—that was what the campaign of 1972 was all about. Mr. McGovern persisted in the Lincolnian tradition of hoping an appeal to the better angels of people’s nature might summon them to new visions; Mr. Nixon proposed to deal with Americans as they are.

1 If one separated black from white, the black figures were even more startling. In April, 1960, 2,500,000 black Americans still lived on farms. By April, 1970, 64 percent (!) had fled, leaving behind only 900,000 black farmers, sharecroppers and peons to perplex Afro-American historians with their memories.

2 The best workaday running analysis of the data on ethnics in politics that I have found, and recommend, is The Ethnic Factor—How America’s Minorities Decide Elections, by Mark R. Levy and Michael S. Kramer (New York: Simon and Schuster, 1972).

3 Such figures fanned out when broken down by ethnics. Mexican-American women had the largest American families—an average of 4.4 children per woman. Black American women had an average of 3.6. At the low end of the fertility, or romance, scale were the figures on Italian and Jewish-American women. They averaged 2.4 children per woman, barely above the natural replacement level of 2.1 children per woman.

4 The plaint of the white middle-class housewife about the scarcity of servants was a very real one, according to the Census. The number of black and minority-group women willing to be domestics dropped in the sixties by 42 percent, from 898,000 to 520,000. White women willing to be domestics dropped also, but only by 29 percent, from 758,000 to 533,000.

5 A typical example of the rhetoric at high noon in the Great Society might be Lyndon Johnson’s presentation of the Model Cities program to Congress in January of 1966:

“Today I have placed before the Congress and before you, the people of America, a new way of answering an ancient dream. That dream is of cities of promise, cities of hope, where it could truly be said, to every man his chance, to every man, regardless of his birth, his shining golden opportunity, to every man the right to live and to work and to be himself and to become whatever thing his manhood and his vision can combine to make him.

“The new way of answering that ancient dream is this:

—to rebuild where there is hopeless blight

—to renew where there is decay and ugliness

—to refresh the spirit of men and women that are growing weary with jobless anxiety

—to restore old communities and to bring forth new ones where children will be proud to say ‘This is my home!’”

6 For a thoroughly obsolete picture of the thinking of American intellectuals and their impact on government policy at the high noon of the Great Society, the reader is referred to a series called The Action Intellectuals, Spring 1967, Life Magazine, written by the author of this book in a season of disordered admiration.

7 One of the first acts of the second Nixon administration, fulfilling what it saw as its mandate, was, in January, 1973, to suspend all new Federally funded public-housing projects across the nation. The program of Federal public housing had begun in 1949, blessed by Senator Robert Taft and opposed by then-Congressman Richard Nixon. It had built 800,000 housing units by the time Richard Nixon arrived in office; under his first administration more such housing was built than ever before in history, with the total now approaching 1,100,000 such units as old programs reach their end.