CHAPTER 3

BY THE NUMBERS

We knew a lot of things we could hardly understand.

—Kenneth Fearing1

Anyone who looks through enough statistics will eventually find numbers that seem to confirm a given vision. Often, the same set of statistics contains other numbers that seem to confirm diametrically opposite conclusions. The same is true of anecdotal “facts.” That is why evidence is different from mere data, whether numerical or verbal.

Scientific evidence, for example, comes from systematically determining—in advance—what particular empirical observations would be seen if one theory were correct, compared to what would be seen if an alternative theory were correct. Only after this careful and painstaking analysis has been completed can the search begin for facts that will differentiate between the competing theories. Seldom is this approach used by those who believe in the vision of the anointed. More typically, they look through statistics until they find some numbers that fit their preconceptions, and then cry, “Aha!” Others with different views can, of course, do the same thing. But only those with the prevailing views are likely to be taken seriously when using such shaky reasoning. This is only one of many misuses of statistics that go unchallenged as long as the conclusions are consonant with the vision of the anointed.

“AHA!” STATISTICS

Perhaps the purest examples of the problems of the “Aha!” approach are sets of statistics which themselves contain numbers completely at odds with the conclusions drawn from other numbers in the same set. This is not as rare as might be expected.

Infant Mortality and Prenatal Care

A widely reported study from the National Center for Health Statistics showed that (1) black pregnant women in the United States received prenatal care less often than white pregnant women and that (2) infant mortality rates among blacks were substantially higher than among whites.2 “Aha!” reactions in the media were immediate, vehement, and widespread. It was automatically assumed that the first fact was the cause of the second, that this showed American society’s “neglect” of its minorities, if not outright racism, and that what was needed was more government spending on prenatal care. According to a New York Times editorial, one-fourth of the infant deaths in the United States were “easily preventable” and were “primarily attributable to their mothers’ lack of prenatal care.” What was needed was “an increase in Federal spending on prenatal care.”3 The Washington Post likewise urged legislation to “provide vital assistance to pregnant women who cannot afford normal medical care.”4

In the very same report that showed racial disparities in infant mortality—indeed, on the very same page—statistics showed that (1) Mexican Americans received even less prenatal care than blacks, and that (2) infant mortality rates among Mexican Americans were no higher than among whites.5 Had anyone been seriously interested in testing an hypothesis, the conclusion would have been that something other than prenatal care must have been responsible for intergroup differences in infant mortality. That conclusion would have been further buttressed by data on infant mortality rates for Americans of Chinese, Japanese, and Filipino ancestry—all of whom received less prenatal care than whites and yet had lower infant mortality rates than whites.6 But, of course, no one with the vision of the anointed was looking for any such data, so there was no “Aha!”

In a reprise of the pattern of justification for government spending on the “war on poverty,” it has been claimed that money invested in prenatal care will prevent costly health problems, thereby saving money in the long run. Various numbers have been thrown around, claiming that for every dollar spent on prenatal care, there is a saving of $1.70, $2.57, or $3.38, depending on which study you believe. Marian Wright Edelman of the Children’s Defense Fund, for example, used the $3.38 figure.7 However, a careful analysis of these studies in the New England Journal of Medicine found such claims unsubstantiated.8 Even more striking was the response to these damaging findings:

Dr. Marie McCormick, chairman of the department of maternal and fetal health at the Harvard School of Public Health, said it was true that “justification of these services on a cost-benefit analysis is a weak reed,” but added that “people were reduced to this sort of effort” by politicians reluctant to spend money on services for the poor.9

In other words, if they told the truth, they wouldn’t get the money. Invalid statistics serve the purpose of allowing the anointed to preempt the decision by telling the public only what will gain political support.

Intergroup Disparities

Media and academic preoccupation with black-white comparisons permits many conclusions to be reached in consonance with the prevailing vision, but whose lack of validity would immediately become apparent if just one or two other groups were included in the comparison. For example, the fact that black applicants for mortgage loans are turned down at a higher rate than white applicants has been widely cited as proof of racism among lending institutions. The Washington Post, for example, reported that a “racially biased system of home lending exists”10 and Jesse Jackson called it “criminal activity” that banks “routinely and systematically discriminate against African-Americans and Latinos in making mortgage loans.”11 But the very same data also showed that whites were turned down at a higher rate than Asian Americans.12 Was that proof of racism against whites, and in favor of Asians?

Similarly, a statistical analysis of the racial impact of layoffs during the recession of 1990-91 turned up the fact that blacks were laid off at a higher rate than whites or others. Although this was a “news” story, as distinguished from an editorial, the story was sufficiently larded with quotations alleging racism that it was clear what conclusion the reader was supposed to draw. However, here again, Asian American workers fared better than white workers. Nor could this be attributed to high-tech skills among Asian Americans. Even among laborers, Asian Americans increased their employment at a time when white, black, and Hispanic laborers were all losing jobs.13 Yet no one claimed that this showed discrimination against whites and in favor of Asians.

Such Asian-white statistical disparities cause no “Aha!” because their implications are not part of the prevailing vision. In short, numbers are accepted as evidence when they agree with preconceptions, but not when they don’t.

In many cases, academic and media comparisons limited to blacks and whites—even when data on other groups are available in the same reports or from the same sources—may reflect nothing more than indolence. However, in other cases, there is a positive effort made to put other kinds of comparisons off-limits by lumping all nonwhites together—as “people of color” in the United States, “visible minorities” in Canada, or generic “blacks” in Britain, where the term encompasses Chinese, Pakistanis, and others. Whatever the rationale for this lumping together of highly disparate groups, its net effect is to suppress evidence that would undermine conclusions based on “Aha!” statistics, and with it undermine the prevailing vision of the anointed.

Perhaps the best-known use of the “Aha!” approach is to “prove” discrimination by statistics showing intergroup disparities. Once again, these inferences are drawn only where they are consonant with the prevailing vision. No one regards the gross disparity in “representation” between blacks and whites in professional basketball as proving discrimination against whites in that sport. Nor does anyone regard the gross “overrepresentation” of blacks among the highest-paid players in baseball as showing discrimination.

The point here is not that whites are being discriminated against, but that a procedure which leads logically to this absurd conclusion is being taken with deadly seriousness when the conclusion fits the vision of the anointed. In short, what is claimed by the anointed to be evidence is clearly recognized by them as not being evidence when its conclusions do not fit the prevailing vision.

Implicit in the equating of statistical disparity with discrimination is the assumption that gross disparities would not exist in the absence of unequal treatment. However, international studies have repeatedly shown gross intergroup disparities to be commonplace all over the world, whether in alcohol consumption,14 fertility rates,15 educational performance,16 or innumerable other variables. A reasonably comprehensive listing of such disparities would be at least as large as a dictionary. However, a manageably selective list can be made of disparities in which it is virtually impossible to claim that the statistical differences in question are due to discrimination:

1. American men are struck by lightning six times as often as American women.17

2. During the days of the Soviet Union, per capita consumption of cognac in Estonia was more than seven times what it was in Uzbekistan.18

3. For the entire decade of the 1960s, members of the Chinese minority in Malaysia received more university degrees than did members of the Malay majority—including more than 400 degrees in engineering, compared to 4 for the Malays.19

4. In the days of the Ottoman Empire, when non-Moslems were explicitly second-class under the law, there were whole industries and sectors of the economy predominantly owned and operated by Christian minorities, notably Greeks and Armenians.20

5. When Nigeria became an independent nation in 1960, most of its riflemen came from the northern regions while most of its officers came from southern regions. As late as 1965, half the officers were members of the Ibo tribe21—a southern group historically disadvantaged.

6. In Bombay, capital of India’s state of Maharashtra, most of the business executives are non-Maharashtrian, and in the state of Assam, most of the businessmen, construction workers, artisans, and members of various professions are non-Assamese.22

7. Within the white community of South Africa, as late as 1946, the Afrikaners earned less than half the income of the British,23 even though the Afrikaners were politically predominant.

8. As of 1921, members of the Tamil minority in Ceylon outnumbered members of the Sinhalese majority in both the medical and the legal professions.24

9. A 1985 study in the United States showed that the proportion of Asian American students who scored over 700 on the mathematics portion of the Scholastic Aptitude Test (SAT) was more than double the proportion among whites.25

10. In Fiji, people whose ancestors immigrated from India—usually to become plantation laborers—received several times as many university degrees as the indigenous Fijians,26 who still own most of the land.

11. Although Germans were only about one percent of the population of czarist Russia, they were about 40 percent of the Russian army’s high command, more than half of all the officials in the foreign ministry, and a large majority of the members of the St. Petersburg Academy of Sciences.27

12. In Brazil’s state of São Paulo, more than two-thirds of the potatoes and more than 90 percent of the tomatoes have been grown by people of Japanese ancestry.28

13. As early as 1887, more than twice as many Italian immigrants as Argentines had bank accounts in the Banco de Buenos Aires,29 even though most Italians arrived destitute in Argentina and began work in the lowest, hardest, and most “menial” jobs.

14. In mid-nineteenth-century Melbourne, more than half the clothing stores were owned by Jews,30 who have never been as much as one percent of Australia’s population.

15. Even after the middle of the twentieth century in Chile, most of the industrial enterprises in Santiago were controlled by either immigrants or the children of immigrants.31

Although these examples were deliberately selected to exclude cases where discrimination might plausibly have been regarded as the reason for the disparities, this in no way excludes the possibility that discrimination may be behind other disparities. The point here is that inferences cannot be made either way from the bare fact of statistical differences. Nor does it necessarily help to “control” statistically for other variables. Most social phenomena are sufficiently complex—with data on many variables being either unavailable or inherently unquantifiable—that often such control is itself illusory. That illusion will be analyzed as a special phenomenon which can be called the residual fallacy.

THE RESIDUAL FALLACY

A common procedure in trying to prove discrimination with statistics is to (1) establish that there are statistical disparities between two or more groups, (2) demonstrate that the odds that these particular disparities are a result of random chance are very small, and (3) show that, even after holding constant various nondiscriminatory factors which might influence the outcomes, a substantial residual difference remains between the groups, which is then presumed to be due to discrimination. Since essentially the same intellectual procedure has been used to “prove” genetic inferiority, the choice of what to attribute the residual to is inherently arbitrary. But there is yet another major objection to this procedure. Not uncommonly, as the gross statistics are broken down by holding various characteristics constant, it turns out that the groups involved differ in these characteristics at every level of aggregation—and differ in different proportions from one level to another.

The residual fallacy is one of the grand non sequiturs of our time, as common in the highest courts of the land as on the political platform or in the media or academe. At the heart of the fallacy is the notion that you really can hold variables constant—“controlling” the variables, as statisticians say—in practice as well as in theory.

“Controlling” for Education

A commonly made claim is that discrimination is so pervasive and so severe that even people with the same educational qualifications are paid very differently according to whether they are male or female, black or white, etc. Holding years of education constant is often illusory, however, since groups with different quantities of education often have qualitative differences in their education as well. Thus, when group A has significantly more years of education than group B, very often group A also has a higher quality of education, whether quality is measured by their own academic performance at a given educational level, by the qualitative rankings of the institutions attended, or by the difficulty and remuneration of the fields of study in which the group is concentrated. At the college or university level, for example, group A may be more heavily concentrated in mathematics, science, medicine, or engineering, while group B is concentrated in sociology, education, or various ethnic studies. In this context, claims that members of group B are paid less than members of group A with the “same” education (measured quantitatively) are clearly fallacious. Qualitative differences in education between groups have been common around the world, whether comparing Asian Americans with Hispanic Americans in the United States, Ashkenazic Jews with Sephardic Jews in Israel, Tamils with Sinhalese in Sri Lanka, Chinese with Malays in Malaysia, or Protestants with Catholics in Northern Ireland.32

Male-female differences in income are often likewise said to prove discrimination because men and women with the “same” education receive different pay. Suppose, for example, that we try to hold education constant by examining income statistics just for those women and men who have graduated from college. There is still a sex difference in income at this level of aggregation, and if we are content to stop here—the choice of stopping point being inherently arbitrary—then we may choose to call the residual differences in income evidence of sex discrimination. However, if we recognize that college graduates include people who go on to postgraduate study, and that postgraduate education also influences income, we may wish to go on to the next level of aggregation and compare women and men who did postgraduate study. Now we will find that the proportion of women and men with postgraduate degrees differs from the proportions with college degrees—women slightly outnumbering men at the bachelor’s degree level, but being outnumbered by men by more than two-to-one at the master’s degree level, and by 59 percent at the Ph.D. level.33 Clearly, when we compare college-educated women and men, which includes those who went on to postgraduate work, we are still comparing apples and oranges because their total education is not the same.

Suppose, then, that we press on to the next level of aggregation in search of comparability, and look only at women and men who went all the way to the Ph.D. Once more, we will discover not only disparities but changing ratios of disparities. Although women receive 37 percent of all Ph.D.s, the fields in which they receive them differ radically from the fields in which men receive their Ph.D.s—with the men being more heavily concentrated in the more mathematical, scientific, and remunerative fields. While women receive nearly half the Ph.D.s in the social sciences and more than half in education, men receive more than 80 percent of the Ph.D.s in the natural sciences and more than 90 percent of the Ph.D.s in engineering.34 We are still comparing apples and oranges.

Some specialized studies have permitted even finer breakdowns, but sex disparities in education continue in these finer breakdowns as well. For example, if we examine only those women and men who received Ph.D.s in the social sciences, it turns out that the women were more likely to be in sociology and the men in economics—the latter being the more remunerative field. Moreover, even within economics, there have been very large male-female differences as to what proportion of the economics Ph.D.s were specifically in econometrics—a difference in a proportion of ten men to one woman.35 In short, we have still not held constant the education we set out to hold constant and which we could have said that we had held constant by simply stopping the disaggregation at any point along the way.

While the disaggregation process must stop at some point, whether because the statistics are not broken down any further or because time is not limitless, the fatal fallacy is to assume that all factors left unexamined must be equal, so that all remaining differences in outcome can be attributed to discrimination. In other words, after disparities in the causal factors have been found at every level of aggregation—often in changing ratios from one level to the next—it is arbitrarily assumed that such disparities end where our disaggregation ends, so that all remaining differences in reward must be due to discrimination.
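To make the fallacy concrete, the sketch below uses purely hypothetical numbers: two groups are paid identically for every fully specified combination of degree level and field, yet “controlling” for degree level alone leaves a residual pay gap, simply because the two groups are distributed differently across fields within each degree level.

```python
# Hypothetical illustration of the residual fallacy. Both groups face the
# identical pay schedule for every (degree, field) combination, but their
# members are distributed differently across those combinations. All numbers
# are invented for illustration.

pay = {  # identical pay schedule for both groups
    ("PhD", "engineering"): 90_000,
    ("PhD", "sociology"):   55_000,
    ("BA",  "engineering"): 60_000,
    ("BA",  "sociology"):   40_000,
}

# Shares of each group falling into each (degree, field) cell.
group_a = {("PhD", "engineering"): 0.30, ("PhD", "sociology"): 0.10,
           ("BA",  "engineering"): 0.40, ("BA",  "sociology"): 0.20}
group_b = {("PhD", "engineering"): 0.05, ("PhD", "sociology"): 0.25,
           ("BA",  "engineering"): 0.10, ("BA",  "sociology"): 0.60}

def avg_pay(shares, degree):
    """Average pay within one degree level, weighted by the group's field mix."""
    cells = [c for c in shares if c[0] == degree]
    weight = sum(shares[c] for c in cells)
    return sum(shares[c] * pay[c] for c in cells) / weight

for degree in ("PhD", "BA"):
    print(degree, round(avg_pay(group_a, degree)), round(avg_pay(group_b, degree)))
# PhD 81250 60833 and BA 53333 42857: even "holding degree level constant,"
# group A out-earns group B at every level, because the field mix still
# differs, yet no one is paid differently for the same degree and field.
```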

Innumerable historical and cultural differences, found among many groups in countries around the world—as the numbered examples listed above suggest—make statistical disparities fall far short of proof of discrimination. Such data may be accepted as evidence or proof in courts of law but, logically speaking, such data prove nothing. They are “Aha!” statistics.

Mortgage “Discrimination” Statistics

In the studies of black and other minority mortgage loan applicants who were turned down at higher rates than whites, some attempt was made to control for nonracial variables that might have affected these decisions, by comparing minorities and whites in the same income brackets. However, anyone who has ever applied for a mortgage loan knows that numerous factors besides income are considered, one of the most obvious being the net worth of the applicants. Other data, from the U.S. Census, show that blacks average lower net worth than whites in the same income brackets. Indeed, even blacks in the highest income bracket do not have as much net worth as whites in the second-highest bracket.36 Controlling for income gives only the illusion of comparability. That illusion has been further undermined by the fact that a widely cited Federal Reserve study on racial disparities in mortgage loan approval rates did not control for net worth or take into account the loan applicants’ credit histories or their existing debts.37 Nor was “the adequacy of collateral” included.38

When a more detailed follow-up study was done for the Boston area by the Federal Reserve Bank of Boston, it was discovered that in fact black and Hispanic applicants for mortgage loans had greater debt burdens and poorer credit histories, sought loans covering a higher percentage of the value of the properties in question, and were also more likely to seek to finance multiple-dwelling units rather than single-family homes.39 Loan applications for multiple-dwelling units were turned down more often among both white and minority applicants, but this obviously affected the rejection rate more among the latter, since they applied more often for loans on such units.40 Even among those applicants whose loans were approved—and the majority of both minority and white applicants had their loans approved—minority borrowers had incomes only about three-quarters as high as whites and assets worth less than half the value of the assets of the white borrowers.41 Nevertheless, when all these variables were “controlled” statistically, there was still “a statistically significant gap” between the loan approval rate for minority loan applicants and white loan applicants, though substantially less than in the original study.

Whereas 72 percent of the minority loan applications were approved, compared to 89 percent for whites, when other characteristics were held constant 83 percent of the minority loan applications were approved.42 The remaining differential can be expressed either by saying that there was a residual difference of 6 percentage points in loan approval rates or by saying that minority applicants were turned down roughly 60 percent more often than white applicants with the same characteristics, since a 17 percent rejection rate is more than half again as high as an 11 percent rejection rate. The Boston Federal Reserve Bank report chose the latter way of expressing the same facts.43
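For readers who wish to check the arithmetic, the sketch below uses the approval rates quoted above to show how the identical figures can be expressed either as a 6-percentage-point gap or as a rejection rate more than half again as high; the report’s “60 percent” framing presumably reflects unrounded rates.

```python
# The same Boston Fed figures, expressed two ways. Approval rates are the
# ones quoted in the text: 83% for minority applicants (after controls)
# versus 89% for whites.
minority_approved, white_approved = 0.83, 0.89
minority_rejected = 1 - minority_approved        # 17% rejection rate
white_rejected = 1 - white_approved              # 11% rejection rate

gap_in_points = (white_approved - minority_approved) * 100   # 6 percentage points
relative_gap = minority_rejected / white_rejected - 1        # about 0.55

print(f"Gap: {gap_in_points:.0f} percentage points; "
      f"minority applicants rejected {relative_gap:.0%} more often")
# Prints a 6-point gap and a roughly 55 percent higher rejection rate from
# these rounded rates; the published figure of roughly 60 percent presumably
# reflects unrounded rejection rates.
```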

Was the residual difference of 6 percentage points due to racial discrimination? After finding minority and white loan applicants different on all the relevant variables examined, can we assume that they must be the same on all remaining variables for which data are lacking? One test might be to examine the logic of the discrimination hypothesis and to test its conclusions against other empirical data. For example, if there were racial discrimination in lending, and yet most applicants in all racial or ethnic groups were successful in obtaining loans, the implication would be that minority loan applicants had to be more credit-worthy than white applicants to be approved. And if that were so, then the subsequent default rates among minority borrowers would be lower than among white borrowers. In reality, however, census data suggest no racial difference in default rates among the approved borrowers.44

When the principal author of the Boston Federal Reserve Bank study, Alicia Munnell, was contacted by a writer for Forbes magazine and this clear implication was presented to her, she called it “a sophisticated point.” When pressed, she agreed with the point made by the Forbes writer, that “discrimination against blacks should show up in lower, not equal default rates—discrimination would mean that good black applicants are being unfairly rejected.”45 The following discussion ensued:

FORBES: Did you ever ask the question that if defaults appear to be more or less the same among blacks and whites, that points to mortgage lenders making rational decisions?

Munnell: No.

Munnell does not want to repudiate her study. She tells FORBES, on reflection, that the census data are not good enough and could be “massaged” further: “I do believe that discrimination occurs.”

FORBES: You have no evidence?

Munnell: I do not have evidence.… No one has evidence.46

This lack of evidence, however, has not prevented a widespread orgy of moral outrage in the media.

CHANGING ASSORTMENTS

One common source of needless alarm about statistics is a failure to understand that a given series of numbers may represent a changing assortment of people. A joke has it that, upon being told that a pedestrian is hit by a car every 20 minutes in New York, the listener responded: “He must get awfully tired of that!” Exactly the same reasoning—or lack of reasoning—appears in statistics that are intended to be taken seriously.

Claims that major industries throughout the American economy are dominated by a few monopolistic corporations are often based on statistics showing that four or five companies produce three-quarters, four-fifths, or some other similar proportion of the industry’s output—and that this condition has persisted for decades, suggesting tight control by this in-group. What is often overlooked is that the particular companies constituting this “monopolistic” group are changing.47 In short, there is competition—and particular businesses are winning and losing in this competition at different times, creating turnover. This simple fact, so damaging to the monopoly hypothesis, is evaded by statistical definition. Those with the alarmist view of a monopolistic economy define the percentage of sales by given businesses as the share of the market they “control.” Thus, they are able to say that the top four or five companies “control” most of the business in the industry—turning an ex post statistic into an ex ante condition. But of course the fact that there is turnover among these companies indicates that no such control exists. Otherwise, monopolistic firms would not allow themselves to be displaced by new competitors.

Perhaps the clearest example of how illusory the “control” of a market can be was a federal case involving a Las Vegas movie-house chain which showed 100 percent of all the first-run movies in that city. The chain was prosecuted under the Sherman Antitrust Act for “monopolization” of its market. However, by the time the case reached the Circuit Court of Appeals, one of the second-run movie chains had begun to show more first-run movies than the statistically defined “monopolist.”48 Obviously, if even 100 percent “control” by statistical definition is not effective, lesser percentages are likely to be even less so.

Much the same implicit assumption of unchanging constituents underlies many discussions of “the rich” and “the poor.” Yet studies that follow particular individuals over time have shown that most Americans do not remain in one income bracket for life, or even for as long as a decade.49 Both the top 20 percent who are often called “the rich” and the bottom 20 percent who are called “the poor” represent a constantly changing set of individuals. A study of income tax returns showed that more than four-fifths of the individuals in the bottom 20 percent of those who filed income tax returns in 1979 were no longer there by 1988. Slightly more had reached the top bracket by 1988 than remained at the bottom.50 For one thing, individuals are nine years older at the end of nine years, and may well have accumulated experience, skills, seniority, or promotions during that time. Other studies show similar patterns of mobility, though the data and the percentages differ somewhat.

A University of Michigan study, for example, found that less than half of the families followed from 1971 to 1978 remained in the same quintile of the income distribution throughout those years.51 This turnover of individuals within each bracket may well explain some strange data on those people labeled “the poor.” Nearly half of the statistically defined “poor” have air conditioning, more than half own cars, and more than 20,000 “poor” households have their own heated swimming pool or Jacuzzi. Perhaps most revealing, the statistically defined “poor” spend an average of $1.94 for every dollar of income they receive.52 Clearly, something strange is going on.

Just as people from lower income brackets move up, so people from higher income brackets move down, at least temporarily. Someone in business or the professions who is having an off year financially may receive an income for that year that falls in the lowest bracket. That does not make these individuals poor—except by statistical definition. Such people are unlikely to divest themselves of all the things that go with a middle-class lifestyle, which they will continue to lead as their incomes rebound in subsequent years. Yet the vision of the anointed is cast in such terms as “the poor” and “the rich”—and any statistics which seem to fit the prevailing vision of such categories will be seized upon for that purpose.

In keeping with this vision, the media made much of Congressional Budget Office data that seemed to suggest that the rich were getting richer and the poor were getting poorer during the years of the Reagan administration. This was clearly an “Aha!” statistic, in keeping with what the anointed believed or wanted to believe. Even putting aside the very large question of whether the particular individuals in each of these categories were the same throughout the eight Reagan years, the statistical definitions used systematically understated the economic level of those in the lower income brackets and overstated the economic level of those in the higher brackets. For example, well over $150 billion in government benefits to lower income people go uncounted in these statistics—more than $11,000 per poor household.53 At the other end of the income scale, the official data count capital gains in a way virtually guaranteed to show a gain, even when there is a loss, and to exaggerate whatever gains occur.

For example, if someone invests $10,000 and the price level doubles during the years while this investment is being held, then if it is sold for anything less than $20,000 at the higher price level, it is in fact a loss in real terms. Yet if the original investment remains the same in real value by doubling in money value as the price level doubles, the official statistics will show it as a “gain” of $10,000—and will correct for inflation by dividing this by 2 to get a $5,000 gain in real income. With such definitions as these, it is no wonder that the rich are getting richer and the poor are getting poorer, at least on paper. A will always exceed B, if you leave out enough of B and exaggerate A.
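The arithmetic of that example can be laid out in a few lines, using the $10,000 investment and the doubled price level from the text:

```python
# The capital-gains example from the text, in numbers: an asset bought for
# $10,000 merely keeps pace with a price level that doubles, yet the official
# method records a real "gain."
purchase_price = 10_000
price_level_ratio = 2                                  # price level doubles while held
sale_price = purchase_price * price_level_ratio        # $20,000: no real gain at all

nominal_gain = sale_price - purchase_price             # $10,000 "gain" on paper
official_real_gain = nominal_gain / price_level_ratio  # "corrected" for inflation: $5,000
true_real_gain = sale_price / price_level_ratio - purchase_price  # actually $0

print(nominal_gain, official_real_gain, true_real_gain)  # 10000 5000.0 0.0
```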

One of the offshoots of the preoccupation with “rich” and “poor” has been another definitional catastrophe—“hunger in America.” Here many advocacy groups put out many kinds of statistics, designed to get media attention and spread enough alarm to produce public policy favoring whatever they are advocating. The definitions behind their statistics seldom get much scrutiny. One hunger activist, for example, determined how many people were hungry by determining how many were officially eligible for food stamps and then subtracting those who in fact received food stamps. Everyone else was “hungry,” by definition. Using this method, he estimated that millions of Americans were hungry and produced documents showing the 150 “hungriest” counties in the United States.

Of these “hungry” counties, the hungriest county of all turned out to be a ranching and farming community where most farmers and ranchers grew their own food, where farm and ranch hands were boarded by their employers, and where only two people in the entire county were on food stamps.54 Because some people in this county had low money incomes in some years, they were eligible for food stamps, but because they were eating their own food, they did not apply for food stamps—thereby becoming statistically “hungry.” Again, studies of actual flesh-and-blood human beings have yielded radically different results from those produced by broad-brush statistical definitions. When the U.S. Department of Agriculture and the Centers for Disease Control examined people from a variety of income levels, they found no evidence of malnutrition among people with poverty-level incomes, nor even any significant difference in the intake of vitamins, minerals, and other nutrients from one income level to another. The only exception was that lower-income women were slightly more likely to be obese.55

Such facts have had remarkably little effect on the media’s desire to believe that the rich are getting richer, while the poor are getting poorer, and that hunger stalks the less fortunate. A CBS Evening News broadcast on March 27, 1991, proclaimed:

A startling number of American children are in danger of starving… one out of eight American children is going hungry tonight.56

Dan Rather was not alone in making such proclamations. Newsweek, the Associated Press, and the Boston Globe were among those who echoed the one-in-eight statistic.57 Alarming claims that one out of every eight children in America goes to bed hungry each night are like catnip to the media. A professional statistician who looked at the definitions and methods used to generate such numbers might burst out laughing. But it is no laughing matter to the activists and politicians pushing their agenda, and it should be no laughing matter to a society being played for suckers.

One of the common methods of getting alarming statistics is to list a whole string of adverse things, with the strong stuff up front to grab attention and the weak stuff at the end to supply the numbers. A hypothetical model of this kind of reasoning might run as follows: Did you know that 13 million American wives have suffered murder, torture, demoralization, or discomfort at the hands of left-handed husbands? It may be as rare among left-handers as among right-handers for a husband to murder or torture his wife, but if the marriages of southpaws are not pure, unbroken bliss, then their wives must have been at least momentarily discomforted by the usual marital misunderstandings. The number may be even larger than 13 million. Yet one could demonize a whole category of men with statistics built on such definitional catastrophes. While this particular example is hypothetical, the pattern is all too real. Whether it is sexual harassment, child abuse, or innumerable other social ills, activists are able to generate alarming statistics by the simple process of listing attention-getting horrors at the beginning of a string of phenomena and listing last those marginal things which in fact supply the bulk of their statistics. A Louis Harris poll, for example, showed that 37 percent of married women are “emotionally abused” and 4 million “physically abused.” Both of these include some very serious things—but they also include among “emotional abuse” a husband’s stomping out of the room and among “physical abuse” his grabbing his wife.58 Yet such statistics provide a backdrop against which people like New York Times columnist Anna Quindlen can speak of wives’ “risk of being beaten bloody” by their husbands.59 Studies of truly serious violence find numbers less than one-tenth of those being thrown around in the media, in politics, and among radical feminists in academia.60

Sometimes definitions are reasonable enough in themselves, but the ever-changing aggregations of individuals who fall within the defined categories play havoc with the conclusions reached from statistics. For example, the ever-changing aggregations of individuals who constitute “the rich” and “the poor”—and all the income brackets in between—raise serious questions about the whole concept of “class,” as it is applied in academia and in the media. Third-party observers can of course classify anybody in any way they choose, thereby creating a “class,” but if their analysis pretends to have any relevance to the functioning of the real world, then those “classes” must bear some resemblance to the actual flesh-and-blood people in the society.

What sense would it make to classify a man as handicapped because he is in a wheelchair today, if he is expected to be walking again in a month, and competing in track meets before the year is out? Yet Americans are given “class” labels on the basis of their transient location in the income stream. If most Americans do not stay in the same broad income bracket for even a decade, their repeatedly changing “class” makes class itself a nebulous concept. Yet the intelligentsia are habituated, if not addicted, to seeing the world in class terms, just as they constantly speak of the deliberate actions of a personified “society” when trying to explain the results of systemic interactions among millions of individuals.

Some people do indeed remain permanently at a particular income level and in a particular social milieu, just as some people remain in wheelchairs for life. But broad-brush statistics which count the transient and the permanent the same—as all too many social statistics do, given the much higher cost of following specific individuals over time—are potentially very misleading. Moreover, those on the lookout for “Aha!” statistics often seize upon these dubious numbers when such statistics seem to confirm the vision of the anointed.

The simple fact that everyone is getting older all the time means that many statistics necessarily reflect an ever-changing aggregation of people. Nowhere is this more true than in statistics on “income distribution” and the “concentration” of wealth. Younger adults usually earn less than middle-aged people. This fact can hardly be considered startling, much less sinister. Yet this simple reality is often ignored by those who automatically treat statistics on income and wealth differences as differences between classes of people, rather than differences between age brackets. But it has long been a common pattern that the median incomes of younger individuals have been lower and that people reach their peak earnings years in their mid-forties to mid-fifties. As of 1991, for example, people in the 45- to 54-year-old bracket earned 47 percent more than those in the 25- to 34-year-old bracket. The only age bracket in which one-fifth or more of the people consistently earned more than double the national average income in 1964, 1969, 1974, 1979, 1984, and 1989 was the age bracket from 45 to 54 years old. As of 1989, 28 percent of the people in that age bracket earned more than double the national average income, compared to only 13 percent of people aged 25 to 44.61 Looked at another way, just over 60 percent of the people in the top 5 percent of income-earners in 1992 were 45 years old or older.62 This is an age phenomenon which the anointed insist on talking about as if it were a class phenomenon.

In accumulated wealth, the disparity is even greater—again, hardly surprising, given that older people have been accumulating longer. As of 1988, the net worth of households headed by someone in the 55- to 64-year-old bracket averaged more than ten times that for households headed by someone in the under 35 bracket.63 Despite the enormous influence of age on income and wealth, statistical disparities are often equated with moral inequities when discussing economic differences. Yet the fact that a son in his twenties earns less than his father in his forties is hardly an “inequity” to be “corrected” by the anointed—especially since the son is likely to earn at least as much as his father when he reaches his forties, given the general rise of incomes over time in the American economy. Only by ignoring the age factor can income and wealth statistics be automatically translated into differences between classes.

Also ignored in most discussions of family or household income statistics—both favorites of those proclaiming vast inequities—is the simple fact that upper-income families contain more people than lower-income families. There are more than half again as many people per household in households earning $75,000 and up as in households earning under $15,000.64 That is in fact one of the reasons for their being in different brackets, since it is people who earn income, and more paychecks usually mean more income. There are more than twice as many income-earners in households earning $75,000 and up as in households earning less than $15,000.65 Families in the top 20 percent supply 29 percent of all people who work 50 weeks per year or more, while families in the bottom 20 percent supply just 7 percent of such workers.66

A declining size of both families and households over time67 means that intertemporal trends in household income can be very misleading, as are intergroup comparisons, since household size differs from group to group, as well as over time.68 Although Americans’ median household income was not appreciably higher in 1992 than in 1969,69 income per person rose from $3,007 in 1969 to $15,033 in 1992—approximately a five-fold increase in money income while the price index rose less than four-fold,70 indicating about a 40 percent increase in real income per capita. The fact that more individuals could afford to have their own households in 1992 than in 1969 was a sign of increased prosperity, not stagnation.
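That calculation can be roughly verified as follows; since the text gives the rise in the price index only as “less than four-fold,” the 3.6 ratio used below is an illustrative assumption rather than a figure from the source:

```python
# Rough check of the per-person income comparison in the text.
income_1969, income_1992 = 3_007, 15_033
money_ratio = income_1992 / income_1969            # about 5.0
assumed_price_ratio = 3.6                          # hypothetical "less than four-fold" rise
real_ratio = money_ratio / assumed_price_ratio     # about 1.4

print(f"money income up {money_ratio:.1f}x, real income up about {real_ratio - 1:.0%}")
# Prints roughly a 5-fold money increase and about a 40 percent real increase,
# under the assumed price-level ratio.
```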

For blacks, whose family and household size have been declining especially sharply, comparisons of family or household incomes are particularly misleading, whether comparing their own progress over time or their income relative to that of whites. For example, the real income per black household rose only 7 percent from 1967 to 1988, but real income per black person rose 81 percent over the same span. On a household basis, blacks’ average income was a lower percentage of whites’ average income at the end of this period than at the beginning but, on a per person basis, blacks were earning a significantly higher percentage of what whites were earning in 1988.71

Needless to say, the anointed much prefer to quote family and household statistics on income, claiming “economic stagnation,” the “disappearance of the middle class,” and miscellaneous other rhetorical catastrophes. “For all but the top 20 percent,” an op-ed column in the New York Times said, “income has stagnated.” Moreover, this alleged fact was “widely acknowledged” by “politicians, economists and sociologists.”72 That so many such people echoed the same refrain—without bothering to check readily available census data to the contrary—says more about them than about income. Moreover, not all such use of household or family income data can be attributed to statistical naivete. New York Times columnist Tom Wicker knew how to use per-capita income statistics when he wished to depict success for the Johnson administration and family income statistics when he wished to depict failure for the Reagan and Bush administrations.73

As for the top 20 percent, so often referred to as “the rich,” those using “income distribution” statistics seldom say how much hard cash is involved when they talk about “the rich” in either income or wealth terms. In income, a little over $58,000 a year was enough to put a household in the top 20 percent in 1992 and a little under $100,000 was enough to put it in the top 5 percent.74 Since a household may contain one individual or a large family, even the latter figure may reflect multiple paychecks of only modestly prosperous people. It is a little much for media pundits with six- and seven-figure incomes to be referring to top-20-percent households earning $58,000 a year as “the rich.”

Wealth statistics show equally modest sums possessed by the top 20 percent. As of 1988, a net worth of $112,000 was enough to put an individual in the top 20 percent of wealth-holders. That is not $112,000 in the bank but a total of that amount from counting such things as the value of a car and the equity in a home, as well as money in the bank. The value of the individual’s own residence was in fact the largest single item in net worth, constituting 43 percent nationally. Even if we count only the top 5 percent of individuals as rich, a statistically “rich” person with a $100,000 income, two children in college, a mortgage to pay, and federal and state governments together taking nearly half his income might have real trouble staying financially above water. And if he lost his job, it could spell disaster. There are of course genuinely rich people, just as there are genuinely poor people—but they bear little resemblance to the statistical categories referred to by their names.

Those who use existing statistics to advocate government policies designed to produce greater equality in income and wealth seldom bother to consider how much statistical “inequality” would exist in even a 100 percent equal world. Even if every human being in the whole society had absolutely identical incomes at a given age, the statistical disparities (“inequities”) in income and wealth could still be huge.

As a simple hypothetical example, imagine that each individual at age 20 begins his working career earning an annual income of $10,000 and—for the sake of simplicity in following the arithmetic—remains at that level until he reaches age 30, when he receives a $10,000 raise, and that such raises are repeated at each decade until his 60s, with his income going back to zero when he retires at age 70. To maintain perfect equality at each age, let us assume that all these individuals have identical savings patterns. They each have the same notion as to what their basic needs for “subsistence” are (in this case, $5,000), and each saves 10 percent of whatever he earns above that, using the rest to improve his current standard of living as his income rises over time. What kind of statistics on income and wealth would emerge from this situation of perfect equality in income, wealth, and savings habits? Looking at the society as a whole, there would be a remarkable amount of statistical inequality, as shown in the table below:

AGE    ANNUAL INCOME    “SUBSISTENCE”    ANNUAL SAVINGS    LIFETIME SAVINGS
20     $10,000          $5,000           $500              0
30     20,000           5,000            1,500             $5,000
40     30,000           5,000            2,500             20,000
50     40,000           5,000            3,500             45,000
60     50,000           5,000            4,500             80,000
70     0                5,000            0                 125,000

Note: Savings are given as of the day each individual reaches the age shown in the Age column. Therefore, the person who has just turned age 20 and enters the labor force has zero savings, even though the rate at which he saves out of his income will be $500 per year. Conversely, the person who has just turned age 70 and retired will have $125,000 in savings accumulated out of past earnings, even though his current income is zero.

Note what statistical disparities (“inequities”) there are, even in a hypothetical world of perfect equality over every lifetime. At a given moment—which is how most statistics are collected—the top 17 percent of income earners have five times the income of the bottom 17 percent and the top 17 percent of savers have 25 times the savings of the bottom 17 percent, not even counting those who have zero in either category. If these data were aggregated and looked at in “class” terms, we would find that 17 percent of the people have 45 percent of all the accumulated savings in the whole society. Obviously, there would be ample raw material here for alarums, moral indignation, and the promulgation of “solutions” by the anointed.75
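For anyone who wishes to verify these ratios, the short sketch below reproduces the table and the resulting disparities, treating each ten-year age bracket as an equal sixth of the population:

```python
# A sketch reproducing the hypothetical table above: identical lifetime incomes
# and savings habits for everyone, yet large cross-sectional "inequality" at
# any given moment.
SUBSISTENCE = 5_000
SAVE_RATE = 0.10
YEARS_PER_BRACKET = 10

income = {20: 10_000, 30: 20_000, 40: 30_000, 50: 40_000, 60: 50_000, 70: 0}

savings = {}
accumulated = 0
for age, annual_income in income.items():
    savings[age] = accumulated   # holdings on the day this age is reached
    accumulated += SAVE_RATE * max(annual_income - SUBSISTENCE, 0) * YEARS_PER_BRACKET

print(income[60] / income[20])               # 5.0: top earners have 5x the bottom
print(savings[70] / savings[30])             # 25.0: top savers have 25x the bottom
print(savings[70] / sum(savings.values()))   # about 0.45: one-sixth hold 45% of savings
```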

In the real world as well, even without ideological bias or manipulation, statistics can be grossly misleading. For example, data from the 1990 census showed that Stanford, California, had one of the highest poverty rates among more than a hundred communities in the large region known as the San Francisco Bay area. Although the community of Stanford coincides with the Stanford University campus, where many faculty members live, it had a higher poverty rate than East Palo Alto, a predominantly low-income minority community not far away.76 Stanford is the second richest university in the country, its faculty are among the highest paid, and its top administrators have six-figure salaries. How could Stanford have more poverty than a rundown ghetto community?

The answer is that students greatly outnumber professors—and although undergraduates living in dormitories are not counted by the census, graduate students living in their own apartments are.

About half the students at Stanford are graduate students and many of them are married and have children. The cash incomes from their fellowships often come in under the official poverty level for a family. Not only is their period of “poverty” as graduate students one that will end in a few years, leading to professional occupations with professional-level salaries, but even during this period of “poverty” they are likely to be far better off than the residents of East Palo Alto. Stanford graduate students live in rent-subsidized housing, located within walking distance of their work and their recreation—much of the latter provided free or at subsidized prices. People in East Palo Alto must pay transportation costs to and from work, to and from movies, sports events or other recreation, and pay what the market charges for everything from rent to newspapers. At Stanford, three campus newspapers are available free, as are tennis courts, swimming pools, and buses. Movies, lectures, football games, and a world-class hospital are available at less than market rates. In no reasonable sense is there more poverty at Stanford than in East Palo Alto. But statistically there is. This is not a product of deception but of the inherent pitfalls of statistics, made far worse by an attitude of gullible acceptance of numbers as representing human realities.

CORRELATION VERSUS CAUSATION

One of the first things taught in introductory statistics textbooks is that correlation is not causation. It is also one of the first things forgotten. Where there is a substantial correlation between A and B, this might mean that:

1. A causes B.

2. B causes A.

3. Both A and B are results of C or some other combination of factors.

4. It is a coincidence.

Those with the vision of the anointed almost invariably choose one of the first two patterns of causation, the particular direction of causation depending on which is more consistent with that vision—not which is more consistent with empirical facts. As part of that vision, explanations which exempt the individual from personal responsibility for unhappy circumstances in his life are consistently favored over explanations in which the individual’s own actions are a major ingredient in unfortunate outcomes. Thus, the correlation between lack of prenatal care and high infant mortality rates was blamed by the media on society’s failure to provide enough prenatal care to poor women,77 rather than on those women’s own failure to behave responsibly—whether in seeking prenatal care, in avoiding drugs and alcohol during pregnancy, or in many other aspects of parental responsibility. The fact that there is no such correlation between a lack of prenatal care and high infant mortality rates in groups which traditionally take more care of their children is simply ignored.

A study making comparisons within the black community in Washington found that there was indeed a correlation between prenatal care and low birth weight among infants—but the mothers who failed to get prenatal care were also smokers twice as often as the others and alcohol users six times as often.78 In other words, the same attitudes and behavior which jeopardized the infants’ well-being in one way also jeopardized it in others. Failure to seek prenatal care was a symptom, rather than a cause. In terms of our little scheme above, C caused both A and B. However, this study, which went completely against the vision of the anointed, was almost entirely ignored in the national media.

Similarly, the fact that crime and poverty are correlated is automatically taken to mean that poverty causes crime, not that similar attitudes or behavior patterns may contribute to both poverty and crime. For a long time it was automatically assumed among social reformers that slums were “nurseries of crime.” In other words, the correlation between bad housing and high crime rates was taken to mean that the former caused the latter—not that both reflected similar attitudes and behavior patterns. But the vision of the anointed has survived even after massive programs of government-provided housing produced brand-new housing projects that quickly degenerated into new slums and became centers of escalating crime. Likewise, massive increases in government spending on children during the 1960s were accompanied by falling test scores, a doubling of the teenage suicide and homicide rates, and a doubling of the share of births to unwed mothers.79 Yet, during the 1980s, such social pathologies were attributed to cutbacks in social programs under the Reagan administration80—to “neglect,” as Marian Wright Edelman of the Children’s Defense Fund put it.81 The fact that the same kinds of social deterioration were going on during a decade (the 1960s) when government spending on programs for children was rapidly escalating, as well as during a decade (the 1980s) when it was not, simply did not matter to those for whom “investment” in social programs was axiomatically taken to be the magic key.

In general, where a correlation goes directly counter to the vision of the anointed—drastically fewer urban riots during administrations which opposed the “war on poverty” approach—it is simply ignored by those seeking “Aha!” statistics. Likewise ignored is the continued escalation of venereal diseases, long after “sex education” has become too pervasive for ignorance to be blamed, except by those for whom the vision of the anointed is an axiom, rather than a hypothesis.

While Chapter 2 showed repeated examples of policies of the anointed being followed by dramatically worsening conditions, it is not necessary here to claim that statistics prove that these various policies—the “war on poverty,” sex education, changes in criminal justice procedures—caused the disasters which followed. It would be sufficient to show that the promised benefits never materialized. A consistent record of failure is only highlighted by the additional fact that things got worse. Conceivably, other factors may have been behind these disasters. But to have to repeatedly invoke unsubstantiated claims that other factors were responsible is to raise the question whether these other factors have not become another deus ex machina called upon in desperation to rescue predictions that began with such utter certainty and such utter disdain for any alternative views. Moreover, those with alternative views often predicted the very disasters that materialized.

“RACIAL” DIFFERENCES

As already noted in various examples, many differences between races are often automatically attributed to race or to racism. In the past, those who believed in the genetic inferiority of some races were prone to see differential outcomes as evidences of differential natural endowments of ability. Today, the more common non sequitur is that such differences reflect biased perceptions and discriminatory treatment by others. A third possibility—that there are different proportions of people with certain attitudes and attributes in different groups—has received far less attention, though this is consistent with a substantial amount of data from countries around the world. One of the most obvious of these kinds of differences is that there are different proportions of each group in different age brackets. Moreover, income differences between age brackets are comparable to income differences between the races. This of course does not mean that age differences explain everything, but it does suggest why the automatic assumption that racism explains racial disparities cannot be uncritically accepted either.

Different racial and ethnic groups not only vary in which proportions fall into which age brackets but vary as well in which proportions fall into various marital and other social conditions—and these in turn likewise have profound effects on everything from income to infant mortality to political opinions. As far back as 1969, black males who came from homes where there were newspapers, magazines, and library cards had the same incomes as whites from similar homes and with the same number of years of schooling.82 In the 1970s, black husband-and-wife families outside the South earned as much as white husband-and-wife families outside the South.83 By 1981, for the country as a whole, black husband-and-wife families where both were college educated and both working earned slightly more than white families of the same description.84

With differing proportions of the black and white populations living in husband-and-wife families, and differing proportions coming from homes where library cards and the like were common, the economic equality within such subsets did not make a substantial difference in the overall racial disparities in incomes. However, such facts do have a bearing on the larger question as to how much of that income disparity is due to employer discrimination or racism.

To a racist, the fact that a particular black individual comes from a husband-and-wife family or has a library card makes no real difference, even if the racist bothers to find out such things. The equality of income achieved within these subcategories of blacks suggests that racism is less of a factor in the overall differences than has been supposed—and that cultural values or behavioral differences are more of a factor.

Other studies reinforce the conclusion that varying proportions of people with particular values and behavior from one group to another make substantial differences in economic and social outcomes. Although the poverty rate among blacks in general is higher than that among whites in general, the poverty rate among families headed by black married couples has for years been consistently lower than the poverty rate among white, female-headed families, the latter living in poverty about twice as often as black intact families.85 With infant mortality as well, although blacks in general have about twice the infant mortality rate of whites in general, black married women with only a high school education have lower infant mortality rates than white unwed mothers with a college education.86 In short, race makes less difference than whether or not there are two parents. The real-life Murphy Browns are worse off economically than if they were black married women with less education, and their children are more likely to die in infancy.

Even as regards attitudes on political issues, family differences are greater than racial differences, according to a 1992 poll. Black married couples with children were even more opposed to homosexual marriage and to the legalization of marijuana than white married couples were.87 Many of the “racial” differences based on gross statistics are shown by a finer breakdown to be differences between people with different values and lifestyles, who are differing proportions of different racial populations. Where the values and lifestyles are comparable, the economic and social outcomes have tended to be comparable. But to admit this would be to destroy a whole framework of assumptions behind massive social programs—and destroy with it a whole social vision that is prevalent among political and intellectual elites. Such finer breakdowns receive very little attention in the media, in politics, or in academia, where gross statistics continue to be cited in support of the vision of the anointed.

THE “DISAPPEARANCE” OF TRADITIONAL FAMILIES

Among the many unexamined “facts” endlessly repeated throughout the media are that (1) “half of all marriages end in divorce”88 and (2) the traditional family with both parents raising their children is now the exception rather than the rule. Both “facts” are wrong and reflect an ignorance of statistics, compounded by a gullible acceptance of those beliefs which are consonant with the vision of the anointed.

Marriage Patterns

Washington Post writer Haynes Johnson, Texas governor Ann Richards, and feminist writer Barbara Ehrenreich are just some of the many to repeat the claim that half of all marriages end in divorce.89 In a given year, the number of divorces may well be half as large as the number of marriages that year, but this is comparing apples and oranges. The marriages being counted are only those marriages taking place within the given year, while the divorces that year are from marriages that took place over a period of decades. To say that half of all marriages end in divorce, based on such statistics, would be like saying that half the population died last year if deaths were half as large as births. Just as most people were neither born nor died last year, so most marriages did not begin or end last year. Yet, on the basis of such gross misconceptions of statistics, the anointed not only assume airs of superiority but claim the right to shape public policy.
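The arithmetic behind this fallacy can be made concrete with a small, purely hypothetical calculation; every figure below is invented for illustration and is not drawn from any statistic cited in this chapter. The point is simply that a year’s divorces are drawn from the entire accumulated stock of existing marriages, while a year’s marriages are only that year’s new additions.

```python
# Purely hypothetical numbers, invented for illustration only.
# They show why "this year's divorces / this year's marriages"
# is not the share of marriages that end in divorce.

existing_marriages = 10_000        # marriages contracted over past decades (the stock)
new_marriages_this_year = 200      # marriages contracted this year (the flow)
divorces_this_year = 100           # divorces this year, drawn from the whole stock

# The widely quoted comparison: this year's divorces against this year's marriages.
naive_ratio = divorces_this_year / new_marriages_this_year
print(f"Divorces as a share of this year's new marriages: {naive_ratio:.0%}")        # 50%

# But those divorces come out of all existing marriages, not just this year's.
annual_dissolution_rate = divorces_this_year / existing_marriages
print(f"Share of all existing marriages dissolved this year: {annual_dissolution_rate:.0%}")  # 1%
```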

According to census data for 1992, 11 percent of all adults who had ever been married were currently in the status of divorced persons.90 While 50 percent overstates the proportion of marriages that end in divorce, 11 percent understates it, since that figure does not count people who had been divorced but were now remarried, and people who had never married are outside its base altogether. However, these census statistics are relevant to the claim that traditional marriages are disappearing, for remarriages are still marriages. Married couples outnumbered unmarried couples by about 54 million to 3 million.91 Most of the people who had never married were under the age of 25. Marriage statistics, which count everyone over the age of 15, of course include many people whom no one would expect to be married. But, by the time people reach middle age, the great majority have been married. In the 45- to 54-year-old bracket, for example, people who were married and currently living with their spouse outnumbered the never-married by more than fifteen to one.92 That is not even counting those people who had been married but were now separated, widowed, or divorced. Traditional marriages have become an anachronism only in the vision of the anointed. People are getting married later—about five years later, as compared to 189093—but they are still getting married.

Within these general patterns there are substantial differences between racial groups which should not be ignored. However, what should also not be ignored is how relatively recent these racial differences are. In every decennial census from 1920 through 1960, inclusive, at least 60 percent of all black males from age 15 on up were currently married. Moreover, the difference between black and white males in this respect was never as great as 5 percentage points during this entire era. Yet, by 1980, less than half of all black males in these age brackets were currently married—and the gap between black and white males was 17 percentage points.94 By 1992, that gap had widened to 21 percentage points.95 Like other negative social trends—in crime, welfare dependency, venereal disease, and educational test scores, for example—this trend represented a reversal of a previous positive trend. From the census of 1890 through the census of 1950, there was an increase in the proportion of both men and women currently married, among both blacks and whites.96

“Ozzie and Harriet” Families

A member of the Institute for Human Development at the University of California at Berkeley voiced a widespread view among the intelligentsia when she said:

After three decades of social upheaval, the outlines of a new family are beginning to emerge. It’s more diverse, more fragile, more fluid than in the past.97

This was taken as representing the “passing of the Ozzie and Harriet family.”98

While the proportion of children living with both parents has been declining over the decades, the 1992 statistics from a census survey showed that more than two-thirds—71 percent, in fact—of all people under the age of 18 were still living with both their parents. Fewer than one percent were living with people who were not relatives. In particular segments of the population, especially in urban ghettos, the situation was drastically different. Nationwide, a majority—54 percent—of all black children were living only with their mothers in 1992. However, this was not a “legacy of slavery” as sometimes claimed. As recently as 1970, a majority of black children were still living with both parents.99 The sharp decline in marriage rates among black males in recent decades has obviously taken its toll on black children being raised without a father.

If most American children are still living with both parents, how can the traditional or Ozzie and Harriet family be said to be “passing”? Like so many statistical misconceptions, this one depends on confusing an instantaneous picture with an ongoing process. Because human beings go through a life cycle, the most traditional families—indeed, Ozzie and Harriet themselves—would be counted statistically as not being a traditional family. Before Ozzie met Harriet, and even after they married, they would not be counted in the Census Bureau’s “Married Couple Family with Own Children Under 18” category until in fact they had their first child. In later years, after the children were grown and gone, they would again no longer be in that statistical category. Moreover, in old age, when one spouse dies, the survivor would obviously no longer be counted as part of a married couple. What this means is that innumerable people who have had the most traditional pattern of marriage and child rearing would at various times in their lives be counted in statistics as not in the category popularly known as the “traditional family” of parents and their children. Depending on their life span and the span of their childbearing years, some individuals in the most traditional families would be counted as not being in such families for most of their adult lives.
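The life-cycle point can likewise be illustrated with a small hypothetical sketch. The milestone ages below are invented for the example, and the category labels only loosely paraphrase the census categories mentioned above; the sketch simply tallies how many adult years even a person following the most traditional marriage-and-children pattern would spend, at any given census snapshot, inside the “married couple family with own children under 18” category.

```python
from collections import Counter

# Hypothetical milestone ages, invented for illustration only.
ADULT_START = 18
MARRIAGE_AGE = 25        # assumed age at marriage
FIRST_CHILD_AGE = 28     # assumed age at birth of first child
LAST_CHILD_LEAVES = 55   # assumed age when the youngest child turns 18
WIDOWED_AGE = 78         # assumed age at which the surviving spouse is widowed
DEATH_AGE = 85

def snapshot_category(age):
    """Snapshot status of one 'most traditional' individual at a given age."""
    if age < MARRIAGE_AGE:
        return "never married (yet)"
    if age < FIRST_CHILD_AGE:
        return "married, no own children under 18"
    if age < LAST_CHILD_LEAVES:
        return "married couple with own children under 18"
    if age < WIDOWED_AGE:
        return "married, no own children under 18"
    return "widowed"

tally = Counter(snapshot_category(age) for age in range(ADULT_START, DEATH_AGE))
for status, years in tally.items():
    print(f"{status}: {years} adult years")
# With these assumed ages, only 27 of 67 adult years fall in the
# "married couple with own children under 18" category.
```

Under these assumed ages, the “traditional” snapshot category covers well under half of this person’s adult life, even though the entire life follows the traditional pattern.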

The fact that most 16-year-olds have not yet married, that married couples do not continue to have children living with them all their lives, and that not every elderly widow or widower remarries does not mean that the traditional family has been repudiated, except perhaps by some of the anointed.

The family is inherently an obstacle to schemes for central control of social processes. Therefore the anointed necessarily find themselves repeatedly on a collision course with the family. It is not a matter of any subjective animus on their part against families. The anointed may in fact be willing to shower government largess upon families, as they do on other social entities. But the preservation of the family as an autonomous decision-making unit is incompatible with the third-party decision making that is at the heart of the vision of the anointed.

This is not a peculiarity of our times or of American society. Friedrich Engels’ first draft of the Communist Manifesto included a deliberate undermining of family bonds as part of the Marxian political agenda,100 though Marx himself was politically astute enough to leave that out of the final version. Nor has this war against the autonomy of the family been confined to extremists. The modern Swedish welfare state has made it illegal for parents to spank their own children and, in the United States, various so-called “children’s advocates” have urged a range of government interventions in the raising of children101—going beyond cases of neglect or abuse, which are already illegal. In New Zealand, a whole campaign of scare advertisements during the 1980s promoted the claim that one out of eight fathers sexually abused their own daughters, when in fact research showed that not even one out of a hundred did so.102

As in so many other areas, the ascendancy of the vision of the family which now prevails among the anointed began in the 1960s. A 1966 article in the Journal of Social Issues epitomized the rationalistic view that the family was just one of a number of alternative lifestyles and an arbitrary “social preference” which defined “illegitimacy” as a social problem:

The societal preference for procreation only within marriage, or some form of socially recognized and regulated relationship between the sexes, is reinforced by laws and customs which legitimize coition as well as births and denote some responsibility for the rearing of children. It is within this context that value judgements may be regarded as the initial and formal causes of social problems. Without the value judgements which initially effected and now continue to support the legitimation of coition and births, illicit parenthood would not be regarded as a problem. In fact, by definition, it would not exist.103

Thus the “disproportionate publicity and public concern about teen-age unwed mothers”104 is simply a matter of how people choose to look at things. As in the case of early discussions of rising crime rates, it was suggested that “more inclusive and improved counting of non-white illicit births” may have caused a statistical change without a real change.105 In short, everything depended on how we chose to look at things, rather than on an intractable reality. Teenage pregnancy was only a socially defined problem in this view, while “the more generic problem of unwanted pregnancy”106 was what needed to be addressed. Here “needs for counseling”107 were taken as axiomatic and “it is quite pointless to continue debating whether youth should receive sex education,”108 for this too was axiomatic and inevitable, with only the particular channels of this education being open to rational discussion. In a similar vein, a later publication of the Centers for Disease Control declared that “the marital status of the mother confers neither risk nor protection to the infant; rather, the principal benefits of marriage to infant survival are economic and social support.”109 This rationalistic picture overlooked what is so often overlooked, that different kinds of people have different values and behavior patterns—and that these values and behaviors have enormous impacts on outcomes. But to say this would be to get into the forbidden realm of personal responsibility and away from the vision of a benighted “society” needing to be reformed by the anointed, who reject “consensus romanticism about the family,”110 as it was put by Hillary Rodham (later Clinton).