Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passions, they cannot alter the state of facts and evidence.…
—John Adams1
Factual evidence and logical arguments are often not merely lacking but ignored in many discussions by those with the vision of the anointed. Much that is said by the anointed in the outward form of an argument turns out not to be an argument at all. Often the logical structure of an argument is replaced by preemptive rhetoric or, where an argument is made, its validity remains unchecked against any evidence, even when such evidence is abundant. Evidence is often particularly abundant when it comes to statements about history, yet the anointed have repeatedly been as demonstrably wrong about the past as about the present or the future—and as supremely confident.
One of the more remarkable feats of those with the vision of the anointed has been the maintenance of their reputations in the face of repeated predictions that proved to be wrong by miles. Examples are all too abundant. A few of the more obviously false but teflon prophets include such individuals as John Kenneth Galbraith and Paul Ehrlich, and such institutional prophets as the Club of Rome and Worldwatch Institute. In each case, the utter certainty of their predictions has been matched by the utter failure of the real world to cooperate—and by the utter invulnerability of their reputations.
The best known of Professor Galbraith’s many books has been The Affluent Society, which popularized a previously arcane adjective. One of the central themes of this book was that the rising prosperity of relatively recent times had banished from the political agenda and from public concern questions about the distribution of income.
According to Professor Galbraith, “few things are more evident in modern social history than the decline of interest in inequality as an economic issue.”2 This “decline in concern for inequality” was not due to any successful egalitarian redistributive measures, according to Galbraith, but was instead a factor in the absence of such measures.3 Inequality had simply “faded as an issue.”4 Galbraith did not agree with this trend and, in fact, cited some of the usual misleading statistics on family income5 to show a social problem. But the “poverty at the base of the income pyramid” simply “goes largely unnoticed,” while “increasing aggregate output” has become “an alternative to redistribution” and “inequality has been declining in urgency.”6
Since 1958, when this was written, there have followed decades of some of the most intense preoccupation with inequality and income distribution in the history of the republic. From the political rostrum to the pulpit, from the mass media to academic journals, and from the halls of Congress to the chambers of the Supreme Court, “equality” has been the cry of the times.
Another theme appearing in The Affluent Society, and amplified in Galbraith’s later book The New Industrial State, was that big corporations had become immune to the marketplace. “The riskiness of modern corporate life is in fact the harmless conceit of the modern corporate executive,” according to Galbraith, for “no large United States corporation, which is also large in its industry, has failed or been seriously in danger of insolvency in many years.”7 General Motors is “large enough to control its markets”8 according to Galbraith—but not according to Toyota, Honda, and other Japanese automakers who proceeded to take away substantial parts of that market in the years that followed this pronouncement. By the early 1990s, Honda produced the largest selling car in the United States and Toyota produced more cars in Japan than General Motors did in the United States.
Since Galbraith’s sweeping pronouncements about corporate invulnerability were written, the country’s leading magazine, Life, stopped publishing and was resurrected later as a shadow of its former self. The W. T. Grant chain of retail stores, once a pioneer in the industry, went out of existence—as did the Graflex Corporation, which had dominated the market for press cameras for decades. Pan American was perhaps the best known of the many airlines that folded. Venerable newspapers were obliterated in cities across the country. The Chrysler Corporation was saved from extinction only by a government bailout. Despite Galbraith’s later assurance in The New Industrial State of “the impregnable position of the successful corporate management,”9 corporate takeovers and corporate shake-ups spread throughout the American economy, with heads rolling in corporate suites across the land. Conversely, despite Galbraith’s sneers at the idea of a lone entrepreneur starting up a pioneering new company,10 Steve Jobs created Apple Computer and Bill Gates created the Microsoft Corporation, both companies rising into the Fortune 500 inside of a decade, with both men becoming multibillionaires. Nor were these isolated flukes. Nearly half the firms in the Fortune 500 in 1980 were no longer there just ten years later.11
None of this has made a dent in Galbraith’s reputation, his self-confidence, or his book sales. For no one has been more in tune with the vision of the anointed or more dismissive of “the conventional wisdom”—another term he popularized as a designation for traditional beliefs and values. If there is any single moral to the Galbraith story, it might be that if one is “politically correct,” being factually incorrect doesn’t matter. But he is just one of many examples of the same principle.
While John Kenneth Galbraith may be the best known of those who are often wrong but never in doubt, Paul Ehrlich is perhaps pre-eminent for having been wrong by the widest margins, on the most varied subjects—and for maintaining his reputation untarnished through it all. The prologue to his best-known book, The Population Bomb, first published in 1968, begins with these words:
The battle to feed all of humanity is over. In the 1970s and 1980s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.12
Now that the 1970s and 1980s have come and gone, it is clear that nothing faintly resembling Ehrlich’s prediction has come to pass. Moreover, such local famines as struck sporadically had nothing to do with overpopulation and everything to do with the disruption of local food distribution systems, due usually to war or other man-made disasters. As with so many other predictions of catastrophe—“famine and ecocatastrophe” in Professor Ehrlich’s words13—there is, as the bottom line, a power agenda by which the vision of the anointed is to be imposed on the masses. According to Ehrlich, we must “take immediate action” for “population control”—“hopefully through changes in our value system, but by compulsion if voluntary methods fail.”14 The supreme irony is that this campaign of hysteria over population came at a time when the world’s population growth rate was declining,15 both in the industrial and the nonindustrial world,16 when producers of toys, diapers, and baby food were diversifying into other fields,17 and when hospital maternity wards were being closed or were being used for nonmaternity patients, in order to fill the empty beds.18
The Population Bomb is a textbook example of a scare book in another way—the unbridled extrapolation. As Ehrlich says, “the population will have to stop growing sooner or later”19 or a variety of catastrophic scenarios will unfold. By the same token, if the temperature has risen by 10 degrees since dawn today, an extrapolation will show that we will all be burned to a crisp before the end of the month, if this trend continues. Extrapolations are the last refuge of a groundless argument. In the real world, everything depends on where we are now, at what rate we are moving, in what direction, and—most important of all—what is the specific nature of the process generating the numbers being extrapolated. Obviously, if the rise in temperature is being caused by the spinning of the earth taking us into the sunlight, then the continuation of that spinning will take us out of the sunlight again and cause temperatures to fall when night comes. But both the logical and the empirical test are consistently avoided by the “population explosion” theorists.
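To see how such an extrapolation actually runs, here is a minimal arithmetic sketch of the temperature example, assuming (purely for illustration) that the rise took place over the roughly six hours since dawn and that the month has thirty days:

    # A naive linear extrapolation of a short-run trend, for illustration only.
    # Assumed figures: a 10-degree rise over the roughly 6 hours since dawn.
    rise_per_hour = 10 / 6              # degrees per hour (assumed)
    hours_in_a_month = 30 * 24          # a 30-day month, for simplicity
    projected_rise = rise_per_hour * hours_in_a_month
    print(f"Projected rise if the morning trend simply continued: {projected_rise:.0f} degrees")
    # Roughly 1,200 degrees: arithmetically valid but meaningless, because the
    # process generating the numbers (the earth's rotation) guarantees that the
    # trend reverses by nightfall.

The arithmetic is impeccable; only the assumption that the process will continue unchanged is absurd, which is precisely the point.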
Contrary to their theory of a declining standard of living with population growth, the standard of living was rising when Malthus first wrote, two hundred years ago. It rose during his lifetime and it has been rising since then. The population bombers cannot name a single country where the standard of living was higher when its population was half of what it is today. Instead, they must resort to extrapolations and ominous rhetoric about “standing room only” and the like. In reality, the entire population of the world today could be housed in the state of Texas, in single-story, single-family houses—four people to a house—and with a typical yard around each home.20 Moreover, the most thinly populated continent—Africa—is also the poorest. Japan has more than twice the population density of many African nations and more than ten times the population density of sub-Saharan Africa as a whole.21 In medieval Europe as well, the poorest parts of the continent—notably Eastern Europe and the Balkans—were also the most thinly populated. A large influx of Germans, Flemings, and other Western Europeans cleared and developed much of the fertile but empty land of Eastern Europe, raising the economic level of the region.22 For the nations of the world, there is no correlation between population density and income level. While there are costs associated with crowding, there are other and huge costs associated with trying to provide electricity, running water, sewage systems, and other services and infrastructure in a thinly populated area, where the cost per person is vastly greater than in a more densely populated area.
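A back-of-the-envelope check of the Texas illustration above is straightforward. The figures used here are assumptions for the sake of the sketch (a mid-1990s world population of roughly 5.7 billion and a Texas land area of about 268,600 square miles), not the figures in the source cited:

    # Rough check of the "world's population in Texas" illustration.
    # All inputs are approximate, assumed figures.
    world_population = 5.7e9              # people (assumed, mid-1990s order of magnitude)
    texas_area_sq_miles = 268_600         # approximate land area of Texas (assumed)
    sq_feet_per_sq_mile = 5280 ** 2

    households = world_population / 4     # four people to a house
    lot_size_sq_feet = texas_area_sq_miles * sq_feet_per_sq_mile / households
    print(f"Land available per four-person house: about {lot_size_sq_feet:,.0f} square feet")
    # On the order of 5,000 square feet per house: a modest single-family lot,
    # which is the point of the illustration rather than a housing proposal.

Under these rough assumptions the result works out to something like a small suburban lot per household, consistent with the illustration.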
Is there some ultimate limit to how many people can live on the planet? Probably. But to see how meaningless and misleading such a question is, consider the fear of the young John Stuart Mill that a finite number of musical notes meant that there was some ultimate limit to the amount of music possible.23 Despite the young Mill’s melancholy over this, at that point Tchaikovsky and Brahms had not yet been born, nor jazz even conceived. Nor was there any sign that we are running out of music more than a century later.
The starvation of “hundreds of millions” is not the only Ehrlich prediction to have missed by miles. He was equally certain, equally wrong, and equally unblemished by his predictions about the exhaustion of natural resources. In 1980, economics professor Julian Simon challenged anyone to a bet as to whether various natural resources would or would not become more expensive over time—as would happen if they were in fact becoming more scarce. Professor Simon offered to allow anyone to pick any resources he wished, and any time period he wished, in which to test the theory that resources were becoming more scarce or approaching exhaustion. In October 1980, Ehrlich and other like-minded predictors of natural resource exhaustion bet $1,000 that a given collection of natural resources would cost more in ten years than when the bet was made. The Ehrlich group chose copper, tin, nickel, tungsten, and chrome as the natural resources whose combined prices (in real terms) would be higher after a decade of their continued extraction from the earth. In reality, not only did the combined prices of these resources fall, every single resource selected by Ehrlich and his colleagues declined in price.24
How could a decade of extracting these minerals from the earth not lead to a greater scarcity and hence a higher price? Because supply and demand are based on known reserves and these can just as easily increase as decrease. For example, the known reserves of petroleum in the world were more than twice as large in 1993 as they were in 1969, despite massive usage of oil around the world during the intervening decades.25 One of the fatal flaws in the vision of the anointed is the implicit assumption that knowledge is far more extensive and less costly than it is. In some abstract sense, there is indeed a fixed amount of any natural resource in the earth and usage obviously reduces it. But no one knows what that fixed amount is and, since the process of discovery is costly, it will never pay anyone to discover that total amount. Depending on various economic factors, such as the interest rate on money borrowed to finance exploration, there is a variable limit to how much it pays to discover as of any given time—no matter how many more untold centuries’ worth of supply may exist. By dividing the currently known reserves by the annual rate of usage, it is always possible to come up with a quotient—and to use that quotient to claim that in ten years, fifteen years, or some other time period we will “run out” of coal, petroleum, or some other natural resource.
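A small sketch of this hysteria-by-quotient arithmetic, using hypothetical reserve and usage figures, shows why the quotient can actually lengthen even as extraction continues, so long as newly proven reserves outpace consumption (as happened with world petroleum reserves between 1969 and 1993):

    # Hypothetical illustration of the reserves/usage quotient.
    # "Years remaining" = known reserves / annual usage, a figure that can grow
    # over time if costly exploration proves new reserves faster than old ones
    # are consumed. All numbers below are assumed, in arbitrary units.
    reserves = 100.0        # known reserves at the start (assumed)
    annual_usage = 5.0      # usage per year (assumed)
    new_discoveries = 7.0   # reserves newly proven per year (assumed)

    print(f"Initial 'years remaining' quotient: {reserves / annual_usage:.0f}")
    for year in range(1, 11):
        reserves = reserves - annual_usage + new_discoveries
        print(f"Year {year:2d}: known reserves {reserves:5.1f}, quotient {reserves / annual_usage:4.1f} years")
    # The alarmist reading of the initial quotient treats a moving target as a
    # fixed countdown to exhaustion.

In this sketch the quotient rises from 20 years to 24 years over the decade, even though extraction never stops.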
A textbook example of this kind of hysteria by arithmetic was provided by Vance Packard in his 1960 best-seller, The Waste Makers:
In oil, the United States is clearly approaching depletion. At today’s rate of consumption—not tomorrow’s—the United States has proved reserves of oil sufficient to meet the nation’s needs for thirteen years.26
When this was published, the proved reserves of petroleum in the United States were not quite 32 billion barrels. At the end of the allotted 13 years, the proved reserves were more than 36 billion barrels.27 Nevertheless, the simple formula of hysteria-by-quotient has been creating alarms—and best-selling books—for more than a century. Meanwhile, known reserves of many vital natural resources have been increasing, driving down their prices.
There is perhaps no more sacrosanct figure among the contemporary anointed than Ralph Nader, usually identified as the premier “consumer advocate.” Yet one of Nader’s first published writings, in The Nation magazine in 1959, revealed the mind-set behind consumer advocacy when he said, “the consumer must be protected at times from his own indiscretion and vanity.”28 Once again, the role of the anointed was to preempt other people’s decisions, for their own good.
The book that put Ralph Nader on the map—Unsafe at Any Speed, denouncing the safety records of automobiles in general and the Corvair in particular—also exhibited another characteristic of the anointed, the ignoring of trade-offs. Nader’s thesis was that automobile safety was deliberately being neglected by the car manufacturers in favor of other considerations, such as styling and cost. He then proceeded to enumerate the safety deficiencies of various cars, but especially the Corvair, and mentioned gory accidents presumably caused by such deficiencies.
A moment’s reflection on the implications of trade-offs makes it clear that, beyond some point, some margin of safety is inevitably sacrificed with any product, in the sense that unlimited sacrifices of other features—including affordability—for the sake of safety would of course make that particular product somewhat safer. If the paper on which these words are written were made flameproof, that might well save someone a burn somewhere or perhaps even prevent a house from catching fire. Similarly, automobiles could of course be built to tank-like sturdiness at a sufficiently high price, which is to say, by making them unaffordable to many or most people. Carrying safety-first to such extremes on all the millions of products in the economy would raise costs in general and correspondingly lower the real income and living standard of the public. Nor is it clear that this would even increase safety, on net balance, since higher real incomes reduce death rates, whether one compares rich and poor in a given society or wealthy and poverty-stricken societies internationally.
Sacrificing real income for the sake of reducing remote dangers is a trade-off that would have to be justified on its merits in each specific case—if one were thinking in terms of trade-offs. But Nader scorned what he called “abject worship of that bitch-goddess, cost reduction.”29 The very notion of trade-offs was dismissed as “auto industry cant.”30 Unsafe at Any Speed is a classic of propaganda in its ability to use distracting or dismissive rhetoric to evade a need to confront opposing arguments with evidence or logic. Throughout the book, automobile manufacturers were denounced for such things as “neglect” of safety,31 “industrial irresponsibility,”32 and “unconscionable” behavior.33 To Nader, “safety features lying unused on the automobile companies’ shelves”34 were virtual proof that they should have been used. Cost considerations—including the costs of changing the overall design of a car to accommodate these safety features, as well as the direct costs of the specific features themselves—were dismissed out of hand by Nader. Sometimes he counted only the modest cost of the particular feature in isolation,35 but at other times he brushed aside the effect of design changes that might make the car less attractive to the consumer. The design engineer was considered by Nader to “shirk his professional duty” when he considered “cost reduction and style.”36 Automobile company representatives who pointed out that the industry cannot produce features that the consumers do not want, or are unwilling to pay for, were scorned by Nader for treating the issue as “wholly one of personal consumer taste instead of objective scientific study.”37
Like so many who invoke the name and the mystique of science in order to override other people’s choices, Nader offered remarkably little hard data to back up his claims, whether on the overall safety of the automobile over time, or of American automobiles versus cars from other countries (including socialist countries where “corporate greed” was presumably not a problem), or of the Corvair compared to similar cars of its era. The whole issue was conceived in categorical rather than incremental or comparative terms.
Despite Nader’s argument that automakers paid little attention to safety, motor vehicle death rates per million passenger miles fell over the years from 17.9 in 1925 to 5.5 in 1965, the year Unsafe at Any Speed was published, and this trend continued to a rate of 4.9 five years later,38 after federal legislation on automobile safety, inspired by Nader and his followers. Naderites and the federal safety regulations they inspired have been widely credited with subsequent reductions in auto fatality rates,39 usually by those who are either unaware of, or who choose to ignore, the long-standing downward trend which had already produced a reduction of two-thirds in fatalities per million passenger miles before Nader ever appeared on the scene. Moreover, the earlier reductions in automobile fatalities occurred while the average highway speed of cars was increasing.40 In short, the era of corporate greed and the presumably ignorant and helpless consumer saw dramatic improvements in safety, before the anointed came to the rescue.
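For what it is worth, the two-thirds figure follows directly from the rates just cited; a quick check:

    # Decline in motor vehicle death rates per million passenger miles,
    # computed from the figures cited in the text.
    rate_1925 = 17.9
    rate_1965 = 5.5   # the year Unsafe at Any Speed was published
    rate_1970 = 4.9   # five years later, after federal safety legislation

    print(f"Decline from 1925 to 1965: {(rate_1925 - rate_1965) / rate_1925:.0%}")  # about 69%
    print(f"Decline from 1965 to 1970: {(rate_1965 - rate_1970) / rate_1965:.0%}")  # about 11%

The rate had already fallen by roughly 69 percent, about two-thirds, before the federal safety legislation of the mid-1960s.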
As for the Corvair, it did indeed have safety problems growing out of its rear-engine design. It also had safety advantages growing out of that same design, notably better traction on slippery surfaces. The salient question is whether on net balance it was any less safe than similar cars of its era. Extensive tests by the U.S. Department of Transportation showed that it was not.41 An independent academic researcher likewise noted, along with the Corvair’s greater tendency to have certain kinds of accidents, “its less than average number of accidents in other categories.”42 In other words, it was a trade-off.
Although Nader represented the Corvair as a car difficult to handle,43 Consumers Union’s 1960 evaluation of the Corvair noted a “sandstorm of controversy” about the steering of rear-engine cars but concluded that “prospective buyers need not be unduly concerned.”44 A woman who was both a race car driver and a writer on automobiles was quoted against the Corvair in Nader’s book Unsafe at Any Speed but, when questioned by Senator Abraham Ribicoff’s congressional committee, she replied that the Corvair she drove “was one of the sweetest handling, most pleasant-to-drive production cars I had experienced.” Moreover, she added, the way that Nader had quoted from her article “led me to suspect that he didn’t know too much about cars.”45 Another automotive expert interviewed by the Ribicoff committee, at the suggestion of a member of one of Nader’s organizations, said that he not only found the Corvair a safe-handling vehicle but even had sufficient confidence in its ease of handling to buy one for his own daughter, whose left hand was paralyzed by polio. This confidence was vindicated later, when a tire blew out while his daughter was driving the Corvair at over 80 miles per hour and she was still able to bring it to a safe stop.46
Whatever the outcome of the battle of facts, Nader won the battle of the media and the battle of politics. The alarm spread about the Corvair caused sales to drop to the point where General Motors discontinued the car. The same alarm promoted more federal intervention in the design and manufacture of automobiles, and the episode encouraged the emergence of “consumer advocates” in general on the national scene, to make similar claims about other products and to spawn more federal legislation.
The technique of many “consumer advocates” remained that pioneered by Ralph Nader in Unsafe at Any Speed: sweeping charges, selective examples, selective quotes, purple prose, dismissals of trade-offs, and an attribution of malign or irresponsible behavior to others. “Doctors, lawyers, engineers and other specialists have failed in their primary professional ethic,”47 Nader’s book charged, and the answer was collectivized decisions by “society.”48 His earlier article in The Nation likewise charged “widespread amorality among our scholarly elite” because “researchers are reluctant to stray from their scholarly and experimental pursuits.”49 In other words, it is “amoral” to disagree with Ralph Nader on the role of a scholar.
One of the problems faced by “consumer advocates” in general is how to make the consumers’ own preferences disappear from the argument, since consumer sovereignty conflicts with moral surrogacy by the anointed. It is also not good politics to attack consumers. Here too Unsafe at Any Speed showed how artful phrasing can make the consumer’s preferences evaporate from the discussion, as a prelude to making his autonomy disappear in laws proposed by so-called “consumer advocates.” Arguing that the Corvair would be safer with higher pressure in the tires, Nader condemned the engineers for having “succumbed to the great imperative—a soft ride.”50 Clearly this was only the consumer’s imperative. General Motors would not make one dollar more or one dollar less at different tire pressures unless the consumers preferred one kind of ride to another, though Nader chose to depict this consumer preference as “the car makers’ obsession with the soft ride.”51 Displacing responsibility from the consumer to the producer has been a crucial part of consumer advocacy. “The American automobile is produced exclusively to the standards which the manufacturer decides to establish,” according to Nader,52 though what the automaker actually decides, with millions of dollars at stake, is far less likely to reflect some personal caprice than what consumers are apt to buy.
What the Nader approach boils down to is that third parties should preempt the consumer’s choice as to whether he wants to sacrifice a comfortable ride in order to make a remote danger slightly more remote. Considering that the tiredness that comes from uncomfortable rides can also affect safety, it is by no means obvious that there would be greater safety on net balance by creating a harder ride.
In a sense, however, discussions of facts and logic are irrelevant. Nader achieved his political objectives, established his own image, and put his targets on the defensive. Nader’s image has been aptly described by a biographer as “a combination of the best qualities of Lincoln of Illinois and David of 1 Samuel 17.”53 A somewhat different view of Nader was offered by a congressional committee chairman quoted in Newsweek: “Ralph’s a bully and know-it-all, consumed by certainty and frequently in error.”54 It is one of the signs of Nader’s continuing sacred status that this statement was made anonymously.
The confidence of the anointed in their own articulated “reason” has as its counterpoint their complete distrust in systemic social processes, operating without their guidance and intervention.55 Thus the operation of a free market is suspect in their eyes, no matter how often it works, and government control of economic activity appears rational, no matter how many times it fails. As bitterly resented as the gasoline lines of the 1970s were under government price controls, there were widespread predictions of skyrocketing gasoline prices if these controls were abolished. For example, Congressman John Dingell considered it “obvious that gasoline could reach at least $2 a gallon after decontrol.” So did Senator Howard Metzenbaum. Lester Brown of Worldwatch Institute declared that “gas will cost $2 per gallon within a few years and $3 per gallon during the vehicle’s lifetime.” Senator Dale Bumpers likewise predicted, “gasoline will soon go to $3 a gallon.”56
Airs of condescension pervaded criticisms of those who believed otherwise and who relied on a free market. For example, the New York Times commented on Ronald Reagan’s views:
Ronald Reagan brushed aside energy issues during the campaign, insisting that shortages could be overcome by unleashing private enterprise. But not even his most fervent supporters in the energy business share that optimism. Virtually all private forecasts predict declining domestic oil production and liquid fuel shortages during the next decade.57
In a similar vein, President Jimmy Carter said:
There is a dwindling supply of energy sources. The prices are going to rise in the future no matter who is President, no matter what party occupies the administration in Washington, no matter what we do.58
President Carter blamed the benighted masses for not facing up to the situation as seen by the anointed. “The American people,” he said, “have absolutely refused to accept a simple fact. We have an energy crisis.… We are going to have less oil to burn and we are going to have to pay more for it.”59 New York Times columnist Tom Wicker pronounced Carter’s statements to be “unquestioned truths.”60
Disregarding the anointed, in this as in other things, the newly elected President Ronald Reagan issued an Executive Order during the first month of his administration, ending oil price controls. Within four months, the average price of a gallon of unleaded gasoline fell from $1.41 to 86 cents.61 Refineries’ average cost of buying crude oil fell from more than $30 a barrel in 1981 to less than half of that by March 1986.62 Contrary to predictions of oil or gasoline shortages by President Carter’s energy secretary James Schlesinger, by Senator Bumpers, and others,63 the world’s known crude oil reserves were 41 percent higher at the end of the decade of the 1980s than at the beginning.64 In the post-Reagan years, the low price of gasoline made it a special target for taxation, which artificially forced up its price at the pump, though still not to the levels predicted when decontrol came in a decade earlier. The real cost of the gasoline itself—net of taxes and adjusted for inflation—reached an all-time low in 1993.65
How much the hysteria over oil price decontrol represented genuine misunderstandings of economics, and how much a cynical scare tactic to get more government control, may never be known. However, many of those pushing continued government control of oil prices were longtime promoters of other extensions of government power. Among these was Senator Edward Kennedy, who said: “We must adopt a system of gasoline rationing without delay,” in “a way that demands a fair sacrifice from all Americans.”66 Needless to say, the anointed would define what was “fair” for others, while enhancing their own power, as distinguished from letting the marketplace reduce the sacrifice for everyone with lower prices.
Perhaps the most famous mistaken prediction in recent times was the “Club of Rome” prediction that economic growth would grind to a halt, around the world, during the latter part of the twentieth century. Both industrial output per capita and food per capita were to decline, along with a long-run decline in natural resources.67 In this model the “death rate rises abruptly from pollution and from lack of food.”68 Like so many wrong economic predictions, it was buttressed with all sorts of graphs, tables, and mathematical models. It also relied on extrapolations—and on putting the burden of proof on others: “In postulating any different outcome from the one shown in table 3, one must specify which of these factors is likely to change, by how much, and when.”69 In other words, you cannot say that the emperor has no clothes until you have designed a whole new alternative wardrobe.
The threadbare state of the current wardrobe was demonstrated by the resort to the ultimate finiteness argument which misled John Stuart Mill about music and Paul Ehrlich about population:
There may be much disagreement with the statement that population and capital growth must stop soon. But virtually no one will argue that material growth on this planet can go on forever.70
Abstract ultimate limits are neither the theoretical nor the practical issue. What the Club of Rome report sought was collective coercive powers now to head off some impending catastrophe. They were discussing such scenarios as “stopping population growth in 1975 and industrial capital growth in 1985.”71 They wanted “society” to make choices72—i.e., collective decision making, through surrogates like themselves, in “a world forum where statesmen, policy-makers, and scientists” would decide what needed to be done.73 Such “concerted international measures and joint long-term planning will be necessary on a scale and scope without precedent.”74 This call for super-socialism on a global scale used the shopworn arguments that the alternative to “a rational and enduring state of equilibrium by planned measures” was leaving things to “chance or catastrophe.”75 The report warned: “A decision to do nothing is a decision to increase the risk of collapse.”76 This neat dichotomy between collective decision making and doing “nothing” circumvents the very possibility of systemic adjustments through the ordinary functioning of prices and other social forces, such as were in fact reducing the birth rate around the world even as this alarum was being sounded.
Like most prophecies of doom, the Club of Rome report had an agenda and a vision—the vision of an anointed elite urgently needed to control the otherwise fatal defects of lesser human beings. Long after the Club of Rome report has become just a footnote to the long history of overheated rhetoric and academic hubris, the pattern of its arguments, including its promiscuous display of the symbols of “science”—aptly characterized by Gunnar Myrdal as “quasi-learnedness”77—will remain as a classic pattern of orchestrated hysteria in service to the vision of the anointed. Moreover, this was not the isolated act of a given set of people. What made the Club of Rome report politically important was its consonance with widespread views and visions among the anointed. Economist Robert Lekachman, for example, declared: “The era of growth is over and the era of limits is upon us”78—all this on the eve of the longest peacetime expansion in history.
Anyone can be wrong about the future. Often the variables are so numerous, and the interactions so complex, that the only real mistake was to have predicted in the first place. Being wrong about the past is something else. Here the anointed’s pattern of being often wrong but never in doubt cannot be explained by the difficulties of interpreting numerous causal factors, because the end results are already known and recorded. That the record was not checked is only another sign of the great confidence of those with the vision of the anointed—and the groundlessness of that confidence.
Among the areas in which the contemporary anointed have made sweeping assumptions about the past, based on their vision rather than on the actual record of the past, have been two in which the record contradicting their assumptions is particularly clear and obvious. One has been the practice of attributing such social pathology as broken families in the black community to “a legacy of slavery.” Another has been the practice of attributing the soaring national debt and other economic difficulties of recent years to the past policies of the Reagan administration. There has also been a more general use of history to pooh-pooh those present concerns which the anointed do not share by showing that people voiced similar concerns in the past—the implicit assumption being that these past expressions of concern were groundless.
Nothing so turns the tables on critics of social pathology in the black community as invoking the painful history of slavery. But because slavery has left bitter legacies, it does not follow that any particular bitter experience among blacks today can automatically be attributed to slavery. Cancer is indeed fatal, but every fatality cannot be attributed to cancer—and certainly not after an autopsy has shown death to be due to a heart attack or gunshot wounds.
One of the key misfortunes within the contemporary black community, from which many other misfortunes flow, is the breakdown of the family, or the failure to form a family in the first place. As of 1992, more than half of all black adults had never been married, quite aside from an additional 16 percent who had been either divorced or widowed. By contrast, only 21 percent of white adults had never been married.79 More than half of all black children—57 percent—were living with only one parent and another 7.5 percent were not living with either parent.80 Thus, only a little more than a third of black children were living in traditional two-parent households. The great majority of those black children who were living with only one parent were living with their mothers, and more than half of these mothers were unmarried.81 The all too common, and all too tragic, situation was the teenage mother—“children having children.” Of 190,000 black children whose parents were currently still teenagers, only 5,000 were living with both parents.82 This of course does not include all those children whose mothers were teenagers when they were born but whose mothers were 20 years old or older at the time the Census Bureau collected the statistics. In short, it underestimates the extent of teenage motherhood and the consequences that continue long after the mother has reached 20 years of age.
Children having children is a deadly situation, whether from the standpoint of physical health—babies born to teenage mothers being prone to more medical disabilities—or from the standpoint of the inability or unwillingness of teenage mothers to raise those children with the knowledge, skills, and values necessary for them to become productive and law-abiding adults. Since many of these teenage girls are high school dropouts and are otherwise lacking in the discipline, knowledge, and maturity necessary to raise a child, they can hardly be expected to give the child what they themselves do not have. The tragedy of the situation is too obvious to require elaboration.
As in other areas where violations of societal norms have led to disasters, the first order of business for the anointed has been to turn the tables on society, which must itself be made to feel guilty for what it complains of. Blaming “a legacy of slavery” for the high levels of unwed teenage pregnancy among blacks, and the abdication of responsibility by the fathers of the children, clearly performs that function. Whether it is actually true is another question—and one receiving remarkably little attention.
Going back a hundred years, when blacks were just one generation out of slavery, we find that the census data of that era showed that a slightly higher percentage of black adults had married than had white adults. This in fact remained true in every census from 1890 to 1940.83 Prior to 1890, this question was not included in the census, but historical records and contemporary observations of the Reconstruction era depicted desperate efforts of freed black men and women to find their lost mates, children, and other family members—efforts continuing on for years and even decades after the Civil War.84 Slavery had separated people, but it had not destroyed the family feelings they had for each other, much less their desire to form families after they were free. As late as 1950, 72 percent of all black men and 81 percent of black women had been married.85 But the 1960 census showed the first signs of a decline that accelerated in later years—as so many other social declines began in the 1960s. This new trend, beginning a century after Emancipation, can hardly be explained as “a legacy of slavery” and might more reasonably be explained as a legacy of the social policies promoted by the anointed, especially since similar social policies led to similarly high rates of unwed motherhood in Sweden, where neither race nor slavery could be held responsible.
Looking more closely at the history of broken homes and female-headed households in the United States, we find that both have long been more prevalent among blacks than among whites, although the differences were not always as dramatic as they are today. The higher levels of broken homes among blacks in the past were due in part to higher mortality rates among blacks, leaving more widows and widowers, but there were also more family breakups.86 None of this was unique to blacks, however. The Irish went through a similar social history in nineteenth-century American cities. But the female-headed households of an earlier era, whether among blacks or whites, were seldom headed by teenage girls. As of 1940, among black females who headed their own households, 52 percent were 45 years old or older. Moreover, only 14 percent of all black children were born to unmarried women at that time.87 The whole situation was radically different from what it is today. Whatever factors caused the changes, these were clearly twentieth-century factors, not “a legacy of slavery.”
Few histories have been rewritten so completely and so soon as the history of the Reagan administration. From innumerable outlets of the anointed—the media, academia, and the lecture platform—poured the new revised history of the Reagan administration, that its reductions in tax rates in the early 1980s—“tax cuts for the rich” being the popular phrase—had brought on record federal deficits. Yet this revisionist history of the 1980s is easily refuted with widely available official statistics on the federal government’s tax receipts, spending, and deficits during the eight years of the Reagan administration. The year before Ronald Reagan became president, the federal government took in $517 billion in tax revenues, which was the all-time high up to that point. The record of tax revenues and expenditures during the Reagan years, from 1981 through 1988, is shown in the following table.
YEAR    RECEIPTS (billions)    OUTLAYS (billions)    DEFICIT (billions)
1981    $599                   $678                  $79
1982     618                    746                  128
1983     601                    808                  208
1984     666                    851                  185
1985     734                    946                  212
1986     769                    990                  221
1987     854                  1,004                  149
1988     909                  1,064                  155
Source: Budget of the United States Government: Historical Tables (Washington, D.C.: U.S. Government Printing Office, 1994), p. 14. (Rounding causes some deficit numbers to be off by one.)
Contrary to the notion that deficits have resulted from reduced tax receipts by the federal government,88 those receipts in fact reached new record highs during the Reagan administration. Every year of that administration saw the federal government collect more money than in any year of any previous administration in history. By the last year of the Reagan administration in 1988, the federal government collected over $391 billion more than during any year of the Carter administration—in percentage terms, the government took in 76 percent more that year than it had ever collected in any year of any other administration.89 The idea that tax cuts—for the rich or otherwise—were responsible for the deficit flies in the face of these easily obtainable statistics. Spending increases simply outstripped the rising volume of tax receipts, even though hundreds of billions of dollars more were pouring into Washington than ever before. But of course there is no amount of money that cannot be overspent.
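Both of the figures just cited can be reproduced from the table above and the $517 billion collected in the year before Reagan took office:

    # Verification of the revenue comparison, using the figures in the text:
    # $517 billion collected in 1980 (the previous record) and the 1988
    # receipts shown in the table (all in billions of dollars).
    pre_reagan_record = 517
    receipts_1988 = 909

    increase = receipts_1988 - pre_reagan_record
    print(f"Increase over the previous record: ${increase} billion")      # $392 billion
    print(f"Percentage increase: {increase / pre_reagan_record:.0%}")     # about 76%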
The very idea of “tax cuts” reflects verbal ambiguities of the sort so often exploited by the verbally adept among the anointed. Except for one year, tax receipts never fell during the two Reagan administrations (and even in that one year, tax receipts were higher than they had ever been in any previous administration). It was tax rates that were cut. As for “the rich,” even if we accept the popular definition of them as people currently above some given income level, those in the top income brackets paid larger sums of money after the Reagan tax rate cuts than before. They even paid a higher percentage of all the taxes paid in the country, according to a report of the House Ways and Means Committee, controlled by Democrats.90 What bothered the liberals was that “the rich” paid a smaller percentage of their rising incomes than before. But, whatever the metaphysics of “fairness,” revisionist history can be checked against hard data—and it fails that test.
Corresponding to the notion that “tax cuts for the rich” caused the rising national debt has been the notion that “cutbacks in spending on social programs” were responsible for much social pathology, including the growth of homelessness. However, as liberal scholar Christopher Jencks has pointed out, actual federal spending on housing increased throughout the years of the Reagan administration. What declined were appropriations—the legal authorization of future spending.91 In other words, hypothetical money declined but hard cash increased. Since it is hard cash that pays for housing, homelessness has its roots in other factors besides government spending on housing.
While there were some social programs that were actually cut during the Reagan administration, most “cutbacks in social programs” were reductions in projected levels of future spending. That is, if program X were spending $100 million a year before the Reagan administration took office and was seeking to expand to $150 million a year, an actual expansion to $135 million would be called a “cutback” in spending of $15 million, even though the program received $35 million more than it had ever received before. This is Washington Newspeak rather than anything that most people would regard as a “cutback” anywhere else.
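The arithmetic behind this usage can be stated in a few lines, using the hypothetical program figures from the example above:

    # The Washington "cutback": a reduction in projected spending, even when
    # actual spending rises. Figures (in millions of dollars) are the
    # hypothetical ones from the example in the text.
    previous_spending = 100
    requested_spending = 150
    actual_spending = 135

    reported_cutback = requested_spending - actual_spending   # the "$15 million cut"
    actual_change = actual_spending - previous_spending       # a $35 million increase
    print(f"Reported 'cutback': ${reported_cutback} million")
    print(f"Actual change from the prior year: +${actual_change} million")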
For many of the anointed, it was never sufficient to declare the Reagan administration’s economic, social, or foreign policies mistaken, malign, or even dangerous. It was necessary to ridicule them as the products of a consummately stupid president—an “amiable dunce,” as Democratic elder statesman Clark Clifford called him.92 This denigration of Ronald Reagan began even before he became president, and was in fact one of the reasons why his chances of becoming even the Republican nominee, much less president, were considered nil. As Washington Post editorial board member Meg Greenfield recalled the mood she saw among Washington insiders in 1980:
It was the wisdom of the other contenders and of most Republican Party leaders, too, not to mention of practically everyone in Democratic politics, that Reagan was: too old, too extreme, too marginal and not nearly smart enough to win the nomination. The Democrats, in fact, when they weren’t chortling about him, were fervently hoping he would be the nominee. When he carried the convention in Detroit, people I knew in the Carter White House were ecstatic.93
This assessment of Reagan remained, even after he defeated President Carter in a landslide in the 1980 elections. This view of him remained unchanged as he got major legislation—the “Reagan revolution”—through Congress over the opposition of those who disdained him, despite the fact that the Republicans were never a majority in both houses of Congress during the first Reagan administration and were not a majority in either house during the second. In a 1987 essay full of condescending references to “Ronnie,” Gore Vidal used as the crowning example of President Reagan’s being out of touch with reality the following quotation from the president: “I believe that communism is another sad, bizarre chapter in human history whose last pages even now are being written.”94 The later sudden collapse of communism in the Soviet bloc was foreseen by very few of the anointed who ridiculed Reagan.
The point here is not to reassess the Reagan administration—a task that can be left to future historians—but to examine the role of evidence for the anointed. Here, as elsewhere, the criteria they used were not pragmatic criteria of success, whether at the polls, in Washington politics or on the international stage. The overriding criterion was consonance with the vision of the anointed, and Ronald Reagan had to fail that test, because no president in half a century was so completely out of step with that vision. The choices facing the anointed were abandonment of a cherished vision or depicting Ronald Reagan as a bumbling idiot, even if that meant treating concrete evidence as irrelevant.
Complaints about the declining standards of the younger generation, about rising crime rates, or about any of a number of other concerns have been ridiculed by the anointed, by quoting people who voiced similar concerns and perhaps dire predictions in the past. Not only does this tactic relieve the anointed of any responsibility to debate the specific merits of the particular issue at hand, it further reinforces their more general picture of an irrational public, whose views need not be taken seriously.
Those who complained about the rising crime rates that followed the judicial expansion of criminals’ “rights” in the 1960s were ridiculed by quoting people who had complained about coddling criminals in earlier times, and who had predicted dire consequences. Complaints about the declining behavioral standards of the younger generation go even further back into history—into Roman times, for example—and so are even more useful for purposes of pooh-poohing such complaints today. It is seldom considered necessary to offer any evidence that (1) these complaints were without foundation, or that (2) the failure of the degeneration to reach the dire conditions warned against was not due to those warnings and the actions they spurred to forestall disaster and turn the situation around. One historical example may illustrate the point—and while one example is seldom decisive for a whole spectrum of issues, it is a salient demonstration of the pitfalls of reaching sweeping conclusions without evidence.
In early nineteenth-century America, there were many public alarms about drunkenness, violence, and crime. Moreover, empirical evidence suggests that these alarms were well-founded. As a result, massive campaigns against these social ills were launched by numerous organizations at both local and national levels. These included a temperance movement that swept across the country, along with religious revivals, and the creation of the Young Men’s Christian Association, with a moral message accompanying its athletic and other activities. Organized and uniformed police departments were created in big cities, replacing more haphazard methods of law enforcement. Employers began to ask job applicants for evidence of church membership. Volunteer organizations began placing homeless urban street urchins with farm families. In short, there was an all-out effort on many fronts against social degeneracy.95 Moreover, those efforts paid off. Per capita consumption of alcohol began declining in 1830 and by 1850 was down to about one-fifth of what it was in 1829.96 A decline in crime began in the mid-nineteenth century and continued on into the early twentieth century.97
While there were as always “many factors” at work, one of the more obvious being a changing age composition of the population, this change was found to account for only a small fraction of the declining crime rates.98 More fundamentally, from the standpoint of evaluating pooh-pooh history, there was a very real problem to begin with, very real efforts to deal with it, and very real progress following those efforts. It was not simply that the benighted voiced unfounded hysteria for the anointed to pooh-pooh.
The era in which trends in crime, drunkenness, and other social degeneracy were turned around was of course the era of “Victorian morality,” so much disdained by the anointed of later times. Its track record, however, compares favorably with the later track record of the opposite approach.
There are too many discussion tactics that substitute for substantive arguments to permit a comprehensive survey. Half a dozen common substitutes may be illustrative, however. They are (1) the “complex” and “simplistic” dichotomy; (2) all-or-nothing rhetoric; (3) burying controversial specifics in innocuous generalities; (4) shifting to the presumed viewpoint of someone else, in lieu of supporting one’s own assertions with evidence or logic; (5) declaring “rights”; and (6) making opaque proclamations with an air of certainty and sophistication.
One of the most frequently recurring buzzwords of the contemporary anointed is “complex,” often said with a sense of superiority toward those who disagree with them—the latter being labeled “simplistic.”
The real world is, of course, more complex than any statement that anyone can make about it, whether that statement is in three words or in three volumes. An exhaustive description of a watch, for example, including its internal mechanisms, the various sources of the materials from which it was produced, as well as the principles of physics which determine how the watch keeps time, not to mention the conceptual complications in the notion of time itself (wrestled with by Albert Einstein and Stephen Hawking, among others), would fill volumes, if not shelves of volumes—quite aside from the economic complications involved in the financing, production, and worldwide distribution of watches in very different economies. Yet, despite all this, most of us would find nothing wrong with the simple statement that Joe was wearing a watch, so that he could tell what time to stop work and go home. Nor would we question its validity on grounds that it was “simplistic.”
A truly exhaustive description being never-ending, we necessarily accept less than exhaustive descriptions all the time. What is truly simpleminded is to use that fact selectively to dismiss unpalatable conclusions, without having to offer either evidence or logic, beyond the bare assertion that these conclusions are “simplistic” in general or, more specifically, because they left out some particular element. Demonstrating that the omitted element changes the relevant conclusion in some fundamental way is the real task—a task often avoided merely by using the word “simplistic.”
Sometimes there is an underlying assumption that complex social phenomena cannot have simple causes. Yet many of the same people who reason this way have no difficulty accepting a theory that a giant meteorite striking the Earth—a very simple event, however catastrophic—could have had ramifications that included dust clouds obscuring the sun, leading to falling temperatures all over the planet and expansion of the polar ice cap, resulting in migrations and extinctions of whole species.
With social phenomena as well, a simple act can have complicated repercussions. A federal law saying simply that no interest rate in the United States could exceed 4 percent per annum would have enormously complicated repercussions, from the stock market to the construction industry, from oil exploration to credit card availability. Whole occupations, firms, and industries would be devastated. Organized crime, with its loan sharks, would flourish. Massive international capital movements would derange trade and payments between nations, disrupting economies around the world and straining relations among regional and global power blocs. All these complications—and more—would result from a law written in one short sentence, simple enough to be understood by any 10-year-old.
Complex phenomena may, of course, also have complex causes. But the a priori dogma that they cannot have simple causes is part of the “complex” complex. It is one more way of seeming to argue, without actually making any argument. It is also one more example of the presumption of superior wisdom and/or virtue that is at the heart of the vision of the anointed. As a tactical matter, this dogma enables them to deny, on purely a priori grounds, that their various “compassionate” interventions in legal, economic, or social systems could have been responsible for the many counterproductive consequences which have so often followed.
Despite attempts to dismiss unpalatable conclusions on grounds that they are “oversimplified,” nothing is oversimplified unless it is wrong—and wrong specifically for the purpose at hand. The ancient Ptolemaic conception of the universe has long since been rejected as incorrect, in favor of the more sophisticated Copernican system, but the Ptolemaic system continues to be used by modern astronomers to compute the times and durations of eclipses—and it does so with accuracy down to fractions of a second. The points on which the Ptolemaic system is wrong simply do not affect these kinds of calculations. Since its assumptions are simpler than those of the Copernican system, it is easier to use for calculation, without sacrificing accuracy. For other purposes, such as sending a spacecraft to Mars or Venus, the Ptolemaic conception of the universe must give way to the Copernican conception—because the latter gives more accurate information for that purpose.
Since all theories of complex phenomena must be simplified, in order to be completed within the lifetime of the analyst, the question as to whether a particular theory is oversimplified is ultimately an empirical question as to whether it leads to demonstrably false conclusions for the purpose at hand. Demonstrating the falsity of the conclusions—not of the assumptions, which are always false, at least in the sense of being incomplete—is a precondition for determining that a theory is oversimplified. Merely labeling an analysis “oversimplified” on a priori grounds puts the cart before the horse, by evading responsibility for first demonstrating the falsity of its conclusions.
If there are hundreds of factors involved in some phenomenon—whether physical or social—and someone claims to be able to predict that phenomenon with a high degree of accuracy by using only three of those factors, then the question as to whether this is madness or genius is ultimately a question as to whether he can actually do it. It is not a question as to whether it seems plausible. The theory of mercantilism may seem more plausible than Einstein’s theory of relativity, but Einstein’s theory has been verified—most notably at Hiroshima and Nagasaki—while mercantilism has failed repeatedly over the centuries, though still surviving politically on the basis of its plausibility.
The most fundamental reason for not using plausibility as a test is that what seems plausible is a function of our existing assumptions, and so cannot be a test of those assumptions. To dismiss opposing arguments on the a priori ground that they are “simplistic” is to seal off the prevailing vision from feedback.
An appreciation of the many complexities involved in resolving controversial issues might suggest that the existence of alternative (or opposing) conclusions is something quite reasonable to expect among intelligent and informed individuals who read the complicated evidence differently, or who weigh the intricate factors or the perplexing probabilities differently. From this perspective, complexity suggests intellectual or ideological tolerance. Yet that is seldom the conclusion drawn by the anointed. Despite their emphasis on complexity, the issue is almost never considered that complex. It is just complex enough that intelligent and compassionate individuals should clearly be on one side, while those on the other side are considered deficient in at least one of these qualities. This attitude was epitomized in New York Times columnist Anthony Lewis’ view of the much-debated—and indeed complex99—death penalty issue. Capital punishment will continue, he said, “until, perhaps, someday, reason overtakes primitive emotion.”100 Could anything be more self-congratulatory? In a similar vein, Justice Harry Blackmun wrote in one of his Supreme Court opinions: “I fear for the darkness as four justices anxiously await the single vote necessary to extinguish the light.”101 In other words, those of his colleagues who differed from him were the forces of darkness. In a similar vein and on a different set of issues, Ralph Nader declared: “The issues are black and white” and “No honest person can differ.”102
Most differences that matter in real life are differences of degree—even when these are extreme differences, such as that between an undernourished peasant, owning only the rags on his back, and a maharajah bedecked in gold and living in one of his several palaces. Yet a polemical tactic has developed which enables virtually any general statement, however true, to be flatly denied, simply because it is not 100 percent true in all circumstances. The simplest and most obvious statement—that the sky is blue, for example—can be denied, using this tactic, because the sky is not always blue. It is reddish at sunset, black at midnight, and gray on an overcast day. Thus, it is “simplistic” to say that the sky is blue. By the same token, it is “simplistic” to say that the ocean is water, because there are all sorts of minerals dissolved in the ocean, which also contains fish, plant life, and submarines, among other things.
This trivializing tactic is widely, but selectively, used to deny whatever needs denying, however true it may be. Even in the days of Stalin, to make a distinction between the Communist world and the free world was to invite sarcastic dismissals of this distinction, based upon particular inadequacies, injustices, or restrictions found in “the so-called ‘free world,’” as the intelligentsia often characterized it, which kept it from being 100 percent free, democratic, and just. This tactic persisted throughout a whole era, during which millions of human beings in Europe and Asia fled the lands of their birth, often leaving behind their worldly belongings and severing the personal ties of a lifetime, sometimes taking desperate chances with their own lives and the lives of their children—all in order to try to reach “the so-called ‘free world.’” This verbal tactic continued, even as some Communist nations themselves chose to undergo political convulsions and economic chaos, in order to try to become more like “the so-called ‘free world.’”
All-or-nothing reasoning allows the anointed to say that such things as crime, child abuse, and alcoholism occur in all classes, that all segments of society are susceptible to AIDS, and otherwise obfuscate the very large and very consequential differences in all these areas. All-or-nothing rhetoric has likewise served as a substitute for arguments in many other contexts. Attempts to resist the escalating politicization of courts, colleges, and other institutions have been met with similarly derisive dismissals, on grounds that these institutions are already political, so that it is “hypocritical” to protest their politicization now, merely because of the ascendancy of political ideas with which one disagrees.103 This all-or-nothing argument has become a standard response to any resistance to the escalating politicization of any institution or organization.
Obviously, if there is not complete anarchy, there must be some political structure, and the institutions within the society must in some way be linked to those structures, if only by obeying the laws of the land. No institution in any society can possibly be nonpolitical in the ultimate sense of being hermetically sealed off from governmental authority. Yet centuries of struggle and bloodshed have gone into the effort to create zones of autonomy, constitutional limitations on government, and institutional traditions, all in order to insulate individuals and organizations from the full impact of political activity and governmental power. The separation of church and state, the sanctity of the doctor-patient relationship, the lifetime appointments of federal judges, and the exemption of spouses from testifying against one another in court are just some of the examples of these attempts to provide insulation from governmental power and the political process.
Nevertheless, all-or-nothing rhetoric has been used to deny that any institution is nonpolitical, thereby justifying such things as turning classrooms into propaganda centers and judges disregarding the written law, in order to substitute their own social theories as a basis for judicial rulings. At the very least, one might debate the specific merits or consequences of such actions, rather than have the whole issue preempted by the trivializing argument that educational institutions or courts are already “political”—in some sense or other. Extreme differences of degree are commonly understood as differences in kind, as when we refer to a maharajah as “rich” and a hungry peasant as “poor,” even though each owns something and neither owns everything.
A special variant of the all-or-nothing approach is what might be called tactical agnosticism. Law professor Ronald Dworkin, for example, objected to application of laws against inciting to riot because “we have no firm understanding of the process by which demonstration disintegrates into riot.”104 Apparently society must remain paralyzed until it has definitive proof, which of course no one has with most decisions, personal or social.
All-or-nothing tactics are almost infinitely adaptable as substitutes for arguments and evidence on a wide range of issues. For example, any policy proposals to which the anointed object can be dismissed as “no panacea.” Since nothing is a panacea, the characterization is always correct, regardless of the merits or demerits of the policy or its alternatives. This categorical phrase simply substitutes for logic or evidence as to those merits or demerits. Conversely, when a policy promoted by the anointed turns out to create more problems than it solves (if it solves any), attempts to show how the previous situation was far better are almost certain to be dismissed on grounds that opponents are nostalgic for a “golden age” which never existed in reality. Golden ages being as rare as panaceas, this truism again serves to preempt any substantive argument about the merits or demerits of alternative policies.
The all-or-nothing fallacy is also used to deal with analogies used for or against the vision of the anointed. Because all things are different, except for the similarities, and are the same except for the differences, any analogy (however apt) can be rejected by those who find it a sufficient objection that the things being analogized are not “really” the same. By the same token, any analogies favored (however strained) can be defended on grounds that those things analogized involve the same “underlying” or “essential” principle. Ice and steam are chemically the same thing, though of course they are not physically the same thing. Just as it is possible to make or deny an analogy between ice and steam, so any other analogy can be selectively made or denied. Similarly, anything can be said to have “worked” (as Lincoln Steffens said somewhat prematurely of the Soviet Union), or to have failed (as critics said of the Reagan administration’s policies), because everything works by sufficiently low standards and everything fails by sufficiently high standards. Such statements are not arguments. They are tactics in lieu of arguments—and they are accepted only insofar as they are consonant with the prevailing vision.
A special variant of the all-or-nothing principle is the view that either one knows exactly what particular statements mean or else one is free to engage in adventurous reinterpretations of the words. In literature this is called “deconstruction” and in the law it is called “judicial activism.” Proponents of judicial activism, for example, make much of the fact that the Constitution of the United States in some places lacks “precision” or is not “exact.”105 Ultimately, nothing is exact—not even physical measurements, for the instruments themselves cannot be made 100 percent accurate. In the real world, however, this theoretical difficulty is resolved in practice by establishing tolerance limits, which vary with the purpose at hand. A precision optical instrument that is off by half an inch may be wholly unusable, while a nuclear missile that lands 5 miles off the target has virtually the same effect as if it had landed directly in the center of the target. However, in the vision of the anointed, the absence of precision becomes an authorization for substituting the imagination. In reality, however, the question is not what exactly the Constitution meant by “cruel and unusual punishment” but whether the death penalty, for example, was included or excluded. Precision is a red herring.
All-or-nothing arguments are not mere intellectual errors. They are tactics which free the anointed from the constraints of opposing arguments, discordant evidence, or—in the case of judicial activism—from the constraints of the Constitution. Most important of all, they are freed from the feedback of uncooperative reality.
Yet another technique for arguing without actually using any arguments is to bury the specifics of one’s policy preferences in a vast generality, so diffuse that no one can effectively oppose it. For example, many people say that they are for “change”—either implying or stating that those opposed to the specific changes they advocate are against change, as such. Yet virtually no one is against generic “change.”
The staunchest conservatives advocate a range of changes which differ in specifics, rather than in number or magnitude, from the changes advocated by those considered liberal or radical. Milton Friedman wrote a book entitled The Tyranny of the Status Quo and the policy changes of the 1980s have been called “The Reagan Revolution.” Edmund Burke, the patron saint of conservatism, said: “A state without the means of change is without the means of its conservation.”106 Change, as such, is simply not a controversial issue. Yet a common practice among the anointed is to declare themselves emphatically, piously, and defiantly in favor of “change.” Thus, those who oppose their particular changes are depicted as being against change in general. It is as if opponents of the equation 2+2=7 were depicted as being against mathematics. Such a tactic might, however, be more politically effective than trying to defend the equation on its own merits.
Change encompasses everything from genocide to the Second Coming. To limit the term to beneficial change—to “progress”—is to be no more specific. Quite aside from whether the result anticipated will actually follow from the policies advocated, there are often serious differences of opinion as to whether a given empirical result is in fact morally or socially desirable. Everyone is for a beneficial outcome; they simply define it in radically different terms. Everyone is a “progressive” by his own lights. That the anointed believe that this label differentiates themselves from other people is one of a number of symptoms of their naive narcissism.
In academic circles, the equally vast generality is “diversity,” which often stands for a quite narrow social agenda, as if those who reiterate the word “diversity” endlessly had no idea that diversity is itself diverse and has many dimensions besides the one with which they are preoccupied. Advocates of diversity in a race or gender sense are often quite hostile to ideological diversity, when it includes traditional or “conservative” values and beliefs.
“Innovative” is another of the generalities used in place of arguments, and “making a difference” is likewise promoted as something desirable, without any specific arguments. However, the Holocaust was “innovative” and Hitler “made a difference.” The anointed, of course, mean that their particular innovations will be beneficial and that the differences their policies make will be improvements. But that is precisely what needs to be argued, instead of evading the responsibility of producing evidence or logic by resorting to preemptive words.
Often discussions of political controversies begin in the conventional forms of an argument, examining opposing assumptions, reasons, logic, or evidence—and then shift suddenly to presenting the opaque conclusions of one side. This tactic is yet another way for the anointed to appear to argue, without the responsibility of actually producing or defending any arguments.
During the student riots of the 1960s, for example, columnist Tom Wicker’s reply to those who charged that reason and civility were being violated on university campuses was, “all too often, as today’s students see things, ‘reason and civility’ merely cloak hypocrisy and cynicism, and they aim to ‘strike through the mask.’” As regards the student seizure of a building at Harvard, Wicker said:
Who really abandoned “reason and civility,” students asked—the students who seized a building to protest the Harvard Corporation’s retention of R.O.T.C. or the administrators who called in police to evict the protesters with what was widely regarded as excessive violence?107
This verbal sleight-of-hand not only enabled Wicker to put forth justifications without having to justify, but also to transform a vocal and violent minority into spokesmen for students in general:
Students everywhere volubly hold to the belief that an “Establishment”—political, social, economic, military—manipulates society for its own ends, so that the popular rule of the people is a myth. The war goes on, racism continues, poverty remains, despite the familiar American preachments of peace, democracy, prosperity, and the rule of reason.108
If Wicker had said on his own that the failure of American society—like every other society on Earth—to solve all its problems was justification for violence, he would have been expected to produce some logic or evidence of his own. Instead, he was able to shift to the viewpoint of “students everywhere” and to characterize them as “idealistic youth.”109 In general, to say that “it appears to some observers”110 that this or that is true is meaningless as a justification, because there would obviously be no issue in the first place unless some other observers saw things differently.
Illegitimate as it is to evade the responsibility for one’s conclusions by shifting to someone else’s viewpoint, it is doubly illegitimate when that is merely a presumed viewpoint, contradicted by what the others you are relying on actually said. In the wake of the Los Angeles riots of 1992, for example, many of the anointed justified the violence and destruction by shifting to the presumed viewpoint of “the black community”—when in fact 58 percent of blacks polled characterized the riots as “totally unjustified.”111 Justifying criminal activity by shifting to the presumed viewpoint of others extends far more widely in time and space. Ramsey Clark, for example, declared: “Nothing so vindicates the unlawful conduct of a poor man, by his light, as the belief that the rich are stealing from him through overpricing and sales of defective goods.”112 Not a speck of evidence was presented to show that the typical poor person in fact saw things this way “by his light” or by the light of Ramsey Clark, for that matter.
Even in scholarly—or at least academic—studies, the shifting viewpoint has substituted for both logic and evidence. For example, historian David Brion Davis said, “Emerson recognized the economic motive in British emancipation” of slaves in its empire,113 thus relieving himself of the formidable task of substantiating this conclusion in the teeth of massive evidence to the contrary. Words like “recognized” or “admitted,” attached to selected quotations, shift both the viewpoint and the burden of proof.
Often a shifting to the individual, in an issue that reaches far beyond that individual, takes the more specific form of what logicians call “the fallacy of composition”—the assumption that what applies to a part applies also to the whole. For example, it is true that one person in a stadium crowd can see the game better if he stands up but it is not true that, if they all stand up, everyone will see better. Those who focus on the effects of any particular government policy or judicial ruling on particular individuals or groups often implicitly commit the fallacy of composition, for the whole point of government policies and judicial rulings is that they apply very broadly to many people—and what happens as a consequence to those who are ignored is no less important than what happens to those who have been arbitrarily singled out by an observer.
New York Times columnist Anna Quindlen, for example, responded to criticisms of disruptions by gay activists demanding more money for AIDS by saying: “If I could help give someone I loved a second chance, or even an extra year of life, what people think would not worry me a bit.”114 In other words, the desires of the arbitrarily selected group are made the touchstone, not the consequences of such behavior on other people whose money is to be commandeered for their benefit—or the consequences for society in making mob rule the mode of social decision making. In a similar vein, Quindlen referred to letters from other readers on other issues saying, “Thank you for speaking our truth.”115 However lofty and vaguely poetic such words may seem, the cold fact is that the truth cannot become private property without losing its whole meaning. Truth is honored precisely for its value in interpersonal communication. If we each have our own private truths, then we would be better off (as well as more honest) to stop using the word or the concept and to recognize that nobody’s words could be relied upon anymore. The more subtle insinuation is that we should become more “sensitive” to some particular group’s “truth”—that is, that we should arbitrarily single out some group for different standards, according to the fashions of the times or the vision of the anointed.
One of the most remarkable—and popular—ways of seeming to argue without actually producing any arguments is to say that some individual or group has a “right” to something that you want them to have. Conceivably, such statements might mean any of a number of things. For example:
1. Some law or government policy has authorized this “right,” which is somehow still being denied, thereby prompting the reassertion of its existence.
2. Some generally accepted moral principle has as its corollary that some (or all) people are entitled to what the “right” asserts, though presumably the fact that this right needs to be asserted suggests that others have been slow to see the logical connection.
3. The person asserting the particular “right” in question would like to have some (or all) people have what the right would imply, even if no legal, political, or other authorization for that right currently exists and there is no general consensus that it ought to exist.
In the first two cases, where there is some preexisting basis for the “right” that is claimed, that basis need only be specified and defended. Still, that requires an argument. The third meaning has become the more pervasive meaning, especially among those with the vision of the anointed, and is widely used as a substitute for arguments. Take, for example, the proposition, “Every American has a right to decent housing.” If all that is really being said is that some (or all) of us would prefer to see all Americans living in housing that meets or exceeds whatever standard we may have for “decent housing,” then there is no need for the word “rights,” which conveys no additional information and which can be confused with legal authorizations or moral arguments, neither of which is present. Moreover, if we are candid enough to say that such “rights” merely boil down to what we would like to see, then there is no need to restrict the statement to Americans or to housing that is merely “decent.” Surely we would all be happier to see every human being on the planet living in palatial housing—a desire which has no fewer (and no more) arguments behind it than the “right” to “decent” housing.
However modest a goal, “decent” housing does not produce itself, any more than palatial housing does. Be it ever so humble, someone has to build a home, which requires work, skills, material resources, and financial risks for those whose investments underwrite the operation. To say that someone has a “right” to any kind of housing is to say that others have an obligation to expend all these efforts on his behalf, without his being reciprocally obligated to compensate them for it. Rights from government interference—“Congress shall make no law,” as the Constitution says regarding religion, free speech, etc.—may be free, but rights to anything mean that someone else has been yoked to your service involuntarily, with no corresponding responsibility on your part to provide for yourself, to compensate others, or even to behave decently or responsibly. Here the language of equal rights is conscripted for service in defense of differential privileges.
More important, from our current perspective, all this is done without arguments, but merely by using the word “right,” which arbitrarily focuses on the beneficiary and ignores those whose time and resources have been preempted. Thus, for example, health care was declared by Bill Clinton during the 1992 election campaign to be “a right, not a privilege”116—a neat dichotomy which verbally eliminates the whole vast range of things for which we work, precisely because they are neither rights nor privileges. For society as a whole, nothing is a right—not even bare subsistence, which has to be produced by human toil. Particular segments of society can of course be insulated from the necessities impinging on society as a whole, by having someone else carry their share of the work, either temporarily or permanently. But, however much those others recede into the background in the verbal picture painted by words like “rights,” the whole process is one of differential privilege. This is not to say that no case can ever be made for differential privileges, but only that such a case needs to be made when privileges are claimed, and that the arguments required for such a case are avoided by using words like “rights.”
Health care is only one of innumerable things for which such tactical evasions have been used. Housing, college, and innumerable other costly things have been proclaimed to be “rights.” New York Times columnist Tom Wicker encompassed all economic goods by proclaiming a “right to income.”117 Some have extended this reasoning (or nonreasoning) beyond material goods to such things as a right to “equal respect”—which is to say, the abolition of respect, which by its very nature is a differential ranking of individuals according to some set of values. To say that we equally respect Adolf Hitler and Mother Teresa is to say that the term respect has lost its meaning.
The language of “rights” has other ramifications. Rights have been aptly characterized as “trumps”118 which override other considerations, including other people’s interests. For the anointed to be announcing rights for particular segments of the population is for them to be choosing others as their mascots—and to be seeking to get the power of the state to ratify and enforce these arbitrary choices, all without the necessity of making specific arguments.
Perhaps the purest example of an argument without an argument is to say that something is “inevitable.” This is an inherently irrefutable argument, so long as any time remains in the future. Only in the last fraction of a second of the existence of the universe could anyone refute that claim—and perhaps they would have other things on their mind by then.
Whether particular policies are favored or opposed, there are opaque proclamations of this sort which substitute for arguments. And whether a policy is favored or opposed, it may be currently existing or nonexistent. In each of these four cases, there are proclamations that substitute for arguments, as illustrated below:
                   FAVORED            OPPOSED
EXISTING           “Here to stay”     “Outmoded”
NON-EXISTING       “Inevitable”       “Unrealistic”
It is one of the signs of our times that such proclamations are so widely accepted in lieu of arguments—but only when used in support of the prevailing vision of the anointed.
Perhaps a few suggestions might be in order for seeing through much of the rhetoric of the anointed. Some of the things discussed in previous chapters, as well as in this one, illustrate some general principles of common sense, which are nevertheless often widely ignored in the heat of polemics:
1. All statements are true, if you are free to redefine their terms.
2. Any statistics can be extrapolated to the point where they show disaster.
3. A can always exceed B if not all of B is counted and/or if A is exaggerated.
4. For every expert, there is an equal and opposite expert, but for every fact there is not necessarily an equal and opposite fact.
5. Every policy is a success by sufficiently low standards and a failure by sufficiently high standards.
6. All things are the same, except for the differences, and different except for the similarities.
7. The law of diminishing returns means that even the most beneficial principle will become harmful if carried far enough.
8. Most variables can show either an upward trend or a downward trend, depending on the base year chosen.
9. The same set of statistics can produce opposite conclusions at different levels of aggregation.
10. Improbable events are commonplace in a country with more than a quarter of a billion people.
11. You can always create a fraction by putting one variable upstairs and another variable downstairs, but that does not establish any causal relationship between them, nor does the resulting quotient have any necessary relationship to anything in the real world.
12. Many of the “abuses” of today were the “reforms” of yesterday.