For words are wise mens counters,
they do but reckon by them: but
they are the mony of fooles…
Thomas Hobbes, 1651
While numbers can be used in ways that are deceptive as regards particular issues, words can be used in ways that are more sweepingly deceptive as regards how a whole society is seen.
Numbers may deceive us because of their apparent objectivity, but words can deceive more comprehensively because of emotional appeals that numbers seldom have. There may be very legitimate reasons to react adversely to words like “war,” “racism” or “murder,” but it is the illegitimate invoking of emotionally charged words that is especially dangerous—as anything that overrides thought, or substitutes for thought, can be dangerous. Emotional manipulation is, however, only one of the dangers when words are used in ways that obscure both realities and the connections of cause and effect behind those realities.
Decent people can be appalled by the many oppressions and persecutions that have abounded throughout history. But to weigh the current causal effects of such oppressions and persecutions of the past—or even of the present—is very different from simply reciting a litany of wickedness, as if that automatically establishes causation for other events.
In seeking to establish the causes of poverty and other social problems among black Americans, for example, sociologist William Julius Wilson pointed to factors such as “the enduring effects of slavery, Jim Crow segregation, public school segregation, legalized discrimination, residential segregation, the FHA’s redlining of black neighborhoods in the 1940s and ’50s, the construction of public housing projects in poor black neighborhoods, employer discrimination, and other racial acts and processes.”1
These various facts might be summarized as examples of racism, so the causal question is whether racism is either the cause, or one of the major causes, of poverty and other social problems among black Americans today. Many might consider the obvious answer to be “yes.” Yet some incontrovertible facts undermine that conclusion. For example, despite the high poverty rate among black Americans in general, the poverty rate among black married couples has been less than 10 percent every year since 1994.2
The poverty rate of married blacks is not only lower than that of blacks as a whole, but in some years has also been lower than that of whites as a whole.3 In 2016, for example, the poverty rate for blacks was 22 percent, for whites was 11 percent, and for black married couples was 7.5 percent.4
Do racists care whether someone black is married or unmarried? If not, then why do married blacks escape poverty so much more often than other blacks, if racism is the main reason for black poverty? If the continuing effects of past evils such as slavery play a major causal role today, were the ancestors of today’s black married couples exempt from slavery and other injustices?
As far back as 1969, young black males whose homes included newspapers, magazines, and library cards, and who also had the same education as young white males, had incomes similar to those of their white counterparts.5 Do racists care whether blacks have reading material and library cards?
Highly successful chains of charter schools, like the KIPP (Knowledge Is Power Program) schools and the Success Academy schools, are places where minority children from low-income families often score much higher on educational tests than other low-income and minority youngsters in the regular public schools. Sometimes they also score higher than students in school districts where most of the children are white and from families with higher incomes. On statewide tests in 2013, fifth-graders in one of the Success Academy schools in Harlem were reported in the New York Times as having “surpassed all other public schools in the state in math, even their counterparts in the whitest and richest suburbs, Scarsdale and Briarcliff Manor.” 6
These charter schools cannot change the facts of history, however. Their highly successful educational results, with ghetto children selected by lottery rather than ability, suggest that historic injustices—however deserving of condemnation—are not automatically current destiny. The stark question then is: do we want a better future for such children, and for blacks in general, or do we want the continued repetition of a “legacy of slavery” mantra, in order to preserve a social vision and the political careers, institutional fiefdoms and shakedown opportunities based on that mantra and that vision?
The crucial question is not whether evils exist but whether the evils of the past or present are automatically the cause of major economic, educational and other social disparities today. The bedrock assumption underlying many political or ideological crusades is that socioeconomic disparities are automatically somebody’s fault, so that our choices are either to blame society or to “blame the victim.” Yet whose fault are demographic differences, geographic differences, birth order differences or cultural differences that evolved over the centuries before any of us were born?
If we are serious about seeking causation, we must look beyond emotional words, which are not necessarily intended to inform or convince, but often achieve their goal if they simply overwhelm through repetition or silence through intimidation.
To his credit, Professor Wilson has noted that, when making comparisons of different time periods, the degree of various adverse economic factors in black communities has not corresponded with changes in the behavior of the people living in those communities:
Despite a high rate of poverty in ghetto neighborhoods throughout the first half of the twentieth century, rates of inner-city joblessness, teenage pregnancy, out-of-wedlock births, female-headed families, welfare dependency, and serious crime were significantly lower than in later years and did not reach catastrophic proportions until the mid-1970s.7
Fear of violence in black communities was demonstrably less in earlier years, as Professor Wilson also noted:
Blacks in Harlem and in other ghetto neighborhoods did not hesitate to sleep in parks, on fire escapes, and on rooftops during hot summer nights in the 1940s and 1950s, and whites frequently visited inner-city taverns and nightclubs.8
Many examples from other sources reinforce the same conclusion, that black communities were not nearly as dangerous in the past as they became in the second half of the twentieth century.9 In short, the supposed causes of major social pathologies in black ghettos today were much worse in the first half of the twentieth century than in the second half—and yet it was in the second half of that century that social pathologies became more widespread and ghettos more dangerous. In fact, this pattern extends far beyond black ghettos and far beyond the United States, as we can see when examining similar social degeneration, during the same time period, among low-income whites in England.
Meanwhile, what are we to make of the fact that blacks who are married or who have library cards have had economic outcomes so different from those of blacks as a group?
It seems unlikely that marriage, as such, or library cards, as such, directly cause the difference. What seems more probable is that these are indicators of cultural lifestyle choices in general that make for better economic prospects. Further evidence for this is that, while the labor force participation rate of black males as a whole has in recent times been lower than that of white males as a whole, the labor force participation rate of married black males has been higher than that of white males who never married. Moreover, this has been true for more than 20 consecutive years.10
Apparently individual lifestyle choices have major consequences for both blacks and whites. Marriage, library cards, labor force participation and entering one’s children in lotteries for charter schools are among the indicators of the cultural values behind those lifestyle choices and socioeconomic outcomes.
Alternative explanations might be advanced as hypotheses to be tested empirically. But what is far more common is the deploying of a whole vocabulary of words and phrases which evade any such empirical confrontation of opposing explanations. Instead, it is often simply asserted that a “web of rules and institutions” is what leads to “unequal outcomes.” These outcomes include “racial and gender pay gaps” and that blacks are “overrepresented among unemployed and low-wage workers and underrepresented in the middle class.”11
No one on either side of these issues has denied that there are different outcomes in different groups of Americans, as there have been different outcomes in different groups in other countries around the world, and over thousands of years of recorded history. What is at issue here, as in other times and places, are the causes of those differences. Merely reciting these differences and arbitrarily attributing them to whatever the prevailing social vision of the time declares to be the cause—whether genes or discrimination—is hardly hypothesis-testing. Slippery words are among the many ways of evading an empirical confrontation of opposing explanations.
In many discussions of social visions and social policies, familiar words have often been used in new ways, to mean something very different from what those words meant before. Among the words given new and often misleading meanings are such common and simple words as “change,” “opportunity,” “violence” and “privilege.” Conversely, old meanings have been expressed by new words, as vagrants became “the homeless,” exultant young thugs became “troubled youths,” and Balkanization became “diversity.”
As one of the most often used, and least often examined, words of our time, “diversity” may be as good a place as any to begin an examination of the world of words, and its contrast with the world of reality.
A fragmented society of people polarized into separate group identities used to be called a “Balkanized” society, and the painful history of strife, bloodshed and atrocities in the Balkans stood as an example of how destructive that can be to all. But that was before the word “Balkanization” was replaced by the much nicer-sounding word “diversity,” from which all sorts of wonderful benefits have been assumed and incessantly proclaimed, without any empirical test of those claims. This new and nicer-sounding word has also kept the painful history of the Balkans—and of similar places elsewhere around the world—from being called to mind.
By “diversity” those who incessantly proclaim that word, and its presumed benefits, mean more than simply people with different cultures interacting. The word “diversity” is used to imply positive interactions, with benefits for the various participants and for society at large. But we cannot simply define our way into beneficial outcomes. Whether the promotion of separate identities—by race, sex or other characteristics—is beneficial or harmful in its consequences is an empirical question—and a question almost never confronted by apostles of “diversity.” The actual track record of promoting separate group identities, whether called “Balkanization” or “diversity,” has been appalling, in countries around the world.
Among the most disturbing social realities are societies in which different groups live with at least forbearance toward each other for years, or even generations, until some spark—whether a particular incident or a talented demagogue—comes along and suddenly sets off a nightmare of horrors.
In India—a country severely cross-cut by differences in castes, religions, languages and cultures—the country’s emergence into national independence in 1947 was marked by hundreds of thousands of people killed in mob violence against each other. Since then, there have been innumerable local outbreaks of lethal violence among India’s many discrete groups at various times and places. Outbreaks of intergroup violence in Mumbai (Bombay) in 1992–93, for example, were reported in The Times of India as including “neighbours leading long-time friends to gory deaths.”12
Nor was India unique in this regard. A study of twentieth century genocides in the Balkans noted that “outbursts of hatred and great violence occurred between people who had also known times of harmony or at least passive acceptance of each other.”13
Sri Lanka was another country with a very similar pattern. When colonial Ceylon became the independent nation of Sri Lanka in the middle of the twentieth century, many observers—both inside and outside the country—pointed to the good, and even cordial, relations between the Sinhalese majority and the Tamil minority as a model of what intergroup relations should be. As a result, many people predicted not only a peaceful transition to independent nationhood, but a happier future than in other multi-ethnic Third World countries.14
Nevertheless, before the first decade of Sri Lanka’s independence was over, a Sinhalese politician promoted resentments against the more prosperous Tamil minority, during his campaign to become prime minister. Ethnic polarization led first to discriminatory laws against the Tamils, and then to a cycle of violence and counter-violence that ultimately escalated into a decades-long civil war, in which there were unspeakable atrocities on both sides.15
At a minimum, history shows how dangerous it can be, to a whole society, to automatically and incessantly attribute statistical differences in outcomes to malevolent actions against the less successful. That the charge can often be false and misleading might also carry some weight, and merit closer attention to the specific facts of particular cases. Not only the society in general, but lagging groups in particular, can benefit from knowing what is true, as distinguished from what is currently in vogue. Not only does the truth offer a clearer path to advancement, but the breakdown of law and order brought on by constantly stirred bitter resentments almost invariably leads to more suffering among the less fortunate.
In addition to individual transformations of words and meanings, sometimes a particular change of meaning has been imposed on a whole category of words, creating very different implications. For example, some words that refer to initial conditions have been used to describe outcome conditions, making it appear that individuals or groups who did not do as well as others had barriers placed in their paths that others did not have.
Such barriers may in fact be the reason various groups have lagged in educational, economic and other endeavors at various times and places. But the extent to which that is true in any particular case is something to be determined by empirical evidence, not by redefining words that refer to ex post results as if they referred to ex ante conditions. For example, some people have been said to have been denied “opportunity” or “access” to some benefit—mortgage loans, for example—when in fact they have simply failed to meet qualifying standards met by other people who had the same options available at the outset as they did.
Just as some people have been said to have been denied “opportunity” or “access” because their outcomes have been less favorable, so some other people who have had more positive outcomes have been called “privileged,” even if individuals from such groups had no more numerous, nor more favorable, options available at the outset than others whose outcomes have not been as positive.
In each of these cases, there is a confusion between words that apply to conditions beforehand and words that apply to outcomes afterwards. Because someone ended up failing at some endeavor, that does not automatically mean that he was denied opportunity or access at the outset. Whether or not that was true in reality is a major empirical question, and a question too important to be answered by simply shifting the meanings of words.
In some cases, those who succeeded have had even fewer or less favorable options at the outset than others who did not succeed as well. Thus the Chinese minority in Malaysia, who have higher average incomes than the Malay majority, have been referred to as “privileged”16 and the Malay majority as “deprived”17—even though laws and government policies in Malaysia impose preferential treatment of Malays in university admissions and in both government and private employment. Moreover, this particular usage of the terms “privileged” and “deprived” was not taken from political demagoguery but from serious academic studies.
In the ordinary sense of words, it is the Malays who have been privileged, even if they have not used their privileges to produce as beneficial outcomes for themselves as the Chinese have, when using their more limited opportunities. This was in fact a conclusion reached, in his later years, by a Malay leader who had long advocated the creation of preferential policies for Malays.18
Verbal usages, turning reality upside down, have not been confined to Malaysia or to people writing about Malaysia. The confusion of the ex ante with the ex post has become common in America, not only among journalists or politicians, but among academic scholars as well. Even though an achievement is, by common usage, something that has been achieved—that is, an ex post result—the word “privilege” has often been substituted, even though a privilege is something that exists ex ante.
The notion that those who achieved must have been privileged at the outset may be consistent with the prevailing social vision, but the more fundamental question is whether, or to what extent, that vision itself is consistent with empirical facts. However, merely by changing the usage of a word, that empirical test is circumvented.
If the issue were simply one of differences of opinion as to the causes of disparate achievements, that would be nothing new; differences of opinion have been common among human beings for thousands of years, and are in principle capable of being resolved by empirical evidence. But when beliefs are anchored in a social vision protected by redefined words, empirical tests are finessed aside.
It has become increasingly common to refer to achievement as “privilege” throughout the American educational system, where crusades against “white privilege” abound, along with demands for statistical parity based on demographic representation rather than individual productivity.
This is consistent with the seemingly invincible fallacy that social groups would tend to have equal, or at least similar, outcomes in the absence of biased treatment or genetic differences in ability. Here, yet again, there is an inversion of criteria, so that the prevailing social vision is not judged by whether it is consistent with facts but, instead, facts are redefined to make them verbally consistent with the vision.
The phrase “white privilege” is not the only verbal sleight of hand used to make achievement differences vanish. Even racial or ethnic groups that arrived in the United States destitute during the nineteenth century, and were forced to live in a desperate poverty and squalor almost inconceivable today, have had their later rise from such dire conditions verbally erased by calling their eventual achievement of prosperity a “privilege.”19
The histories of Irish, Jewish, Chinese and Japanese immigrants in America are classic examples of this process—and of their achievements being verbally air-brushed out of history by simply calling them “privilege.” Even middle-class blacks today have likewise been characterized by some as “privileged,”20 even though their ancestors arrived as slaves.
Achievements are a threat to a social vision and a political agenda based on that vision, and so are often kept off the hypothesis-testing agenda by adherents of that vision. Redefining words is a key part of that process.
Worse yet, children who are currently being raised with the kinds of values, discipline and work habits that are likely to make them valuable contributors to society, and a source of pride to themselves and to those who raised them, are called “privileged,” and are taught in schools to feel guilty when other children are being raised with values, behavior and habits that are likely to leave them few options as adults, other than to live at the expense of other people, whether via the welfare state or through a life of crime, or both.
Social mobility is another issue distorted verbally, by simply shifting the meaning of the word “mobility.” In the ordinary usage of words, an automobile is considered to be mobile, because it is capable of moving. Even when it is parked, an automobile remains capable of moving, so that no one is surprised to see someone get into a parked car and drive off. The car was always mobile, even when it was not always moving.
Mobility is an ex ante concept, independent of whether movement is actually taking place ex post. But, if a car’s motor is too damaged to function, then that car is no longer mobile, even if a tow truck is moving it at highway speeds. In short, mobility and movement are two fundamentally different things. One is ex ante and the other is ex post.
How much movement takes place does not tell us how much mobility there is. Nevertheless, empirical studies of social movement are cited as a test of social mobility, even by a Nobel Prize-winning economist, Joseph Stiglitz, who says that social mobility in America is a “myth,” based on data showing little ex post movement upward among poor people. In his own words, “when social scientists refer to equality of opportunity, they mean the likelihood that someone at the bottom will make it to the top.”21
That likelihood, however, is affected not only by external impediments but also by internal factors such as individual skills and efforts. Social mobility is the extent to which a society permits upward and downward movement. How much movement actually takes place depends also on the extent to which individuals and families avail themselves of the opportunities.
Measuring social mobility by how much movement takes place proceeds as if nothing depends on how individuals and families behave. That certainly avoids complications for those promoting the prevailing social vision. But it also avoids empirically testing that vision. Yet the very possibility that internal factors may be at work disappears, by verbal sleight of hand, when results ex post are equated with opportunity ex ante. Another Nobel Prize-winning economist, Angus Deaton, used the same ex post criterion to measure opportunity ex ante,22 thereby sealing off the prevailing vision from the danger of contamination by discordant facts.
One of the things that sometimes seems to threaten to puncture the verbally sealed bubble of the prevailing vision is the presence of some poor immigrant group with a culture very different from that of domestic low-income groups. Cuban refugees to the United States have been one of a number of such groups, who have initially been at least as poor as domestic groups living at the official poverty level. But when the newcomers have been unencumbered by the welfare state vision and its values, such groups have often risen socioeconomically above the domestic poor, and sometimes above the native population as a whole, as the descendants of the Cuban refugees have.
The children of some very poor immigrant groups have risen educationally in the schools, not only above the level of native-born children from families at the same income level, but even above the educational level of the children from the native population as a whole. In New York City, for example, while students who pass the demanding tests to get into the most elite public high schools tend to be from high-income neighborhoods, an exception is students from low-income neighborhoods with concentrations of immigrants from Fujian province in China.23
Such groups represent a threat to the prevailing social vision. Among the ways of meeting that threat are (1) ignoring social results so discordant with the assumptions of that vision, (2) making statements attributing the newcomers’ success to “privilege,” using the word in the redefined sense that turns this into a circular argument, and (3) stigmatizing the making of comparisons between successful ethnic groups and unsuccessful ones as manifestations of implicit racism against such groups as blacks in the United States.
However politically effective this third tactic may be within the United States, it is not nearly so effective in countries where the underclass is predominantly white, and the poverty-stricken newcomers who succeed in the schools and in the economy include groups that are non-white. In England, for example, among children whose families’ incomes were low enough to qualify for free lunch programs, the children of immigrants from Africa and Bangladesh met educational test standards nearly 60 percent of the time, while white, native-born children from families at the same low economic level met the standards only 30 percent of the time.24
Viewed in racial terms, these educational results in England might seem to be very different from those in the United States. But viewed in terms of low-income, native-born children, raised in a culture long steeped in the welfare state vision and its values, as compared to those low-income immigrant children who have been spared that culture and those values, the results are remarkably similar on both sides of the Atlantic.*
In Britain, immigrants who have been successes in school have often also had successes in higher education and in their careers:
The children of immigrants from the Indian sub-continent make up a quarter of all British medical students, twelve times their proportion in the general population. They are likewise overrepresented in the law, science, and economics faculties of our universities.25
While empirical studies of American society that measure mobility ex ante by movement ex post have often been cited by those claiming that social mobility in contemporary America is a “myth,”26 one of the often-cited empirical studies of social mobility, by the Pew Charitable Trusts, pointed out that its sample did not include immigrant families. The study itself added that, for immigrant families, “the American Dream is alive and well.”27 This statement is seldom, if ever, quoted by those like Professor Stiglitz who cite the Pew studies to claim that social mobility is a myth.28
In England, physician-author Theodore Dalrymple noted: “I cannot recall meeting a sixteen-year-old white from the public housing estates that are near my hospital who could multiply nine by seven (I do not exaggerate). Even three by seven often defeats them.”29 The Economist magazine reported that white 16-year-olds in the borough of Knowsley had worse test results “than do black 16-year-olds in any London borough.”30
In the course of treating his patients, Dr. Dalrymple has had occasions to ask young, lower-class Britons if they can read and write. When he has asked, “they do not even regard my question as to whether they can read and write as in the least surprising or insulting.”31 Such educational deficiencies have been sufficiently widely known in Britain that there was a popular song there whose first line was “We don’t need no education,” and another popular song whose title was “Poor, White, and Stupid.”32 Dr. Dalrymple also observed that the average Polish immigrant who has been in Britain six months “speaks better, more cultivated English” than lower-class young Britons do.33
Not all British youngsters are lower class, of course. But, comparing foreign youngsters for whom English is not their native language with native white British youngsters as a whole shows a striking pattern. Foreign students whose native language is not English typically do not initially do as well in their early schooling as native white British students as a whole. However, by the end of secondary school “white British pupils are overtaken by ten other ethnic groups.” At that time, Chinese pupils “are twice as likely to score 50 points or higher than their white British peers” on a standard educational test.34
In the United States, particular minority groups also outperform the majority population. Asian American students outnumber white students in each of New York City’s three premier elite public high schools—Stuyvesant, Bronx Science and Brooklyn Tech.35
On the other hand, the percentage of black American students at Stuyvesant High School declined over the years from 1979 to 2012, until by the latter year it was just under one-tenth of what it had been three decades earlier.36 At New York’s highly selective Hunter College, blacks were 12 percent of the students in 1995 and Hispanics were 6 percent. But, in 2009, black students were only 3 percent of the students at Hunter College and Hispanic students only 1 percent.37 Among the various external factors usually blamed for substandard academic performances by black or Hispanic students, none has been getting worse over time on such a scale as to account for such a sharp decline.
Despite an abundance of literature blaming disparate educational outcomes on the schools, the society or others, in keeping with the prevailing social vision, statistics on the average number of hours per week spent studying by high school students from different ethnic backgrounds show Asian American students spending more hours studying than either white or black American students.38
How surprised should we be that academic outcomes show a pattern of disparities similar to the pattern of disparities in the amount of time devoted to school work? And how surprised should we be that such input data seldom see the light of day in the media?
Yet all such data, so discordant with the premises of the prevailing social vision, can be turned aside by simply redefining a word, or by redefining a whole category of words whose ex ante and ex post meanings are very different in their implications.
It may not be surprising that an incessant drumbeat of declarations that disparities of outcomes among groups demonstrate malicious treatment of the less successful can promote resentments and violence. What is remarkable is how even outbreaks of violence fail to lead promoters of the invidious social vision to reconsider what they are saying. Instead, such outbreaks often lead to simply redefining violence. After Harlem riots shocked many people in the 1960s, for example, Professor Kenneth B. Clark declared:
The real danger of Harlem is not in the infrequent explosions of random lawlessness. The frightening horror of Harlem is the chronic day-to-day quiet violence to the human spirit which exists and is accepted as normal.39
A similar equating of social problems with violence occurred in The Nation magazine, where it was said that the “institutional form of quiet violence operates when people are deprived of choices in a systematic way by the very manner in which transactions normally take place.”40 An advertisement in the New York Times by a committee of black clergymen likewise condemned “the silent and covert violence which white middle-class America inflicts upon the victims of the inner city.”41 Books with such titles as Savage Inequalities promote the same notion.
Whatever the substantive merits or demerits of such claims of social injustices, they are not violence. Some might consider such things better or worse than violence but, in either case, that does not make them violence. In the same sense, some people might consider mountains to be more important than rivers or less important than rivers. But, in any case, nothing can make a mountain the same as a river, or a river the same as a mountain.
This vogue of equating social problems with violence has spawned such spin-offs as justifying campus speech codes and campus riots as responses to “micro-aggression” by visiting speakers saying things considered offensive by those who believe in a particular social vision.
Even when these things are not said to the people who claim to be offended, but are said by visiting speakers on campus to those who invited them, it is still called “micro-aggression.” But how can A talking to B be considered to be aggression—much less equivalent to violence—against C? Clever people might say that A could be saying things to B that could lead to violence against C. Even if that were true—and it has not been proved, or even tested, in most cases—in the same sense a match can start a forest fire. But nobody calls a match a forest fire.
Verbal vogues have more than verbal consequences. In so far as they create a false equivalence between violence and socioeconomic conditions, they excuse lawlessness and social disorders, whose principal victims are the less affluent, both immediately and in the longer-run repercussions.
For those promoting a particular vision of society, the word “change” often does not refer to changes in general, or even to all changes of a given magnitude or consequence in the lives of people. In practice, even if not in explicit definition, change for many of the intelligentsia often tends to mean only the particular kinds of changes conceived and promoted by their particular social vision.
Other changes, even changes that revolutionize the lives of the great majority of the people, are omitted in many discussions of change by people committed to a particular vision. Eras in which sweeping changes occur may be seen as stagnant or even retrograde eras if, for example, such eras did not include government policies aimed at reducing income disparities. Even widespread improvements in prosperity are no substitute for policies aimed at redistributing income, for those with the prevailing social vision that is focused on invidious comparisons.
When intellectual elites discuss eras of change in the United States, for example, the decade of the 1920s is seldom, if ever, included. Yet there have been few decades with so many changes, so large and so consequential, in the lives of millions of Americans as the 1920s.
The 1920 census was the first census to show more Americans living in urban, rather than rural, communities. While the urban majority of the population was just over 51 percent at the beginning of the 1920s decade, the rate of increase of the urban population was more than eight times the rate of increase of the rural population.42 The 1920s marked a historic transition to a very different kind of society.
At the beginning of the 1920s, just 35 percent of American homes had electric lights—the same proportion that had gaslight, while another 27 percent of those homes were lit by lamps using either kerosene or coal oil.43 By 1930, after that decade had ended, 68 percent of the homes in America were lit by electricity. Home radios were virtually non-existent at the beginning of 1920; the first commercial radio station began broadcasting to the general public that autumn.44 Subsequently, 24 percent of American homes had radios by 1925 and 40 percent by 1930.45
Most American families also had an automobile for the first time during that decade. As of 1920, 26 percent of American families had an automobile. But so many more families were able to buy cars during the 1920s that, by 1930, 60 percent of American families had an automobile.46
The number of Americans attending colleges and universities doubled between 1920 and 1930.47 The 1920s were also the first decade of regularly scheduled airline passenger service, which had fewer than 6,000 passengers in 1926, but more than 170,000 passengers by 1929.48
Sports and entertainment were also revolutionized in the 1920s. Motion pictures talked for the first time during that decade, greatly boosting attendance at movies—which was twice as large in 1929 as it had been just seven years earlier.49 This was also the decade when American popular music was revolutionized by jazz, which not only swept the country but spread internationally. In major league baseball, attendance records for every year of the 1920s exceeded the attendance in any year before 1920.50 The National Football League was founded in the 1920s, and attendance per NFL game in 1928 was three times what it was in 1921.51
Chains of department stores and grocery stores spread rapidly across the country during the 1920s,52 usually selling at lower prices than independent stores,53 reflecting economies of scale that reduced the costs of getting merchandise from the producer to the consumer. There had been chains of stores before, but the decade of the 1920s was the decade when they expanded severalfold, displacing many small independent stores, whose costs of doing business were not low enough to enable them to match the low prices charged by chain stores.
None of these sweeping changes, however, was “change” to many, if not most, of the elite intelligentsia, because these were not the particular kinds of changes they sought, predicted or recognized. The decade of the 1920s was barely over before a widespread denigration of it as a stagnant or reactionary decade began—a view that has continued since then to the present day.
Noted historians Henry Steele Commager and Richard Brandon Morris, for example, compared the decade of the 1920s to a stagnant region of the north Atlantic known as the Sargasso Sea. They referred to the preceding and succeeding decades as “positive” but to the 1920s as “negative,” and “like some Sargasso Sea on the ocean of history.”54 Professor Edward A. Ross, regarded as one of the founders of the profession of sociology in the United States, analogized the years from 1919 to 1931 to the “Great Ice Age.”55
Despite the decade’s high economic growth rate, rising real incomes and low unemployment rates, famed historian Arthur M. Schlesinger, Jr. was among many intellectuals who put the word “prosperity” in quotation marks as regards the 1920s.56 Professor Schlesinger referred to wages in the 1920s as “unsatisfactory,”57 without specifying any criterion by which wages might otherwise be considered satisfactory, or any preceding era when wages were higher. Real per capita income in fact increased by nearly one third from 1919 to 1929.58
Even those writers who admitted the material progress of the 1920s often struggled to depict this as not “real” progress in some elusive and undefined sense. The people who denigrated the 1920s have not been little-known figures on the fringes of society but, in many cases, have been among the elite scholars in their respective fields. It was in discussions of government policies in the 1920s that the reasons for the dissatisfaction of many intellectuals with that decade became apparent. Calvin Coolidge, who was President of the United States for just over half of the decade of the 1920s, was widely excoriated and/or ridiculed in later histories.
President Coolidge believed in neither of the two government policies that were fundamental to the Progressives—redistribution of income and government intervention in the economy. The reduction of the 73 percent tax rate on the highest incomes to 24 percent that took place as a result of tax rate cuts during the Harding and Coolidge administrations was denounced then, and has continued to be denounced, as “tax cuts for the rich,” despite the fact that this change in tax rates brought much higher tax revenues from high-income people, both absolutely and as a percentage of all income tax revenues collected. Moreover, this outcome was precisely what President Coolidge said beforehand was his objective.59
As for the effects of the Coolidge administration’s policies on working-class people, in President Coolidge’s last four years in the White House, the annual unemployment rate ranged from a high of 4.2 percent to a low of 1.8 percent.60 This hardly fit a depiction of the Coolidge administration as “unashamedly the instrument of privileged groups,”61 as claimed in a widely used history textbook by Professors Allan Nevins and Henry Steele Commager.
The other key policy of the Progressives, as exemplified in the Woodrow Wilson administration that preceded those of Harding and Coolidge, was government intervention in the economy, through the creation of such institutions as the Federal Reserve System and the Federal Trade Commission, as well as through the innumerable controls imposed on the economy by the Wilson administration during the First World War.
Neither President Warren G. Harding, who succeeded President Wilson, nor Vice President Coolidge, who became President after Harding’s death, believed in government interventions. But their Secretary of Commerce, Herbert Hoover—who later became President after Coolidge—was more activist-minded in his cabinet role and later made unprecedented interventions in the economy as President, after the 1929 stock market crash.62 Such interventions were then escalated further by his successor, President Franklin D. Roosevelt.
To the intellectual elites of that time and later, relying more on market processes than on political interventions in the economy was abdicating responsibility for the public welfare, rather than simply reflecting a different belief, held by Presidents Harding and Coolidge, that the public welfare would be better served by letting markets function under known and stable laws than by unpredictable ad hoc government interventions. That interventionist approach was perhaps best expressed during the 1930s by President Franklin D. Roosevelt:
The country needs and, unless I mistake its temper, the country demands bold, persistent experimentation. It is common sense to take a method and try it; if it fails, admit it frankly and try another. But above all, try something.63
However more congenial President Roosevelt’s approach might be to the prevailing vision of the intellectuals, both then and now, the opposite approach in the 1920s was based on different assumptions—which the intelligentsia refused to see as different assumptions, seeing them instead only as a callous failure to promote the public interest and a kowtowing to business and the wealthy. As with many other issues in other places and times, a desire to test the actual consequences of fundamentally different beliefs and policies has seldom matched the fervor of pronouncements defending one view or the other.
In any event, to the intellectuals the decade of the 1920s did not deserve the honorific title of an era of “change.” It was the decade of the 1930s which brought that kind of political change, and which has since been celebrated as much as the 1920s were denigrated.
The highest annual rate of unemployment in the 1920s was 12 percent in one year, while unemployment in the 1930s peaked at 25 percent, and was at or above 20 percent for 35 consecutive months.66 Then, after subsiding somewhat—but still continuously in double digits for years—unemployment rose again to 20 percent or above during six months in 1935, four months in 1938 and one month in the spring of 1939, almost a decade after the stock market crash of 1929 that supposedly caused the high unemployment of the 1930s. In short, unemployment was at or above 20 percent for 46 months—the equivalent of nearly four years—or 38 percent of the much celebrated decade of the 1930s.67
In the wake of many business bankruptcies, thousands of bank failures, mass unemployment and lower total output during the early 1930s, the rising standard of living of the much disdained 1920s was replaced by much lower standards of living for millions of people. This was most strikingly illustrated by numerous breadlines and soup kitchens set up by charitable organizations in communities across the country, for people unable to buy food. An especially painful spectacle was that of vast numbers of Americans, in many regions of the country, scavenging in garbage dumps for food during the Great Depression.68
An economic study of the Great Depression in a leading scholarly journal in 2004 concluded that the effects of government policies had prolonged the Great Depression by several years.69 But the contrary view has largely prevailed by sheer repetition and by its consonance with the prevailing social vision.*
These two past decades are not the fundamental issue. The crucial point for us today is understanding the continuing ability of many intellectuals to ignore blatant realities which threaten their cherished vision. It has been an on-going triumph of words over demonstrable realities. We are expected today to automatically follow the kinds of government interventionist policies of the 1930s and to disdain the policies of the 1920s when, in the world of words, there was no “change” because there was no government income redistribution policy.
Despite the wonders that can be performed in the world of words by those with verbal virtuosity—for example, making ugly and/or dangerous behavior vanish by saying the magic word “stereotypes”—all human beings are nevertheless forced to live their lives in the world of reality, outside the sealed bubble of a vision.
Among the impediments to clearly seeing that world of reality are not only the redefining of words, but also twisting other people’s words and meanings—extending in some cases to attributing to people things the direct opposite of what they actually said. Like other verbal distortions, this one is not confined to unscrupulous politicians or irresponsible journalists, but includes academic scholars renowned within their own respective specialties.
As J.A. Schumpeter said, long ago: “We fight for and against not men and things as they are, but for and against the caricatures we make of them.”70 No clearer example of this can be found than those who fight against what they call the “trickle-down theory,” according to which government policies that benefit those who are already rich will somehow cause their prosperity to “trickle down” to others, including the poor.
The first and most important thing to understand about the “trickle-down” theory is that there is no such theory. Anyone who doubts this can begin by asking themselves whether they have ever—even once in their entire life—seen, heard or read any human being who actually espoused that theory.
If a negative answer to that question does not suffice, one might consult the monumental, 1,260-page History of Economic Analysis by J.A. Schumpeter, and look in vain for the “trickle-down” theory there. Finally, one can consult the many writings of those critics who have opposed and denounced the “trickle-down” theory to see whom they quote—and discover that they have not quoted any specific individual espousing that theory.
Such prominent contemporary critics as economists Joseph Stiglitz, Alan Blinder and Paul Krugman, for example, have repeatedly denounced this non-existent theory,71 without quoting or even citing anyone who had actually proposed such a theory. Nor were they unique in this. The “trickle-down” theory has been denounced repeatedly in numerous books by numerous authors. It has been denounced in New York Times editorials, by leading columnists and writers in numerous other publications, by politicians, teachers and television commentators. It has been denounced as far away as India.72
Surely the people who have been denouncing this theory for years can tell us who has advocated it, and where the advocates’ own words can be found—if there are any such advocates.
What has sometimes occasioned these denunciations has been an entirely different theory, which has led to proposals and policies to reduce tax rates, in order to collect more tax revenues—especially more tax revenues from high-income people—and spur increased investments that can lead to greater output and employment. Like any other theory, this theory can turn out to be correct or incorrect in a given set of circumstances. But it has nothing to do with existing wealth trickling down, and everything to do with attempts to create additional wealth in the country as a whole, so that, in a phrase popularized by President John F. Kennedy, “a rising tide lifts all the boats.”73
Any given individual might argue for or against that conclusion as well, on either analytical or empirical grounds. But all too often critics instead denounce a non-existent “trickle-down” theory and “tax cuts for the rich” supposedly based on that theory.
This was not always a partisan political issue, nor always even an ideological issue. While today it is usually conservative or free-market economists who urge reducing tax rates, in order to collect more tax revenues and spur economic growth, it was none other than John Maynard Keynes—hardly a conservative or free-market economist—who said in 1933 that “taxation may be so high as to defeat its object,” and that “given sufficient time to gather the fruits, a reduction of taxation will run a better chance, than an increase, of balancing the Budget.”74
Even earlier, it was two Secretaries of the Treasury during the Democratic administration of Woodrow Wilson who pointed out that tax rates above a certain level no longer necessarily bring in more tax revenue, as President Wilson himself also noted in an address to Congress.75
The first tax rate cuts based on that theory occurred in the Republican administration of President Warren G. Harding in the 1920s, and those cuts were advocated most prominently by Secretary of the Treasury Andrew Mellon. As already noted in Chapter 4, income tax revenues did in fact rise after income tax rates were cut in the 1920s—and both the amount and the proportion of income taxes paid by people with higher incomes rose dramatically.
In 1920, when the highest tax rate on the highest incomes was 73 percent, people with incomes of $100,000 or more paid 30 percent of all income tax revenues. After the tax rate on the highest incomes was cut to 24 percent, people in that same income bracket paid 65 percent of all income tax revenues in 1929.76 Nevertheless, in the world of words, these were called “tax cuts for the rich”—and have remained so ever since, in utter disregard of empirical evidence as to the actual amount of tax revenues collected from high-income taxpayers at different tax rates. There were similar results from later tax rate reductions during the Kennedy, Reagan and George W. Bush administrations.77
Whether this will happen again in other circumstances is something that might be debated, instead of fighting a straw man like a non-existent “trickle-down” theory. Back in 1899, long before the income tax rate controversies arose, Oliver Wendell Holmes advanced the general proposition that catchwords can “delay further analysis for fifty years.”78 The catchwords “trickle-down theory” have been going strong for decades, and show no sign of weakening.
Among the essential requirements for the success of fervent social crusades are grievances, villains and victories over those villains. Given the universality of sins among human beings, grievances are the most assuredly supplied of these essentials. But maintaining a sufficiently dependable supply of villains as broader causal explanations of perceived grievances can be a challenge, especially if accuracy is taken seriously as a constraint.
A widely used history textbook, co-authored by a number of well-known historians, two of whom won Pulitzer Prizes, said of Secretary of the Treasury Andrew Mellon: “It was better, he argued, to place the burden of taxes on lower-income groups” and that a “share of the tax-free profits of the rich, Mellon reassured the country, would ultimately trickle down to the middle- and lower-income groups in the form of salaries and wages.”79
There was neither a quotation nor a citation of any statement in which Secretary Mellon actually made this argument, even though it is a matter of public record what Mellon actually proposed, as regards tax rates on low-income people—and it bears no resemblance to what these historians said.80
Mellon’s own book, Taxation: The People’s Business, in fact argued just the opposite: that higher-income people should pay more tax revenue to the government,81 rather than having high tax rates on paper that were merely a “gesture” of taxing the rich,82 while allowing them to escape actually paying those high tax rates by investing in tax-exempt securities.83 He said, quite plainly, that “wealth is failing to carry its share of the tax burden; and capital is being diverted into channels which yield neither revenue to the Government nor profit to the people.”84
Mellon argued that what he called the “evil” of tax-exempt securities should be ended.85 He asserted that it was “repugnant” in a democracy that there should be “a class in the community which cannot be reached for tax purposes,”86 because of “a refuge in the form of tax-exempt securities, into which wealth that has been accumulated or inherited can retire and defy the tax collector.”87 He also called it “incredible” that a man with an income of a million dollars a year would be permitted “to pay not one cent to the support of his Government”88 and an “almost grotesque” consequence that people of more modest incomes would therefore have to make up for the tax shortfall.89
Nevertheless, another widely used history textbook, a best-seller titled The American Pageant, with multiple authors and multiple editions over the decades, declared: “Mellon’s spare-the-rich policies thus shifted much of the tax burden from the wealthy to the middle-income groups.”90 Here as well, there was neither a quotation nor a citation of anything that Secretary Mellon actually said, much less a citation of Internal Revenue Service data that told an opposite story from that of the “tax cuts for the rich” scenario.
After the 1920s income tax rate cuts, people with an income of $5,000 and under paid less than half of one percent of the income tax revenues collected in 1929, while people with incomes of a million dollars or more paid 19 percent. Before, under the higher tax rates on high incomes, people with incomes of $5,000 and under paid 15 percent of all income tax revenues collected in 1920, while people with incomes of a million dollars or more paid less than 5 percent.91
None of these facts requires extraordinary research in esoteric places. One need only read Mellon’s book Taxation to see what he advocated, and look at the published records of the Internal Revenue Service to see what happened. Both of these sources are available in libraries or on the Internet. That renowned historians and economists failed to check these readily available sources is just one sign of what can happen in an academic monoculture where the promotion of a social vision takes precedence over the search for facts—and where there are few people with fundamentally different views who would challenge what was said.
Andrew Mellon was by no means the only person whose views differed from the prevailing social vision, and whose words, policies or the consequences of those policies were not merely criticized but falsified by those with that vision. This practice has continued on into our own times on a whole range of other issues.
Among those demonized in the second half of the twentieth century was a very unlikely candidate for the role of villain, Daniel Patrick Moynihan. He had two careers, one as an academic intellectual writing about social issues, and another as a liberal Democrat in politics, supporting civil rights for blacks and social programs designed to help the poor. In the 1960s he held an appointed office as Assistant Secretary of Labor in the Lyndon Johnson administration, and was in later years elected as a Democratic Senator from New York, serving in that capacity for 24 years.
One of the disturbing social problems that caught Assistant Secretary of Labor Moynihan’s attention in the 1960s was that one-third of black children were growing up in broken homes.92 He saw great personal and social dangers in that situation, and prepared a paper for internal use, pointing out the dangers he saw and urging government action to help deal with that problem. His paper was later published as a U.S. Department of Labor document titled The Negro Family and subtitled The Case for National Action. As was common in such government publications, no author was mentioned.
Only after a firestorm of criticism by those angered that black family problems had been aired as contributing to social pathology, was the author revealed to the public, and the document became known thereafter as “the Moynihan Report.” Daniel Patrick Moynihan was fiercely denounced in the public media93 and “bitterly attacked in private” at a White House planning session:
Moynihan sat in and suffered the assaults in silence—though, said a government friend, “he came out of one conference with tears in his eyes. They called him a racist and a fascist.”94
Far from singling out blacks for criticism, Moynihan pointed out the problems of “family disorganization” in the earlier history of his own ethnic group, Irish Americans.95 He also had his own painful personal experience: when he was ten years old, his father deserted the family during the Great Depression of the 1930s, plunging them from middle-class suburban comfort to dire poverty in one of Manhattan’s poorest and roughest neighborhoods. Young as they were, he and his even younger brother tried to earn some much-needed money to bring home by shining shoes in Times Square and Central Park.96
His younger brother cried one day when he came home without money to buy milk, and young Pat Moynihan was on some other occasions robbed of the money he had earned by neighborhood toughs.97 As he grew older and bigger, he was able to take on other jobs, including working as a stevedore on New York’s waterfront.98
The trauma of fatherlessness was something that Moynihan tried to warn others about. Ironically, the one-third of black children who were raised in broken homes, which alarmed Moynihan in the 1960s, became two-thirds in later years—and, among blacks in poverty, more than four-fifths.99
What happened to Mellon and Moynihan, among others in the past, continues to happen to those who deviate from the prevailing vision in the present. Among the prime targets today is Charles Murray, whose many books on social issues have led to attempts to prevent his speaking on college campuses to students who invited him. Many of these attempts have involved disruptions or violence, and virtually all have involved accusing him of having said vile things—none of which has been quoted from any of the books he has written, and some of which are the direct opposite of what he actually said in those books.100
Such practices have become commonplace as regards others, on issues large and small. In 2015, for example, an Associated Press account of an interview with Professor William Julius Wilson of Harvard contained this:
Wilson’s childhood gave him firsthand knowledge of poverty and how to escape it. Referring to Supreme Court Justice Clarence Thomas, who also grew up poor, he says, “He’ll say he pulled himself up by his own bootstraps. I say I was in the right place at the right time.”101
In this case, as in others, there was no quotation or citation of Justice Thomas ever having said any such thing. On the contrary, the Justice’s memoir, My Grandfather’s Son, credited his grandfather, who raised him, with making his advancement possible. When Judge Thomas was sworn in as a Supreme Court Justice, he invited nuns who had taught him in a Southern Catholic school to Washington for his swearing-in ceremony, where they could see that what they did for him was not in vain.102
The word “bootstraps,” like the word “trickle-down,” is almost invariably attributed to somebody else, rather than being either quoted or cited. These catchwords tell us more about those who resort to such straw men than about those to whom they attribute these terms or the ideas behind them.
Black American economist Walter E. Williams, who—like Justice Thomas—has opposed the prevailing social vision, has been said by economist Lanny Ebenstein of the University of California at Santa Barbara to be among those “committed to the welfare of the top few.”103 Professor Ebenstein has every right to disagree with Professor Williams’ analyses or policies, but that is very different from making sweeping claims without providing substantiation.
As someone who has known Walter Williams as a colleague and friend for half a century, I have seen nothing in any of his writings, or in anything he has said, whether in public or in private, that has indicated the slightest interest in promoting the welfare of the wealthy.104 Anyone who wishes to check for themselves can read any of his books, from Race and Economics to South Africa’s War Against Capitalism, the latter resulting from his research in South Africa during the era of apartheid.
People who attribute to others things that are the opposite of what they actually said are not necessarily lying. They may simply not have bothered to check out what was actually said, but based their conclusions instead on widespread beliefs among like-minded people—beliefs sometimes called what “everybody knows.” But the net result is no less misleading.
When scholars who are also educators do such things, the most important damage that is done is not to those they attack, but to those whom they are paid to educate. Moreover, that damage is not limited to whatever particular false conclusions may be produced, but extends to the whole way of thinking—and not thinking—that they demonstrate, and which may be emulated by their students.
If students do not acquire systematic methods and standards for testing conflicting beliefs, this can be a major deficiency in their education, for nothing is more certain than that they will encounter conflicting beliefs on many subjects in the years after they have left the politically correct monoculture on many academic campuses.
What words openly declare can be tested against empirical evidence, but what words insinuate can bypass that safeguard. Even an innocent-sounding phrase like “income distribution,” endlessly repeated, can suggest a process in which income exists somehow and is then distributed, as one might distribute food at a dinner table or gifts at Christmas.
In reality there is only a figurative “distribution” of income, in the statistical sense in which there is a distribution of heights among people, ranging from the heights of little toddlers to the heights of professional basketball players more than seven feet tall. But no one imagines that heights exist somehow as independent entities, and are then literally “distributed”—in the sense of being handed out—to individuals.
In the plain, straightforward sense, most income is not distributed at all, either justly or unjustly. Most income in a market economy is earned directly by providing something that someone else wants, and values enough to pay for it, whether what they are paying for is labor, housing or diamonds. People who are unable to understand why John D. Rockefeller in the past, or Bill Gates in the present, received so much money might ask what each of them supplied to others that millions of others were willing to pay for, with their individually modest payments that added up to gigantic fortunes. But that question is seldom asked by most people, and especially not by income redistributionists.
Far more common are expressions like those of economist Joseph Stiglitz, who referred to “the share of income grabbed by the top 1 percent,” or of a New York Times editorial which referred to the top one percent as having “cornered an ever-larger share of the nation’s wealth.”105 In a similar vein, President Barack Obama said, “The top 10 percent no longer takes in one-third of our income—it now takes half.”106 Such expressions are not peculiar to Americans. An Oxford professor, for example, referred repeatedly to income that people in the top one percent somehow “take” from a presumably pre-existing and collective “national income.”107
In each case, the key trick is to verbally collectivize wealth produced by individuals and then depict those individuals who produced more of it, and received payment for doing so, as having deprived others of their fair share. With such word games, one might say that Babe Ruth took an unfair share of the home runs hit by the New York Yankees.
Sometimes these word games are played on an international scale. The wealth created in the United States by Americans is rhetorically transformed into part of “the world’s wealth,” from which Americans take an unfairly large share. But Americans, like other peoples, essentially consume what they themselves produce. What they import from other countries is exported by those countries in exchange for part of what Americans have produced. But such mundane facts cannot compete, in the world of words, with melodramatic rhetoric conveying a toxic message that disparities in outcomes imply some people being wronged by others.
Someone once said of nineteenth-century French economist Jean-Baptiste Say that “affected ways of talking” constituted a large part of Say’s “doctrine.”108 That charge might more readily apply to many of our contemporaries today, who refer to income that people in the top brackets somehow “take” from a presumably pre-existing and collective “national income,” or “our income,” rather than income they earned directly as payments from those who voluntarily purchased the particular goods or services that those in the top brackets offered for sale.
Although such verbal displays—insinuations as distinguished from arguments—may be in aid of a redistributionist agenda, neither income nor wealth can be redistributed when it was not distributed in the first place, but earned directly from those who valued whatever was sold. What is being promoted by such verbal displays is not simply a different set of income or wealth outcomes but a whole fundamentally different process for determining how much income or wealth everyone receives. In that alternative world, third-party surrogates, armed with the power of government, can override whatever valuations millions of individuals might place on innumerable goods and services they purchase, and substitute instead the notions in vogue among the surrogates.
Euphemisms are another form of insinuation that enables ideas to bypass factual or analytical tests. When John Rawls, in A Theory of Justice, repeatedly referred to outcomes that “society” can “arrange,”109 these euphemisms finessed aside the plain fact that only government has the power to override the terms of millions of people’s mutually agreed transactions. Interior decorators arrange. Governments compel. It is not a subtle distinction.
Nor is Rawls the only income redistributionist to evade the reality of compulsion—which is to say, the loss of millions of people’s freedom to make their own decisions about their own lives, when an inequality of economic outcomes is replaced by a far more dangerous increased inequality of power. Unequal economic outcomes nevertheless permit even the less fortunate to have rising standards of living. But power is inherently relative, so that more power for some means less freedom for others.
Concealing that crucial trade-off has led many intellectuals to define the proposed benefits of expanded government power as a “new freedom,” as Woodrow Wilson called it.110 Succeeding generations of intellectuals have continued to change the historic definition of freedom to include the supposed benefits of increased government scope and power.
For example, one cannot be free “if one cannot achieve his goals,” according to an influential book by two Yale professors.111 In their definition, consumers “are not free in the market if high costs prohibit a choice that could be made available to them by sharing the commodity through collective choice”112—with “collective choice” apparently being another euphemism for government. Professor Angus Deaton used a similar results-oriented definition of freedom:
In this book, when I speak of freedom, it is the freedom to live a good life and to do the things that make life worth living. The absence of freedom is poverty, deprivation, and poor health—long the lot of much of humanity, and still the fate of an outrageously high proportion of the world today.113
Freedom is not some new or esoteric concept. It is a concept widely understood and deeply felt for centuries—especially deeply felt by those who did not have freedom, such as slaves, serfs, prisoners and people living under totalitarian dictatorships. Many such people have made desperate attempts to escape to freedom, even at the risk of their lives. This was not done to get government benefits. Spartacus was not fighting to get farm subsidies or housing vouchers.
Yet many intellectuals, living in the safety and comfort of free societies, have found it expedient to redefine freedom, so that an expansion of government determination of economic outcomes, through an expansion of government compulsion, is not seen as a tradeoff of freedom, but as simply an expansion of freedom, as conveniently redefined.
In the world of words, the hardest facts can be made to vanish into thin air by a clever catchword or soaring rhetoric. In a public discourse where slogans and images have too often replaced facts and logic, words have indeed become for some what Hobbes called them, centuries ago—the money of fools,114 often counterfeit money created by clever people. Our educational system, which might have been expected to develop students’ ability to “cross-examine the facts,” as the great economist Alfred Marshall once put it, has itself become one of the fountainheads of insinuations and obfuscations.
* In neither England nor the United States are all immigrant groups the same. Some immigrant groups are far more prone to become dependent on welfare than others, and may even exceed the rate of welfare dependency among the native population. But those immigrant groups with a different cultural orientation—one resistant to the welfare state vision and values—have risen in both countries, even when they have been non-white.
* It is not only conservative or libertarian economists who see dangers in government interventions. Liberal economist J.K. Galbraith referred to the Federal Reserve’s policies during the Great Depression as showing “startling incompetence.” John Kenneth Galbraith, The Great Crash (Boston: Houghton Mifflin, 1961), p. 32. In the previous century, Karl Marx said that “crackbrained meddling by the authorities” can “aggravate an existing crisis.” Karl Marx and Frederick Engels, Collected Works, Vol. 38 (New York: International Publishers, 1982), p. 275.