Chapter 7

Facts, Assumptions and Goals

Things are what they are, and their consequences will be what they will be. Why, then, should we deceive ourselves?

Joseph Butler

Many people may expect discussions of economic and social disparities to end with “solutions”—usually something that the government can create, institutionalize, staff and pay for with the taxpayers’ money.

The goal here is entirely different. There has never been a shortage of people eager to draw up blueprints for running other people’s lives. But any “solution,” however valid as of a given moment under given conditions, is subject to obsolescence as time goes on and conditions change.

The hope here is that clarification is less perishable, and can be applied both to existing issues related to economic and social disparities and to new issues, involving the same subject, that are sure to arise with the passage of time. Given the limitations of prophecy, the point here is to provide enough clarification to enable others to make up their own minds about the inevitable claims and counter-claims sure to arise from those who are promoting their own notions or their own interests.

What can we conclude from our survey of the many economic, social and technological gaps of our times—and of other times and places, going back for centuries? We have seen, in various ways, how multiple prerequisites can produce skewed distributions of outcomes, in no way resembling a normal bell curve, whether in human endeavors or in natural phenomena such as tornadoes or lightning.

To deny that a particular factor can be assumed a priori to be the cause of inequalities in outcomes is not to deny that the chances of achieving those outcomes have in fact been grossly unequal in societies around the world, and over thousands of years of recorded history. In a sense, life is a relay race, and each of us receives the baton at a time and place over which we have no control. Our parents, our birth order, our country and our surrounding culture have already been predetermined for us. Some of the prerequisites for achievement can be affected later by individual choices or social policies, but by no means 100 percent in most cases, much less in all cases. No human being and no human institution has either sufficient knowledge or sufficient power for that. More important, we have zero control over the past—and, as was said long ago, “We do not live in the past, but the past in us.”1

EQUALITY: MEANINGS AND PROSPECTS

Critics of disparities often either explicitly or implicitly call for some kind or approximation of equality. But when we speak of “equality” among human beings, what do we mean? We certainly cannot all sing like Pavarotti, think like Einstein or land a commercial airliner safely in the Hudson River like pilot “Sully” Sullenberger. Clearly we cannot all be equally capable of doing concrete things. In terms of specific capabilities in real life, a given man is not even equal to himself at different stages of life—sometimes not even on different days—much less equal to all others who are in varying stages of their own lives.

Even if we all had equal potential at birth, or at conception, too many factors are at work—and at work differently from one individual to another, even within the same family—for us to develop the same capabilities to the same degree. If we cannot have equality of capabilities, then we are left to define equality in some other way. Alternatives might include equality of rewards, though breaking the link between productivity and reward has had an unpromising track record in many times and places.2 Rewarding people for their merit is another possibility, though not one free of pitfalls.

Merit versus Productivity

Much controversy about economic and other disparities centers on whether what people get reflects their merit—that is, what they deserve as rewards, from a moral perspective, based on what they themselves have chosen to do, out of the particular possibilities open to them. But what they deserve morally is something we do not know and cannot know, for we have not “walked in their shoes.” We can guess, surmise or imagine, but these are hardly sufficient bases for invoking the categorical compulsions of government.

Even if we were to concede that rewarding moral merit would be better, that raises the more fundamental question of our own competence to assess moral merit. Some college admissions officials, for example, implicitly seem to assume that they can assess applicants on the basis of how well an applicant has used the particular educational opportunities previously available to that applicant, rather than on the basis of how one applicant’s record of educational accomplishments compares with another’s. But that turns the college admissions process into an attempt to assess merit in the past, rather than productivity in the future.

In general, judging merit seems far less likely to be within our competence than judging productivity. In the economy, what we are far more likely to be competent to judge for ourselves, individually, is whether whatever product or service someone offers us is worth what it costs. Judging merit in the sense of the moral worth that we could credit or blame an individual for, if we knew and understood all the myriad factors impinging on that particular individual’s life, seems beyond the realm of human knowledge. But when we are forced to decide whether to part with our own money—that is, to forego other desirable uses of it—in order to purchase some product or service, that can concentrate our attention on demonstrable realities, with less distraction by heady words or sweeping visions.

Although productivity is far easier to assess than moral merit, productivity is often completely missing in discussions of socioeconomic disparities, especially by those promoting what is called “social justice.” A classic example is a large New York Times article that began: “A Walmart employee earning the company’s median salary of $19,177 would have to work for more than a thousand years to earn the $22.2 million that Doug McMillon, the company’s chief executive, was awarded in 2017.”3 In this article, the subject of productivity was conspicuous by its absence.
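As a quick arithmetic check, using only the figures the article itself supplies:

$$\frac{\$22{,}200{,}000}{\$19{,}177 \text{ per year}} \approx 1{,}158 \text{ years}$$

which is indeed “more than a thousand years.” But a ratio of two pay figures, however striking, by itself says nothing about the productivity behind either figure.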

Far from being a new approach, this New York Times article was in a long-standing tradition. As famed playwright and Fabian socialist George Bernard Shaw put it, in the early twentieth century:

A division in which one woman gets a shilling and another three thousand shillings for an hour of work has no moral sense in it: it is just something that happens, and that ought not to happen. A child with an interesting face and pretty ways, and some talent for acting, may, by working for the films, earn a hundred times as much as its mother can earn by drudging at an ordinary trade.4

Here productivity is not simply missing, but implicitly repudiated, as a basis for income. The child movie star who is paid many times what her mother is paid is obviously being paid by someone who values her movie role more than anyone values her mother’s work. In short, she is being paid for productivity—as judged by whoever pays her, who in turn is paid by vast numbers of other people, who pay for the enjoyment they get from watching her in the movies. It is not “just something that happens” and there is no basis on which a third-party observer can say that this voluntary transaction is something “that ought not to happen”—that his personal opinion should override other people’s right to do as they choose with their own money or their own time.

This is about the power to preempt other people’s decisions, even if it is called by the more appealing words “social justice.” The very choice of targets for the wrath of “social justice” crusaders such as the New York Times is indicative. There are numerous professional athletes and entertainers who have been paid some multiple of what the chief executive of Walmart is paid.5 But no one asks how many millennia someone who performs mundane tasks at a baseball park would have to work to earn as much as the team’s star player is paid every year. No one asks how many millennia someone who performs such chores at a Hollywood studio would have to work, in order to earn as much as a movie star makes from one hit movie.

Why the difference between the fierce cries of outrage at pay differentials in business, and the passing over in silence of far greater pay differentials in sports and entertainment? One possible explanation is that business owners and managers have roles in which they can be replaced by political decision-makers, who in turn can impose the kinds of policies preferred by those who imagine that their own superior wisdom or virtue entitles them to dictate to others. But professional athletes and entertainers have roles that obviously cannot be taken over by politicians or bureaucrats. So there would be no point in trying to discredit highly paid people in sports or entertainment, or to arouse public outrage against them.

In any event, the fundamental question is whether people should be paid according to what they produce or paid simply because of their presence. But productivity is an often ignored factor in much of the income disparity literature. Moreover, the question is seldom, if ever, raised as to how many thousands of years a low-level worker would have to work to produce as much wealth as was produced by Bill Gates’ computer operating system, a system that allowed billions of people around the world to operate an extremely complex mechanism that only a very few, highly technologically advanced people would be able to operate without such an intermediary.

In much, if not most, of the literature on income and wealth disparities, the production of income and wealth is glossed over, as something that just happens somehow, even though it happens to radically different degrees in different parts of the world and under different economic systems. Even the fraction of the wealth generated around the world by the Microsoft operating system that Gates himself received was enough to produce a gigantic fortune.

Contrary to fashionable rhetoric, this fortune was not some share of “the world’s income” that Gates somehow “grabbed,” “took” or “cornered.” It was what billions of people around the world voluntarily paid for computers containing Gates’ operating system, Microsoft Windows, which they individually judged would benefit them enough to make the price worth paying. It was a share of the value added, as judged by the people who chose to spend their own money to get it.

None of this denies that there may well have been other people who were born with potentialities very much like those of Bill Gates, but who never had the same combination of prerequisites that Gates had. Bound up with issues involving moral merit is the reality of luck as a factor in socioeconomic outcomes, beginning with the luck involved in being born in one set of circumstances rather than another. But, although luck is beyond our control, we can nevertheless learn from examining what that luck consisted of. The luck of being an only child, or the first-born, can make us aware of the great importance of parental attention to all children in their earliest years of development, and lead to more attention to children in general, regardless of their particular birth order. Examining other lucky or unlucky influences can also provide clues to what kinds of behavior or policies to embrace or avoid.

If we could somehow determine what specifically are the prerequisites that can develop special individual potentialities into great achievements, that could benefit society as a whole in many ways, whether in technological advancements or by discovering cures or preventatives for devastating diseases. But our education system is too often oriented in the opposite direction, fiercely opposing differing levels and kinds of education for those individuals whose demonstrated capabilities exceed the demonstrated capabilities of others.

In many cases, educators verbally transmute higher capabilities into “privilege.” Through the magic of words, and in the name of “social justice,” such educators oppose using the schools to facilitate the development of special individual abilities that can benefit society as a whole, because that can cause an expansion of educational disparities and the economic disparities that follow. Many of those with this social vision not only proceed as if society is a zero-sum process, in which benefits to one segment necessarily come at the expense of other segments, they also often ignore, dismiss or demonize other ways of looking at the situation.

For others, whose thinking is not confined within the sealed bubble of the prevailing social vision, maximizing the productive potential of those with higher levels of capabilities benefits many others. The willingness of those others to pay for the fruits of those capabilities is what enables those with such capabilities to earn higher incomes through voluntary transactions—as distinguished from either seizing some pre-existing wealth or receiving whatever third-party surrogate decision-makers might deign to dole out to them in the name of “social justice.”

Among the many dangers of surrogate decision-making is that such decision-makers cannot know the situation of millions of other people as well as those people know their own situations, which may not conform to the vision prevailing among the surrogates. Moreover, surrogate decision-makers often pay no price for being wrong, no matter how wrong or how catastrophic the consequences for those whose decisions they have preempted. Given the fallibility of all human beings, the chastening effect of facing the consequences of one’s decisions can be dispensed with only at great peril.

Languages

Ignoring or downplaying productivity has many consequences, because any source of productivity can also be a source of disparities. That includes something as basic as language. Languages are not simply a means of personal communication, important as that is. Knowledge itself—from the most mundane to the most complex and valuable knowledge—is at the heart of productivity, and different languages have stored vastly different amounts of knowledge at different times and places. People with languages that have a spoken version, but no written version, have been forced to rely on memory, with all its limited capacity and fallibility.

At one time, in centuries past, most of the higher level knowledge in Western civilization was stored in Latin, the language of the Romans, who had created, and long ruled, much of Western civilization. Whether you went to a university in France or England, you were taught in Latin, because that is where the knowledge was stored, long after the Roman Empire was gone. Regardless of whatever great mental potential you might have been born with, if you did not understand Latin, you were not going to get that knowledge.

With the passing centuries, much of the knowledge once confined to writings in Latin was translated into the vernacular languages of Europe. But this took time, had high costs, and not all populations were either large enough or prosperous enough to cover the costs of having their particular language receive as much of that knowledge as others received. If you were a Czech child in the Habsburg Empire, you could not be taught in your own language beyond the elementary school level until 1848.6 If you wanted to go on to the university level, it would be even longer before there were sufficient writings in your own language to make that possible. For much of Eastern Europe at that time, you had to learn German in order to get a university education. And millions of hard-working people had neither the time nor the money to do that.

There was nothing resembling equality of opportunity under these circumstances. At various times and places around the world, there have been many languages that had not yet developed written versions. Even within the same continent, the languages of Western Europe developed written versions centuries before the languages of Eastern Europe, and it took centuries more for the latter to catch up with the scope and variety of written material available in Western European languages. This was not a matter of genetics, discrimination or merit. It was simply a fact of life. And like other facts of life, it meant gross disparities in opportunities and outcomes.

Against that background, it is hardly surprising that differences in languages have polarized whole societies in various times and places,7 leading even to violence and terrorism in Bohemia,8 Canada9 and India,10 for example, and to outright civil war in Sri Lanka.11 These have not been controversies about the qualities of the languages, as such, but about the socioeconomic consequences for people speaking different languages.

Controversies in contemporary America about whether Hispanic youngsters should be taught in English, or whether the particular variety of “black English” found in low-income ghettos should be used in schools, likewise affect what socioeconomic outcomes can be expected from following one policy rather than another. Here too, the issue is not about the particular qualities of the languages or dialects involved, as these might be judged by linguistic scholars. It is about how the future outcomes of these youngsters can be affected by their speech, and by their access to the knowledge stored in the language of the society around them.

Linguistic scholar John McWhorter, for example, has sought to justify using “black English” in schools to teach ghetto youngsters.12 But where are the books on mathematics, science, engineering, medicine and innumerable other subjects that are written in “black English”? For Hispanic youngsters, there are books on these subjects in Spanish, but they are unlikely to be books found in American public schools and, even if they were, would the knowledge obtained be something that Hispanics could readily communicate to most of the Americans around them who do not speak Spanish? In both cases, the issue is the scope of the cultural universe available to youngsters living in an English-speaking country.

In an era of group-identity politics, various group spokesmen, activists or “leaders” may be preoccupied with languages as badges of cultural identity, but cultures exist to serve human beings. Human beings do not exist to preserve cultures, or to preserve a socially isolated constituency for the benefit of “leaders.” Why create or perpetuate cultural handicaps for minority youngsters in today’s America that were inescapable handicaps for minority youngsters in many other countries in earlier times with fewer and narrower options?

Where languages are seen as productive tools, rather than social symbols, the policies are very different. A Japanese-owned multinational company has decreed that English will be the sole language of the enterprise, wherever the company’s branches are located around the world.13 In other words, the company recognized that English is the lingua franca of international commerce, as it is the language of international airline pilots communicating with airports around the world. The decision was not based on the qualities of the English or Japanese languages, or their symbolic value, but on hard facts about the economics of doing business around the world.

In Singapore, with an overwhelmingly Asian population—whose languages spoken at home are Chinese, Malay or other—not only are all school children required to learn English, but English is also the language of instruction in other subjects.14 Singapore’s productivity and prosperity as a major international port depend on its ability to communicate with many nations whose shipments pass through its harbor. English as a lingua franca is key to doing that.

In such cases, the choice of language is based on practical considerations for the welfare of people, rather than on symbolic or ideological issues. Despite how much some intellectuals may be drawn toward such issues in the world of words, symbolism is a luxury that the poor can afford least of all in the world of reality.

DISPARITIES RECONSIDERED

Much attention has been paid to possible reasons why some individuals and groups have traveled farther or faster on the road to achievements, but not nearly as much attention has been paid to the question of why others are not even on that road in the first place.

Ability no doubt plays a role in achievements, but large disparities in outcomes among men who were all in the top one percent in IQ suggest that ability may be necessary, but hardly sufficient, as an explanation. Indeed, the fact that two men, who failed to make the IQ cutoff of 140 for Professor Terman’s landmark study, nevertheless won Nobel Prizes in physics suggests that other factors must have a large influence, since none of the hundreds of men who did make the 140 IQ cutoff won a Nobel Prize in anything.15

Before we can say who has failed, or who has succeeded, in some endeavor, we must first know who was trying to succeed in that endeavor in the first place. Those who are not trying are not likely to succeed, regardless of how much innate ability they may have, and regardless of how much opportunity may exist.

When all children are forced by law to go to school, for example, there is no basis for believing that they are all equally oriented toward getting an education. When, on both sides of the Atlantic, there are many children who not only do not put their own efforts into learning, but disrupt classrooms and harass, threaten and assault other children who do try to learn, surely the time is overdue to stop proceeding as if all educational deficiencies are due to external factors, or that the only internal factor that matters is genetic potential.

Education is just one of many areas where the seemingly invincible fallacy of presupposing a background probability of equality of outcomes defies both evidence and logic. Major demographic differences between different groups within nations, and between one nation and another—with median ages differing by a decade, or two decades or more—give an air of unreality to sweeping expectations of equal outcomes among groups or nations, and sweeping outrage when such expectations are not fulfilled.

Any serious empirical examinations of social groups, nations or races turn up major differences in their respective environments. Severe isolation has left some peoples centuries or even millennia behind others, whether these were Caucasians in the Canary Islands living at a Stone Age level during the Middle Ages16 or black Australian aborigines still living as hunter-gatherers in the eighteenth century.17 Progress is no more automatic than equality, whether for races, nations or other social groupings. We cannot argue as if good things happen automatically, and bad things are somebody’s fault.

Illiteracy isolates people from thousands of years of accumulated knowledge, skills and insights from people around the world. But even after literacy, or higher education, has been acquired, its benefits are not equally available to all who have acquired these things. Jews were known for centuries as “people of the book.” But for most other people around the world, illiteracy remained common for most of those centuries, and higher education was the exception, rather than the rule, even in the twentieth century.

After most people in particular places had both literacy and education extending to the university level, those who were the first members of their families to achieve either of these things were not in the same position as those who came from a background where literacy and education had been common for generations—and common in the homes in which they grew up. During those generations and centuries, intellectual interests, intellectual habits and intellectual standards and traditions could develop. Jewish boys, for example, faced an intellectual task when they reached the age for a Bar Mitzvah to mark their passage from the world of childhood onto the road to manhood.

In Eastern Europe during the years between the two World Wars, young people who were the first member of their families to become educated had no such tradition to prepare them for the world of higher education. Not surprisingly, such students lacked the intellectual background of Jewish students with whom they were in competition in universities, and students from such groups became prominent among members of anti-Semitic movements.18

In many less developed countries in the twentieth century—whether in Europe, Asia or Africa—where higher education was a new experience for many, those students whose cultural backgrounds provided no intellectual experience, traditions or norms as guidance, tended to study softer, more superficially attractive subjects in the “social sciences” or humanities rather than the hard sciences, engineering, medicine or other challenging subjects with major practical applications in the real world.

When Malaysian Prime Minister Mahathir bin Mohamad complained that Malay students admitted preferentially to the universities neglected their academic studies and were drawn toward politics,19 he was in effect echoing what had been said of Romanian universities between the two World Wars, that they were “numerically swollen, academically rather lax, and politically overheated.”20 Most Romanians were illiterate at the beginning of the twentieth century. When there was a great expansion of the younger generation into higher education, Romanian students tended to choose the softer subjects to study, with only one percent studying medicine, for example.21 There were similar patterns in Third World nations that gained their independence after the Second World War, such as Sri Lanka22 and the African nations of Nigeria and Senegal.23

Why should we be surprised to see similar patterns among college students from groups lagging economically and educationally in American society today? Or surprised that such things are seldom even discussed in the politically correct monoculture of academia or in most of the media?

All of this merely scratches the surface of factors impeding equality of outcomes. Deliberately biased suppression of other people’s opportunities is just one among many impediments to equal outcomes. But those things which offend our moral sense do not automatically have more causal weight than morally neutral factors such as demography, geography or language differences. Determining particular reasons for particular differences at particular times and places requires the hard work of examination and analysis, rather than heady rhetoric and sweeping presuppositions.

As a young scholar and the first black man to receive a Ph.D. from Harvard, W.E.B. Du Bois posed the question as to what would happen if all white people were to lose their racial prejudices overnight. He said that it would make little difference in the economic situation of most blacks. Although “some few would be promoted, some few would get new places” as a result of an end of racial discrimination, nevertheless “the mass would remain as they are” until the younger generation began to “try harder” and the race “lost the omnipresent excuse for failure: prejudice.”24

Whether or not Du Bois’ conclusion was justified, either at the time or more generally, his key point was that white racism—which he fiercely fought against all his life—was not automatically the main reason for racial disparities in outcomes.

We have seen in Chapter 2 that the actual effects of even undoubted and openly proclaimed racism, as in South Africa under apartheid, can depend greatly on the costs of discrimination to discriminators, in a variety of institutional settings. Similar factors and their consequences apply when there is bias against other groups, whether these are groups defined by sex, religion or other differences.

If injustices and persecutions were always causally paramount, Jews would be some of the poorest and least educated people in the world today. Few other groups can trace their victimhood back even half as many centuries or millennia as the Jews. There can be no doubt that a pervasive hostility to Jews often blighted their lives, impeded their progress and left many needlessly in poverty. At various times and places this hostility also made them targets of lethal violence. But the plain fact is that Jews today are by no stretch of the imagination among the poorest or least educated people.

Nor are a number of other groups who have played similar economic roles in countries around the world—and faced similar hostility, punctuated at times by outright mob violence and/or mass expulsions by governments. These would include the overseas Chinese, sometimes called “the Jews of Southeast Asia,” the Parsees described as “the Jews of India” and the Lebanese called “the Jews of West Africa,” among others.

The violence unleashed against successful groups has often exceeded the violence unleashed against lagging groups disdained as “inferior.” The number of overseas Chinese killed by mobs in Vietnam, in just one year, exceeded the number of lynchings of black Americans recorded in the history of the United States.25 So did the number of Armenians killed in just one year by rampaging mobs in the Ottoman Empire,26 and so has the number of Jews killed in a given year, at numerous times and places throughout history,27 even before millions were murdered in the Holocaust.

In an era in which invidious gaps and disparities preoccupy so many people, it is necessary to point out that the purpose of making such comparisons here is not to praise, blame or rank different groups. The purpose is to try to get some sense of causation, and to apply whatever insights we can derive from that to human beings in general.

If nothing else, we can learn how dangerous it is, to a whole society, to incessantly depict outcome differences as evidence or proof of malevolent actions that need to be counter-attacked or avenged. Contrary to much that has been said, disparities in socioeconomic outcomes are neither improbable from a theoretical standpoint nor uncommon from an empirical standpoint. Among the real life corollaries of this is that taboos against discussing anything that might be considered negative in the individual behavior or social culture of lagging groups are counterproductive. Given the fallibility of all human beings—demonstrated innumerable times around the world and over thousands of years of recorded history—to exempt any group of people from criticism is not a blessing but a curse.

Against that background, calls for reducing either performance standards or behavioral standards in schools for young people from lagging groups may simply increase the number of members of such groups who develop almost all the prerequisites of success. And where success depends on multiple prerequisites, having almost all of them still means failure.

To have young people from lagging groups needlessly fail, because third parties preparing them decided not to hold them accountable for punctuality or standard English, or some other quality that their backgrounds may not have given them, is a personal misfortune and a social tragedy. There are few things more painfully frustrating than having done 90 percent of what is necessary for success, and yet failing nevertheless, despite all the efforts and sacrifices made.28

How can such unnecessary failures be avoided? Put bluntly, by paying more attention to facts than to assumptions or visions. Third parties whose lives have been quite different from the lives of those members of lagging groups whom they teach, advise or run programs for, may be forgiven for not understanding the situation at the outset. But vast amounts of evidence, accumulated over the years, show that the particular circumstances in which members of lagging groups have not only succeeded but excelled, have almost invariably been circumstances where there was no lowering of standards for them, and where ruthless competition was the norm for all.

Low-income and lagging groups in America—whether they were the Irish in the nineteenth century or blacks and Hispanics in the twentieth century—have often risen first and most spectacularly in sports and entertainment.29 Both of these are fields of unsparing competition, in which even the stars of yesteryear are ruthlessly cast aside when their performances begin to fade.

In educational institutions as well, all-black Dunbar High School in Washington, D.C., during its 85 years of academic success from 1870 to 1955, had unsparing standards both for school work and for such behavioral qualities as punctuality and social demeanor.30 The sheer volume of work required was also more than in most other public schools. Some parents of Dunbar students even protested to the Board of Education about the large amount of homework required.31

Today, in such highly successful charter schools as those of the KIPP school network and the Success Academy schools, standards have been at least as unsparing, with longer school days, longer school years, rigorous academic requirements and little tolerance for disruptive behavior, much less for the gross behavior so often overlooked or excused in other public schools.

The payoff to such uncompromising demands has been as dramatic in education as in sports or entertainment. As already noted in Chapter 3, black graduates of Dunbar High School were attending some of the most elite colleges in the country a hundred years ago, and graduating with honors, as well as going on to become the first blacks to have various career achievements in a number of fields. The academic achievements of low-income black and other minority students in many charter schools today have been similarly remarkable.

In school year 2016–2017, for example, the various Success Academy schools in New York City enrolled 14,000 students. On statewide tests given in 2017, the highest percentage of students in any of New York State’s regular public school districts who passed the English Language Arts (ELA) test was 81 percent. In the Success Academy schools, 84 percent of the students passed the ELA test. In mathematics, the highest-scoring regular public school district in the state had 85 percent of its students pass. In the Success Academy schools, 95 percent passed.

This would be an outstanding record for Success Academy charter schools under normal circumstances. Under the actual circumstances—including the predominantly low-income black and Hispanic students who constitute the great majority in these schools, where students are admitted by lottery rather than ability—it is truly extraordinary, considering how poorly such students usually do in the regular public schools.

In New York State’s regular public school district with the highest percentage of its students passing the math and English exams, 65 percent of those students were Asian and 29 percent were white. In fact, among the state’s top five regular public school districts with the highest proportion of their students passing the statewide math and English exams, white-and-Asian majorities ranged from 86 percent to 94 percent. Black and Hispanic students, put together, were less than 10 percent of the students in each of these five highest-achieving regular public school districts.

By contrast, in the Success Academy schools, with even higher percentages of their students passing those same exams, 86 percent of the students were either black or Hispanic, and only 6 percent of the students were white and 3 percent Asian. The average family income of the children in the five highest-scoring regular public school districts in New York state ranged from four times the average family income of children in the Success Academy schools to more than nine times that of the Success Academy children’s families.32

How many observers—of whatever race, class or political orientation—can honestly say that they expected such outcomes? Such results are a challenge, if not a devastating contradiction, to prevailing beliefs about either heredity or environment, as those terms are conventionally used. Neither the genes said by some to be a crippling intellectual handicap, nor the poverty said by others to blight minority children’s educational prospects, turned out to be such insurmountable obstacles as many across the ideological spectrum believed.

Nevertheless, education is just one of the areas in which beliefs, arguments and policies are too often guided not by what has demonstrably worked but by what fits a prevailing vision. Racial “integration” in the schools, which the prevailing social vision proclaimed to be a prerequisite for equality of education—because, in the catchwords of Chief Justice Warren, separate schools “are inherently unequal”—went from being a means to an end to becoming an end in itself. Accordingly, charter schools have been opposed by many minority “leaders,” including the National Association for the Advancement of Colored People, which has advocated a ban on charter schools.33

Astonishing as such reactions might seem, Dunbar High School faced similar hostility in segments of the black population during the era of its academic excellence.34 In many contexts around the world, egalitarianism as an abstract philosophy has often meant resentment of success as a social reality. More broadly, outstanding achievements of various sorts—whether educational, economic or other—have provoked hostile responses in many countries around the world and in many periods of history.

The time is long overdue to count the costs of runaway rhetoric and heedless accusations—especially since most of those costs, including the high social cost of a breakdown of law and order, are paid by vulnerable people for whose benefit such rhetoric and such accusations are ostensibly being made.

CULTURE

The impact of social visions and social policies, for good or ill, is not uniform across a society. Different groups with different cultures, faced with the same objective circumstances, can react in very different ways.

While the same welfare state benefits are available to Asian Americans as to other Americans, the culture and educational performances of Asian Americans provide them with far better options than a life on welfare. Similarly, to Scandinavians with some of the highest standards of living in the world, and some of the highest standards of honesty,* living on welfare may not be as attractive as to some members of lower-income groups in England or the United States.

The Scandinavian countries have so often been cited as examples of welfare states which avoided some of the serious problems found in other welfare states that a closer look at Scandinavia may be useful, especially now that the situation in those countries has begun to evolve into circumstances much more like those in other welfare states.

Scandinavia

For most of their history, the Scandinavian countries have had populations especially homogeneous culturally, sparing them many of the internal conflicts complicating the welfare states of the United States and Britain. Homogeneous populations provide few opportunities for careers as polarizing ethnic “leaders” and activists promoting a sense of historic grievances behind differences in current outcomes.

In Sweden, for example, only about 1 percent of the population in 1940 was born outside of Sweden, and that rose over the years to just 7 percent by 1970. Moreover, back in that era, immigrants to Sweden came predominantly from Western European countries, were typically well-educated, and often had higher labor force participation rates and lower unemployment rates than the native-born Swedes.35 As late as 1970, 90 percent of foreign-born persons in Sweden had been born in Europe, including 60 percent from “Nordic” countries—that is, countries culturally similar to Sweden.

That all changed in the late twentieth century and early twenty-first century. By 2007, immigrants were 12 percent of the population of Sweden. Moreover, it was not simply the increasing number of immigrants, but the changed national and cultural origins of those immigrants, that proved crucial.

Turmoil in the Middle East sent increasing numbers of people from that region of the world to Sweden, Denmark and Norway as refugees. As of 2012, there were more immigrants from Iraq than from any other country in these three Scandinavian nations.36 Given the very small populations in the Scandinavian countries—only about 10 million people in all of Sweden—a relatively modest number of Middle Eastern refugees have had a major impact on Swedish society. Moreover, this represented a drastic departure from the previous history of Sweden and of the immigrants who settled there.

The changing origins of immigrants to Sweden were reflected in changing behavior patterns within the Swedish welfare state. There was a sharply rising use of the government’s “social assistance” programs by immigrants. Just 6.2 percent of the predominantly European immigrants in the pre-1976 era resorted to these “social assistance” programs, compared to 40.5 percent of the immigrants in the 1996–1999 period, when those immigrants were refugees predominantly from the Balkans and the Middle East.

The main difference was not in the times but in the people. Even in 1999, just 6.8 percent of the immigrants from Nordic countries received “social assistance” from the government, which was not very different from the 4.7 percent among native-born Swedes. But 44.3 percent of the immigrants from the Middle East received that same welfare state benefit.37

The non-judgmental aspects of the prevailing welfare state vision opened Sweden to an influx of people based on those people’s status as asylum-seekers, rather than on the effect of this influx on the existing Swedish population and their social values. For example, more than half of the people accepted under Sweden’s refugee policies lacked a high-school education.38

Among the consequences is that unemployment among foreigners has become more than twice as high as among native Swedes. Moreover, “after 10 years in Sweden, only half of asylum seekers have a job.”39 Immigrants, who are now 16 percent of Sweden’s small population, have become 51 percent of the long-term unemployed and 57 percent of the recipients of welfare payments.40

In Norway, the cost of supporting a single refugee in the manner prescribed by the country’s many welfare state provisions has been calculated as $125,000, which would be enough to support a number of Syrian refugees in Jordan.41 In Denmark, where the labor force participation rate is 76 percent for native Danes, it is less than 50 percent for immigrants from non-Western countries, ranging as low as 14 percent for immigrants from Somalia.42

As the ethnic and cultural homogeneity within Scandinavian countries has been changed by an influx of immigrants within recent decades—and especially immigrants admitted from non-Western nations—some of the same social problems as those in Britain and the United States have also begun to appear in Scandinavia.

In Sweden, immigrants from the Middle East show few signs of assimilating to the Swedish culture and many signs of transplanting their own culture to Sweden:

There has been a “steady stream of ‘honor killings’” among some Middle Eastern groups, usually involving “girls executed by their brothers or fathers for wearing short skirts or dating Swedish men.”44 The proportion of foreigners in Sweden’s prisons is five times their proportion in Sweden’s population.45 For the more serious crimes, such as murder, rape and major drug dealing, about half the prison inmates are foreign-born.46

Among the implications of such patterns in Scandinavia is that the welfare state is an influence, rather than a predestination. Put differently, it is the interaction of the welfare state with differing existing cultures in the population which produces varying socioeconomic outcomes, whether within nations or between nations. But here, as in other contexts, the invincible fallacy often trumps the hardest facts, so that very different people are treated as if they were the same.

Immigration Issues

Neither sweeping attacks on immigrants in general nor sweeping defenses of immigrants in general make any sense, because there are no immigrants in general. Instead, there are very different behavior patterns—in education, employment and crimes, for example—among immigrants from different countries and cultures.

Verbal virtuosity can blur such distinctions and produce such wonderful-sounding generalities as judging each person as an individual, or declaring all cultures equally valid or valuable in some elusive and unverifiable sense. In this world of words, much controversy is based on assertions and counter-assertions, rather than on hard facts about such things as the educational levels, welfare state dependency, automobile accident rates, or crime rates of particular groups from particular countries or cultures. Rhetoric, visions and catchwords often serve as substitutes for such basic information.

Particular immigrant groups have greatly benefitted many societies, whether in Latin America, Southeast Asia or the United States. Whole industries have been created by immigrants in Argentina, Brazil and other South American countries,47 as well as in Malaysia, Thailand and other Southeast Asian countries.48 Even in advanced economies such as in Britain and the United States, there have been industries that did not exist before immigrants created them. The watch-making industry in Britain was created by Huguenot refugees from France,49 and the first pianos in colonial America were built by German immigrants.50

Ironically, in some countries the immigrants who brought skills most lacking in the native population have been among the most resented and even hated. Often their predominance in particular industries has led to accusations that they have “taken over” those industries, even when the industries did not exist until the immigrants created them.

It is not only countries, but also the local communities most dependent upon outsiders to supply the economic skills most lacking in their own general population, that tend to have the most resentment toward those who supply such skills and services, and who prosper by doing so.

In the 1992 ghetto riots in Los Angeles, for example, more than 2,000 stores owned by Koreans were burned and looted, creating $350 million worth of damage51—though Koreans had nothing to do with either the causes of that riot or with slavery or other calamities suffered by blacks. The prevailing social vision that blames “society” for disparities makes individuals and groups in that society targets, whether or not those individuals or groups had anything to do with the problems of lagging groups. This is not peculiar to the United States. North African immigrants who attacked Chinese immigrants in France with knives said that it was because the Chinese had “nice clothes” and “big cars.”52

Despite wide disparities among immigrant groups from different countries and cultures, empirical facts about such differences are seldom part of public debates about immigration policies. The very attempt to discuss such issues in factual terms has been treated as morally unworthy. Any concerns about a need to preserve a domestic culture that has produced a level of prosperity, order and freedom seldom found in some other cultures risk being dismissed as phobias or racism. It is as if the only morally legitimate way to discuss immigration issues is in terms of the prevailing social vision, based on the seemingly invincible fallacy of assuming a sameness of developed capabilities among both peoples and cultures.

PROCESS GOALS VERSUS OUTCOME GOALS

People with different visions of the world may have not only different goals but also different kinds of goals. Some kinds of goals are process goals, such as “free markets” or “a government of laws and not of men.” Other goals are outcome goals, such as eliminating socioeconomic “gaps” or “disparities” between individuals or groups. Moreover, different kinds of institutions may be more suited to achieving these different kinds of goals.

Even those who seek to promote certain process goals recognize that outcomes are what ultimately matter. But the crucial question is: Matter to whom? In a free market, each individual transactor decides what particular outcomes that particular transactor wants, and at what cost, whether in money or in toil and sacrifice. Institutional structures that seek to maintain market processes leave individual decisions in the market to the particular individuals transacting directly with each other within the framework of that process.

By contrast, those who are seeking to have more women employed in Silicon Valley or more minority students admitted to Ivy League colleges are directly pursuing specific outcome goals chosen by third parties, to be imposed on others. Whatever the pros and cons of the particular goals, these pros and cons are not left to be weighed by those people directly affected, but by third-party surrogate decision-makers who may claim or assume superior knowledge, compassion or whatever.*

Those who are promoting process goals, in other words, are seeking to have incremental trade-offs made by individuals directly experiencing both the benefits and the costs of their own decisions, while those who are promoting outcome goals are seeking to create categorical priorities chosen by third parties, and imposed by government compulsion on those who directly experience both the benefits and the costs.

Those who seek to establish priorities to eliminate gaps do not necessarily say that this is to be done “at all costs” or “by all means necessary.” But, at the very least, the weighing of those costs and benefits is not left in the hands of those who will experience both. More important, the knowledge of all the costs—not only in money terms but in human terms as well—cannot possibly be known as well to distant surrogates as to the people who directly bear those costs.

Those surrogate decision-makers who have demolished whole neighborhoods, in order to replace these neighborhoods with new housing, planned and controlled by the government, not only destroy physical structures but also destroy an invisible web of valuable human connections that make a viable community. These include not only families related to each other who live nearby in the same community but also ties to particular neighbors, friends and connections with particular businesses and professionals known for years.

When all these people are scattered to the winds by the demolition of a neighborhood, they have to settle individually in whatever new places they can find, where they have no such connections. For businesses that have lost their long-time customers and professionals who have lost long-time clients, these costs can be measured in dollars and cents, while other costs that cannot be quantified may be no less important to those who pay those human costs, despite how easily third-party surrogates can proceed as if those costs do not exist.

If the government had to pay people a price for their property sufficient to compensate them for voluntarily leaving the neighborhood, all those hidden costs would be included in that price. But by the use of compulsion, under the law of eminent domain, those hidden costs have no way of being expressed, as they would be in a free market. Even if the government pays the current market price for all the property it takes, that is clearly not sufficient compensation, for that price was already available to the current owners, and they obviously had not chosen to sell.

Similarly, those who want to see more women working in Silicon Valley cannot know what inescapable costs must be weighed by women contemplating the prospect of working there. These costs may be especially high for women who have young children to care for, and who know that not being there when their child happens to need them cannot be made up by arranging “quality time” after work, however good such glib rhetoric may sound to others.

Women who are either mothers, or contemplating becoming mothers, also know that interrupting their career for a few years, due to child-rearing responsibilities—in a place like Silicon Valley, where fast-moving technological changes can leave them far behind when they later return to work—may not be a promising career prospect in the long run. In a freely competitive labor market, the amount of pay required to compensate some women for all such considerations might be well in excess of what it makes sense to pay anyone for the particular job to be done.

Even aside from the problems inherent in getting human costs, known only to those who bear them, reflected in decisions when voluntary market transactions are replaced by compulsion from third-party surrogate decision-makers, there is the more fundamental question of why attempts to foster economic or other progress must take the form of eliminating “gaps” between groups. Is it of no consequence if everyone’s income, education and life expectancy double over some span of time, even if that necessarily increases the gaps?

Why should eliminating gaps be the goal when different individuals and groups do not want the same things, or do not have the same priorities or urgencies about these gaps? Why should the gross “under-representation” of Asian Americans in professional basketball be a “gap” to be closed, if Asian Americans do not have nearly as much interest in that sport as black Americans have? Why should the “under-representation” of women in chess clubs or men in nursing be a gap to be closed? The process goal of preventing biased decision-making from arbitrarily closing off opportunities is an understandable goal. Creating a tableau to match the preconceptions of a vision is something very different.

People who depict markets as cold, impersonal institutions, and their own notions as humane and compassionate, have it directly backwards. It is when people make their own economic decisions, taking into account costs that matter to themselves, and known only to themselves, that this knowledge becomes part of the trade-offs they choose, whether as consumers or producers.

Much of the difference between those who promote process goals and those who promote outcome goals seems to reflect differences in how they conceive what knowledge is, and whether relevant knowledge is concentrated in a few or widely diffused among the many. Such knowledge includes knowledge of costs. Whatever the amount of socially consequential information that is known to surrogate decision-makers, no given decision-maker is likely to know more than a small fraction of what is necessary to know, in order to make the best decisions for a whole society. That can be a much more serious problem when prescribing outcome goals than when prescribing process goals.

John Stuart Mill saw this problem back in the nineteenth century, when he said, “even if a government were superior in intelligence and knowledge to any single individual in the nation, it must be inferior to all the individuals of the nation taken together.”53 In other words, Mill saw that the consequential knowledge and understanding relevant to making complex social trade-offs is too vast to be known and comprehended by any given individual or any manageably small set of individuals.

Process goals enable decisions incorporating that knowledge to be made through innumerable complex interactions among vast numbers of people who, in the aggregate, have far more consequential and highly specific knowledge than any given surrogate decision-maker, or any small group of surrogate decision-makers, can have.

Innumerable specific considerations which only those individuals involved could know are therefore mobilized in decisions made through complex market interactions linking innumerable transactors, most of whom are not in direct contact with most of the other transactors. But all those factors influence the innumerable transactions linked through prices across the market. “In general, ‘the market’ is smarter than the smartest of its individual participants,” is the way Wall Street Journal editor Robert L. Bartley once put it.

To people who conceive of consequential knowledge as concentrated in a highly educated few with high IQs, specifying particular outcome goals for a whole society may seem far more doable than to people who see vast amounts of consequential knowledge as highly diffused among the people at large, in individually unimpressive fragments. It may be virtually impossible for any given individual, or any manageable number of surrogate decision-makers collectively, to take all the factors into account. But where decisions are made by vast numbers of individuals transacting in a marketplace, each with their own fragment of the necessary knowledge of factors to be considered, and all are forced to reach mutually compatible terms, that is when all the knowledge available to all those concerned affects the economic outcome.

Twentieth-century experience with economic central planning, which seemed so promising before it was tried, led ultimately to its being scaled back or abandoned, even by socialist and communist governments around the world, which eventually decided to allow more economic decisions to be made through market processes. In many countries, including notably India and China, the decision to allow freer markets led to significantly higher economic growth rates and striking reductions in poverty rates.54

This is a remarkable—almost impossible—outcome, if relevant knowledge is as concentrated as the prevailing social vision assumes. How could transferring major economic decisions from trained experts—armed with superior knowledge and vast amounts of data, and backed by the power of government—to millions of ordinary people lacking all these qualifications, and competing in an uncontrolled market, lead to higher economic growth rates? Yet this result has been found in many other countries besides India and China.55

The clash of these very different kinds of goals is fought out on many fronts, involving a wide variety of issues. Controversies over whether minimum wage laws make the poor better off or worse off, for example, are meaningful only within the context of priorities set by third-party surrogates. During the Progressive era in the early twentieth century, Progressives who accepted the proposition that minimum wage laws priced low-skilled workers out of jobs were not at all deterred by that prospect, for they specifically welcomed that outcome, especially when the low-skilled workers displaced were non-white.56 That was what would fit the particular tableau they sought in that era.

If the costs and benefits of low-paying jobs were to be weighed by the low-skilled and inexperienced workers themselves, there would be no argument for having minimum wage laws in the first place. Similarly with other policies in which third parties specify outcomes, rather than promoting processes in which outcomes are the systemic results of trade-offs made individually by those experiencing both the benefits and the costs.

Given the fallibility of human beings in general, the role of feedback—that is, consequential feedback, as distinguished from mere information—can be crucial in any kind of decision-making process. The feedback from process goals is inescapable for those who directly experience the costs and benefits of their own decisions, while adverse experiences for those directly affected can be ignored, rationalized or obfuscated by third-party surrogates reluctant to admit to others, and perhaps even to themselves, that their decisions have made matters worse.

In short, outcome-specific goals mean third-party preemption of other people’s decisions about their own lives. What is remarkable is how seldom a basis for that preemption is specified. In an earlier era, the divine right of kings was cited as a justification for surrogate decision-making on issues ranging from work to religion. Today, the burden of justification is often put on those individuals whose desire to make decisions about their own lives is treated as a claim to special exemption from third-party supervision. As philosopher Thomas Nagel characterized this argument, the fact that one’s socioeconomic benefits are not all due to one’s own personal merits means that there is no “moral sanctity” about the current distribution of those benefits.57

In other words, because no individual was solely responsible for that individual’s benefits, therefore politicians, bureaucrats and judges—that is, the government, Rawls’ “society” which can “arrange” things—are to preempt decisions and redistribute benefits, presumably in a more moral way. But no burden of proof of either superior morality or superior efficiency in government is required for this preemption. The brazen non sequitur—that if “you didn’t build that”58 it is something the government is justified in taking over—is a fitting companion to the invincible fallacy that people tend to have comparable outcomes in the absence of biased treatment.

Take away both the invincible fallacy and the brazen non sequitur—and the prevailing social vision loses much, if not most, of its foundation. Such terms as “social justice” or “the common good” may be invoked by those with the prevailing vision, but it is not the common people who are to determine what is “the common good.” That decision is reserved for third-party surrogates. Louis XIV said, “L’état, c’est moi” (I am the state); today’s income redistributionists say “social justice” or “the common good.” But it all means essentially the same thing in decision-making terms—third-party compulsion to preempt individual choices.

Nor are the supposed beneficiaries of these supposedly more enlightened policies even to be presented with the choice as to how much of their own freedom they are prepared to give up in exchange for the presumed benefits of government policies. On the contrary, that trade-off itself is concealed by redefining words, so that the presumed benefits of government policies have been depicted as a “new freedom,” verbally banishing consideration of the trade-off of freedom for government-promised benefits.

What is remarkable is not only that surrogates preempt other people’s decisions about their own lives, merely by putting the burden of proof on those who wish to be exempted from this preemption, but also that the prerogatives of the surrogates have no time limit and no revocation conditions, either explicit or implicit.

People who admit that race-based “affirmative action” has been counterproductive, for example, nevertheless advocate affirmative action based on poverty or some other socioeconomic criteria.59 The fact that their policies have already inflicted decades of racial strife, polarization and lasting bitterness—among both the ostensible beneficiaries and those who resent the preferences given to the ostensible beneficiaries60—leaves those who orchestrated this policy undaunted in seeking to continue exercising their preemptive prerogatives. The boldness of their presumptions contrasts sharply with their suppression of relevant data61 and the silencing and demonizing of those with different views, instead of answering their arguments.

“SOCIAL JUSTICE”

Much of what is said in the name of “social justice” implicitly assumes three things: (1) that various groups would be equally successful in the absence of biased treatment by others (the seemingly invincible fallacy), (2) that the cause of disparate outcomes can be determined by where the statistics showing those unequal outcomes were collected, and (3) that, if the more fortunate people were not completely responsible for their own good fortune, then the government—politicians, bureaucrats and judges—will produce either more efficient or morally superior outcomes by intervening.

When we look at facts in the real world, we repeatedly find skewed distributions of outcomes, whether among human beings or in nature. But, when we look at social visions or political agendas, we find equal outcomes to be the prevailing presumption, and the norm to be imposed by government policies when that presumption is not met. If some social categories of people are not equally represented in particular occupations, institutions or income brackets, someone is presumed to be at fault for thwarting the supposedly natural equality of outcomes. This is the seemingly invincible fallacy behind much that is said and done.

There is a fundamental asymmetry in burdens of proof. No matter how much empirical evidence of skewed distributions of outcomes is presented as evidence against the invincible fallacy, there is no corresponding burden of proof on the other side to present even a single example of the equal representation of various social groups in any given endeavor. In what country, or in what kind of endeavor, or in what century out of the vast millennia of human history, has there ever been a proportional representation of various groups in any activity where people have been free to compete? One can read reams of arguments that statistical disparities imply biased treatment without finding a single empirical example of the even distribution of social groups in any endeavor, in any country or in any period of history.

Equally missing in most “social justice” arguments for a redistribution of wealth is the question of the extent to which such a redistribution is actually possible, in any comprehensive, long-term sense. Certainly there have been many examples of times and places where money or other physical wealth has been confiscated by governments or looted by mobs. But physical wealth is a product of human capital—the knowledge, skills, talents and other qualities that exist inside the heads of people, where they cannot be confiscated.

Confiscating physical wealth for the purpose of redistribution is confiscating something that will be used up over time, and cannot be replaced without the human capital that created it. Nor is human capital itself easily created by third-party decision-makers. While it is possible to hire teachers and buy books, it is not possible to purchase a cultural past that will prepare and orient all people toward the acquisition of the skills, habits and attitudes that are decisive for human capital.

Over the centuries, many countries have confiscated the physical wealth created by the human capital of productive people. Where the confiscated physical wealth was owned by foreign investors, this process has often been called “nationalization” and celebrated as a patriotic triumph over foreign “exploitation.” Where the confiscated physical capital belonged to productive domestic groups, similar rationales have been used, often leading to an exit of many such people from the country, whether fleeing from aroused mobs or as a result of adverse actions by governments—sometimes including mass expulsions.

In any case, the net result has often been such people’s arrival as destitute refugees in some other country. Meanwhile, the consequences in their country of origin have often included economic decline after people with much human capital were gone. Examples would include the collapse of Uganda’s economy after Asians were expelled in the 1970s and the economic rise of the Asian refugees in Britain, to which many of them fled.62 Refugees who fled Cuba after the Communist takeover in the mid-twentieth century likewise arrived in the United States destitute and survived by taking low-level, poorly paid jobs. But, in later years, the total revenue of Cuban-owned businesses in the United States exceeded the total revenue of the nation of Cuba.63

Variations on this theme can be found in many times and places. These would include the Jews expelled from Spain in the late fifteenth century, who were forced to leave their physical wealth behind, but who rose again to prosperity in the Netherlands, contributing in the process to the Dutch economy.64 Huguenot refugees, fleeing France in the sixteenth and seventeenth centuries, made Switzerland the leading watch-making country in the world.65 Expelling the vast majority of Germans from Czechoslovakia at the end of World War II left the Sudetenland region, where they had been concentrated, economically stricken decades later.66 Similar or worse devastations followed the driving of white farmers out of Zimbabwe in the late twentieth century.67

However persuasive the arguments of John Rawls and other “social justice” advocates may be in the world of words, demonstrated facts in the world of reality raise the crucial question of whether the redistribution of income or wealth can actually be achieved, in any comprehensive and sustainable sense. Where, instead, there is simply a humanitarian desire to see the less fortunate have better prospects for a better life, the “social justice” argument is both unnecessary and an impediment to joining forces toward that end with others who do not happen to share the implicit assumptions of that particular social vision.

The undeniable fact that life has never been remotely “fair”—in the sense of presenting equal likelihoods of achieving economic prosperity or other benefits—has led many people to conclude that human biases are the reason. There is no question that human biases have contributed to unfair prospects. But it is a complete non sequitur to say that human biases are the sole, or even primary, causes of unequal prospects, without hard evidence to support that conclusion.

When there are major disparities in outcomes among men who are all in the top one percent in IQ, and among siblings raised under the same roof, and when discriminated-against minorities have been more economically successful than those discriminating against them—as has happened in the Ottoman Empire, many Southeast Asian countries, and much of Eastern Europe—the insistence on believing that human biases are the primary cause of disparities in outcomes ignores a vast range of evidence to the contrary.

This is not to say that nothing can be done to offer more people more opportunities. Much has already been done, and much can and will be done. But how it is done can be either helpful or harmful, depending on how well we understand and deal with the world as it is, rather than according to some vision that might seem more attractive, for whatever reason.

Although human capital cannot be confiscated and redistributed, it is—ironically—one of the few things that can be spread to others without those who share it having any less remaining for themselves. But one of the biggest obstacles to this happening is the “social justice” vision, in which the fundamental problem of the less fortunate is not an absence of sufficient human capital, but the presence of other people’s malevolence. For some, abandoning that vision would mean abandoning a moral melodrama, starring themselves as crusaders against the forces of evil. How many are prepared to give up all that—with all its psychic, political and other rewards—is an open question.

THE PAST AND THE FUTURE

Looking back over the centuries of human history, there is much to inspire and much to appall. As for the future, all that we can be certain of is that it is coming, whether we are well-prepared or ill-prepared for it.

Perhaps the most heartening things about the past are the innumerable examples of whole peoples who lagged far behind their contemporaries at a given time and yet, in later times, overtook them and moved to the forefront of human achievements.

These would include Britons, who were an illiterate tribal people in the ancient world, while the ancient Greeks and Romans were laying the intellectual and material foundations of Western civilization. Yet, more than a millennium later, it was the Britons who led the world into the industrial revolution.

At various times and places, China and the Islamic world were more advanced than Europe, and later fell behind, while Japan rose from poverty and backwardness in the middle of the nineteenth century to the forefront of economic and technological achievements in the twentieth century. Jews, who had played little or no role in the revolutionary emergence of science and technology in the early modern era, later produced a wholly disproportionate share of all the scientists who won Nobel Prizes in the twentieth century.

Among the many appalling things about the past, it is hard to know which was the worst, since there are all too many candidates, from around the world, for that designation. That something like the Holocaust could have happened, after thousands of years of civilization, and in one of the most advanced societies, is almost as incomprehensible intellectually as it is devastating morally, showing what depths of depravity are possible in all human beings. It is a painful reminder of a description of civilization as “a thin crust over a volcano.”

If longevity and universality are criteria, then slavery must be among the leading candidates for the most appalling of all human institutions, for it existed around the world, for thousands of years, as far back as the history of the human species goes. Yet its full scope is often grossly under-estimated today, when slavery is so often discussed as if it were confined to one race enslaving another race, when in fact slavery existed virtually wherever it was feasible for some human beings to enslave other human beings—including in many, if not most, cases people of their own race.68 This was as true in Europe and Asia as it was in Africa, or in the Western Hemisphere before Columbus’ ships ever appeared on the horizon.

Despite how widely condemned slavery is today, the painful fact is that it reigned virtually unchallenged, prior to the eighteenth century, even though there were challenges to the abuse of slaves, or to the enslavement of particular peoples. But the institution itself was accepted as a fact of life—another disturbing reflection on human nature in all its branches—even among leading philosophers and religious leaders. Christian monasteries in Europe and Buddhist monasteries in Asia both had slaves.69

It was not until the eighteenth century that a serious movement arose to advocate abolishing the whole institution of slavery—and, at that point, this was a development solely within Western civilization and initially only among a minority there. Anti-slavery views remained largely confined to Western societies in the next century, and slaves continued to be bought and sold in the Ottoman Empire, among other places, after slavery had been abolished in all Western nations.

Europeans enslaved other Europeans for centuries before Europeans brought the first African slaves—most purchased from other Africans, who had enslaved them—to the Western Hemisphere. Nor was it unknown for Europeans to be enslaved by non-Europeans. Just one example is that of the European slaves brought to the coast of North Africa by Barbary Coast pirates. These European slaves were more numerous than the African slaves brought to the United States and to the American colonies from which it was formed.73

Other pirates made such widespread slave raids along the Mediterranean and Adriatic coasts of Europe that numerous watchtowers were built in those places, so that coastal peoples could be warned and flee when pirate ships were seen approaching. There were more than a hundred such watchtowers on the island of Sicily alone.74

The confining of discussions of slavery to that of blacks held in bondage by whites is just one of the many ways in which the agendas of the present distort our understanding of the past, forfeiting valuable lessons that an unfiltered knowledge of the past could teach. At a minimum, the worldwide history of slavery should be a grim warning for all people, and for all time, against giving any human beings unbridled power over other human beings, regardless of how attractively that unbridled power might be packaged rhetorically.

It was the twentieth century—the first century after slavery had been nearly eradicated around the world—that saw a new form of brutal human bondage arise, with the creation of totalitarian dictatorships that collectively killed tens of millions of their own people in peacetime during that century, and made life a nightmare for many who survived.

The last Western nation to end slavery (Brazil) did so in 1888, and the first totalitarian dictatorship arose in Russia in 1917. There was barely a generation between the suppression of one form of monumentally brutal subjugation of human beings and the creation of another. Yet these dehumanizing dictatorships were often founded on stirring rhetoric and lofty visions that resonated with many leading intellectuals in countries around the world. There could hardly be a clearer example of the need for the historic warning: “Eternal vigilance is the price of liberty.”

As Edmund Burke said, more than two centuries ago, “In history a great volume is unrolled for our instruction, drawing the materials of future wisdom from past errors and infirmities of mankind.” But he warned that the past could also be a means of “keeping alive, or reviving, dissensions and animosities.”75

It is in this second sense that history is too often taught today,* under the banner of “social justice,” using the same toxic mixture of heady rhetoric and heedless visions that led to such monumental tragedies in the totalitarian dictatorships of the twentieth century.

After territorial irredentism has led nations to slaughter each other’s people over land that might have little or no value in itself, simply because it once belonged in a different political jurisdiction, at a time beyond any living person’s memory, what is to be expected from instilling the idea of social irredentism, growing out of historic wrongs done to people long dead?

Such wrongs abound in times and places around the world—inflicted on, and perpetrated by, people of virtually every race, creed and color. But what can any society today hope to gain by having newborn babies in that society enter the world as heirs to prepackaged grievances against other babies born into that same society on the same day?

Nothing that we can do today can undo the many evils and catastrophes of the past, but we can at least learn from them, and avoid repeating mistakes, many of which began with lofty-sounding goals. Obvious as all this might seem, it is too often forgotten. Nothing that Germans can do today will in any way mitigate the staggering evils of what Hitler did in the past. Nor can apologies in America today for slavery in the past have any meaning, much less do any good, for either blacks or whites today. What can it mean for A to apologize for what B did, even among contemporaries, much less across the vast chasm between the living and the dead?

The only times over which we have any degree of influence at all are the present and the future—both of which can be made worse by attempts at symbolic restitution among the living for what happened among the dead, who are far beyond our power to help or punish or avenge. Galling as these restrictive facts may be, that does not stop them from being facts beyond our control. Pretending to have powers that we do not in fact have risks creating needless evils in the present, while claiming to deal with the evils of the past.

Any serious consideration of the world as it is around us today must tell us that maintaining common decency, much less peace and harmony, among living contemporaries is a major challenge, both among nations and within nations. To admit that we can do nothing about what happened among the dead is not to give up the struggle for a better world, but to concentrate our efforts where they have at least some hope of making things better for the living.

*    Any who doubt this can read Howard Zinn’s A People’s History of the United States, and reflect on the fact that it has been one of the most widely used textbooks in America—having sold more than two and a half million copies in North America as of 2015. Howard Zinn, A People’s History of the United States (New York: Harper Perennial, 2015), p. xviii.