Chapter 5

WORDS, DEEDS AND DANGERS

… we must be aware of the dangers which lie in our most generous wishes.

Lionel Trilling1

People who share many of the same basic concerns as social justice advocates do not necessarily share the same vision or agenda, because they do not make the same assumptions about options, causation or consequences. Iconic free-market economist Milton Friedman, for example, said:

Everywhere in the world there are gross inequities of income and wealth. They offend most of us. Few can fail to be moved by the contrast between the luxury enjoyed by some and the grinding poverty suffered by others.2

Similarly, F.A. Hayek— another iconic free-market economist— said:

It has of course to be admitted that the manner in which the benefits and burdens are apportioned by the market mechanism would in many instances have to be regarded as very unjust if it were the result of a deliberate allocation to particular people.3

Clearly, Hayek also saw life in general as unfair, even with the free markets he advocated. But that is not the same as saying that he saw society as unfair. To Hayek, society was “an orderly structure,” but not a decision-making unit, or an institution taking action.4 That is what governments do.5 But neither society nor government comprehends or controls all the many and highly varied circumstances— including a large element of luck— that can influence the fate of individuals, classes, races or nations.

Even within the same family, as we have seen, it matters whether you were the first-born child or the last-born child. When first-born children constituted 52 percent of the National Merit Scholarship finalists who came from five-child families, while fifth-born children constituted just 6 percent,6 that is a disparity larger than most disparities between the sexes or the races.

In a growing economy, it also matters which generation of the family you were born into.7 A facetious headline in The Economist magazine— “Choose your parents wisely”8— used this impossible advice to highlight another important truth about inequalities: circumstances beyond our control are major factors in economic and other inequalities. Trying to understand causation is not necessarily the same as looking for someone to blame.

The totality of circumstances around us Hayek called a “cosmos”9 or universe. In this context, what others call “social justice” might more fittingly be called “cosmic justice,”10 since that is what would be required to produce the results sought by many social justice advocates.

This is not simply a question about different names. It is a more fundamental question about what we can and cannot do— and at what costs and risks. When there are “differences in human fates for which clearly no human agency is responsible,”11 as Hayek put it, we cannot demand justice from the cosmos. No human beings, either singly or collectively, can control the cosmos— that is, the whole universe of circumstances surrounding us and affecting everyone’s chances in life. The large element of luck in all our lives means that neither society nor government has either causal control or moral responsibility extending to everything that has gone right or wrong in everybody’s life.

Some of us may be able to think of some particular individual, whose appearance in our lives at one particular juncture altered the trajectory of our lives. There may be more than one such person, at different stages of our lives, who changed our prospects in different ways, for better or worse. Neither we nor surrogate decision-makers control such things. Those who imagine that they can— that they are either a “self-made man” or surrogate saviors of other people or the planet— operate in dangerous territory, littered with human tragedies and national catastrophes.

If the world around us happened to provide equal chances for all people in all endeavors— whether as individuals or as classes, races or nations— that might well be seen as a world far superior to the world we actually see around us today. Whether called social justice or cosmic justice, that might be seen as ideal by many people who agree on little else. But our ideals tell us nothing about our capabilities and their limits— or the dangers of trying to go beyond those limits.

As just one example, from the earliest American Progressives onward, there has been an ideal of applying criminal laws in a manner individualized to the criminal, rather than generalized from the crime.12 Before even considering whether this is desirable, there is first the question of whether human beings are even capable of doing such a thing. Where would officials acquire such sweeping, intimate and accurate knowledge about a stranger, much less have the superhuman wisdom to apply it in the incalculable complications of life?

A murderer may have had an unhappy childhood, but does that justify gambling other people’s lives, by turning him loose among them, after some process that has been given the name “rehabilitation”? Are high-sounding notions and fashionable catchwords important enough to risk the lives of innocent men, women and children?

F.A. Hayek’s key insight was that all the consequential knowledge essential to the functioning of a large society exists in its totality nowhere in any given individual, class or institution. Therefore the functioning and survival of a large society requires coordination among innumerable people with innumerable fragments of consequential knowledge. This put Hayek in opposition to various systems of centrally directed control, whether a centrally planned economy, systems of comprehensive surrogate decision-making in the interests of social justice, or presumptions of “society” being morally responsible for all its inhabitants’ good or bad fates, when nobody has the requisite knowledge for such responsibility.

The fact that we cannot do everything does not mean that we should do nothing. But it does suggest that we need to make very sure that we have our facts straight, so that we do not make things worse while trying to make them better. In a world of ever-changing facts and inherently fallible human beings, that means leaving everything we say or do open to criticism. Dogmatic certitudes and intolerance of dissent have often led to major catastrophes, and nowhere more so than in the twentieth century. The continuation and escalation of such practices in the twenty-first century is by no means a hopeful sign.

Back in the eighteenth century, Edmund Burke made a fundamental distinction between his ideals and his policy advocacies. “Preserving my principles unshaken,” he said, “I reserve my activity for rational endeavours.”13 In other words, having high ideals did not imply carrying idealism to the extreme of trying to impose those ideals at all costs and oblivious to all dangers.

Pursuing high ideals at all costs has already been tried, especially in twentieth-century creations of totalitarian dictatorships, often based on egalitarian goals with the highest moral principles. But powers conferred for the finest reasons can be used for the worst purposes— and, beyond some point, powers conferred cannot be taken back. Milton Friedman clearly understood this:

A society that puts equality— in the sense of equality of outcome— ahead of freedom will end up with neither equality nor freedom. The use of force to achieve equality will destroy freedom, and the force, introduced for good purposes, will end up in the hands of people who use it to promote their own interests.14

F.A. Hayek— having lived through the era of the rise of totalitarian dictatorships in twentieth-century Europe— and having witnessed how it happened— arrived at essentially the same conclusions. But he did not regard social justice advocates as evil people, plotting to create totalitarian dictatorships. Hayek said that some of the leading advocates of social justice included individuals whose unselfishness was “beyond question.”15

Hayek’s argument was that the kind of world idealized by social justice advocates— a world with everyone having equal chances of success in all endeavors— was not only unattainable, but also that its fervent but futile pursuit could lead to the opposite of what its advocates were seeking. It was not that social justice advocates would create dictatorships, but that their passionate attacks on existing democracies could weaken those democracies to the point where others could seize dictatorial powers.

Social justice advocates themselves obviously do not share the conclusions of their critics, such as Friedman and Hayek. But the differences in their conclusions are not necessarily differences in fundamental moral values. Their differences tend to be at the level of fundamentally different beliefs about circumstances and assumptions about causation that can produce very different conclusions. They envision different worlds, operating on different principles, and describe these worlds with words that have different meanings within the framework of different visions.

When visions and vocabularies differ so fundamentally, an examination of facts offers at least a hope of clarification.

VISIONS AND VOCABULARIES

In a sense, words are just the containers in which meanings are conveyed from some people to other people. But, like some other containers, words can sometimes contaminate their contents. A word like “merit,” for example, varies in its meanings. As a result, this word has contaminated many discussions of social policies, whether it has been used by advocates or critics of the social justice vision.

Merit

Opponents of group preferences, such as affirmative action for hiring or for college admissions, often say that each individual should be judged by that individual’s own merit. In most cases, “merit” in this context seems to mean individual capabilities that are relevant to the particular endeavor. Merit in this sense is simply a factual question, and the validity of the answer depends on the predictive validity of the criteria used to compare different applicants’ capabilities.

Others, however— including social justice advocates— see not only a factual issue, but also a moral issue, in the concept of merit. As far back as the eighteenth century, social justice advocate William Godwin was concerned not only about unequal outcomes, but especially “unmerited advantage.”16 Twentieth-century Fabian socialist pioneer George Bernard Shaw likewise said that “enormous fortunes are made without the least merit.”17 He noted that not only the poor, but also many well-educated people, “see successful men of business, inferior to themselves in knowledge, talent, character, and public spirit, making much larger incomes.”18

Here merit is no longer simply a factual question about who has the particular capabilities relevant to success in a particular endeavor. There is now also a moral question as to how those capabilities were acquired— whether they were a result of some special personal exertions or were just some “unmerited advantage,” perhaps due to being born into unusually more favorable circumstances than the circumstances of most other people.

Merit in this sense, with a moral dimension, raises very different questions, which can have very different answers. Do people born into certain German families or certain German communities deserve to inherit the benefits of the knowledge, experience and insights derived from more than a thousand years of Germans brewing beer? Clearly, they do not! It is a windfall gain. But, equally clearly, their possession of this valuable knowledge is a fact of life today, whether we like it or not. Nor is this kind of situation peculiar to Germans or to beer.

It so happens that the first black American to become a general in the U.S. Air Force— General Benjamin O. Davis, Jr.— was the son of the first black American to become a general in the U.S. Army, General Benjamin O. Davis, Sr. Did other black Americans— or white Americans, for that matter— have the same advantage of growing up in a military family, automatically learning, from childhood onward, about the many aspects of a career as a senior military officer?

Nor was this situation unique. One of the most famous American generals in World War II— and one of the most famous in American military history— was General Douglas MacArthur. His father was a young commanding officer in the Civil War, where his performance on the battlefield brought him the Congressional Medal of Honor. He ended his long military career as a general.

None of this is peculiar to the military. In the National Football League, quarterback Archie Manning had a long and distinguished career, in which he threw more than a hundred touchdown passes.19 His sons— Peyton Manning and Eli Manning— also had long and distinguished careers as NFL quarterbacks, which in their cases included winning Super Bowls. Did other quarterbacks, not having a father who had been an NFL quarterback before them, have equal chances? Not very likely. But would football fans rather watch other quarterbacks who were not as good, but who had been chosen in the name of social justice?

The advantages that some people have, in a given endeavor, are not just disadvantages to everyone else. These advantages also benefit all the people who pay for the product or service provided by that endeavor. It is not a zero-sum situation. Mutual benefit is the only way the endeavor can continue, in a competitive market, with vast numbers of people free to decide what they are willing to pay for. The losers are the much smaller number of people who wanted to supply the same product or service. But the losers were unable to match what the successful producers offered, regardless of whether the winners’ success was due to skills developed at great sacrifice or skills that came their way from just happening to be in the right place at the right time.

When computer-based products spread around the world, both their producers and their consumers benefitted. It was bad news for manufacturers of competing products such as typewriters, or the slide rules that were once standard equipment used by engineers for making mathematical calculations. Small computerized devices could make those calculations faster, simpler and with a vastly larger range of applications. But, in a free-market economy, progress based on new advances inevitably means bad news for those whose goods or services are no longer the best. Demographic “inclusion,” by contrast, requires surrogate decision-makers empowered to overrule what consumers want.

A similar situation exists in the military. A country fighting for its life, on the battlefield, cannot afford the luxury of choosing its generals on the basis of demographic representation— “looking like America”— rather than on the basis of military skills, regardless of how those skills were acquired. Not if the country wants to win and survive. That is especially so if the country wants to win its military victories without more losses of soldiers’ lives than necessary. In that case, it cannot afford to put soldiers’ lives in the hands of generals who are not the best generals available.

In the social justice literature, unmerited advantages tend to be treated as if they are deductions from the well-being of the rest of the population. But there is no fixed or predestined amount of well-being, whether measured in financial terms or in terms of spectators enjoying a sport, or soldiers surviving a battle. When President Barack Obama said: “The top 10 percent no longer takes in one-third of our income, it now takes half,”20 that would clearly be a deduction from other people’s incomes if there were a fixed or predestined amount of total income.

This is not an incidental subtlety. It matters greatly whether people with high incomes are adding to, or subtracting from, the incomes of the rest of the population. A word like “takes” insinuates the latter, but insinuations are a weak basis for making decisions about a serious issue. It is too important an issue to be decided— or obfuscated— by artful words. In plain English: Is the average American’s income higher or lower because of the products created and sold by some multi-billionaire?

Again, there is no fixed or predestined total amount of income or wealth to be shared. If some people are creating more wealth than they are receiving as income, then they are not making other people poorer. But if they are creating products or services that are worth less than the income they receive, then equally clearly they are making other people poorer. Although anyone can charge any price they want to, for whatever they are selling, they are not likely to find buyers who will pay more than the product or service is worth to those buyers.

Arguing as if some people’s high incomes were deducted from some fixed or predestined total income— leaving less for others— may be clever. But cleverness is not wisdom, and artful insinuations are no substitute for factual evidence, if your goal is knowing the facts. But, if your goals are political or ideological, there is no question that one of the most politically successful messages of the twentieth century was that the rich have gotten rich by taking from the poor.

The Marxian message of “exploitation” helped sweep communists into power in countries around the world in the twentieth century, at a pace and on a scale seldom seen in history. There is clearly a political market for that message, and communists are just one of the ideological groups to use it successfully for their own purposes, despite how disastrously that turned out for millions of other human beings living under communist dictatorships.

The very possibility that poor Americans, for example, are having a rising standard of living because of progress created by people who are getting rich— as suggested by Herman Kahn21— would be anathema to social justice advocates. But it is by no means obvious that empirical tests of that hypothesis would vindicate those who advocate social justice. It seems even less likely that social justice advocates would put that hypothesis to an empirical test.

For people seeking facts, rather than political or ideological goals, there are many factual tests that might be applied, in order to see if the wealth of the wealthy is derived from the poverty of the poor. One way might be to see if countries with many billionaires— either absolutely or relative to the size of the population— have higher or lower standards of living among the rest of their people. The United States, for example, has more billionaires than there are in the entire continent of Africa plus the Middle East.22 But even Americans living in conditions officially defined as poverty usually have a higher standard of living than that of most of the people in Africa and the Middle East.

Other factual tests might include examining the history of prosperous ethnic minorities, who have often been depicted as “exploiters” in various times and places over the years. Such minorities have, in many cases, been either expelled by governments or driven out of particular cities or countries by mob violence, or both. This has happened to Jews a number of times over the centuries in various parts of Europe.23 The overseas Chinese have had similar experiences in various southeast Asian countries.24 So have the Indians and Pakistanis expelled from Uganda in East Africa.25 So have the Chettiar money-lenders in Burma, after that country’s 1948 laws confiscating much of their property drove many of them out.26

The Ugandan economy collapsed in the 1970s, after the government expelled Asian business owners,27 who had supposedly been making Africans worse off economically. Interest rates in Burma went up, not down, after the Chettiars were gone.28 It was much the same story in the Philippines, where 23,000 overseas Chinese were massacred in the seventeenth century, after which there were shortages of the goods produced by the Chinese.29

In centuries past, it was not uncommon for Jews in Europe to be driven out of various cities and countries— denounced as “exploiters” and “bloodsuckers”— whether forced out by government edict or mob violence, or both. What is remarkable is how often Jews were in later years invited back to some of the places from which they had been expelled.30

Apparently some of those who drove them out discovered that the country was worse off economically after the Jews were gone.

Catherine the Great banned Jews from immigrating into Russia. But in her later efforts to attract much-needed foreign skills from Western Europe, including “some merchant people,” she wrote to one of her officials that people in the occupations being sought should be given passports to Russia, “not mentioning their nationality and without enquiring into their confession.” To the formal Russian text of this message she added a postscript in German saying, “If you don’t understand me, it will not be my fault” and “keep all this secret.”31

In the wake of this message, Jews began to be recruited as immigrants to Russia— even though, as a historian has noted, “throughout the whole transaction any reference to Jewishness was scrupulously avoided.”32 In short, even despotic rulers may seek to evade their own policies, when it is impolitic to repeal those policies, and counterproductive to follow them.

These historical events are by no means the only factual tests that could be used to determine whether more prosperous people are making other people less prosperous. Nor are these necessarily the best factual tests. But the far larger point is that a prevailing social vision does not have to produce any factual test, when rhetoric and repetition can be sufficient to accomplish their aims, especially when alternative views can be ignored and/or suppressed. It is that suppression which is a key factor— and it is already a large and growing factor in academic, political and other institutions in our own times.

Today it is possible, even in our most prestigious educational institutions at all levels, to go literally from kindergarten to a Ph.D., without ever having read a single article— much less a book— by someone who advocates free-market economies or who opposes gun control laws. Whether you would agree with them or disagree with them, if you read what they said, is not the issue. The far larger issue is why education has so often become indoctrination— and for whose benefit.

The issue is not even whether what is being indoctrinated is true or false. Even if we were to assume, for the sake of argument, that everything with which students are being indoctrinated today is true, these issues of today are by no means necessarily the same as the issues that are likely to arise during the half-century or more of life that most students have ahead of them after they have finished their education. What good would it do them then, to have the right answers to yesterday’s questions?

What they will need then, in order to sort out the new controversial issues, is an education that has equipped them with the intellectual skills, knowledge and experience to confront and analyze opposing views— and subject those views to scrutiny and systematic analysis. That is precisely what they do not get when being indoctrinated with whatever is currently in vogue today.

Such “education” sets up whole generations to become easy prey for whatever clever demagogues come along, with heady rhetoric that can manipulate people’s emotions. As John Stuart Mill put the issue, long ago:

He who knows only his own side of the case, knows little of that…Nor is it enough that he should hear the arguments of adversaries from his own teachers, presented as they state them, and accompanied by what they offer as refutations. That is not the way to do justice to the arguments, or bring them into real contact with his own mind. He must be able to hear them from persons who actually believe them; who defend them in earnest, and do their very utmost for them. He must know them in their most plausible and persuasive form…33

What Mill described is precisely what most students today do not get, in even our most prestigious educational institutions. What they are more likely to get are prepackaged conclusions, wrapped securely against the intrusion of other ideas— or of facts inconsistent with the prevailing narratives.

In the prevailing narratives of our time, someone else’s good luck is your bad luck— and a “problem” to be “solved.” But when someone has, however undeservedly, acquired some knowledge and insights that can be used to design a product which enables billions of people around the world to use computers— without knowing anything about the specifics of computer science— that is a product which can, over the years, add trillions of dollars’ worth of wealth to the world’s existing supply of wealth. If the producer of that product becomes a multi-billionaire by selling it to those billions of people, that does not make those people poorer.

People like British socialist George Bernard Shaw may lament that the producer of this product may not have either the academic credentials or the personal virtues which Shaw seems to attribute to himself, and to others like himself. But that is not what the buyers of the computerized product are paying for, with their own money. Nor is it obvious why a third party’s laments should be allowed to affect transactions which are not doing that third party any harm. Nor is the general track record of third-party preemptions encouraging.

None of this suggests that businesses have never done anything wrong. Sainthood is not the norm in business, any more than in politics, in the media or on academic campuses. That is why we have laws. But it is not a reason to create ever more numerous and sweeping laws to put ever more power in the hands of people who pay no price for being wrong, regardless of how high a price is paid by others who are subject to their power.

Slippery words like “merit”— with multiple and conflicting meanings— can make it hard to clearly understand what the issues are, much less see how to resolve them.

Racism

“Racism” may be the most powerful word in the social justice vocabulary. There is no question that racism has inflicted an enormous amount of needless suffering on innocent people, punctuated by unspeakable horrors, such as the Holocaust.

Racism might be analogized to some deadly pandemic disease. If so, it may be worth considering the consequences of responding to pandemics in different ways. We certainly cannot simply ignore the disease and hope for the best. But we cannot go to the opposite extreme, and sacrifice every other concern— including other deadly diseases— in hopes of reducing fatalities from the pandemic. During the Covid-19 pandemic, for example, death rates from other diseases went up,34 because many people feared going to medical facilities, where they might catch Covid from other patients.

Even the most terrible pandemics can subside or end. At some point, continued preoccupation with the pandemic disease can then cause more dangers and death from other diseases, and from other life stresses resulting from continued restrictions that may have made sense when the pandemic was in full force, but are counterproductive on net balance afterwards.

Everything depends on what the specific facts are at a given time and place. That is not always easy to know. It may be especially difficult to know, when special interests have benefitted politically or financially from the pandemic restrictions, and therefore have every incentive to promote the belief that those restrictions are still urgently needed.

Similarly, it can be especially hard to know about the current incidence and consequences of racism, when racists do not publicly identify themselves. Moreover, people who have incentives to maximize fears of racism include politicians seeking to win votes by claiming to offer protection from racists, or leaders of ethnic protest movements who can use fears of racists to attract more followers, more donations and more power.

No sane person believes that there is zero racism in American society, or in any other society. Here it may be worth recalling what Edmund Burke said, back in the eighteenth century: “Preserving my principles unshaken, I reserve my activity for rational endeavours.”35 Our principles can reject racism completely. But neither a racial minority nor anyone else has unlimited time, unlimited energy or unlimited resources to invest in seeking out every possible trace of racism— or to invest in the even less promising activity of trying to morally enlighten racists.

Even if, by some miracle, we could get to zero racism, we already know, from the history of American hillbillies— who are physically indistinguishable from other white people, and therefore face zero racism— that even this is not enough to prevent poverty. Meanwhile, black married-couple families, who are not exempt from racism, have nevertheless had poverty rates in single digits, every year for more than a quarter of a century.36 We also know that racists today cannot prevent black young people from becoming pilots or even generals in the Air Force, or from becoming millionaires, billionaires or President of the United States.

Just as we need to recognize when the power of a pandemic has at least subsided, so that we can use more of our limited time, energy and resources against other dangers, so we also need to pay more attention to other dangers besides racism. That is especially so for the younger generation, who need to deal with the problems and dangers actually confronting them, rather than remain fixated on the problems and dangers of the generations before them. If racists cannot prevent today’s minority young people from becoming pilots, the teachers unions can— by denying them a decent education, in schools whose top priorities are iron-clad job security for teachers, and billions of dollars in union dues for teachers unions.37

It is by no means certain whether the enemies of American minorities are able to do them as much harm as their supposed “friends” and “benefactors.” We have already seen some of the harm that minimum wage laws have done, by denying black teenagers the option of taking jobs that employers are willing to offer, at pay that teenagers are willing to accept, because unaffected third parties choose to believe that they understand the situation better than all the people directly involved.

Another “benefit” for minorities, from those with the social justice vision and agenda, is “affirmative action.” This is an issue often discussed in terms of the harm done to people who would have gotten particular jobs, college admissions or other benefits, if these had been awarded on the basis of qualifications, rather than demographic representation. But the harm done to the supposed beneficiaries also needs to be understood— and that harm can be even worse.

This possibility especially needs to be examined, because it goes completely counter to the prevailing social justice agenda and its narrative about the sources of black Americans’ advancement. In that narrative, blacks’ rise out of poverty was due to the civil rights laws and social welfare policies of the 1960s, including affirmative action. An empirical test of that narrative is long overdue.

Affirmative Action

In the prevailing narrative on the socioeconomic progress of black Americans, statistical data have been cited, showing declining proportions of the black population living in poverty after the 1960s, and rising proportions of the black population employed in professional occupations, as well as having rising incomes. But, as with many other statements about statistical trends over time, the arbitrary choice of which year to select as the beginning of the statistical evaluation can be crucial in determining the validity of the conclusions.

If the statistical data on the annual rate of poverty among black Americans were presented beginning in 1940— that is, 20 years before the civil rights laws and expanded social welfare state policies of the 1960s— the conclusions would be very different.

These data show that the poverty rate among blacks fell from 87 percent in 1940 to 47 percent over the next two decades38— that is, before the major civil rights laws and social welfare policies of the 1960s. This trend continued during and after the 1960s, but it did not originate then and did not accelerate then. The poverty rate among blacks fell an additional 17 points, to 30 percent in 1970— a rate of decline only slightly lower than the 20 points per decade of the two preceding decades, but certainly not higher. The black poverty rate fell yet again during the 1970s, from 30 percent in 1970 to 29 percent in 1980.39 This one-percentage-point decline in poverty was clearly much less than in the three preceding decades.
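For readers who want the per-decade arithmetic spelled out, here is a minimal sketch in Python, using only the poverty rates cited above (87, 47, 30 and 29 percent); nothing in it is new data:

```python
# Per-decade arithmetic behind the poverty rates cited in the text.
poverty_rate = {1940: 87, 1960: 47, 1970: 30, 1980: 29}  # percent

years = sorted(poverty_rate)
for start, end in zip(years, years[1:]):
    drop = poverty_rate[start] - poverty_rate[end]   # percentage points
    per_decade = drop / ((end - start) / 10)
    print(f"{start}-{end}: {drop} pts total, {per_decade:.0f} per decade")
```

Run as written, this prints 20 points per decade for 1940 to 1960, 17 for the 1960s and 1 for the 1970s, which is the comparison the text is making: the steepest declines came before the policies of the 1960s.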

Where does affirmative action fit in with this history? The first use of the phrase “affirmative action” in a Presidential Executive Order was by President John F. Kennedy in 1961. That Executive Order said that federal contractors should “take affirmative action to ensure that applicants are employed, and that employees are treated during employment, without regard to their race, creed, color, or national origin.”40 In other words, at that point affirmative action meant equal opportunity for individuals, not equal outcomes for groups. Subsequent Executive Orders by Presidents Lyndon B. Johnson and Richard Nixon made numerical group outcomes the test of affirmative action by the 1970s.

With affirmative action now transformed from equal individual opportunity to equalized group outcomes, many people saw this as a more beneficial policy for blacks and other low-income racial or ethnic groups to whom this principle applied. Indeed, it was widely regarded as axiomatic that this would better promote their progress in many areas. But the one-percentage-point decline in black poverty during the 1970s, after affirmative action meant group preferences or quotas, goes completely counter to the prevailing narrative.

Over the years, as controversies raged about affirmative action as group preferences, the prevailing narrative defended affirmative action as a major contributor to black progress. As with many other controversial issues, however, a consensus of elite opinion has been widely accepted, with little attention to vast amounts of empirical evidence to the contrary. Best-selling author Shelby Steele, whose incisive books have explored the rationales and incentives behind support for failed social policies,41 cited an encounter he had with a man who had been a government official involved in the 1960s policies:

“Look,” he said irritably, “only— and I mean only— the government can get to that kind of poverty, that entrenched, deep poverty. And I don’t care what you say. If this country was decent, it would let the government try again.”42

Professor Steele’s attempt to focus on facts about the actual consequences of various government programs of the 1960s brought a heated response:

“Damn it, we saved this country!” he all but shouted. “This country was about to blow up. There were riots everywhere. You can stand there now in hindsight and criticize, but we had to keep the country together, my friend.”43

From a factual standpoint, this former 1960s official had the sequence completely wrong. Nor was he unique in that. The massive ghetto riots across the nation began during the Lyndon Johnson administration, on a scale unseen before.44 The riots subsided after that administration ended, and its “war on poverty” programs were repudiated by the next administration. Still later, during the eight years of the Reagan administration, which rejected that whole approach, there were no such massive waves of riots.

Of course politicians have every incentive to depict black progress as something for which politicians can take credit. So do social justice advocates, who supported these policies. But that narrative enables some critics to complain that blacks ought to lift themselves out of poverty, as other groups have done. Yet the cold facts demonstrate that this is largely what blacks did, during decades when blacks did not yet have even equal opportunity, much less group preferences.

These were decades when neither the federal government, the media, nor intellectual elites paid anything like the amount of attention to blacks that they did from the 1960s on. As for the attention paid to blacks by governments in Southern states during the 1940s and 1950s, that was largely negative, in accordance with the racially discriminatory laws and policies at that time.

Among the ways by which many blacks escaped from poverty in the 1940s and 1950s was migrating out of the South, gaining better economic opportunities for adults and better education for their children.45 The Civil Rights Act of 1964 was an overdue major factor in ending the denial of basic Constitutional rights to blacks in the South.46 But there is no point in trying to make it also the main source of the black rise out of poverty. The rate of rise of blacks into the professions more than doubled from 1954 to 196447— that is, before the historic Civil Rights Act of 1964. Nor can the political left act as if the Civil Rights Act of 1964 was solely their work. The Congressional Record shows that a higher percentage of Republicans than Democrats voted for that Act.48

In short, during the decades when the rise of black Americans out of poverty was greatest, the causes of that rise were most like the causes of the rise of other low-income groups in the United States, and in other countries around the world. That is, it was primarily a result of the individual decisions of millions of ordinary people, on their own initiative, and owed little to charismatic group leaders, to government programs, to intellectual elites or to media publicity. It is doubtful if most Americans of that earlier era even knew the names of leaders of the most prominent civil rights organizations of that era.

Affirmative action in the United States, like similar group preference policies in other countries, seldom provided much benefit for people in poverty.49 A typical teenager in a low-income minority community in the United States, having usually gotten a very poor education in such neighborhoods, is unlikely to be able to make use of preferential admissions to medical schools, when it would be a major challenge just to graduate from an ordinary college. In a much poorer country, such as India, it could be an even bigger challenge for a rural youngster from one of the “scheduled castes”— formerly known as “untouchables.”50

Both in the United States and in other countries with group preference policies, benefits created for poorer groups have often gone disproportionately to the more prosperous members of these poorer groups51— and sometimes to people more prosperous than the average member of the larger society.52

The central premise of affirmative action is that group “under-representation” is the problem, and proportional representation of groups is the solution. This might make sense if all segments of a society had equal capabilities in all endeavors. But neither social justice advocates, nor anyone else, seems able to come up with an example of any such society today, or in the thousands of years of recorded history. Even highly successful groups have seldom been highly successful in all endeavors. Asian Americans and Jewish Americans are seldom found among the leading athletic stars, nor German Americans among charismatic politicians.

At the very least, it is worth considering such basic facts as the extent to which affirmative action has been beneficial or harmful, on net balance, for those it was designed to help— in a world where specific developed capabilities are seldom equal, even when reciprocal inequalities are common. One example is the widespread practice of admitting members of low-income minority groups to colleges and universities under less stringent requirements than other students have to meet.

Such affirmative action in college admissions policies has been widely justified on the ground that few students educated in the public schools in low-income minority neighborhoods have the kind of test scores that would get them admitted to top-level colleges and universities otherwise. So group preferences in admissions are thought to be a solution.

Despite the implicit assumption that students will get a better education at a higher-ranked institution, there are serious reasons to doubt it. Professors tend to teach at a pace, and at a level of complexity, appropriate for the particular kinds of students they are teaching. A student who is fully qualified to be admitted to many good quality colleges or universities can nevertheless be overwhelmed by the pace and complexity of courses taught at an elite institution, where most of the students score in the top ten percent nationwide— or even the top one percent— on the mathematics and verbal parts of the Scholastic Aptitude Test (SAT).

Admitting a student who scores at the 80th percentile to such an institution, because that student is a member of a minority group, is no favor. It can turn someone who is fully qualified for success into a frustrated failure. An intelligent student who scored at the 80th percentile in mathematics can find the pace of math courses far too fast to keep up with, while the professor’s brief explanations of complex principles may be readily understood by the other students in the class, who scored at the 99th percentile. They may already have learned half this material in high school. It can be much the same story with the amount and complexity of readings assigned to students in an elite academic institution.

None of this is news to people familiar with top elite academic institutions. But many young people from low-income minority communities are the first in their families to go to college. When such a person is being congratulated for having been accepted into some big-name college or university, they may not see the great risks in this situation. Given the low academic standards in most public schools in low-income minority communities, the supposedly lucky student may have been getting top grades with ease in high school, and can be heading for a nasty shock when confronted with a wholly different situation at the college level.

What is at issue is not whether the student is qualified to be in college, but whether that student’s particular qualifications are a match or a mismatch with the qualifications of the other students at the particular college or university that grants admission. Empirical evidence suggests that this can be a crucial factor.

In the University of California system, under affirmative action admissions policies, the black and Hispanic students admitted to the top-ranked campus at Berkeley had SAT scores just slightly above the national average. But the white students admitted to UC Berkeley had SAT scores more than 200 points higher— and the Asian American students had SAT scores somewhat higher than the whites.53

In this setting, most black students failed to graduate— and, as the number of black students admitted increased during the 1980s, the number graduating actually decreased.54

California voters put an end to affirmative action admissions in the University of California system. Despite dire predictions that there would be a drastic reduction in the number of minority students in the UC system, there was in fact very little change in the total number of minority students admitted to the system as a whole. But there was a radical redistribution of minority students among the different campuses across the state.

There was a drastic reduction in the number going to the two top-ranked campuses— UC Berkeley and UCLA. Minority students were now going to those particular UC campuses where the other students had academic backgrounds more similar to their own, as measured by admissions test scores. Under these new conditions, the number of black and Hispanic students graduating from the University of California system as a whole rose by more than a thousand students over a four-year span.55 There was also an increase of 63 percent in the number graduating in four years with a grade point average of 3.5 or higher.56

The minority students who fail to graduate under affirmative action admissions policies are by no means the only ones who are harmed by being admitted to institutions geared to students with better pre-college educational backgrounds. Many minority students who enter college expecting to major in challenging fields like science, technology, engineering or mathematics— called STEM fields— are forced to abandon such tough subjects and concentrate in easier fields. After affirmative action in admissions was banned in the University of California system, not only did more minority students graduate, but the number graduating with degrees in the STEM fields also rose by 51 percent.57

What is crucial from the standpoint of minority students being able to survive and flourish academically is not the absolute level of their pre-college educational qualifications, as measured by admissions test scores, but the difference between their test scores and the test scores of the other students at the particular institutions they attend. Minority students who score well above the average of American students as a whole on college admissions tests can nevertheless be turned into failures by being admitted to institutions where the other students score even farther above the average of American students as a whole.

Data from the Massachusetts Institute of Technology illustrate this situation. Black students at MIT had average SAT math scores at the 90th percentile. But, although these students were in the top ten percent of American students in mathematics, they were in the bottom 10 percent of students at MIT, where math scores were at the 99th percentile. The outcome was that 24 percent of these extremely well-qualified black students failed to graduate from MIT, and those who did graduate were concentrated in the lower half of their class.58 In most American academic institutions, these same black students would have been among the best students on campus.
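The percentile arithmetic here is worth spelling out. Below is a minimal sketch using Python’s statistics module; the national scale and the cohort’s mean and spread are invented assumptions for illustration, not MIT or College Board data. It shows how a score at the 90th percentile nationally can sit near the bottom decile of a cohort drawn from the top few percent:

```python
# How a nationally high score can rank low inside an elite cohort.
# All distribution parameters below are hypothetical.
from statistics import NormalDist

national = NormalDist(mu=500, sigma=100)   # assumed national test-score scale
score_90th = national.inv_cdf(0.90)        # ~628 on this scale

# Assume the elite cohort averages near the 98th percentile nationally,
# with a much narrower spread than the national population.
cohort = NormalDist(mu=national.inv_cdf(0.98), sigma=60)

print(f"90th percentile nationally: {score_90th:.0f}")
print(f"Standing within the cohort: {cohort.cdf(score_90th):.0%}")  # about 10%
```

Under these assumed numbers, the same score that beats 90 percent of test-takers nationally beats only about 10 percent of the cohort, which is the mismatch the MIT figures describe.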

Some people might say that even those students who were concentrated in the lower half of their class at MIT gained the advantage of having been educated at one of the leading engineering schools in the world. But this implicitly assumes that students automatically get a better education at a higher-ranked institution. We cannot dismiss the possibility that these students may learn less where the pace and complexity of the education are geared to students with far stronger pre-college educational backgrounds.

To test this possibility, we can turn to some fields, such as medicine and the law, where there are independent tests of how much the students have learned, after they have completed their formal education. The graduates of both medical schools and law schools cannot become licensed to practice their professions without passing these independent tests.

A study of five state-run medical schools found that the black-white difference in passing the U.S. Medical Licensing Examination was correlated with the black-white difference on the Medical College Admission Test before entering medical school.

In other words, blacks trained at medical schools where there was little difference between black and white students— in their scores on the test that got them admitted to medical school— had less difference between the races in their rates of passing the Medical Licensing test years later, after graduating from medical school.59 The success or failure of blacks on these tests after graduation was correlated more with whether they were trained alongside students whose admissions test scores were similar to theirs than with whether the medical school was highly ranked or lower ranked. Apparently they learned better where they were not mismatched by affirmative action admissions policies.
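What “correlated” means here can be made concrete with a toy computation. The numbers below are invented solely to illustrate the pattern the study reports, in which schools with a larger black-white admissions-test gap also showed a larger licensing-exam gap:

```python
# Toy illustration of the across-school correlation described above.
# All gap figures are hypothetical.
from statistics import correlation  # available in Python 3.10+

admission_gap = [2, 5, 8, 11, 14]   # per-school black-white gap, test points
pass_rate_gap = [5, 6, 12, 14, 21]  # per-school gap in pass rates, percentage points

print(f"r = {correlation(admission_gap, pass_rate_gap):.2f}")  # prints r = 0.97
```

An r near 1.0 across schools is exactly the reported pattern: where the admissions gap was small, the later licensing gap was small as well.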

There were similar results in a comparison of law school graduates who took the independent bar examination, in order to become licensed as lawyers. George Mason University law school’s student body as a whole had higher law school admissions test scores than the student body at the Howard University law school, a predominantly black institution. But the black students at both institutions had law school admissions test scores similar to each other. The net result was that black students entered the law school at George Mason University with admissions test scores lower than those of the other law school students there. But apparently not so at Howard University.

Data on the percentage of black students admitted to each law school who both graduated from law school and passed the bar examination on the first try showed that 30 percent of the black students at George Mason University law school did so— compared to 57 percent of the black students from the Howard University law school who did so.60 Again, the students who were mismatched did not succeed as well as those who were not. As with the other examples, the students who were not mismatched seemed to learn better when taught in classes where the other students had educational preparation similar to their own.

These few examples need not be considered definitive. But they provide data that many other institutions refuse to release. When UCLA Professor Richard H. Sander sought to get California bar examination data, in order to test whether affirmative action admissions policies produced more black lawyers or fewer black lawyers, a lawsuit was threatened if the California Bar Association released that data.61 The data were not released. Nor is this an unusual pattern. Academic institutions across the country, that proclaim the benefits of affirmative action “diversity,” refuse to release data that would put such claims to the test.62

A study that declared affirmative action admissions policies a success— The Shape of the River by William Bowen and Derek Bok— was widely praised in the media. But its authors refused to let critics see the raw data behind their conclusions— conclusions very different from those of other studies, whose authors did make their data available.63 Moreover, other academic scholars found much to question about the conclusions reached by former university presidents Bowen and Bok.64

Where damaging information about the actual consequences of affirmative action admissions policies is brought to light and creates a scandal, the response has seldom been to address the issue, but instead to denounce the person who revealed the scandalous facts as a “racist.” This was the response when Professor Bernard Davis of the Harvard medical school said in the New England Journal of Medicine that black students there, and at other medical schools, were being granted diplomas “on a charitable basis.” He called it “cruel” to admit students unlikely to meet medical school standards, and even more cruel “to abandon those standards and allow the trusting patients to pay for our irresponsibility.”65

Although Professor Davis was denounced as a “racist,” black economist Walter E. Williams had learned of such things elsewhere,66 and a private communication from an official at the Harvard medical school, some years earlier, showed that such things were already being proposed.67

Similarly, when a student at Georgetown University revealed data showing that the median score at which black students were admitted to that law school was lower than the test score at which any white student was admitted, the response was to denounce him as a “racist,” rather than to concentrate on the serious issue raised by that revelation.68 That median score, incidentally, was at the 70th percentile, so these were not “unqualified” students, but students who would probably have had a better chance of success at some other law schools, and later when confronting the bar examination required to practice law.

Being a failure at an elite institution does a student no good. But the tenacity with which academic institutions resist anything that might force them to abandon counterproductive admissions practices suggests that these practices may be doing somebody some good. Even after California voters ended affirmative action admissions practices in the University of California system, there were continuing efforts to circumvent this prohibition.69 Why? What good does having a visible minority student presence on campus do, if most of them do not graduate?

One clue might be what many colleges have long done with their athletic teams in basketball and football, which can bring in millions of dollars in what are classified as “amateur” sports. Some successful college football coaches have incomes higher than the incomes of their college or university presidents. But the athletes on their teams have been paid nothing70 for spending years providing entertainment for others, at the risk of bodily injuries— and the perhaps greater and longer-lasting risk to their character, from spending years pretending to be getting an education, when many are only doing enough to maintain their eligibility to play. An extremely small percentage of college athletes in basketball and football go on to a career in professional sports.

A disproportionate number of college basketball and football stars are black71— and academic institutions have not hesitated to misuse them in these ways. So we need not question whether these academic institutions are morally capable of bringing minority youngsters on campus to serve the institution’s own interests. Nor need we doubt academics’ verbal talents for rationalization, whether trying to convince others or themselves.72

The factual question is simply whether there are institutional interests being served by having a visible demographic representation of minority students on campus, whether those students get an education and graduate or not. The hundreds of millions of dollars of federal money that comes into an academic institution annually can be put at risk if ethnic minorities are seriously “under-represented” among the students, since that raises the prospect of under-representation being equated with racial discrimination. And that issue can be a legal threat to vast amounts of government money.

Nor is this the only outside pressure on academic institutions to continue affirmative action admissions policies that are damaging to the very groups supposedly being favored. George Mason University’s law school was threatened with losing its accreditation if it did not continue admitting minority students who did not have qualifications as high as other students, even though data showed that this was not in the minority students’ own best interests.73 The reigning social justice fallacy that statistical disparities in group representation mean racial discrimination has major impacts. Minority students on campus are like human shields used to protect institutional interests— and casualties among human shields can be very high.

Many social policies help some groups while harming other groups. Affirmative action in academia manages to inflict harm both on the students who were denied admission despite their qualifications, and on many of the students who were admitted to institutions where they were more likely to fail, even when they were fully qualified to succeed in other institutions.

Economic self-interest is by no means the only factor leading some individuals and institutions to persist in demonstrably counterproductive affirmative action admissions policies. Ideological crusades are not readily abandoned by people who are paying no price for being wrong, and who could pay a heavy price— personally and socially— for breaking ranks under fire and forfeiting both a cherished vision and a cherished place among fellow elites. As with the genetic determinists and the “sex education” advocates, there have been very few people willing to acknowledge facts that contradict the prevailing narrative.

Even where there is good news about people that surrogate decision-makers are supposedly helping, it seldom gets much attention when the good results have been achieved independently of surrogate decision-makers. For example, the fact that most of the rise of blacks out of poverty occurred in the decades before the massive government social programs of the 1960s, before the proliferation of charismatic “leaders,” and before widespread media attention, has seldom been mentioned in the prevailing social justice narrative.

Neither has there been much attention paid to the fact that homicide rates among non-white males (who were overwhelmingly black males in those years) went down by 18 percent during the 1940s, followed by a further decline of 22 percent in the 1950s. Then that trend suddenly reversed in the 1960s,74 when criminal laws were weakened, amid heady catchwords like “root causes” and “rehabilitation.” Perhaps the most dramatic— and most consequential— contrast between the pre-1960s progress of blacks and negative trends in the post-1960s era was that the proportion of black children born to unmarried women quadrupled, from just under 17 percent in 1940 to just over 68 percent at the end of the century.75

Intellectual elites, politicians, activists and “leaders”— who took credit for the black progress that supposedly all began in the 1960s— took no responsibility for painful retrogressions that demonstrably did begin in the 1960s.

Such patterns are not peculiar to blacks or to the United States. Group preference policies in other countries did little for people in poverty, just as affirmative action did little for black Americans in poverty. The benefits of preferential treatment in India, Malaysia and Sri Lanka, for example, tended to go principally to more fortunate people in low-income groups in these countries,76 just as in the United States.77

IMPLICATIONS

Where, fundamentally, did the social justice vision go wrong? Certainly not in hoping for a better world than the world we see around us today, with so many people suffering needlessly, in a world with ample resources to have better outcomes. But the painful reality is that no human being has either the vast range of consequential knowledge, or the overwhelming power, required to make the social justice ideal become a reality. Some fortunate societies have seen enough favorable factors come together to create basic prosperity and common decency among free people. But that is not enough for many social justice crusaders.

Intellectual elites may imagine that they have all the consequential knowledge required to create the social justice world they seek, despite considerable evidence to the contrary. But, even if they were somehow able to handle the knowledge problem, there still remains the problem of having enough power to do all that would need to be done. That is not just a problem for intellectual elites. It is an even bigger problem— and danger— for the people who might give them that power.

The history of totalitarian dictatorships that arose in the twentieth century, and were responsible for the deaths of millions of their own people in peacetime, should be an urgent warning against putting too much power in the hands of any human beings. That some of these disastrous regimes were established with the help of many sincere and earnest people, seeking high ideals and a better life for the less fortunate, should be an especially relevant warning to people seeking social justice, in disregard of the dangers.

It is hard to think of any power exercised by human beings over other human beings that has not been abused. Yet we must have laws and governments, because anarchy is worse. But we cannot just keep surrendering more and more of our freedoms to politicians, bureaucrats and judges— who are what elected governments basically consist of— in exchange for plausible-sounding rhetoric that we do not bother to subject to the test of facts.

Among the many facts that need to be checked is the actual track record of crusading intellectual elites, seeking to influence public policies and shape national institutions, on a range of issues extending from social justice to foreign policies and military conflict.

As regards social justice issues in general, and the situation of the poor in particular, intellectual elites who have produced a wide variety of policies claimed to help the poor have shown a great reluctance to put the actual consequences of those policies to any empirical test. Often they have been hostile to others who have put such policies to an empirical test. Where social justice advocates have had the power to do so, they have often blocked access to data sought by scholars who wanted to test empirically the consequences of such policies as affirmative action in academic admissions.

Perhaps most surprising of all, many social justice advocates have shown little or no interest in remarkable examples of progress by the poor— when that progress was not based on the kinds of policies promoted in the name of social justice. The striking progress made by black Americans in the decades before the 1960s has been widely ignored. So has the demonstrable harm suffered by black Americans after the social justice policies of the 1960s— including a sharp reversal of the decline in homicide rates, and a quadrupling of the proportion of black children born to unmarried women, after government policies made fathers a negative factor for mothers seeking welfare benefits.

Social justice advocates who denounce elite New York City public high schools that require an entrance examination for admission pay no attention to the fact that black student admissions to such schools were much higher in the past— before the elementary schools and middle schools in black communities were ruined by the kinds of policies favored by social justice advocates. Back in 1938, the proportion of black students who graduated from elite Stuyvesant High School was almost as high as the proportion of blacks in the New York City population.78

As late as 1971, there were more black students than Asian students at Stuyvesant.79 As of 1979, blacks were 12.9 percent of Stuyvesant’s students, but that share declined to 4.8 percent by 1995,80 and to just 1.2 percent by 2012.81 Over a span of 33 years, the proportion of black students at Stuyvesant High School fell to less than one tenth of what it had been. Neither of the usual suspects— genetics or racism— can explain these developments in those years. Nor is there any evidence of soul-searching by social justice advocates about how their own ideas might have played a role in all this.

On an international scale, and on issues besides education, those with the social justice vision often fail to show any serious interest in the progress of the less fortunate when it happens in ways unrelated to the social justice agenda. The rate of socioeconomic progress of black Americans before the 1960s is a classic example. But there has been a similar lack of interest in how poverty-stricken Eastern European Jewish immigrants, living in slums, rose to prosperity, or how similarly poverty-stricken Japanese immigrants in Canada did the same. In both cases, their current prosperity has been dealt with rhetorically, by calling their achievements “privilege.”82

There have been many examples of peoples and places around the world that lifted themselves out of poverty in the second half of the twentieth century. These would include Hong Kong,83 Singapore,84 and South Korea.85 In the last quarter of the twentieth century, the huge nations of India86 and China87 saw vast millions of poor people rise out of poverty. The common denominator in all these places was that their rise out of poverty began after government micro-managing of the economy was reduced. This was especially ironic in the case of China, with a communist government.

With social justice advocates supposedly concerned with the fate of the poor, it may seem strange that they have paid remarkably little attention to places where the poor have risen out of poverty at a dramatic rate and on a massive scale. That at least raises the question whether their priority is the poor themselves, or their own vision of the world and their own role in that vision.

What are those of us who are not followers of the social justice vision and its agenda to do? At a minimum, we can turn our attention from rhetoric to the realities of life. As the great Supreme Court Justice Oliver Wendell Holmes said, “think things instead of words.”88 Today it is especially important to get facts, rather than catchwords. These include not only current facts, but also the vast array of facts about what others have done in the past— both the successes and the failures. As the distinguished British historian Paul Johnson said:

The study of history is a powerful antidote to contemporary arrogance. It is humbling to discover how many of our glib assumptions, which seem to us novel and plausible, have been tested before, not once but many times and in innumerable guises; and discovered to be, at great human cost, wholly false.89