Epilogue
RACE AND SOCIETY
Up to this point we have been mostly concerned with what races most emphatically are not: meaningful units into which members of the species Homo sapiens can be grouped. Still, for reasons we have sporadically touched upon, it is useless to deny that to many people the idea of race feels real; and indeed, in most human societies it is a concept that unconsciously or otherwise pervades people’s experience, mediated by ways of thinking about human variety and interaction that are absorbed remarkably early in life. Those social attitudes are often reinforced by the penchant of governments for classifying their citizens along racial lines.
An excellent example of how racial attitudes can insidiously permeate a society, even one in which “affirmative action” policies are widely implemented to address inequality among social groups, is provided by a recent analysis of the extensive opposition to paying college athletes, the only participants in a hugely lucrative industry who do not get lavishly remunerated. True, top college athletes receive scholarships to cover the costs of their education—although the rigor of that education is sometimes compromised in the interests of maximizing their availability to generate income for the institutions for which they play. But even under the best of circumstances, the financial value of the education the athletes receive is peanuts compared with the huge sums that colleges rake in from sports. In college football alone, and merely for TV rights to broadcast the seven annual playoff games, ESPN is contracted to pay the National Collegiate Athletic Association (NCAA) $7.3 billion from 2014 to 2026.
Rewarding top student athletes with scholarships is clearly the equivalent of paying star actors chump change for appearing in blockbuster movies; and this striking departure from standard ideas of fairness is normally justified by resorting to the “special status” of amateur sport, in which supposedly it is not winning or losing but the game itself that is important. According to the NCAA, “one of the biggest reasons fans like college sports is that they believe the athletes are really students who play for the love of the sport.” Such self-serving rationalizations have deservedly attracted scorn from commentators. One of them, the author Taylor Branch, wrote in the Atlantic that “two of the noble principles on which the NCAA justifies its existence—‘amateurism’ and the ‘student-athlete’—are cynical hoaxes, legalistic confections propagated by the universities so that they can exploit the skills and fame of young athletes” (Branch 2015).
While the motives of the NCAA and the colleges in all of this are transparent, those of the audience are more opaque. Accordingly, a group led by the political scientist Kevin Wallsten of California State University–Long Beach recently looked in detail at the fan base to which the NCAA typically appeals to justify its inequitable position. True, a 2015 Marist poll had found that a whacking 65 percent of respondents overall opposed paying college football and basketball players, even though African Americans are disproportionately represented among those athletes. But when Wallsten and colleagues broke down respondents in the 2014 Cooperative Congressional Election Study (CCES) by race, the researchers discovered that those who identified themselves as black were significantly more likely than self-identified whites to support paying those college athletes. More than half of the blacks surveyed wanted them to be paid, in contrast to fewer than a quarter of the whites.
Digging deeper, the researchers undertook a survey of the 2014 CCES sample specifically to discover whether prejudiced racial views influenced the white respondents’ antipathy toward paying college athletes. And, disturbingly, they reported that the more negatively white respondents felt about blacks, the more vehemently they opposed paying the athletes. To double-check this result, some white respondents were shown pictures of young black men before being asked their feelings about paying college athletes, while others were not. Tellingly, those who had been shown the pictures were significantly more likely to say they opposed payment; and the strength of that opposition scaled with the degree of their expressed resentment of blacks.
On the strength of this example alone, it is clear that America still carries a damaging burden of racial resentment and prejudice. The days of overt lynchings may be over, but those of hugely disproportionate incarcerations of disadvantaged African Americans most certainly are not; and even such ostensibly race-neutral topics as whether or not college athletes should be paid for their efforts are evidently deeply influenced by racial feelings.
This makes it very plain that, although stratifying respondents by race in the college sports study might have been a biologically meaningless exercise, in cultural terms it was not. Human beings are, and always have been, intensely social creatures, and as such they crave group identity. The evolutionary psychologist Robin Dunbar has pointed out that in practical terms the maximum number of close associates the average person can keep track of is a mere 150 or so; and although our modern urban society requires us to interact closely with significantly larger numbers of individuals over time, there is certainly a limit to the number of people we can directly relate to on a personal level (although there always seems to be room for one more).
In any complex human society, individuals need to identify with larger groups than the ones composed of the people they know and can identify with directly; and the only way to define groups of that kind is not as the sum of the individuals who compose them, but more abstractly, via some perceived property: nationality group, religious group, interest group, social stratum, and so on. And of all such categories, the racial grouping is intuitively among the most obvious, because it is signaled by physical appearance and can be perceived at a distance. Until we look more closely, of course.
We have already discussed at length the biological fallacy inherent in trying to classify people into racial groups; and now in closing we need to draw attention once again to the substantial disconnect that exists between perceived race and culture. Within a given society, racial self-attribution might well broadly indicate an individual’s affinity with a specific cultural group. But identity is not a biological attribute; and indeed, in a multiethnic society it is something an individual often merely assumes. What is more, if it is the cultural entity that is of interest, it is surely much more accurate and meaningful to denote and characterize it in cultural terms.
Across societies and continents, of course, the correlation between race and culture breaks down entirely. Knowing how an individual is racially classified doesn’t tell you much about him or her as an individual, or even about the social norms he or she espouses. A “black” person from Jackson and another from Juba or Jaffna will find that they have little in common, culturally or linguistically; and it is impossible from anyone’s racial classification, whether self-assigned or governmentally imposed, to predict such fundamental individual qualities as his or her personality type, brainpower, or talents.
Caution is also in order, because racial classification is something of a self-fulfilling prophecy that allows us to dig ourselves ever deeper into a self-sustaining cycle. For when we attempt to deconstruct the population into racial categories, even for the most benign of purposes such as the remedying of social inequities, we also find ourselves reinforcing those categories—and, incidentally, the entire litany of stereotypes that inevitably comes along with them. “Us versus them” lies at the heart of all internecine conflicts. It is surely much better to recognize immediately that breaking the population into supposed biological groupings helps perpetuate the divisions among those assumed groups. If we wish to indulge in social engineering, we would plainly be much better off identifying the cultural or economic groups we wish to habilitate and defining them in the appropriate terms.
But there remains that awkward matter of that innate human need to name and classify everything around us. This process of atomization and categorization is, after all, fundamental to language; and it is language that provides the armature for our intellectual understanding of the world around us, including our own species. Language demands that we give things names; and while people invariably exist along a spectrum of variation in any behavioral or physical quality you might want to specify, names are hard-and-fast categories. Even when we attach modifiers, the very act of naming presupposes archetypes. The bottom line is that when we ascribe a limited number of names to elements lying along a spectrum, we create a set of artificial and arbitrary categories; and if the underlying entities are multidimensional ones, varying in more respects than, say, the colors of the visible spectrum, the distortions only worsen as each new consideration is added—as, of course, inevitably happens when we carve up the gloriously heterogeneous human population into discrete “races.”
Take, for example, the doomed efforts of well-meaning biological anthropologists of the immediate post–World War II period to reclassify variation within our species Homo sapiens. With the racial atrocities of the 1930s and 1940s only too fresh in their minds, they were painfully aware of the dangers and difficulties attendant on biologically classifying people; but many were not yet able to extricate themselves from the feeling that, since variation was evidently there, scientists should nonetheless classify it. This was clearly the point of view of the thoughtful developmental anthropologist Stanley Garn, who attempted a new taxonomy of Homo sapiens in 1961. Having settled, after much agonizing, on nine major “geographic races,” Garn rapidly found himself bogged down in a morass of “local races” within each of these larger units; and he ultimately found it necessary to concede that each local race contained innumerable “micro-races.” Fortunately, he ran out of steam well before the potential infinite extension of this logic forced him to conclude that every individual belonged to his or her own unique race.
We mention Garn’s experience merely to illustrate the point that the deformation of fact introduced by the attempt to divide up a heterogeneous, complex, and dynamically reticulating entity such as Homo sapiens can only be eliminated by totally abandoning the effort to subdivide. The resulting categories cannot simply be fine-tuned to eliminate distortion. Anthropologists of Garn’s generation bore a burden of erroneous received wisdom that it proved difficult for many to shed, even after the experience of living through the darkest racial episode of recent history. But perhaps, going forward, the ultimate abandonment of scientific and official racial categorization will be made easier by the explicit realization that, certainly from a biological point of view, doing so will cost us absolutely nothing, intellectually, scientifically, or financially. After all, it is becoming increasingly evident that making racial distinctions within the human species not only adds no useful heuristic to our understanding of human variety, but makes any useful attempt to understand it harder.
All well and good; but concepts of race, explicit or otherwise, are clearly and tenaciously entrenched in almost everyone’s consciousness. And they still lie at the root of some of the most distressing and stubborn social problems that exist around the world. How can we eliminate them—or at least attenuate them—in favor of understanding variation within our species in more constructive and useful ways? Well, one size will clearly never fit all in a problem this vast. Racial attitudes are embedded in such a variety of different narratives, in such a multiplicity of cultures and contexts across the planet, that there will never be a single solution to the problem of easing the evils associated with the social perception of race. Governments can help to move the dial by passing legislation designed to guide and modify social attitudes over the long term; but a genuinely enduring commitment to clearly defined goals is invariably required, and such commitment is often hard to envisage. Still, in some places there do appear to be modest grounds for optimism, at least for those prepared to take a long view.
One of those places is the United States, where in 2013 the political scientists Tatishe Nteta and Jill Greenlee used the 2008 election of Barack Obama to the presidency to test the “impressionable years” hypothesis, which proposes that political events experienced in early youth can have lasting effects on attitudes held in later life. Previous studies had suggested that, among American whites born since the 1970s, long-held beliefs in the biological inferiority of American blacks had been consistently waning, and that Obama’s ascent to the presidency had accelerated this decline. Using data from the 2010 CCES survey, Nteta and Greenlee approached the issue from a slightly different perspective, asking not whether Obama’s election could be viewed as a product of this existing trend, but rather whether it had the potential to catalyze further change in social attitudes.
Nteta and Greenlee’s first finding was that an “Obama generation” of those whites who fell into the critical 18–25 “young adult” age bracket during Obama’s presidential campaign could indeed be identified, via racial attitudes that were consistently more liberal than those typical of earlier generations. Looking more closely, though, they additionally discovered that higher levels of education in this generation did not, as in its predecessors, correlate with reduced levels of racial resentment (as opposed to “old-fashioned racism,” which involves a more general belief in inferiority). They examined two possible explanations for this. One was that, by promoting individualistic beliefs, education may simultaneously sharpen negative judgments of blacks’ perceived work ethic. The other was that “if social norms regarding racial equality are increasingly widespread, the role of educational institutions in promoting racial equality may be less important” (Nteta and Greenlee 2013, 892). The authors leaned toward the more optimistic latter conclusion; but we cannot help observing that the issue would not exist were it not for the prior categorization of the populations concerned.
Nteta and Greenlee are frank about the difficulties involved in predicting the future of what is a very complex social dynamic. And, of course, their research did not touch on any potential evolution in the attitudes of black Americans toward white ones. But they end their study on a hopeful note, more encouraged by the apparent waning of “old-fashioned racism” than discouraged by the reverse effect of education on racial resentment—a complex of attitudes that will always be much more greatly affected by prevailing economic circumstances than by anything else. Nteta (website) has since observed: “The ascendance and eventual election of Obama may have led to the formation of a new generational grouping, and…this generation’s racial attitudes represent the culmination of the nation’s steady march toward racial reconciliation and equality.” Whether there will be good reason to maintain such optimism over the life of the succeeding administration remains to be seen; but the auguries are not necessarily good. In February 2017, the New York Times reported that Stephen K. Bannon, then President Donald Trump’s chief strategist, was versed in the works of Julius Evola, an Italian philosopher who considered the fascists “overly tame” on matters of race, and who has recently become a darling of the emerging “alt-right.” As science, race may (or should) be a dead issue; but it shows zombie-like tenacity on the social and political fronts.