I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.
—STEPHEN JAY GOULD1
As she stood in the doorway of the class surveying a sea of small, mostly dark, faces, Cathy* suddenly realized that she was clutching her handbag tightly, too tightly. She forced herself to relax and offer a smile she did not feel. “I don’t understand. Why is Edgar in a special education class?”
Cathy knew her son was bright. He loved school, read well above his third-grade level, and, in New Jersey, he had always hovered near the top of his class. She had ascribed his recent moodiness and reluctance to attend school to difficulty adjusting to their move to Fort Lauderdale, but when she prodded him, he blurted, “They have me in the class for slow kids, like I’m a dummy. Some of those kids can’t even read!”
Incredulous, Cathy dialed Teri Marche, Edgar’s teacher, and was appalled to hear that he had indeed been placed in special education. “Why did no one tell me?” she demanded. “We need to talk.”
On her ensuing tour of the school, Cathy noted that nearly all the children in the gifted program were white, and asked, “On what test scores are these placements based?”
Teri explained that each instructor’s assessment of the student, not standardized tests, determined placement. Cathy insisted, “My child is very intelligent. Test him.”
“Please, we can optimize Edgar’s training in these focused classes and he’ll perform better in the end. We must be realistic: the IQ gap is a fact, and that, unfortunately, explains the racial disparity we have here.”
Training? Now Cathy was really angry, and she whirled around to face Teri. “You don’t know my son’s IQ. I promise you that if he is not tested I will pursue legal recourse.” She turned to leave. “Thank you for your time.”
The next week, Edgar was tested and reassigned to a gifted class. In the following year, 2005, the Broward County Department of Education began administering standardized tests to all second graders.
As a result, the number of gifted African American children in the district soared 80 percent, and that of gifted Hispanic children skyrocketed 130 percent. Shortly thereafter, the policy shift resulted in a tripling of both black and Hispanic “gifted” students.2 None of the students’ IQs—or genes—had changed.
The average 15-point black-white gap in measured IQ—and the sort of rampant assumptions that Cathy faced—are more than fodder for political debates about inherent racial intelligence. The widespread (and inaccurate) belief that IQ limits are innate and permanent directly affects the trajectories of lives like Edgar’s, often imposing a low ceiling on an individual’s opportunities from an early age, despite our knowledge of environmental risks to IQ, such as iodine deficiency. The fact that iodine deficiency was responsible for a 15-point IQ gap between residents of different geographic areas should have impressed upon us the environment’s power to produce a similar 15-point gap between U.S. whites and African Americans, especially considering the extent to which this country remains geographically segregated along racial lines.
We also should have noted that addressing a damaging environmental factor is an effective tool for closing the gap. Iodine deficiency is far from the only environmental condition known to lower IQ, and it is very far from the most potent. Heavy metals such as lead, mercury, and arsenic, along with alcohol and many of the nearly 150,000 largely untested industrial chemicals (pesticides, phthalates, PCBs, halogenated hydrocarbons, and more), have been shown to depress IQ, stymie intelligence, and distort human behavior. Moreover, copious scientific data document how people of color suffer the effects of these “brain poisons” in wildly disproportionate numbers. Industrial waste dumps, workplace exposures, Superfund sites, and bus depots haunt their neighborhoods. Scientists are beginning to quantify these effects in a manner that supports the connection between IQ and environmental exposures in communities of color.
Yet we have largely failed to appreciate the well-documented connection between environmental exposures and lower intelligence and IQ. Certainly we have failed to act upon it. Acknowledging the potent effect of environmental poisons on intelligence would contradict the hereditarian theories of innate, permanent inferiority as expressed by IQ scores, theories that are uncritically accepted by many. The hereditarian view of IQ dominates the national perception of IQ as a measure of human potential. As I mention in the Introduction, hereditarianism’s racially loaded position makes for grabby headlines, and those who oppose it spend a great deal of time in point-by-point refutations of arguments and insults, so that instead of promulgating a more accurate view of intelligence and IQ we get a shrill, reductionist national conversation.
This myopia robs us of a potent tool—the reduction and removal of human exposures to environmental poisons that cause damage far beyond cancers, lung disease, and other serious physical illnesses. They also damage the brain and lower intelligence, creating not only individual tragedies but a needless collective loss of intelligence.
For a better understanding of what IQ is and isn’t, let’s go back to the beginning for a brief look at the nature of intelligence and IQ.
Intelligence seems necessary for material and social success, and is central to our idea of success in general: personal freedom, good grades, holding a meaningful, profitable job, providing for our families, maintaining a rewarding social network, financial security, and generally navigating the complexities of modern life in the West. However, if you’re not sure what, exactly, intelligence is, don’t worry: You’re in good company. When asked to look up from their tools and metrics to offer a definition, scholars who’ve devoted careers to refining our understanding of this mercurial attribute—or collection of attributes—struggle to define it.
“That’s a good question,” responded one researcher I interviewed, pausing briefly before adding, “A very good question,” and falling into silence.
Christopher Eppig responded, “The best definition that I’ve heard of intelligence is that intelligence is an emergent property of mental functioning. You can compare it to attractiveness, which is not based on one single trait like the distance between someone’s pupils. It emerges from thousands of traits upon a person’s body and mind and it’s not even the sum of lots of things, but it’s the interaction of these things that gives you an emergent property.”3
“A good definition of intelligence? That’s a great question,” said Brink Lindsey. “I don’t know. There’s really no definition except ‘performing well on IQ tests,’ although there is the speed of learning and retention and how fast one processes information and how well one retains it.”4
In 1969 Arthur Jensen threw down a modern hereditarian gauntlet when he alleged the genetic intellectual inferiority of African Americans to whites. In an essay for the Harvard Educational Review entitled “How Much Can We Boost IQ and Scholastic Achievement?,” Jensen opined that IQ scores indicate unchangeable, inborn, genetically inherited intelligence for all groups of people. Differences in IQ scores between races, he claimed, prove group differences in fixed native intelligence.5
But when asked to define intelligence, Jensen sidestepped the question, intoning, “Intelligence, like electricity, is easier to measure than to define.”6 Most high school physics students could refute this nonsensical aphorism by explaining that electricity is a form of energy arising from charged particles, whether static (an accumulation of charge) or dynamic (a current).
Jensen’s inability to define intelligence didn’t stop him from joining his forebears in ranking some Asians and Europeans near the top and the diverse denizens of Africa—from Nigerian urbanites to Tanzanian scientists to Kalahari bushmen to Congo “pygmies” to Ethiopian Jews—near the bottom of an IQ gradient that persists today.
Researchers write of intelligence, but they measure IQ, and these are not synonymous. “There is no direct test of general mental ability. What IQ tests measure is the display of particular cognitive skills such as vocabulary and reading comprehension,” explains Christopher Eppig. “Any conclusions about general mental ability are inferences drawn from the test-taker’s mastery of those various skills.”
In 2002, when IQ and the Wealth of Nations, by Richard Lynn and Tatu Vanhanen, ranked nations by their average IQ scores, most of Europe and the more affluent portions of Asia hovered a bit above 100 points. But the news for Africa, Hispanic nations, and South America was grimmer. Average IQs for most African states fell below 70 (except for South Africa, which came in at 72), although Ethiopia’s score of 63 was later revised to 71. Equatorial Guinea defined the nadir at 59. The authors suggested that these nations’ low average IQs denoted low intelligence among their citizens, which in turn explained the poverty and scant economic development in sub-Saharan countries. Low intelligence, indeed. By most measures, an IQ below 70 denotes mental retardation; thus Lynn and Vanhanen were claiming, in effect, that nearly all of Africa is mentally retarded.
Critics pointed out that their conclusion was unsupported by the data. Their methodology lacked both rigor and logic,7 and their IQ calculations relied on very small, non-representative samples, often drawn from older papers and assessments of dubious credibility.8
But even if their methodology had been sound, their conclusions would still have been unreliable. As Brink Lindsey points out, the very concept of comparative IQ scores is flawed.
[IQ] scores are only a good indicator of relative intellectual ability for people who have been exposed to equivalent opportunities for developing those skills—and who actually have the motivation to try hard on the test. IQ tests are good measures of innate intelligence—if all other factors are held steady. But if IQ tests are being used to compare individuals of wildly different backgrounds, then the variable of innate intelligence is not being tested in isolation. Instead, the scores will reflect some impossible-to-sort-out combination of ability and differences in opportunities and motivations.9
Girma Berhanu, a professor of education at the University of Gothenburg, agrees, adding, “Statistics can lead us to accurate conclusions only by using representative samples that are selected at random.” In Ethiopia, however, the average IQ score was calculated based on a sample of a mere 134 children in a single orphanage in Jimma. As Berhanu wrote, “An orphanage in Jimma in 1989 was an extraordinary and traumatic experience for children who were victims of famine, resettlement, and on a massive scale. The experience of orphaned children who survived harrowing experiences of death and starvation cannot be seen as a representative sample for IQ testing.”10
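Berhanu’s point about representative sampling can be made concrete with a small simulation. The sketch below uses entirely hypothetical numbers—a notional national distribution of “true” scores and a traumatized subgroup whose measured scores are depressed by circumstance—to show how an estimate built on a non-random, 134-child sample misleads:

```python
import random

random.seed(42)

# Hypothetical national population: "true" scores with mean 100, sd 15.
population = [random.gauss(100, 15) for _ in range(100_000)]

# Hypothetical traumatized subgroup (e.g., orphaned famine survivors):
# the same underlying abilities, but measured scores depressed by
# roughly 25 points through malnutrition, illness, disrupted schooling,
# and absent motivation to perform on the test.
subgroup = [score - 25 for score in random.sample(population, 500)]

# A non-random sample of 134 children drawn only from that subgroup,
# mirroring the single-orphanage sample Berhanu criticizes.
biased_sample = random.sample(subgroup, 134)

def mean(xs):
    return sum(xs) / len(xs)

print(f"True national mean:        {mean(population):6.1f}")    # ~100
print(f"Biased 134-child estimate: {mean(biased_sample):6.1f}")  # ~75
# The estimate reflects the subgroup's circumstances, not the nation's
# abilities, which is precisely the error Berhanu identifies.
```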
Thus, Lynn and Vanhanen’s rankings are nonsensical because the developing-world sites used in establishing IQs have so very little in common with Western and affluent Asian nations. In many such countries, compulsory education and literacy are relatively rare, and not all the tested subjects were fluent in English or in the language in which the test was administered.
As Berhanu points out, social conditions in very poor and unstable countries can militate against success in test-taking. The incentive to perform well on such lengthy, grueling tests is strong for Americans, who know that their future success in academia and industry hinges on a high score. This motivation doesn’t exist among test subjects in developing nations who are almost certainly correct in seeing no connection between a high score and their lot in life.
Moreover, health status stands between the people of the global South and a normal IQ score. Poor nutrition alone depresses brain development and intellectual functioning. So does exposure to neurotoxic agents, which is ubiquitous in countries where industrial emissions are rampant and all but uncontrolled. This means that even had testing practices in these countries been logical, meticulous, and fair, environmental factors alone would still cause a disparity in IQ scores.
In this respect, the denizens of low-IQ nations have a lot in common with residents of lower-IQ ethnic enclaves of the United States. Both suffer exposure to the same sorts of brain-damaging chemicals, with the same potent but underappreciated effects on cognition and intelligence, effects that register as depressed IQ scores.
The various IQ tests used in Lynn and Vanhanen’s book, and still in use elsewhere, have something else in common: They measure abilities that are important to success in relatively affluent Western cultures and far less commonly practiced elsewhere. The tests, for example, focus heavily on reading skills, which are not important in every culture. Developing nations that ranked low in IQ often lacked compulsory schooling, and many of those tested were rural agrarians who had rarely if ever held a pencil.
Some dismiss such rankings as not only scientifically flawed but racist, or at least ethnocentric, especially because many of the hereditarians who espouse them are funded and supported by politically active groups with eugenicist or anti-immigrant leanings, such as the Pioneer Fund. (It’s worth noting that Asians, not Europeans, often occupy the very apex so that these rankings serve to denigrate dark-skinned people, not to elevate the palest people.) Yet “racist” can be an unhelpful assessment. Surveys show that various groups define the term quite differently, so it often impedes rather than clarifies communication. Moreover, even if such IQ assessments are motivated by racial disdain, this doesn’t prove them wrong. IQ is a crude, inaccurate, and biased proxy for intelligence, but most scholars agree that it measures something to do with the capacity to learn, at least in the affluent West.
The assumption that IQ scores indicate inborn, racially dictated intellectual capacity is partly the product of bias, intentional or otherwise, that inaccurately depresses some groups’ IQs.11 But it also stems from misunderstandings about what IQ can and cannot measure.
Suppose, for the sake of argument, that the global rankings were accurate and valid. There would still have to be a reason that the measured IQ of the average Ethiopian is lower than that of the average U.S. resident, and identifying that reason would be a necessary first step toward a corrective.
Hereditarians say the reason is that the lower IQ scores of racial groups such as African Americans and Hispanics reflect intelligence that is
• innate,
• primarily genetic, and so impossible to prevent, and
• immutable, incapable of being raised by better education or by better physical and mental health.
Perhaps these assumptions, not the findings of lower relative IQs, are the most damning tenets of hereditarian dogma: it is not the fictitious assessment of lower IQs that is most malignant, but the unsupported assumptions about the nature of IQ and intelligence that invest those scores with dire meaning.
Fortunately, the hereditarians are wrong: The facts belie all their beliefs.
IQ is not innate. Studies that compare the IQs of mixed-race children reveal the determinant role of their various environments. In Germany, children with one parent of African descent and one of European descent who were raised in white households (such as the children of African American or African soldiers, children whose fathers returned to the States and whose mothers subsequently married white Germans) scored higher on IQ tests than mixed-race children from similar backgrounds who were raised in black households. The same thing happened when comparing mixed-race children in the United States.12
IQ is not “genetic.” Most experts consider intelligence to be multifactorial, and no genes have been proven to determine IQ.13 Furthermore, although we know that genetics is an important factor in most aspects of human health, the early experience of the brain and nervous system is far more important when it comes to intelligence. Pertinent studies include a German study in which the overall average IQ of boys with a white soldier father was found to be 96, while that of boys with a black soldier father was found to be 96.5: virtually identical, even though the latter boys were subjected to racial prejudice.14 A U.S. study that compared black and biracial children adopted by middle-class black or white parents produced similar results.
Genetic potential is just that—potential—and whatever potential a given fetus has, exposures and deprivation can ensure that his or her future IQ will be lower than average, sometimes devastatingly so. Whether a child in utero is exposed to alcohol during a key moment of development, assaulted by PCBs, arsenic, or mercury, or assailed by a pathogen associated with the loss of cognition or memory, it is the brain’s experience, not genetics, that will dictate the child’s intelligence and IQ.
IQ is not impervious to change. We have been watching average IQs change within populations for decades in the West as well as in some of those “irremediably low-IQ” nations.
Consider Kenya. IQ tests were administered twice in certain regions of Kenya, once in 1984 and again in 1998. The later test reported an average IQ at least 14 points higher than the first. This increase occurred far too swiftly to have been driven by genetics. However, this spike in average IQ coincided with a period of dramatic improvement in health status in the regions, including lowered rates of infectious disease. It is highly unlikely that these improvements were unrelated.
In fact, this IQ increase is a variant of the “Flynn effect,” named for James R. Flynn, whose 2012 book, Are We Getting Smarter? Rising IQ in the Twenty-First Century, documented an average IQ rise of three points per decade between 1932 and 1978 in the United States and some other Western nations. Subsequent analyses have confirmed this, such as a 2014 meta-analysis in the Psychological Bulletin, which calculated the same IQ gains between 1972 and 2006.15 This suggests that the scores reflect not innate intelligence but acquired skills.
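A back-of-the-envelope comparison, using only the figures cited in the two paragraphs above, shows how anomalous the Kenyan increase is under a genetic account:

\[
\text{Kenya: } \frac{\geq 14 \text{ points}}{14 \text{ years}} \approx 1 \text{ point per year}, \qquad \text{Flynn effect: } \frac{3 \text{ points}}{10 \text{ years}} = 0.3 \text{ point per year}.
\]

The Kenyan gain thus ran at roughly three times the classic Flynn rate, and genetic change in a population plays out over generations, not fourteen years.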
So IQ is indeed malleable—we just need to determine what causes it to change. If we do that, we will be empowered to craft strategies and tools for raising it within marginalized ethnic minority groups of African Americans, Hispanics, and Native Americans, just as we raised the low IQs within pockets of America by using iodine supplementation.
If IQ measures acquired skills (rather than innate ability), if it is affected most strongly by environmental forces (not transmitted by genetic ones), and if it depends upon a shared culture and strength of motivation to tell us anything meaningful about ability, why do some cling to notions that a racial difference in IQ means an innate, unchanging difference in African American intellectual potential?
To understand this, it is important to realize that the belief in the innate, irreversible intellectual inferiority of people of color predated any objective tests or measurements of their abilities. As I explain in Medical Apartheid: The Dark History of Medical Experimentation on Black Americans from Colonial Times to the Present, the assertion that African Americans and some other people of color were inferior and perhaps not even human was a necessary myth for justifying enslavement, and it was supported by the nineteenth century’s most prominent U.S. scientists, the American School of Ethnology. Intellectual inferiority was a key tenet of that mythology, repeated often in the scientific and popular literature.16
Nineteenth-century scientists portrayed the enslaved African American as inherently debased and permanently so: no amount of training, education, or good treatment could make him the equal of a white man. Most of these scientists were polygenists, who thought that African Americans and whites belonged to different species. They held that black Americans were physically inferior, dishonest, malingering, hypersexual, and indolent. This inferiority was documented in entire catalogs of black flaws that filled medical journals and textbooks.17
In 1839, Samuel George Morton published Crania Americana, a book written to demonstrate how human skull measurements, or craniometry, revealed a hierarchy of racial types. Morton determined that Caucasians had the largest skulls, and therefore the largest brains, and blacks the smallest. His measurements were kin to phrenology, which sought to determine character and intelligence by interpreting the shape of the skull.18
Nine years later, Louisiana’s Samuel A. Cartwright, M.D., suggested that blacks’ physical and mental defects made it impossible for them to survive without white supervision and care, alleging that the crania of blacks were 10 percent smaller than those of whites, preventing full development of the brain and causing a stunting of the intellect.
French scientist Louis-Pierre Gratiolet added that in the Negro “the cranium closes on the brain like a prison. It is no longer a temple divine, to use the expression of Malpighi, but a sort of helmet for resisting heavy blows.”19
IQ is measured by a myriad of examinations that purport to evaluate and score a person’s intelligence, in part or in whole. Haunted by race bias, class bias, and a plethora of common misapplications, the IQ test is a widely applied but very imperfect yardstick, useful in measuring intelligence only within a limited context.
And how could it be otherwise? A pervasive American belief in the intellectual inferiority of people of African descent and of formerly colonized peoples, including those we would now label Hispanic Americans as well as some Asian Americans and Pacific Islanders, predates any tests formulated to document it. From the beginning, racial measurements of intelligence have been plagued by rigged research and other misapplications of science designed to enforce racial bias, as Stephen Jay Gould documented extensively in The Mismeasure of Man and as Robert V. Guthrie revealed in Even the Rat Was White: A Historical View of Psychology. Continued belief in the inferiority of people of color has survived even the grossest logical failures and embarrassing revelations of racial bias.20
National data used to dictate policy have also been manipulated in bad faith, as when the 1840 U.S. census employed falsified data to promulgate a pro-enslavement message of racial mental inferiority. Census data purported to show that freedom was unhealthy for the limited minds of African Americans, causing freedmen to suffer eleven times the rate of mental disease of their enslaved peers.
It took James McCune Smith, M.D., a Glasgow-trained African American physician and statistical scientist, to refute the scientific racists on their own ground, and he found it necessary to do so frequently. His 1837 lecture exposing the scientific fallacy of phrenology offered scathing criticisms of the logical sins inherent in imputing character and intelligence from physiology, and his 1844 report to the U.S. Senate entitled “The Memorial of 1844 to the U.S. Senate” exposed the intentional bias and manipulation used to impugn African American intelligence in the 1840 census data.21
Guthrie reveals many instances of such racial bias in test taking. Among them was the administration of “identical” intelligence tests to armed forces recruits in the 1920s. Whites and blacks were ostensibly given the same test, but middle- and upper-class whites took a paper-and-pencil “alpha” test, whereas blacks and some poor whites were assumed to be illiterate, so for them the questions were “given by demonstration and pantomime”22—acted out—by proctors in the “beta” variant of the test.23 Clarity was sacrificed, because at least some abstract test questions were utterly unsuitable for miming.
Other Army tests utilized “picture completion” exercises that rated the soldiers’ ability to draw the missing element in an image, such as the balls in the hands of bowlers or the missing net in a depiction of a tennis game. For a sharecropper’s son on his first foray outside the Deep South, these were likely to be unfamiliar objects. Among the black and lower-class participants, the test generated a great deal of confusion, and very low scores. W. E. B. Du Bois observed:
For these tests were chosen 4730 Negroes from Louisiana and Mississippi and 28,052 white recruits from Illinois [emphasis original]. The result? Do you need to ask? M. R. Trabue, Director, Bureau of Educational Service, Columbia University, assures us that the intelligence of the average southern Negro is equal to that of a 9-year-old white boy and that we should arrange our educational program to make “waiters, porters, scavengers, and the like” of most Negroes!24
In the early twentieth century, W. E. B. Du Bois criticized the Army intelligence tests, which supported belief in innate African American intellectual inferiority. (Public domain)
Picture completion exercises, part of the Army intelligence tests, often relied upon knowledge (such as the proper placement of a tennis net or powder puff) that would have been foreign to many poor black recruits, resulting in lower scores for them. (Courtesy of Arlene Shaner, New York Academy of Medicine)
In an often-mystifying display, the proctors “acted out” instructions for the early twentieth–century beta Army intelligence test instead of giving verbal directions. The beta test was ostensibly designed for the illiterate but was given indiscriminately to “negro” recruits. (Courtesy of Arlene Shaner, New York Academy of Medicine)
During the 1920s, eugenicists influenced the passage of the National Origins Act of 1924, which sharply restricted immigration from Southern and Eastern European countries deemed “dysgenic.” This line of thinking persists to this day: within the past decade, hereditarian Jason Richwine sought to convince lawmakers to similarly restrict immigration from Hispanic nations partly on the basis of their supposedly lower intelligence. Richwine’s 2009 Harvard doctoral dissertation, entitled “IQ and Immigration Policy,” claimed that Hispanic immigrants had lower IQs than non-Hispanic whites, a disparity that he claimed persisted for generations.25
The beliefs that IQ scores are hereditary, that they measure general intelligence, that they are innate and fixed, and that they can be compared across groups are assumptions that have been made for more than a century, without proof. Today they are made in spite of proof to the contrary, as recent research has roundly dismantled most of these myths about IQ. Such myths should never have been embraced. In fact, they contradict the very research, goals, and findings of the man who invented the notion of IQ.
Psychologist Alfred Binet, whose scale gave rise to the twentieth century’s best-known intelligence test, the Stanford-Binet, was frustrated in his initial attempts to create a bias-free intellectual assessment based on craniometry, the standard method of the time. He traveled from school to school measuring cranial sizes and attempting to correlate them with student performance, although that performance was defined by teachers’ subjective judgments of which students were stellar and which were struggling, judgments validated by no objective measure.
Binet found only minuscule differences in performance relative to cranial size: mere millimeters separated the brightest from the dunce, and this was not useful in predicting intellectual performance.
In 1904 the French minister of public education asked Binet to find ways of identifying the specific cognitive problems affecting poorly performing schoolchildren so that ways of addressing and improving their individual learning issues could be devised. Binet did so, this time rejecting craniometry in favor of a tool he developed himself, specifying that it was to be used only to evaluate the learning disabled and low academic achievers. He assigned children brief, scored tasks of counting and reasoning and used the scores to determine what he called their mental age. Dividing this mental age by the child’s chronological age yields a quotient, an individual measure of academic achievement. The intelligence quotient, or IQ, was born.26
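In the ratio form that was later standardized (the multiplication by 100 came with subsequent adaptations of the scale, not from Binet himself), the arithmetic is simply

\[
\text{IQ} = \frac{\text{mental age}}{\text{chronological age}} \times 100,
\]

so a hypothetical eight-year-old who completes the tasks typical of a ten-year-old scores \( \frac{10}{8} \times 100 = 125 \), while one performing at a six-year-old’s level scores \( \frac{6}{8} \times 100 = 75 \). The ages here are illustrative examples, not Binet’s own cases.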
From the beginning, Binet insisted that the test was valid only for measuring the weaknesses of pupils who were already performing inadequately, never for predicting who might perform inadequately. In fact, writes University of Wisconsin–Madison professor of law and bioethics Pilar Ossorio, “[Binet] rejected the notion that his test measured a person’s inborn or fixed cognitive ability. He also declined to use his test to rank individuals according to cognitive ability.”27
“What Binet feared most, about an IQ number,” wrote Robert Anemone, a biological anthropology professor at the University of North Carolina at Greensboro, “was its negative uses in society. He thought that it could be used as an indelible label rather than a tool to identify the needs of the child.… Therefore, Binet declined to label IQ as inborn intelligence and refused to regard it as a device for ranking individuals based on the mental capacity.”28
Of course, this is precisely how the IQ test is used today—as a device for ranking individuals’ mental capacity and stamping them with an indelible label.
While a history of IQ testing is beyond the scope of this book, it is helpful to understand that later psychologists adopted Binet’s IQ test for the very different purpose of assessing the intellectual capacity of anyone. And this was just the first of many misinterpretations of what IQ can and cannot tell us. Thus, the chief principles of Binet’s IQ test were reversed as early psychologists translated his scale into a universal tool for testing all children. They further assumed this score to be fixed for a person’s entire life, and hereditary—handed down from parents to child—all without evidence.
Contemporary American hereditarians assume the same. Twin fallacies drove this misapplication of the IQ test: reification, which bestowed on the test an independent, determinative, and predictive power; and hereditarianism, the assumption that IQ is inherited, passed from parents to child.
These assumptions fit well into the pervasive preexisting racial mythologies that already held the intellectual inferiority of the “lower races” to be rigid and heritable. IQ became another tool to validate and reinforce their lower station in life.
Today many intelligence tests give a single intelligence quotient, held to be a measure of general intelligence, and purport to measure only fluid intelligence, the ability to solve problems that require no prior knowledge, such as abstract logic puzzles that one has never seen before and for which one has no context. Crystallized intelligence, on the other hand, is the ability to use learned knowledge; IQ questions regarding vocabulary and analogies, which are more readily answered by those who have read widely and know many words, fall into this category.29 So in practice IQ tests measure both fluid and crystallized intelligence. As Brink Lindsey notes, “Fluid scores peak by the twenties but crystallized intelligence continues to improve until approximately age sixty.”30 Head Start and other pre-K enrichment programs boost both.
Other tests measure discrete aspects of intelligence, such as verbal abilities. The SAT, which measures crystallized knowledge such as literacy, mathematical and writing skills, and the ability to solve problems that are needed for academic success in college, is often used as a proxy for an intelligence test, even though the SAT does not purport to measure intellectual potential but rather acquired skills.
The hereditarian arguments now hinge upon an average 15-point gap between U.S. white and African American IQ scores. Because the intelligence IQ measures is genetically passed from parent to child, they argue, the lower IQ scores of African Americans must be genetic and thus cannot be improved by early or intensive education. Some people even assume the average gap between African American and white scores applies to all individuals, as Edgar’s teacher, described earlier in this chapter, may have done when she cited the IQ gap as the reason for his baseless school placement.
Hereditarian claims are refuted in detail by contemporary works such as J. L. Kincheloe et al.’s Measured Lies, C. S. Fischer et al.’s Inequality by Design, M. K. Brown et al.’s Whitewashing Race, and R. E. Nisbett’s Intelligence and How to Get It.31 Invoking an average African American IQ score that is 15 points lower than that of whites errs in its assumption that the difference is innate. We’ve seen such assumptions of genetic mediation invoked before, as Stephen Jay Gould reminds us. Before World War II, the shorter stature of the average Japanese national was assumed to be a racial, genetically transmitted trait. But after Westernization and economic prosperity led to an enriched diet, the average height of the Japanese increased dramatically, far too quickly for the change to be ascribed to genetics.
Not only are the hereditarians’ claims wrong, their arguments are illogical and reflect naivete about genetics. They often counsel futility and urge the defunding of educational enrichment programs such as Head Start because, they insist, nothing can change “genetically” lower IQs. But no specific gene has been identified that mediates the IQ differences they reference, and these enrichment opponents conveniently forget a key fact about genetic disorders and conditions: “genetic” does not mean intractable, and genetic conditions are not by definition without remedy. Even biological flaws with demonstrably genetic causes can be corrected. For example, children born with the genetic disorder phenylketonuria (PKU) suffer profound mental retardation—unless they are diagnosed in time to withhold phenylalanine from their diet, in which case they suffer no retardation or ill effects at all. So even if IQ were genetically mediated, this would be no reason to abandon efforts to elevate it.
Gould also offers the example of nearsightedness, a visual defect that can be inherited and pose a serious bar to achievement if it is severe enough to prevent the affected person from learning, reading, functioning well on the job, and even negotiating the world safely. But eyeglasses and contact lenses erase the disability and close the performance gap between the nearsighted and the normally sighted.
I submit that in today’s communities of color IQ is largely compromised by disproportionate health assaults like a legion of chemical exposures, nutritional deficiencies and poisons, alcohol, drugs, and pathogens that cripple the nervous system. In IQ and the Wealth of Nations, Lynn and Vanhanen agree that intelligence is causally related to economic and health status, but they argue that high intelligence creates well-being, not the other way around. They also insist that beyond improving nutrition, very little can be done about poor health caused by low intelligence.32 But as Girma Berhanu wrote in the journal Educational Review, “Vanhanen has got his argument backwards. It makes far more sense to argue that the populations of rich countries do better on IQ tests because they have access to better nutrition and education.”33
In 2006 evolutionary psychologist Satoshi Kanazawa pushed the theory of reverse causation further by theorizing that low intelligence among people of color in the developing world has created their poor health status. In “Mind the Gap… in Intelligence: Re-examining the Relationship between Inequality and Health,” published in the British Journal of Health Psychology, he posited that distance from the African birthplace of man directly correlates with both IQ and health status. “The farther away a nation is from sub-Saharan Africa, both latitudinally and longitudinally,” he added a few years later, “the higher the average intelligence of the nation’s population.”34
According to Kanazawa, a reader in management at the London School of Economics, Africa, and by extension the larger tropical and subtropical world, provided a gardenlike environment that was warm and nurturing, with abundant food that was familiar and easy to live off, so that success within it required very little intellectual effort. But those early men and women who left the continent for the north and west encountered unfamiliar environments characterized by freezing winters and seasonally disappearing food—environments that Kanazawa says were far more difficult to survive. Evolutionary pressures selected for those clever enough to adapt, so these people became healthier, smarter Westerners, including scholars who now congratulate themselves on having picked the right forebears.35
But experts point out that Kanazawa’s basic assumption is erroneous. Africa is no benign Garden of Eden with fruit hanging from every tree. Africans had to contend with apex predators, poisonous plants and animals, drought, oppressive rainy seasons, unfriendly neighbors, and, in the southern and elevated parts of the continent, snow and cold, just as in Europe. Not to mention a prodigious variety of infectious diseases. University of New Mexico evolutionary biology professor Randy Thornhill responded, “… to say just generally that these new environments were more difficult, is open to criticism I think, because the tropical environment is terribly difficult, especially the diseases. People describe how they work in the tropics and they’re just covered in parasites. It’s like parasite rain.” We need only reflect that the diseases of West Africa killed so many nineteenth-century European soldiers that the region earned the sobriquet “White Man’s Grave.”
Eppig also disagrees with Kanazawa’s insistence that the high disease rate in Africa (much of it infectious disease) can be caused by low human intelligence. “We know that the distribution of infectious disease is largely not determined by humans, it’s determined by the diseases, and the types of environments that they thrive in. No one’s ever caught a case of malaria in Alaska because the mosquitoes that carry it can’t survive in Alaska.… That is entirely independent of human intervention.”
Kanazawa’s evolutionary biologist peers found his analysis sloppy and illogical. Yet his racialized “blame the victim” approach to health harms, however unsupported, is a strategy that industry has found useful in sidestepping responsibility for poisoning communities; it recurs often in these pages.
Perhaps Kanazawa’s claims should be considered in light of his other writings that lack rigor but deliver shock value: for example, his poorly researched paid blog post for Psychology Today entitled “Why Are Black Women Less Physically Attractive Than Other Women?,” which refers to “race differences in intelligence” as if they were fact while elaborating on his inflammatory subject.36 Psychology Today fired him, and Kanazawa apologized to the London School of Economics (but not to the women of African descent he insulted), admitting, “Some of my arguments may have been flawed and not supported by the available evidence.” Despite my repeated e-mailed and telephoned requests over the course of three years, Kanazawa would not agree to speak with me or even respond to my queries regarding his perspective.
Hereditarian scientists who inspire far more reverence than Kanazawa have also embraced African American intellectual inferiority. From accounts in Wired to the New York Times, much is made of the fact that William Shockley, an old-guard hereditarian quoted above, was a Nobel laureate.37 But he was jointly awarded the 1956 Nobel Prize in Physics, as a member of a Bell Labs research group, for co-discovery of the transistor effect: his screeds on racial inferiority have naught to do with his celebrated achievement. Despite his ignorance of biology and genetics, he became a fervid eugenicist who advocated forcing or paying women of color to undergo sterilization in order to protect the gene pool. He wrote, “My research leads me inescapably to the opinion that the major cause of the American Negro’s intellectual and social deficits is hereditary and racially genetic in origin and, thus, not remediable to a major degree by practical improvements in the environment.”38
Perhaps Shockley’s foray into racial calumny can be dismissed as an example of the “Nobel curse” in which laureates stray from their area of expertise to espouse risibly dubious beliefs. But another hereditarian Nobel laureate has far more credibility in genetics: James Watson, who shared the 1962 Nobel for discovering DNA’s structure.
A genetic analysis by Iceland’s deCODE revealed that James D. Watson, a Nobel laureate who claimed that African Americans are innately less intelligent than whites, has more “African” genes than the average European. (Public domain)
Watson’s memoir The Double Helix reveled in dismaying 1950s-era misogyny, and a sequel, Genes, Girls, and Gamow, updated his gender bias. But he took a break from his decades of hoary sexism in a 2007 interview with The Telegraph in which he declared his belief in African intellectual inferiority and theorized that black people have higher libidos.
He was, he said, “inherently gloomy about the prospect of Africa” because “all our social policies are based on the fact that their intelligence is the same as ours—whereas all the testing says not really.” There is a natural desire that all human beings should be equal, but, he said, “people who have to deal with black employees find this not true.”39 He also predicted that genes responsible for creating differences in human intelligence could be found within a decade: That was in 2007.
I suspect that Watson’s “gloom” only deepened in the ensuing furor when, despite his belated disclaimer that “there is no scientific basis for such beliefs,” he stepped down under pressure from his post at Cold Spring Harbor Laboratory.40 Almost immediately afterward, an analysis by Iceland’s deCODE Genetics showed that Watson has sixteen times more genes of African origin than does the average white European.
Kari Stefansson, deCODE’s CEO, equated Watson’s genome with the African DNA complement found in someone with an African great-grandparent, and told the New York Times, “This came up as a bit of a surprise, especially as a sequel to his utterly inappropriate comments about Africans.”
The fact that a Nobelist subscribes to racial theories of intellectual inferiority is important because in our era of scientific complexity and rampant specialization, the role of authority is key. Many people who adhere to racial inferiority theories do not understand genetics well, and they are understandably swayed by the credentials of the theories’ high-profile espousers. Appointments at respected academic institutions, popular publications that boast impressively dense charts, graphs, and columns of data (as the racially fictitious 1840 census did), and Nobel Prizes persuade us, sometimes more powerfully than evidence can.
The late J. Philippe Rushton, a professor at the University of Western Ontario (UWO), is referred to with the respect due his station even though he conducted research seeking to correlate race, intelligence, and sexuality by stationing himself in Toronto’s upscale Eaton Centre shopping mall in 1988. There he surveyed white, black, and Asian shoppers in detail about their penis size and how far they could ejaculate.
Perhaps he used mall shoppers because the university had barred him from using students in his research after he pressured and deceived them into answering the same questions.
Although his university criticized his Eaton Centre study as “a serious breach of scholarly procedure,”41 he was undeterred, and thereafter his eugenic screeds cropped up often in newspapers and magazines as well as in academia.
In 2005 Rushton, who had grown up in South Africa and became the head of the eugenic Pioneer Fund, told the Ottawa Citizen that he blamed the destruction of “Toronto the Good” on its black inhabitants42 and that equalizing outcomes across groups was “impossible.” He followed this with a 2009 speech at the Preserving Western Civilization conference in Baltimore, organized for the stated purpose of “addressing the need to defend America’s Judeo-Christian heritage and European identity” from immigrants, Muslims, and African Americans.
Some scientists criticized Rushton’s crude methods and outmoded scholarship, most notably his UWO colleague Zack Cernovsky, a professor of psychiatry. In 2010 he and I discussed Rushton when we shared a dais at the American Psychiatric Association conference in New Orleans. Cernovsky noted, “Scientific scrutiny shows that Rushton’s methodology (e.g., measuring head circumference by tape as a substitute for IQ tests) and his generalizations from inadequate samples discredit his work.… his books have probably misled at least some physicians who may resort to discriminatory practices in their clinical decisions about black patients.”43
Yet before his death in 2012, Rushton was a respected member of his profession. In addition to holding a full UWO professorship, he was a Guggenheim fellow as well as a fellow of the American, British, and Canadian psychological associations.
Science journalists wield even wider authority. They are trusted to interpret complex medical and scientific subjects for laypersons, and none are trusted more than those staffing the august New York Times, “the paper of record.” In 2014, Nicholas Wade, former Times science editor, wrote a book entitled A Troublesome Inheritance: Genes, Race and Human History, in which he asserted that scholars have proven human races to be a biological reality driven by evolution that forged racial differences in economic and social behavior. Wade suggests these genetic differences tell us why some live in tribal societies and others in advanced civilizations, why African Americans are (supposedly) more violent than whites, and why Asians may be good at business. However, the book traffics in shoddy logic and tortured data, which were disparaged by scientists and reviewers alike.
The New York Times review by David Dobbs condemned it as a “deeply flawed, deceptive and dangerous book.” But the book’s ultimate embarrassment was its disavowal by the very scientists whose work Wade cited to make his case. More than one hundred stellar evolutionary biologists signed a letter decrying Wade’s “incomplete and inaccurate account of our research,” concluding “there is no support from the field of population genetics for Wade’s conjectures.”44
In 1990, Americans identified with nearly three hundred races or ethnic groups, as well as six hundred Native American tribes; Hispanics claimed seventy additional categories.45 Medical journals routinely address race as a variable to be corrected for, as the focus of a medical question, and even as a consistent element of patients’ profiles.
But rarely is “race” defined in these journals, and it often is used illogically. Studies often equate race with a genetic profile without attempting any genetic analysis, as a study of the congestive heart failure drug BiDil did in 2005. Some studies refer to “black Americans” in opposition to Hispanic Americans, even though the latter can be members of any race. “Latino” and “Hispanic” are often treated as synonymous, although the former refers to land of origin and the latter to language.
(For the record, although the terms “race” and “ethnicity” are sometimes used interchangeably, race is typically used to refer to affiliations that are assumed to be based on shared physical characteristics, whereas ethnicity refers to nonphysical affiliations based on culture, such as a shared religion or language, which in the case of Hispanics is Spanish.)
The sloppy use of racial categories in popular and scientific discourse is a holdover from the “one-drop rule,” which held that any complement of African ancestry was enough to confer some degree of African American status on a person, or more accurately, to bar him from whiteness. Early in our nation’s history, these regulations varied from state to state, as different degrees of interracial mating produced mixtures such as “octoroons” and “quadroons.” In many cases, slave owners fathered children with the African American women they owned. Laws dictated what percentage of white forebears was necessary to allow a person to be defined as “white,” to be so labeled in the census, to vote, to marry a white person, and so on.
In reality, there has been a great deal of unacknowledged mating between black and white Americans that hides our true genealogies and renders our racial labels meaningless.
Much attention is paid to Thomas Jefferson’s genes entering Sally Hemings’s progeny, for example, but Hemings’s genes also enriched the white gene pool. Driven by racial abuse and marginalization, many people who looked “white” enough simply chose to shake off their racial fetters by becoming white. Hemings’s and Jefferson’s daughter Harriet married a white man with whom she had children, then disappeared into whiteness and was lost to history, as Barbara Chase-Riboud revealed in her 1979 fictionalized but historically meticulous account Sally Hemings and its 1994 sequel The President’s Daughter.46 More recently, historian Catherine Kerrison recounts Harriet’s fate in her 2018 account Jefferson’s Daughters: Three Sisters, White and Black in a Young America.47
I vividly remember hearing Berkeley professor Troy Duster’s brilliant response to a patronizing question from a television interviewer about his ancestry, “I come from a long line of slave owners.” He was alluding to these shrouded genealogies. One study calculated that nearly one of every three white Americans possesses as much as 20 percent African genetic inheritance yet looks white.48 More than one in twenty African Americans possesses no detectable African genetic ancestry.49
In spite of this, medical writings often equate race with a particular genetic profile, although a person’s supposed race and her genetics map very poorly onto each other. “If we were to select any two ‘black’ people at random and compare their chromosomes,” writes Sharon Begley in Newsweek, “they are no more likely to be genetically similar than either would be when compared to a randomly selected ‘white’ person.”50
Race is a social construction, and although it has medical consequences, these too are socially driven. They are not definitive or determinative. In short, your medical condition tends to reflect the race to which you are perceived as belonging.
Medical reports frequently ignore the fact that black and white Americans sometimes differ in their health profiles and sometimes do not. The medical literature also sometimes fails to recognize the differences in health risks and status among Hispanics and Latinos of widely varying ethnic groups. In fact, medical journals often investigate questions of race without ever defining the terms “black,” “African American,” or “Hispanic,” even as they discuss what purport to be consistent genetic characteristics within any of these diverse groups.
Nonetheless, average IQ differences among some U.S. racial groups—the widely touted “IQ gaps”—have been narrowing for decades. This narrowing may be partly a response to improved educational opportunities, or it could result from a decrease in the rates of some toxic exposures, such as lead’s removal from gasoline and interior paint. Both explanations point to social inequality, not genetics, as a driver of IQ inequities.51
As I have mentioned, the Hispanic-white IQ gap is crudely flogged by hereditarians like Jason Richwine to support a xenophobic political agenda. For example, Richwine insisted that not only a lower average IQ but also a failure to assimilate was reason enough to halt Hispanic immigration to the United States. “Failure to assimilate” is a nonsensical accusation to level at people who have been consistently barred from higher education, employment, and homeownership, and who are frequent targets of raids and sweeps intent upon challenging their right to a place in America, even after decades of peaceful, law-abiding residency.
One could be forgiven for failing to see how Asians are harmed by IQ mythology, which tends to place some Asians at the pinnacle of intellectual ability.
Not all Asians are accorded high IQ scores: Those from poor nations of the developing world, and their descendants in the United States, share the low-IQ fate of their global South peers. According to one scale, the East Asian cluster (Chinese, Japanese, and Koreans) has the highest mean IQ at 105, followed by Europeans (100), Inuit-Eskimos (91), Southeast Asians (87), Native Americans (87), Pacific Islanders (85), South Asians and North Africans (84), sub-Saharan Africans (67), Australian Aborigines (62), and Kalahari Bushmen and Congo Pygmies (54).
Moreover, those Asians who occupy the top strata of IQ rankings find this status to be a double-edged sword. It doesn’t so much laud Asians as use them to denigrate other, darker people of color. As UC Hastings law professor Frank Wu, author of Yellow: Race in America Beyond Black and White, writes, “Asian Americans are brought into the discussion only for the purpose of berating blacks and Hispanics.”52
The high intelligence and intellectual achievement of Asians may owe something to the Pygmalion effect. In a classic 1960s experiment, California teachers were informed that, on the basis of IQ test scores, certain of their students had been found to be “special,” with prodigious potential and the expectation of intellectual greatness. Accordingly, the grades of the children labeled “special” improved dramatically, and, when tested a year later, half had raised their IQ scores by 20 points. In fact, these children had been chosen at random, and the improvements in their scores served to demonstrate the outsize role that teachers’ expectations can play in a student’s academic success.53
The high IQ of some Asian populations is part of the “model minority myth,” Wu points out. However, it is possible to be seen as “too intelligent,” especially when high-IQ claims inspire unreachable goals that cause stress and even suicide among young Asian aspirants. The perception of Asians as naturally intelligent also inspires jealous resentment, leading other groups to target Asians and to see that they are “taken down a peg.”
The Asian-white IQ gap is increasing as Asians’ IQs rise more swiftly than those of whites (just as those of African Americans are rising more rapidly than whites’), and there are signs of a backlash, including evidence that some colleges hold Asians to higher standards than other applicants. Schools deny this: In 2015 Princeton fought—and won—a discrimination suit over affirmative action policies that plaintiffs believe benefit African Americans and Hispanics while penalizing Asian students.54
As this book went to press, Harvard University was also defending itself against a lawsuit claiming that its affirmative-action policies favor African American and Hispanic students while harming the admission chances of Asian applicants with higher grade-point averages and standardized testing scores.55 The case, widely seen as “a battle over the future of affirmative action,” is supported by the conservative group Students for Fair Admissions, whose sixty-five-year-old leader, Ed Blum, helped to design the lawsuit.56 (In 2016, an earlier lawsuit filed by Blum failed to block affirmative action at the University of Texas at Austin.)57
However, in the magazine Diverse Issues in Higher Education, Emil Guillermo describes this battle over affirmative action as “our civil war” and points out that the situation is far more nuanced than Blum’s claims and the headlines suggest. “Fortunately,” Guillermo writes, “recent polls suggest the vast majority of Asian Americans support affirmative action.”58 He explains that grades and scores are only part of the metrics that determine acceptance to Harvard and that Asian students as well as African American, Hispanic, and white applicants often gain acceptance to Harvard by dint of exemplary character, experiences, and nonacademic triumphs.
For example, student Sally Chen, a daughter of middle-class immigrants who describes herself as “one of those less-than-perfect Asian Americans,” testified that her less-than-stellar grades and scores were redeemed by a high school record of exemplary leadership and civic involvement. Guillermo notes, “Chen testified that if Harvard couldn’t consider race she wouldn’t be at Harvard now. ‘There’s no way in which my flat numbers and résumé could’ve gotten across how much of a whole person that I am.’”
Asians’ high IQs are frequently yoked to declarations that they lack creativity, that they are crafty and clannish, and that they constitute an “inscrutable” group unwilling to fully integrate into U.S. society—the same accusation that Richwine levels at Hispanic Americans to argue against their inclusion as citizens. In Yellow: Race in America Beyond Black and White, Wu writes,
Every attractive trait matches up neatly to its repulsive complement, and the aspects are conducive to reversal.… To be intelligent is to be calculating and too clever; to be gifted in math and science is to be mechanical and not creative, lacking interpersonal skills and leadership potential. To be polite is to be inscrutable and submissive. To be hard-working is to be an unfair competitor for regular human beings, and not a well-rounded, likable individual. To be family oriented is to be clannish and too ethnic. To be law abiding is to be self-righteous and rigidly rule bound. To be successfully entrepreneurial is to be deviously aggressive and economically intimidating. To revere elders is to be an ancestor-worshipping pagan, and fidelity to tradition is reactionary ignorance.59
The “failure to assimilate” charge against Asians is especially unfair because racial bias often leads other Americans to refuse to recognize their citizenship: their very identity as Americans is routinely challenged. It is well known that Japanese Americans were interned during World War II on a racial basis, but South Asians, too, have faced deportation and been stripped of their citizenship rights. Even today, fourth-generation Americans of Asian descent are often asked what country they hail from or complimented on their good English.
Not all stigmatized minority-group members are people of color.
American “poor whites” have long been treated like an ethnic group that suffers from congenitally low intelligence. In 1928 the Carnegie Corporation bankrolled an investigation into the “poor white problem in South Africa” and administered IQ tests to more than 16,500 children, 20 percent of them poor whites. The study found that “poor white IQs were lower than average.”
In August 2016, outraged Britons forced David Hoare, chairman of England’s schools inspectorate, Ofsted, to resign after he publicly ranted against the Isle of Wight, once an upscale tourist destination. Hoare hadn’t flung the n-word or decried the presence of Britons of color; instead he had targeted a quite different group, calling the island an “inbred, poor white, crime-filled ghetto.” This view of indigent whites as a distinct and troublesome ethnic group has long persisted in the United States as well, perfectly epitomized by Huckleberry Finn’s lazy, illiterate, alcoholic, and criminal father, “Pap.”
During the Civil War, fascinated Northern officers often described the ignorant, “coarse,” landless Southerners they encountered in war zones. One wrote of North Carolinian combatants, “I pity them; most are not intelligent, not half of them can read.” Another officer described their physical type: “A most forlorn and miserable set of people: white, long, lean, and lanky with long yellow hair.”60
Before they achieved full white identity in the United States, Italians, Irish, and others from the “wrong” regions of Europe were also despised as inferior dullards prone to sloth, violence, and alcoholism. The eugenics movement and intelligence testing lent such views a scientific veneer, and eugenicists championed the National Origins Act of 1924, which sharply restricted immigration from southern and eastern European countries deemed “dysgenic.” Low IQ scores provided a supposedly biological underpinning for this stereotype of poor whites as unintelligent, lazy, and prone to criminality, just as they have for African Americans and Hispanics.
Today’s “poor white” group identity is widely reflected in mass media, from Cops to Eminem to Jerry Springer—and in the pages of The Bell Curve, which denounced not only African Americans but also “underclass whites” as genetically predisposed to lower intelligence and higher criminality.
However, a more sympathetic Atlantic article published in September 2016, “The Original Underclass,”61 typifies the recent attention paid to the health status and behavior of lower-socioeconomic-class whites, whose profile of low achievement and drug addiction now casts them as an ethnic “underclass.” Using IQ as a departure point, pundits showcase its supposed correlates—sloth, violence, a tendency to addiction, and even abnormal physical types—so that a picture often emerges of malnourished, unhealthy, dirty, gun-toting alcoholics with low intelligence.
The reification of this group as an unintelligent and degraded race follows the same illogic used to invest African Americans and Hispanics with pathology. Powerful pressures of economics, social stratification, trauma, and discrimination have conspired to lower IQ scores among lower-socioeconomic-status whites. Improving their health status, including their IQs, is a laudable goal, but we must take care not to slip back into damning eugenic stereotypes. We need more research into the factors that compromise the intelligence of lower-socioeconomic-status whites, including studies that quantify their toxic exposures.
The treatment of “poor whites” as a racial group illustrates that race is not an innate biological reality but a sociopolitical construct, one that is useful for maintaining political biopower.
But race is a social reality with real-world biological consequences, and nowhere is this more apparent than in environmental racism. De facto racial segregation, mortgage redlining, and, as I shall describe, the withholding of basic environmental services are used to force racial groups into environmental “sacrifice zones,” where exposure to high levels of IQ-lowering heavy metals, chemicals, and pathogens impedes normal brain development. Targeting these groups with aggressive marketing of especially noxious alcohol and tobacco products, and confining them to “food deserts” that help to ensure brain-eroding nutritional deficiencies, have marked biological consequences as well. When it comes to exposures that limit cognition, race as a social construction becomes race as biological fate. Unless we choose to intervene.
We could not do better than to start with lead.