III. The Seduction of Science

“It is the theory which decides what can be observed.”

ALBERT EINSTEIN

ABOUT TWENTY YEARS AGO, I picked up a book called Apes and Angels published by the Smithsonian Institution in Washington. It is a modern compendium of the scholarly, scientific, and popular evidence—ranging from comparative measurements of skull sizes and arm lengths to anthropological surveys and popular caricatures—that had been originated by nineteenth-century scientists and popularized by journalists to “prove” a respectable and popular thesis: the Irish had descended from apes only a few generations before, while the English were descendants of man created in God’s image and thus “angels.”27

In addition to impressive forensic evidence, scientists of the day could point to the proof around them. Most of the Irish were poor—and therefore overrepresented in prison populations—at a time when being poor or criminal was itself thought to be genetic. In England, Irish workers were often illiterate and at the bottom of the social hierarchy, but in Ireland, English families were powerful and prosperous. Even in the United States, many Irish immigrants were laborers or indentured servants, and in cities like Boston, they competed for jobs with free blacks.

What is most interesting about this biology-based argument is its academic and social respectability at the time. That’s what makes its parallels with other biology-is-destiny theories about sex and race so striking. Freud, for instance, was actually very clear about his adversarial stance toward women’s equality. “We must not allow ourselves to be deflected … by … the feminists,” he wrote to his colleagues, “who are anxious to force us to regard the two sexes as completely equal in position and worth.”28 Yet his theories have been cited for most of the last century as objective. Thanks to that retrospective look at apes and angels, I began to realize that even the most objective areas of education always need to be questioned; the more they present themselves as value-free, the greater the need for questions.

Think about science. It has become the most powerful influence on the ways our minds and capabilities are valued and the ways we ourselves value them. But the scientific methods on which we now rely to verify and explain our personal experience are not so time-honored as we like to think: only about three or four hundred years old, they are newcomers in human history. Moreover, during most of that time, science was conducted within centers of learning run by religions that often discouraged or punished secular challenges to religious beliefs about human origins and relative worth. (We have only to look at fundamentalist Christians still teaching creationism in schools, or at other contemporary fundamentalists interpreting Islam, Judaism, Shintoism, and other religious doctrines, to understand the resistance to any view of human beings as part of a continuum of nature.) Even among scientists who pursued secular inquiry, often at great personal risk, a total rejection of cultural assumptions was impossible. If all women and various races of men were said by religion to have no souls, for instance, you could be pretty sure that science would soon discover they were “less evolved” as well. The theory came first, and evidence in its support was mustered afterward—not necessarily in a dishonest way, but as a result of selective vision.

As nineteenth-century materialism and travel to other cultures weakened the church as a practical authority, however, scientists began to achieve more independence from it—but also to inherit some of its seductive power. Now, they were expected to become society’s explainers, justifiers, and providers of rules. If converting the heathen to Christianity was no longer a sufficient rationale for the colonialism that had become the pillar of European prosperity, for instance, then science was expected to supply equally compelling reasons why such a system was positive, progressive, and “for their own good.”

This interest in ranking by race and ethnicity gave birth to craniology, the father of many prestigious and influential specialties devoted to documenting and measuring human differences. By elaborate measurements of cranial capacity, this new branch of science set out to assess brain size, relative development of different areas of the brain, and thus (it was assumed) intelligence itself.

In context, craniology must have seemed humane after many seventeenth- and eighteenth-century beliefs: for instance, that intelligent women were the work of the devil (a belief that had helped to justify killing nine million women healers and other pagan or nonconforming women as witches during the centuries of changeover to Christianity),29 or that an individual’s physical and mental problems came from an imbalance of the four cardinal “humours” of blood, phlegm, choler, and melancholy (the reasoning behind the standard medical practice of “bleeding” patients to restore the balance). Instead, craniology offered the reward of simple, clear, replicable proofs of human hierarchy and its mental manifestations.

From the beginning, it was clear that men as a group had more “cranial capacity” than women as a group—a fact that was both origin and verification of craniology’s basic premise that greater skull size meant greater intelligence. After many wide-ranging studies that compared multiple measurements per skull by race and ethnicity as well as by sex, craniologists were also able to report that the average cranial capacity of the white race was larger than that of Africans, Asians, and even Southern Europeans.

These discoveries squared with other mainstream scholarly conclusions of the day. From anthropology to neurology, science had demonstrated that the female Victorian virtues of passivity, domesticity, and greater morality (by which was meant less sexual activity) were rooted in female biology. Similarly, the passive, dependent, and childlike qualities of the “darker races” (then still called the “white man’s burden”) were part of their biological destiny. Evolutionists also chimed in with a reason for all this: men who were not Caucasians and women of all races were lower on the evolutionary scale. In the case of race, this was due to simple evolutionary time, since it was then believed that European civilizations were the oldest. In the case of Caucasian women—who obviously had been evolving as long as their male counterparts—there was another rationale. The less complex nervous systems and lower intelligences of females were evolutionary adaptations to the pain of childbirth, repetitive domestic work, and other physical, nonintellectual tasks. Naturally, females of “lower” races were also assumed to be inferior to their male counterparts.

Then as now, the biologically based myths that resulted from these theories of sex and race were parallel. Men of color, and all women, Caucasian and otherwise, were said to be childlike, governed by their emotions, closer to nature, limited in intellect, in need of simple rules to follow, suitable for physical tasks, and so on. These myths diverged only slightly to justify different functions and economic uses—greater physical strength for men of color, greater strength to bear pain and childbirth for all women—and together, they worked efficiently and well to legitimize white male supremacy.

Elizabeth Fee, a scholar of nineteenth-century craniology, has gathered representative quotations of this reasoning from its practitioners.

As a new and increasingly popular branch of science, craniology added impressive human data to older plant- and animal-based arguments for breeding only with “one’s own kind.” This purported proof of racial superiority, and of the need for racial purity, had special importance in the United States, where a Civil War against slavery, a domestic movement against women’s status as the legal chattel of fathers and husbands, and the joint fight of these movements for universal adult suffrage were threatening to undermine the entire sexual and racial caste system. Craniology reinforced the need to restrict the freedom of white women—the means of reproduction for the white race—and to outlaw “miscegenation,” “race mixing,” or “race mongrelization.” Though such antimiscegenation laws were not used against white men who fathered children with women of color, even by force, they did prevent legitimizing any such union, thereby perpetuating a racially “pure” system of inheritance.* And they were used with full effect against any bonding between white women and men of color.

Craniology was flexible enough to survive many internal disputes—and some external ones. In the second half of the nineteenth century, for instance, neuroanatomists believed that higher mental activities were located in the frontal lobes of the brain. Not surprisingly, craniologists confirmed that those areas were larger in male skulls than in female ones, and that the less important parietal regions at the top and sides of the skull were smaller in males than in females. Toward the turn of the century, however, neuroanatomists revised their opinion: higher intellectual abilities were located in the parietal regions after all. Soon, craniologists had discovered that their earlier measurements had been inaccurate: males actually had larger parietal lobes than females, and the newly unimportant frontal regions were smaller in males than in females.

Eventually, craniologists had to modify their focus on sheer brain volume as the primary indicator of intelligence. Otherwise, whales, elephants, and other possessors of much-larger-than-human brains would have had to be counted as potentially more intelligent than Homo sapiens. The obvious answer was to take body size into account, but there was an internal split on how to do it. Should scientists use the ratio of brain weight to body weight (which would have made women as a group more intelligent than men as a group)? Or should they use the ratio of brain surface to body surface (which proved women’s intellectual inferiority but also made white males inferior to men of some other races)?

Alas for craniology, there was no one answer that met the demands of both logic and politics. In addition, there were scandals: some scientists had simply been assuming, on their own tautological criteria, that the skulls and brains they used in laboratory experiments came from male or female cadavers, and others were comparing the wrong numbers among the up to five thousand measurements that could be taken of a single skull. In a back-to-basics effort to save their profession, some craniologists returned to a simple hierarchy of skull and brain size. After all, that was easily understood and met the commonsense criterion of explaining male superiority.

Soon, however, new data on mental patients disclosed the alarming fact that many of these “inferior” people had large skulls. Since women who didn’t conform to their gender roles were (as we now know) more likely to be judged insane, there were a disproportionate number of females among the mental patients in general, including the large-skulled ones.

Ingenious explanations were devised. Some scientists argued that a large skull indicated both a greater mental capacity and an increased danger of madness. Because males had stronger characters, they were better able to develop the former and resist the latter. Females, well known to be weak and subject to hysteria because of their female organs, were incapable of using that intellectual potential and likely to fall victim to that instability. Among the proofs presented was the case of an exceptionally large-brained female graduate student who had committed suicide after failing her exams. With one expedient theory of the differential impact of large skulls on males and females, craniologists had managed to explain away three facts: why mental patients were disproportionately female, why all geniuses were male, and why females were more likely than males to attempt suicide.

But even such ingenuity couldn’t save craniology from the weight of its own contradictions. By the early twentieth century, the field was in disarray from the cumulative impact of infighting and public disputes over its criteria.* One of its death blows had been dealt in 1901 by Alice Lee, a London mathematician who did a study comparing the skull sizes of female medical students (presumably sane, since they had been accepted into such august company) with those of respected male faculty members. Since some of the female students had larger skulls than some of the male authorities, the distinguished men had to choose between their own intelligence and accepted craniological theory. (As Lee pointed out, the theory couldn’t be true of an entire sex unless it was also true of individuals.) As a student rigorously trained by Karl Pearson, a respected male biometrician, Lee also noted that most skull differences fell within the 3-percent margin allowed for error. Her paper was so heavily criticized by craniologists that, like the work of many women academics today, it had little impact until it won a spirited defense from a respected male academic; in this case, Pearson himself, a supporter of female emancipation.33

In later years, larger and more random samples were to confirm what we now know: there are no racially consistent differences in average brain size; and neither male nor female skull or brain size, as long as it is within a normal range, has any relationship to intelligence. Craniologists had defeated themselves from the start by working backward from the false hypotheses of male and Caucasian superiority.* But all the respectable biases leading up to the flowering and acceptance of these theories, as well as their long century in the sun, produced too many casualties to number.

John Hope Franklin, a modern African-American historian at Duke University, gives such beliefs as much credit for keeping African Americans in slavery and out of the U.S. Constitution as the economic motives more often cited. “We don’t really know, we only think we know,” he points out, that the Constitution could not have been ratified if it outlawed slavery. He blames “the notion that Africans were inherently inferior and, consequently, slavery was a satisfactory, even desirable status for them.”34

Even after such biological arguments had been discredited at a scientific level, they continued to enjoy an afterlife in political and social agendas—in Nazi Germany, for instance. And only in 1991, thanks to a new spirit of openness and glasnost, did citizens of the Soviet Union discover that a Brain Institute had been working away in Moscow for sixty-seven years, trying to prove Stalin’s thesis that “the New Soviet Man” could be bred, with brain weight used as the measure of intelligence (and where Stalin’s own brain, as well as those of Lenin and even of dissident Andrei Sakharov, had been faithfully weighed and preserved).35 In this country, such beliefs no longer have much open support, but they still crop up in the cruel jokes and cartoons that sometimes depict African Americans as monkeylike, and in the beliefs that still confront women with arguments that “femininity” and intelligence are somehow contradictory, and that women’s biology restricts rational thought. An updated version of this bias is the notion that women’s advances are damaging our health. Though the longevity and mental health of women of all races have actually increased in the twenty years of the latest wave of feminism—and though poor black women, not white male executives, have always had the greatest incidence of tension-related diseases in any case—there is a premise that doing “men’s jobs” will give women “men’s diseases”; a not-so-subtle threat that “success will make you sick.”

In general, however, craniology and other nineteenth-century theories of group differences seem ridiculous now. That’s why it’s so important to understand that they were once respectable, and to look with healthy skepticism on currently accepted theories that may turn out to be just as seductive, dangerous to self-esteem, and wrong.

In 1981, paleontologist Stephen Jay Gould set out to mend some of the damage done by his scientific predecessors. In The Mismeasure of Man (a title that, ironically, once again omits women), he retraced early craniological studies to show the careless, biased, even fraudulent methods with which their massive samplings had been gathered. Nonetheless, Gould found that, for the most part, this was not done with any malice or even conscious awareness; an important testimony to the culture-bound assumptions on which science may rest. Theories are most successful, Gould concluded, when they enable millions of people to believe “that their social prejudices are scientific facts after all.”36