As new forms of sociality become increasingly important in the survival and proliferation of humans, brain size becomes proportionally less important. Brain size had been increasing for 2 million years or so, as bipedal primates with chimp-sized brains and cruddy tools came to have remote descendants with brains three times as large and very fine tools. But by perhaps 20,000 years ago something different was going on: the evolution of culture had assumed a trajectory largely independent of the bodies producing it. As the Acheulean tools of 500,000 years ago evolved into the Mousterian tools of 100,000 years ago, the heads and brains of the people using them also evolved. Yet as the biplanes of a century ago evolved into the jetliners of today, they did so without any concomitant change in the brains or heads of the people making or using them. Brain size simply stops being meaningful for human existence, because our survival increasingly becomes predicated on what is between those brains, rather than what is within them—the social aspects of human existence that are invisible paleontologically.1 These incorporate new un-ape-like ways of perceiving and interacting with relatives and non-relatives, and make the case for human evolution being largely inaccessible to the biologist or primatologist without acknowledging the exceptional elements of our social and cultural history.2
As the cultural aspects of our existence increasingly determine the content of our lives—from the languages we speak to our diets, our personal appearance, our thought processes, and most fundamentally, our ability to thrive and breed—it becomes increasingly difficult to separate the natural from the cultural. Indeed, perhaps the most frustrating assertion in the study of human evolution (aside from the claim that it didn’t happen) is that there is a “human nature” that is separable from human culture, and discernible on its own—as if culture were like the icing on a cake, simply needing to be scraped off in order to observe our purely biological selves. But this is wrong for three reasons.
First, we see the human species culturally. Science is a process of understanding, and we understand things culturally. We hope that we can observe and transcend the cultural biases of our predecessors, but there is no non-cultural knowledge. As a graphic example, consider the plaque that was attached to Pioneer 10, launched in 1972, and is now outside of our solar system (fig. 2).
Why was NASA sending pornography into outer space? Because they wanted to show the aliens just who it was that had sent the space probe out. But of course the handsome, fit man and woman depicted didn’t send the probe out; that was a bunch of male nerds. So the illustration is a symbolic representation of the group that sent the probe out. But which group? Americans? Aerospace engineers? Primates? No, the group NASA wanted to represent was the species Homo sapiens. Not children, not seniors, just handsome adults. And why send them naked? After all, that’s not what the aliens will see when they track the probe back to earth (a map was also conveniently provided). Answer: Because they wanted to depict the man and woman in a cultureless, natural state. But surely the shaves, haircuts, and bikini waxes are cultural! As are the gendered postures, with only the man looking you straight in the eye. In a baboon, that would be a threat display; let’s hope the aliens who intercept the space probes aren’t like baboons. And finally, what’s the caption for the image of the man with his hand up? “Howdy!—And welcome to our solar system!” Or “Halt!—This is a private nudist sector of the galaxy!” Or perhaps even, “Excuse me, but is there a bathroom in this quadrant?”
Figure 2. Image from NASA’s Pioneer plaque (Wikimedia Commons).
Thus, in believing itself to be presenting an image free of culture, NASA was really filling it with cultural information, but simply failing to recognize it. Culture is always there, in human thought and act, and it is very easy to mistake it for nature.
Second, we have been coevolving with culture for a very long time, several million years. There seems little doubt that dexterity and intelligence and technology all coevolved. Culture is thus, in a very fundamental sense, an ultimate cause of the human condition: we have evolved to be adapted to it.
And third, the environment in which we grow and develop is fundamentally a cultural one, filled with social relationships, linguistic meanings, manual labor, prep school, fatty beef and high fructose corn syrup, beer, smog, and cigarettes. Culture is thus also a proximate cause of the human condition. To talk of human nature abstracted from culture is pre-Darwinian nonsense. From the standpoint of human evolution, then, the quest to discover a human nature independent of human culture is a fool’s errand; human facts are invariably natural/cultural facts.
The fallacy of reducing natural/cultural facts to natural facts lies behind the long-standing fallacy of race—the idea that the human species can be naturally partitioned into a fairly small number of fairly discrete kinds of people. Race was the first question that guided anthropology in the eighteenth century: given that there were all these natural kinds of animals, vegetables, and minerals out there, what natural kinds of people were there?
At the conjunction of the age of exploration, the age of colonialism, and the age of science, the Swedish biologist Carl Linnaeus gave a definitive scientific answer: there are four kinds of people, living on different continents, and color coded for your convenience—white Europeans, yellow Asians, red Americans, and black Africans. And they could be naturally separated not simply on the basis of their continents and colors, but also on the basis of how they dressed (tight-fitting clothes, loose-fitting clothes, painting themselves with fine red lines, and anointing themselves with grease, respectively) and their legal system (law, opinion, custom, and whim, respectively).3
The next generation of scholars tried to rely more on purely physical attributes, and also synonymized Linnaeus’s taxonomic category of “subspecies” with a more colloquial term referring to a strain or lineage of living things—race. In practice that meant that two usages of the term were concurrent—in reference to (1) a formal taxonomic subspecies, and (2) an informal group of people sharing a common identity and narrative of descent. The first would remain, with minor modifications, the classification of Linnaeus; the second, however, would allow you to racialize groups like the Gypsies, Lapps, Eskimos, and Jews (now known as the Roma, Sami, Inuit, and Jews). By the 1920s, anthropologists were arguing that the latter kind of race was largely illusory, for those groups were not “natural” units; and by the 1960s, anthropologists were coming around to the realization that the first kind of race was illusory as well. Early fieldwork showed, for example, that continental groups were far from homogeneous. Thus, works like The Races of Africa and The Races of Europe showed that however earnestly the investigators believed themselves to be looking at large natural subdivisions of the human species, those subdivisions could themselves be readily sub-subdivided.4
As early as 1931, the biologist Julian Huxley would observe, “It is a commonplace of anthropology that many single territories of tropical Africa, such as Nigeria or Kenya, contain a much greater diversity of racial type than all Europe.”5 Nearly twenty years later, when Huxley was the first director-general of UNESCO, he commissioned a Statement on Race to formalize and disseminate the post–World War II consensus. Change did not come so easily, however, and scholars of the earlier generation, including some former Nazi anthropologists, objected strongly to the newer consensus.6 Nevertheless, as we noted in chapter 1, by 1957 we understood the human species “as constituting a widespread network of more-or-less interrelated, ecologically adapted and functional entities.”7
The modern view of human variation that emerged in the later part of the twentieth century involved a new empirical understanding, implying that race, like the geocentric solar system, was effectively an optical illusion. You could see human races much as you could see the sun rise, cross the sky, and set over the opposite horizon; but your mind was simply playing tricks on you. In one case, the earth’s rotation leads you to embrace the geocentric illusion; in the other case, centuries of political and intellectual history lead you to embrace the racial illusion.
The major features of human diversity are patterned quite differently than Linnaeus and two subsequent centuries of premodern human science thought. The primary ways that human groups are similar to or different from one another are cultural, although that concept only began to be formalized in the 1870s. If we (perversely) choose to ignore the primary patterns of human diversity, and try to focus instead only on the biological differences, we find that the major pattern of human biological diversity is polymorphism; that is to say, most alleles in the human gene pool are cosmopolitan, and found in most places, although in varying proportions. The second UNESCO statement on race, released in 1951, explained that for observable features, “the differences among individuals belonging to the same race are greater than the differences that occur between the observed averages for two or more races,” but it was not until 1972 that the geneticist Richard Lewontin was able to quantify that statement by studying genetic data. He was able to show that upward of 80 percent of the detectable genetic variation in the human species was to be found within any individual population—a finding that has proven to be robust to all kinds of genetic data.8 If you choose to ignore the cultural and the polymorphic variation, the major feature that remains is clinal, that is to say, varying gradually over geography. And if you choose to ignore the cultural, the polymorphic, and the clinal, what’s left is local variation. Race is simply a biological illusion.9
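Lewontin’s apportionment can be sketched in miniature. The following is not his actual method (he used information-based measures across many loci and populations); it is a toy partition of expected heterozygosity at a single biallelic locus, with invented allele frequencies, just to show how cosmopolitan alleles at similar frequencies place most of the diversity within populations:

```python
# Toy apportionment of diversity at one biallelic locus, using expected
# heterozygosity H = 2p(1 - p). The frequencies are invented, and this
# is a simplification of Lewontin's actual analysis.
import numpy as np

freqs = np.array([0.60, 0.45, 0.52, 0.38])   # allele frequency in four hypothetical populations

h_within = np.mean(2 * freqs * (1 - freqs))  # average diversity within each population
p_pooled = freqs.mean()                      # allele frequency in the pooled "species"
h_total = 2 * p_pooled * (1 - p_pooled)      # diversity of the species as a whole

print(f"fraction of diversity within populations: {h_within / h_total:.0%}")
# Prints ~97%: when alleles are cosmopolitan and their frequencies are
# similar everywhere, almost all the variation lies inside populations,
# the pattern Lewontin quantified at upward of 80 percent.
```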
But knowing what race is not doesn’t tell us what race is. Race is a process of aggregating and classifying people, creating bounded categories of difference where none exist “out there.” It is thus a conjunction of difference and meaning. That is to say, you can measure how different people are, or populations are, but that does not tell you whether they constitute two variations on the same theme, as it were, or two different themes. Those decisions involve the construction and imposition of meaning upon the patterns of human diversity you observe, the attribution of different properties to people on either side of the boundaries, as well as the patrolling of those boundaries (in the form of miscegenation laws).
The recognition that race is a fact of nature/culture, rather than a fact of nature, is sometimes misunderstood. In a reductive mind-set that sees biological, or even genetic, facts as real, and cultural facts as less real, we sometimes hear that we now know that race “is not real.” But natural facts are often less real than cultural facts—like money or education. Facts of nature are usually not very important to human existence any more: that has been the trajectory of human evolution for the last few million years—to create our environments and realities. Indeed, as a unit of nature/culture, race can be a crucial determinant of the attributes of one’s life. Race can be inscribed upon the body in remarkably subtle ways, for example, where apparently consistent racial differences in biomedical risk factors often turn out to be a result of the conditions of life.10 As it has been epigrammatically noted, your zip code is a better predictor of your health risks than your DNA code.
One can, of course, study the differentiation of human populations—how they specialize and survive and adapt to local conditions. They do it culturally, physiologically, and genetically, and usually all at the same time. Geneticists, however, sometimes try to bracket their own data and analyze the genetics separately, often in the naive belief that in so doing, their work is free of the cultural constraints and values entailed in working on people.
That is what the earliest students of the blood groups thought. With the discovery of the ABO system, geneticists in World War I tried to cluster the human species into natural groups and discern the true “races of mankind.” They concluded that there were three kinds of people: European, Intermediate, and Asio-African—or essentially “white” and “other.”11 They sampled more populations and analyzed more loci, but kept coming up with racial nonsense. Into the 1960s, in fact, one leading proponent of racial genetics could claim to have identified thirteen human races, including five in Europe and only one in Africa—hardly indicative of anything uncultural. It wasn’t until a decade later that the geneticists came to realize that their data didn’t actually reveal races at all; like the rest of the data on human biological diversity, they universally exposed polymorphic, clinal, and local variation.12
There is, of course, geographic differentiation of peoples and their gene pools. There are also discontinuities of greater or lesser extent, caused by features like mountain ranges or language differences, and there are statistical ways of discerning them. But none of them produce the fairly large and fairly discrete kinds of people that we encode as “races.” (One famous study, which was widely misrepresented, divided the human gene pool into between two and seventeen groups, the actual number being input rather than discovered. When it partitioned the human gene pool into five groups it retrieved essentially continental groupings, and into six groups it retrieved the continents and the Kalash people of Pakistan.13)
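That parenthetical deserves emphasis: the number of groups is an input, not a discovery. A toy sketch makes the point, with random numbers standing in for genotypes and scikit-learn’s KMeans standing in for the model-based clustering program actually used in such studies:

```python
# Clustering returns however many groups you ask it for. The data here
# are random, not real genotypes; KMeans is only a stand-in for programs
# like STRUCTURE, whose K parameter is likewise supplied by the user.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genotypes = rng.random((200, 50))   # 200 hypothetical individuals, 50 loci

for k in (2, 5, 6, 17):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(genotypes)
    print(f"asked for {k} clusters, got {len(set(labels))}")
# The algorithm obliges every time; retrieving five continental clusters
# at K = 5 is not evidence that five is the "true" number of groups.
```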
The recognition, in the third quarter of the twentieth century, that microevolutionary taxonomy did not describe any significant biological component of the human species (our species has a very different empirical structure) was paralleled by new insights in macroevolutionary human taxonomy. By the 1970s, paleontology had collapsed many of the old, weird genera (like Plesianthropus, Pithecanthropus, Telanthropus, and Zinjanthropus) into just two: Homo and Australopithecus. (Paranthropus would later be resurrected for the “robust australopithecines.”)
The historian Robert Proctor has observed that the microevolutionary and macroevolutionary taxonomies are intertwined, for the practice “is ultimately a moral choice. . . . As cultural creatures, we have the capacity to determine whom we will include or exclude as part of us.”14 The proliferation of taxa below and above the species level constructs a narrow channel for entry into “us-ness.”
Milford Wolpoff and Rachel Caspari have called attention to the links between essentialist understandings of prehistory and essentialist understandings of modern human diversity. “Essentialism” is a term that does a lot of work in the philosophy of science, and in the present context it means establishing biological groups on the basis of their members possessing one or a few key features. Any specimen or person lacking the crucial feature must therefore be accommodated by establishing a new category. This practice, they argue, promotes the proliferation of pigeonholes, which in turn easily become tree branches, and give the illusion that they can be linked into a phylogenetic history.15 In short, the process of biological reification above the species level is connected to biological reification below it.
The Neanderthals hold a special place—biologically and mythologically—in the knowledge of who we are and where we came from.16 The discovery of Neanderthals in the nineteenth century pointed to a deep history of “otherness” in Europe, of people who might have been victims, variants, or even ancestors of modern-day Europeans.17 Their differences from us are fairly small. Their brains were the same size as ours, but the heads that contained them were low and long. You can find people today with brow ridges or sloping foreheads or weak chins or long heads or large, narrow faces or projecting midfaces or large jaws with more than enough room for wisdom teeth. But you don’t find those features together, or quite so extreme, in anyone living today.
What, then, is our relation to the Neanderthals? Are “they” the odd-looking people whom “we”—that is to say, lanky, round-headed, anatomically modern Homo sapiens—dispossessed and exterminated? That was an explanation that made sense to nineteenth-century Europeans, who eagerly imagined their own colonial ambitions and barbaric exterminations stretching back into the dim past.18 Or did “they” represent humans in a state of pure nature, their lives “solitary, poore, nasty, brutish and short” as Thomas Hobbes and his Enlightenment successors portrayed the forerunners—and shadow—of civil society? Perhaps “they” were neither primordial victims nor primordial forebears, but simply freaks of nature—deformed, maybe by accident of birth or circumstance of life, and interesting because they are weird and pathological.19
The truth is likely to be “a little of each,” since the alternatives were never mutually exclusive. Their bones show lots of evidence of healed fractures; their teeth are worn as if they were being used as tools; and their muscular development was strikingly asymmetrical. Whatever they did, it was rigorous, it was cultural, and it was humane (at least, they took care of friends with broken arms better than chimpanzees do). They also tended to get a lot more exercise on one side than the other. They were replaced in the fossil record of Europe by less stocky people like you and me, who had chins and foreheads. And yes, they were uncivilized. They sometimes buried their dead, but never sent any grave goods along with the deceased for the journey. They didn’t build anything, or at least anything lasting or recognizable. If they decorated themselves, or had any aesthetic sensibility at all, it was rudimentary at best.
The imaginary encounter between a human and a Neanderthal has been the subject of a wide range of literary efforts, from Jack London (Before Adam) and William Golding (The Inheritors) to the contemporary novelists Jean Auel (The Clan of the Cave Bear) and Robert J. Sawyer (Hominids). What stimulates the imagination is the encounter with otherness; after all, it was only in 1537, in a papal bull called Sublimis Deus, that Pope Paul III officially declared that Native Americans were actually rational beings with souls. So what would it be like to encounter someone who was so different from you that they might possibly be considered not really human?
Of course, we don’t know—but we do know that our answers to the seemingly natural question “Human or not?” are strongly conditioned culturally. Where naturalists of the eighteenth century appreciated that the interfertility of living peoples everywhere indicated that we were all one species, slavers of the nineteenth century worked hard to inflict a subhuman condition upon their victims, and then to read that as an indication of their subhuman natures. By the twentieth century, reactionary geneticists tried to show that there might be hidden debilitating effects of interracial matings.20 As late as the 1950s, a right-wing botanical geneticist named Reginald R. Ruggles Gates would argue that since plants are profligate outside their recognized species boundaries, human interfertility should not be a criterion for placing us all in the same species. And in 1962, a physical anthropologist named Carleton Coon sacrificed his career and reputation on a book purporting to show that whites had become human 200,000 years before blacks had.21 Of course these reactionary works were all influenced by the politics of the age, which is exactly the point.
The politics are more subtle, but Neanderthals have been shuttled back and forth across the boundary of humanity over the years. They have had gender roles projected upon them, been portrayed as cannibals and as flower children,22 and even been caught up in bioethics discussions—as a geneticist recently glibly called for “an extraordinarily adventurous woman” to carry a cloned Neanderthal to term.23 The Neanderthals are a little bit different physically, and a little bit different technologically, but of course we don’t know what they were like mentally or socially, and fossils can’t mate. So, do we expand the category “human” to include them (as a subspecies, Homo sapiens neanderthalensis—in contrast to our own, Homo sapiens sapiens), or do we restrict “human” solely to people in our contemporary, modern frame of reference (and call them Homo neanderthalensis)? In the 1980s the genetic data were taken to count strongly in favor of the latter; today they count strongly in favor of the former.24 What the genetic data will strongly show thirty years hence is anybody’s guess. It does seem as though the final call on that question is underdetermined by the science, genetic or anatomical. That is because the issue, the boundary of our species, the fence around humanity, is constructed from nature/culture.
The greatest falsehood of the imaginary encounter between a human and a Neanderthal is that they would be classifying us the way we classify us. As if, after emerging from your time machine in 70,000 B.C. and chancing upon a band of early humans, they would greet you with “Hey, it’s another one of us forehead-and-chin guys! Come on and sit a while! You like roast mammoth?”
But you don’t hang out with people who have the same-shaped skull as you. Nobody does. Actually, of course, they would probably evaluate you the way people always have and probably always will—they would look to see whether you could communicate and behave appropriately, present yourself appropriately, and share basic ideas and values with them, by which they could infer that your behavior is more or less predictable. Since you would have no idea how to communicate or behave, you would probably seem pretty weird, if not threatening. The idea that they would evaluate you based on your chin and forehead is an ethnocentric conceit, and to them you would probably be at least as different as a Neanderthal would be. It’s hard not to think of early modern humans as a cohesive unit, different from Neanderthals and aware of it, and behaving accordingly. But it is probably a mistake to constrain our understandings of Pleistocene peoples by encoding our own cultural biases into them.
One important cultural bias involves the fallacy of reification. Why assume that the Neanderthals were a single coherent group, simply because their technologies were similar and their bodies were similar? Again, if they were like other cultural groups—like us, that is—then they probably exploited local resources differently across space and time, communicated differently, and acted differently. The idea that they were somehow a single cultural unit, because we identify them as such skeletally, is actually a bit of a stretch. Consider the Aryans: Once upon a time there was a philologist at Oxford named Max Mueller. He mastered Sanskrit and ancient Indian scriptures and wrote popular works about the early Indian nobility called Aryas, and inferred that they were the original speakers of the ancestral Indo-European language, which he called Aryan. And pretty soon his acolytes were not only talking about the Aryan-speakers but about the people doing the speaking—their attributes, both cultural and physical. Toward the end of his life he famously chastised those followers of his who were so aggressively reifying the Aryans.
I have declared again and again that if I say Aryans, I mean neither blood nor bones nor hair nor skull; I mean simply those who speak an Aryan language. . . . I commit myself to no anatomical characteristics. . . . To me an ethnologist who speaks of Aryan race, Aryan blood, Aryan eyes and hair, is as great a sinner as a linguist who speaks of a dolichocephalic dictionary or a brachycephalic grammar. To me it is worse than a Babylonian confusion of tongues—it is downright theft.25
The point is that he appreciated that he was talking about a deduced language, and although someone had to be speaking it, and they had to look like something, those deductions were considerably removed from the data, which were simply inferences about an old language family that he was calling Aryan. And we all know where that went a couple of generations later. So anthropologists are wary of reifying peoples.
Yet we impose modern cultural ideas not only upon the life of Neanderthals, but upon their death, too. The most frequent question we ask about the Neanderthals is, Why did they become extinct? Why didn’t they make it, as we did?26 As framed, the question invites you to identify the flaw in Neanderthals, why they missed out on “The preservation of favoured races in the struggle for life,” as Darwin’s subtitle to The Origin of Species had it. What, in short, was wrong with them? We have quite a list of candidate flaws: too dumb, too uncommunicative, too carnivorous, too cold adapted, too pacifist, too conservative, just too damn ugly. And yet we don’t ask that question about other former human groups. What was wrong with the Hittites? What was wrong with the Sumerians? What was wrong with the Olmec? If they had empires, their empires rose and fell. But we see the people themselves as part of the ebb and flow of human bio-culturally constituted social units—assuming identities, having identities imposed upon them, leaving their graves and objects behind, and leaving behind relatives with other identities.
Descent and relatedness are bio-cultural, and are aspects of a unifying theory, kinship—which is a universally mythologized biology. The status of the skeletal remains from Liang Bua (“Homo floresiensis”) is disputed, but why should it be particularly important to the science of human evolution if there was once an island of isolated, late-surviving primitive hominids in Indonesia?27 What would they actually change our ideas of? The “hobbits” would be interesting as newfound cousins, as a part of our narrative of deep kinship and descent. With the “hobbits” the myth is only slightly different, at best; but the new mythologies of Homo floresiensis have far surpassed their scientific value. And thus, even as the Neanderthals are being progressively demythologized in the old ways, new mythologies of human origins are taking form.
The problem of imposing macroevolutionary taxonomic thought upon human microevolution is seen in modern controversies about the “Denisovans.” Who were the Denisovans? A race of mighty Ice Age hunters, who traversed the great frozen steppes with steely resolve, looked the great woolly mammoths straight in the eye, and thrived by their wits and cunning in those dark, primordial, and savage times.
Actually, they were a finger bone and a couple of teeth, dated to about 50,000 years ago, from a single stratigraphic layer in a cave in Siberia. And with the aid of high technology and low theory, we have learned a little about them. Initially, the mitochondrial DNA isolated from the finger bone indicated that the finger bone was genetically distinct from both Neanderthal finger bones and modern human finger bones.28 Thus, the finger bone quickly became a genome, a body, a gene pool, and a population: the Denisovans. Soon thereafter, the nuclear DNA of said digit was sequenced, and it showed the Denisovans to be a divergent offshoot of the Neanderthals.
So far, so good. There’s no reason why a 50,000-year-old hominid from Siberia should closely match a human or a Neanderthal, while being very generally similar to both. Nor is there a reason why mitochondrial and nuclear DNA results should perfectly coincide, since they are transmitted differently. After all, nuclear DNA is transmitted biparentally via the chromosomes. Mitochondrial DNA (mtDNA) is transmitted only maternally. This means that you are chromosomally equally closely related to your mother and father; but mitochondrially a clone of your mother and unrelated to your father. Moreover, as noted in chapter 1, it means that three generations back, you are a mitochondrial clone of one of your eight great-grandparents, and unrelated to the other seven. Although mtDNA is high tech, it is not tracking ancestry in a commonsensical or normative genetic sense.
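The arithmetic behind that contrast is simple enough to make explicit: chromosomal ancestors double with every generation back, while the mitochondrial lineage stays single. A minimal sketch:

```python
# Chromosomal ancestors double each generation; mtDNA tracks one lineage.
for n in range(1, 6):
    slots = 2 ** n
    print(f"{n} generations back: {slots:2d} ancestors, "
          f"of whom mtDNA tracks 1 ({1 / slots:.1%})")
# Three generations back you have eight great-grandparents; your mtDNA
# is a record of exactly one of them, and silent about the other seven.
```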
But what happens when we compare the DNA of the Denisovans to that of different modern human groups, and use the unique DNA sites to ask our computers to draw trees? Then we discover that although many peoples have a tiny fraction of similarity to Denisova, the people with the greatest similarity are not Asians, but Melanesians, from Papua New Guinea. In fact, geographically, to get from Siberia (where the finger bone is from) to New Guinea (where the greatest genetic similarities to the finger bone are found today), you have to go through a lot of peoples who have no detectable genetic similarities at all to the Denisovan finger bone.29
Figure 3. Inferred cladistic relationships and patterns of gene flow among humans, Denisovans, and Neanderthals (after Prüfer et al. 2014).
Then they sequenced a toe bone, from the same stratum but a little bit below the finger bone, and found it to cluster with Neanderthals, not with either the finger bone or modern humans.30
And then they sequenced some DNA from a site in Spain, some hundreds of thousands of years earlier than Denisova, and thousands of miles away, and identified some intimate similarities to Denisova.31 Even more recently, Denisovan DNA has been suggested as the source of genetic adaptations to high altitude in modern Tibetans.32
In a cladistic, taxonomic framework, this is very difficult to make sense of (fig. 3).
The diagrams start out with three branches, leading to modern humans, Denisovans, and Neanderthals, respectively. Then the human branch has to start sub-branching, and some of the sub-branches have to connect to the Denisovans. Then things go back and forth to the Neanderthals, and before you know it, you’ve got not a tree, but a trellis or rhizome, or capillary system. Your mistake was to think that the history you were trying to reconstruct was a tree in the first place.
What is going on over the last few hundred thousand years of human evolution is microevolution, and is thus not dendritic or tree-like. It’s a big mystery if you take the trees of similarity to be trees of taxonomic divergence, where groups with names are reified as units of biology. But once we realize that the named human groups are bio-cultural entities, and the Denisovans are reifications, we can reframe the question, and ask why we see this genetic hodgepodge. The answer, of course, is that we are dealing with mobile groups of hunter-gatherers in space and time, demographically complex and genetically connected; and their genetic relationships are not the branches of a tree, but a bowl of ramen noodles.
Some people, and some groups of people, are more genetically similar than others, based on their proximity in space and time. And indeed one can study that, and come up with “genetic distances” and build trees from them. And they can be informative, and can answer intelligent questions put to them. But population geneticists have also been known to draw trees clustering all kinds of human groups. A widely publicized study from 1988 drew a genetic tree that linked together the genomes of national (political) categories (Ethiopian, Iranian, Korean), linguistic categories (Bantu, Uralic, Nilo-Saharan), ethnic categories (Khmer, Eskimo, Ainu), and broad geographic categories (West African, Central Amerindian, European).33 But these are neither comparable nor natural units. An ambitious population geneticist could cluster Cardinal fans, Blue Jay fans, and Tiger fans, and get a tree, and yet would hopefully know that the tree had no biological meaning because the taxa aren’t natural units, like cardinals, blue jays, and tigers.
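The baseball example can be made concrete: tree-building software will return a tree for any distance matrix whatsoever. A minimal sketch, with invented “distances” and SciPy’s standard hierarchical clustering standing in for the genetic-distance methods of that era:

```python
# A tree can be built from any distance matrix, natural units or not.
# The pairwise "distances" among fan bases below are pure invention.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

labels = ["Cardinal fans", "Blue Jay fans", "Tiger fans"]
dist = np.array([[0.0, 0.3, 0.5],
                 [0.3, 0.0, 0.4],
                 [0.5, 0.4, 0.0]])

tree = linkage(squareform(dist), method="average")   # UPGMA-style tree
print(dendrogram(tree, labels=labels, no_plot=True)["ivl"])
# Out comes a tidy tree, which says nothing about whether the clustered
# categories were comparable, or natural, units to begin with.
```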
Likewise, the idea that “the Iranians” or “the West Africans” or “the Ainu” constitute some kind of natural, taxonomic unit—much less “the Denisovans”—is a very misleadingly cultural way to think about human microevolution. Biologists generally refer to “reticulate evolution” as a means of describing the striking discordances between macroevolutionary species trees and small bits of genomes that may be distributed strangely, because of crossing between species (especially in plants), or viruses that pick up DNA from one species and put it into another. So in one sense, it’s old news.34 The problem in humans, though, is especially complicated by the pseudo-taxonomic status of human groups. This is not a case of two species that are ordinarily distinct occasionally swapping genetic bits; but rather, two groups of people intimately connected by history and behavior, in spite of having different identities or names. It’s the basic error of mistaking bio-culturally constituted groups for biologically constituted groups.
Good statistics can never correct for bad epistemology.
But there is an interesting mathematical argument to show just how biologically meaningless it is, indeed, to try to link these DNA sequences from tens of thousands of years ago to one another and to particular living people. There are over 7 billion people alive now. They each have two parents. Yet a generation ago, there weren’t 14 billion people on earth. Why not? Because most of those parents are common ancestors. Two siblings don’t have four parents; they have two parents. Two first cousins don’t have eight total grandparents; they have six. And that is why fifty generations ago, say, in the Middle Ages, you had 2^50 ancestors—round about a quadrillion, vastly more than the number of people alive back then, or the number of people that have ever lived.
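The doubling arithmetic is easy to verify:

```python
# Ancestor "slots" double every generation back. Fifty generations is
# roughly the Middle Ages, at ~25 years per generation.
generations = 50
print(f"{2 ** generations:,}")   # 1,125,899,906,842,624 -- about 1.1 quadrillion
# Vastly more slots than the tens of millions of people alive then,
# so the same individuals must fill enormous numbers of slots.
```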
So how do we squeeze the huge number of ancestors that we each have today into the few tens of millions or so people that were alive back then? The answer is that nearly all of them are common ancestors. That is how we are all inbred, and all related to one another—because the vast majority of our ancestors are (1) recurring many times in our own pedigrees; and (2) held in common with everyone else we know.
What we are describing here is what demographers and genealogists call pedigree collapse.35 What it means is that, even in fairly recent historic times, nearly everyone is related to (that is, shares common ancestors with) everybody else.36 And this would only be exacerbated by population crashes due to plagues, as well as by the universal human tendency to mate non-randomly, with people who match them linguistically and ethnically.
Now the interesting mathematical question: How far back in time would you have to go in order to essentially statistically guarantee that everyone alive today has common ancestors? And the answer: Surprisingly recently, only 5,000 or 10,000 years.37
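A toy simulation conveys why the answer is so recent. This sketch is merely in the spirit of Joseph Chang’s random-mating model, not the published demographic analyses; the population size and random seed are arbitrary, and real populations are neither constant nor randomly mating:

```python
# Toy version of the common-ancestor calculation: a constant population
# of N individuals, each assigned two random parents per generation.
# We trace everyone's ancestor sets backward until they all intersect.
import math
import random

random.seed(1)
N = 500
anc = [{i} for i in range(N)]   # generation-0 ancestor set of each living person

g = 0
while not set.intersection(*anc):
    g += 1
    # each individual in the nearer generation gets two random parents
    parents = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
    anc = [{p for a in s for p in parents[a]} for s in anc]

print(f"everyone shares a common ancestor {g} generations back "
      f"(theory predicts about log2(N) = {math.log2(N):.1f})")
# The waiting time scales with the logarithm of population size, which
# is why the real-world answer is thousands, not millions, of years.
```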
So Og, back in the Upper Pleistocene of 20,000 years ago, is an ancestor either of nobody alive today (his whole family might have been killed in an avalanche, after all) or of everybody alive today. Of course, certain particular ancestors recur more in some people’s genealogies than in other people’s genealogies. But nobody living 20,000 years ago was the lineal ancestor of only some people alive today.
So where does that leave our friends, the Denisovans from Siberia 50,000 years ago, in connection with living human peoples? Here, the population genetics seems to be at odds with the molecular genetics. We find a few percentage points of similarity when we isolate the unique nucleotides of the Denisovan genome and match them up against modern people—and we find them more commonly in Melanesia than in Asia.
The solution to this apparent conundrum is probably that genetic descent is meaningless that far back in time. The connection between people 50,000 years ago and modern peoples is a series of bubbles percolating out of a diverse ancestral brew of human gene pools, all connected to one another in various ways, and to varying extents.38 If you biologically reify modern human groups, and imagine the ancient groups to be separate taxonomic entities as well, you can get answers. But those answers probably have little or no biological meaning in the context of the descent of modern people from the people of 50,000 years ago.
So what genealogical sense do we make of being told by recreational genetic testing companies that your DNA is, say, 2.4 percent Neanderthal and 3.1 percent Denisovan? Two percent may sound like a little bit, only two in a hundred, but remember that 50,000 years ago you had an astronomically high number of lineal ancestors, crammed into the bodies of a far smaller number of real people, who were everybody else’s physical ancestors as well. The differences among living peoples in terms of their descent from those people of 50,000 years ago are quantitative, not qualitative. Those differences reside in how many times in each person’s pedigree any particular ancient ancestor recurs; we are all descended from the same ancestors, but some more than others. Consequently, we don’t need to imagine marauding bands of Ice Age rapists—or even a genetically equivalent, but nicer image—but only a loose network of diverse human gene pools, connected over time and through space, of which we all partake today, if slightly unevenly.
There are many ways to think about descent and relatedness, and none of them is objective and uncultural. And although we usually think of the cultural study of kinship as applying to families and clans and totemic ancestors, those generalizations apply as well to Homo sapiens and its descent from Australopithecus afarensis and its relation to Pan troglodytes. Some ways of thinking about it are more or less constrained by the available data: skeletal, archaeological, primatological, ethnographic, historical. The engagement with our ancestors and relatives is necessarily accompanied by a reflexivity that makes this science different from, and often unfamiliar to, scientists trained in other fields.
We often hold out the hope that genomics will answer the important evolutionary questions of the age. But we do that for cultural reasons, the result of decades of propaganda for the Human Genome Project, a phenomenon that even has a name: geno-hype.39 Genetics is the scientific study of heredity, and needs to be aggressively differentiated from the idea that heredity is the most important factor in one’s life, although those two ideas have been widely conflated, often by geneticists themselves, over the course of the twentieth century. Where genetics is problem driven (“How does heredity work?”), genomics is often driven by financial and technological concerns. The Human Genome Project was begun on the promise of curing genetic disease, but its primary advances have been in diagnostics and forensics. In evolutionary anthropology, however, genomics cannot answer the most basic question: Why aren’t we apes? Nor will it ever be able to; for the answer to that question lies as much within the social/historical realm as within the genomic/biological realm.
The contested boundary of the human species itself allows us to use it as a basis for understanding the unique contribution of anthropology to the study of the human condition: that human biological facts are never natural facts, but are natural/cultural facts. That is to say, they are not discovered, but are the result of a complex negotiation among what seems to be “out there”; what ideas we bring to the endeavor of describing, understanding, explaining it; and our own perceived rational interests in how to present and utilize it. Misrepresenting facts of human biology as natural facts has historically been the source of ostensibly scientific justifications for conservative political policies.
Consequently, the principal struggle of modern evolutionary theory is to distance itself from odious political views that invoke evolution to claim scientific legitimacy. According to a conservative scholar in 2007, “Conservatives need Charles Darwin. . . . The intellectual vitality of conservatism in the 21st century will depend on the success of conservatives in appealing to advances in the biology of human nature as confirming conservative thought.” Here, “human nature” means little more than imaginary organic limits to social progress.
Those imaginary limits are often more visible to non-scientists. The Origin of Species, written in 1859, can still be read without grimacing, specifically because Darwin avoided talking about people in it. And when he finally got around to talking about people, over a decade later, in The Descent of Man, we can see now that it is filled with sexist Victorian claptrap. Indeed, somewhat later, in 1922, when politician William Jennings Bryan came out against evolution in an op-ed in the New York Times, he specifically highlighted the sexism he identified in Darwin’s later work:
Darwin explains that man’s mind became superior to woman’s because, among our brute ancestors, the males fought for the females and thus strengthened their minds. If he had lived until now, he would not have felt it necessary to make so ridiculous an explanation, because woman’s mind is not now believed to be inferior to man’s.40
This is particularly ironic in light of the scandal that enveloped Larry Summers, the president of Harvard, almost a century later, after he addressed the question of why there were so few women on the senior science faculty at institutions like his own. His answer was that perhaps they lacked the intrinsic aptitude at the high end. Maybe it’s true; maybe there are occult mental forces found more commonly in men that dispose them toward tenure in science. But how would we know, unless we examine the institutionalized practice of science faculty tenuring at Harvard and elsewhere? Maybe the problem is the pipeline itself, rather than its contents. But by invoking imaginary genetic limitations, Summers was suggesting that the problem was not science’s or Harvard’s, but women’s. So if the phenomenon is simply a part of the biology of Homo sapiens, then there is really no problem of social or institutional discrimination worth worrying about or examining—a highly bio-political dissimulation masquerading as a scientific fact of human nature. And it’s been wrong every other time it’s been invoked, so the chances that it’s right this time are probably pretty small.
These days, it is not uncommon to hear evolution invoked in support of a range of bio-political claims, of differing degrees of loathsomeness: that there are no truly selfless acts; that it is natural to hate people different from you; that it is natural for men to dig young babes and women to dig sugar daddies; that rape is not a crime of power using sex as a weapon, but just misguided reproductive effort; and that there are different kinds of people and they have different intellectual properties, just deal with it.41 If you want to know why there is creationism, it’s because this crap passes for evolution. And believe me, I have only scratched the surface.
Certainly the human gene pool is still being tweaked in various ways, but it is always easy to overvalue the tweaking.42 After all, whatever alleles exist for sharp vision and disease resistance are not nearly as important in the modern world as glasses and antibiotics are. The best examples of recent genetic adaptations are not at all “the spread of a good allele.” Sickle-cell anemia is a disease, after all, one that changes your red blood cells from the shape of a bialy to that of a croissant. The allele is “good” (in terms of malaria resistance), but only if you have exactly one copy of it, not zero and not two. But the children of people with one copy of the allele don’t necessarily have one copy of the allele themselves: they may have zero or two, with neither genotype being optimal. Consequently it is polymorphic everywhere it is found—there is no population in which everyone has the allele. We don’t know nearly as much about lactose tolerance, the allele that allows you to eat quiche without getting explosive diarrhea, but it is also polymorphic everywhere. There are no populations composed entirely of quiche eaters. Indeed, the story we have for the main lactase-persistence allele, spreading from southeast Europe to northwest Europe a few thousand years ago on the strength of the benefits of being able to drink milk, has to be an oversimplification, since the most-lactose-tolerant peoples are not those who took up dairying first (in southeast Europe), but those who took up dairying last, in northwest Europe.43 And the story of an adaptation in human brain alleles, published in Science in 2005, was little more than a racist genomic myth.44 It’s enough to make you wonder what motivates scientists to keep looking so hard for evidence of recent favorable genetic adaptations in human populations; just as scientists of generations past looked so hard for reliable evidence linking brain size to intelligence. If it’s there, it can’t be really important, because we have been looking really hard for a really long time without finding it.
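To put numbers on the sickle-cell logic: heterozygote advantage is the textbook case of balancing selection, holding an allele at an intermediate equilibrium frequency rather than driving it to fixation. The fitness costs in this sketch are illustrative placeholders, not measured values:

```python
# Textbook heterozygote-advantage equilibrium. Genotype fitnesses:
# AA = 1 - s (malaria cost), AS = 1, SS = 1 - t (anemia cost).
# The values of s and t here are hypothetical, for illustration only.
s, t = 0.1, 0.8
q_hat = s / (s + t)   # equilibrium frequency of the sickle allele S
print(f"equilibrium sickle-allele frequency: {q_hat:.2f}")
# Selection parks the allele near 0.11: always present, never universal,
# which is why it stays polymorphic everywhere it is found.
```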
Actually, the human brain, mind, and gene pool are generally remarkably unadapted and plastic, which makes a good deal of sense, given (1) the reliance on adaptability, rather than adaptation, which is the evolutionary hallmark of our lineage;45 (2) the scope and complexity of the environments to which early human populations were adapting;46 and (3) the demographics of early human bands, which would have favored the action of genetic drift,47 opposing the precision engineering of anything, including the brain.
I have argued here that a comprehension of the science of human origins incorporates multiple elements: first, the data and an understanding of how they are produced; second, the connection between how we understand our ancestry (phylogeny) and how we understand ourselves (diversity); third, an acknowledgment of the cultural aspects of human evolutionary theory, a repudiation of its repressive invocations, and a focus on its progressive implications; and fourth, a reflexive engagement with the political, theological, and moral dimensions of origin narratives generally, and eventually with those of the creationists, who we hope will come to find evolution less culturally threatening, and thus less rejectable, than they have found it over recent decades.