Annie Oakley was proud of her unveneered honesty. After real life as a star in Buffalo Bill’s Wild West Show, she became the rootin’, tootin’, sharpshootin’ tomboy-heroine of Irving Berlin’s 1946 musical, Annie Get Your Gun. In one of the show’s rumbustious solos, she derides ‘learning’ and advocates ‘doin’ what comes natur’lly’ instead. The rewards for natural behaviour seem, from the lyrics, to be mainly amorous—in the pale moonlight, behind a tree—but also practical in diverse ways, from paying bills to raising a family. Despite her professed lack of education, Annie had grasped two basic assumptions of Western civilization: that nature is different from culture, and that ‘learning’ transforms the one into the other. What comes naturally is instinctive. Behaviour we have to learn is culture. Annie exaggerated, but her sense of the difference was a kind of common sense.
Eating is natural, in Annie Oakley’s sense of the word, and we all do it, irrespective of the culture that surrounds us. But our diversity of culture shows in what we eat, how we cook and dress it, whom we eat it with and where, what technology we wield to get it to our mouths and stomachs, and what code of table manners, if any, we apply. Sex is natural, but whom we admit or prohibit as partners and the rites with which we surround lovemaking are the results of our cultural circumstances. It is in our nature to seek shelter—but our building practices diverge dazzlingly from time to time and place to place. You can tell a lot about people’s social milieu from the way they walk or stand or sit, because we learn to modify these instinctive behaviours according to the expectations of society and the instruction and example of our elders. Conflict and peacemaking are natural activities, but the ways in which we do them—the scale and spirit of our violence, the destructiveness and duration of our wars, our reasons for choosing allies and enemies, our motives for submitting or negotiating—belong to the realm of culture. Everyone, by nature, is capable of thinking some of the same thoughts, but culture stifles some and stimulates others. In most cases our individual idiosyncrasies are the results, no doubt, of inborn peculiarities of temperament and taste—but we defer to the society that surrounds us when we select which of them we practise and which we suppress. Most human behaviour is modified by acquired characteristics, stimuli, and constraints, such as tradition, fashion, ideology, mimesis, peer pressure, and law.
Even sleep has a history, with conventions about how and when to do it varying between cultures. 1 In Europe in antiquity and the middle ages, people usually had two nightly sleeps, separated by a period of wakefulness. The Pirahã of Amazonia’s Maici valley, according to the brilliant renegade missionary who knows them best, hardly sleep at all. They bid each other farewell by warning ‘don’t sleep—there are snakes’. They say this, Daniel Everett explains,
for two reasons. First, they believe that by sleeping less they can ‘harden themselves,’ a value they all share. Second, they know that danger is all around them in the jungle and that sleeping soundly can leave one defenseless from attack by any of the numerous predators around the village. The Pirahãs laugh and talk a good part of the night. They don’t sleep much at one time. Rarely have I heard the village completely quiet at night or noticed someone sleeping for several hours straight. 2
Gestures and grimaces make the point about the primacy of culture with a kind of dumb eloquence. Those that express emotion seem genuinely instinctive, because they are common to every culture—or nearly so—and are identical or very similar in many animals, especially, as Darwin noted, among primates. The masks of comedy and tragedy, for instance, are recognizable all over the world as stylizations of smiles and sadness. As early as 1839, when he was only beginning to formulate the theory of evolution, Darwin began to think that all our facial expressions are instinctive, after observing the infant behaviour of his first child. 3 Some universally meaningful motions of other parts of the body, perhaps, belong in the same category, such as the involuntary spasms with which we, like other apes, shield our eyes or hide our faces when we do not want our emotions betrayed, or the way we clasp foreheads and rub chins in pensive moods or surprised moments. As evidence of the instinctive ways we register emotion physically, Darwin listed, among others, astonishment, signified by dilation of the eyes and raising of the eyebrows; blushing for shame; tensing and clenching in defiance; snarling in anger; pouting in indifference; sneering in contempt; shuddering with fear.
On the other hand, a further repertoire of gesture—especially in areas beyond emotion—seems to be peculiar to some groups of people, who learn bits of it from each other. Margaret Mead was the outstanding pioneer of the relevant research. Nowadays she is chiefly notorious as the good and gullible anthropologist whom her Samoan study-subjects in the 1920s allegedly misled into admiration for and advocacy of extremes of sexual promiscuity they never practised. 4 Mead’s book about them helped revolutionize Western sexual behaviour by identifying sex with liberation. 5 But other work she did was more helpful and has endured better. She thought gestures vary as much as spoken language from culture to culture, not necessarily because they are intrinsically different in different circumstances but because they acquire nuances of interpretation in contrasting cultural contexts. She helped to inspire fellow anthropologists to compile a great lexicon of over a quarter of a million non-verbal expressions from around the world—creating an impression of diversity so complex as to defy, though not quite dispel, claims that gestures are universal. Mead’s pupil, Ray Birdwhistell, devoted much of his life to compiling the evidence, pointing out, for instance, that whereas an Arab might communicate appreciation of a passing girl’s beauty by rubbing his beard, a South Italian will convey the same judgement by pulling the lobe of his right ear, while a North American might make suggestive wiggling motions with his hands or kiss his fingertips. 6
Even some of the responses we make unthinkingly and that seem to us almost automatic vary from place to place. Not every group identifies nodding with assent or head-shaking with denial. Kissing is nearly universal, but in some cultures nose-rubbing is a preferred way of signalling or initiating intimacy. Shrugging is an almost universal signifier of indifference, but not quite; so it might be cultural or anomalously instinctive. A few years ago HSBC—the banking conglomerate that descends from the old Hongkong and Shanghai Banking Corporation—mounted a publicity campaign to convince potential depositors that its employees possessed critical local knowledge. The advertisements showed gesticulators committing terrible bêtises as a result of making one culture’s gestures in another culture in which they bore distinctive meanings. A diner in Brazil innocently made a sign that, almost everywhere else, would signify approval, but in Brazil amounts to a warning against the evil eye, or an accusation of cuckoldry. In parts of Greece, according to the same campaign, an open palm signifies repugnance rather than the more usual friendliness or congratulation. Overall, therefore, gesture and grimace confirm the effective dichotomy of nature and culture.
Two years after Annie Oakley sang, the fashionable academic pundit, Alfred Kroeber, pronounced the same principle in language more feeble and less catchy than hers, but with the same insistence on the essential difference between inherited and learned behaviour. ‘There are,’ he wrote, ‘certain properties of culture—such as transmissibility, high variability, cumulativeness, value standards, influence on individuals—which it is difficult to explain or see much significance in, strictly in terms of the organic composition of personalities and individuals.’ 7 The way the academic world was arranged bolstered the sense of the distinction. But doubts were already circulating. They have accumulated steadily ever since. Students of culture and biology have exchanged data and thoughts, bringing to light ever-accumulating commonalities that link their spheres. It is now possible to climb up the rickety scaffolding of minute monographs and recondite articles to a lofty, if insecure, level of analysis at which history and natural history are one. The rest of this chapter recounts the nineteenth-century effort to attain that level and restore biology and culture to each other.
* * *
Common sense was on both sides of the question. At a deep level, the same sort of intuition that made Annie Oakley separate culture from nature has inspired the reconvergence of history with natural history. As I have already insisted, the distinction between nature and culture is imperfect. We can do nothing inconsistent with our natures; to that extent, all behaviour is natural. Our propensities for teaching and learning are innate—part of the equipment evolution gives us and our genes encode. Since humans depend entirely on their parents or other elders during many years of nurture, ours is a species peculiarly adapted by nature to the transmission of culture.
The relationship, moreover, that binds culture and nature is mutual. They are linked in a kind of strange loop, in which each influences the other. I do not think, for instance, that anyone would hesitate to admit that human behaviour of kinds we classify as cultural modifies some aspects of what we generally regard as the natural world: just about every society we know of has modified the environment it inhabits, if only by winnowing selected species, managing forests and grazing, appropriating shelter, and diverting waterways. Farming fundamentally recrafts the land. Domestication produces new species. Overexploitation eliminates old ones. Urbanization remodels the natural environment unrecognizably. Medicine extinguishes some pathogens and encourages others. Some diseases are the results of lifestyle. It is hard to imagine the recent explosion in the incidence of asthma and allergies outside modern, urban societies.
Every time human culture strokes or scars the biosphere, new life-forms invade the eco-niches we open up. The ‘ecological revolution’ of the early modern era swapped biota across oceans and between continents, replacing an æons-old model of evolution, in which each continent evolved distinct plants and creatures, with a new, convergent history—a global environment, in which the same species inhabit comparable climates worldwide. Today, the same life-forms occur, the same crops grow, the same species thrive, the same creatures collaborate and compete, and the same micro-organisms live off them in similar climatic zones all over the planet. To some extent the ecological revolution happened without any input from human behaviour. Weeds colonized niches without help from conscious human agency; pests and pestilence spread in defiance of everything culture did to stop them. But without the initiatives of the explorers, travellers, conquerors, and colonizers who opened the routes other biota traversed, the whole process would never even have started. Nor could it have happened without the agency of planters and breeders who nurtured transmissions, founded gardens of acclimatization, grafted plants, and cross-bred animals. 8 Culture turns gardens into deserts and deserts into gardens. Nowadays, it even has measurably accelerating effects on climate change.
It is not surprising, therefore, that culture can transform aspects of human nature, too. Extreme examples are eugenics and, potentially, genetic modification. Some societies breed humans for what we might call ‘unnatural selection’—to suit cultural prejudices in favour of particular body-shapes or pigmentation, or particular levels or types of intelligence. Early in the fourth century bce, Plato’s recommendations for a perfect society included a recipe for constructing it out of perfect individuals. The best citizens should be encouraged to reproduce. The children of the dim and deformed should be exterminated to stop them from breeding. Shelved for over 2,000 years, the programme reappeared in nineteenth-century Europe and North America, where racism blamed heritable deficiencies for the supposed inferiority of non-whites, while a form of Darwinism suggested that the presumed advantages of natural selection might be helped along by human action. In 1865, Darwin’s cousin Francis Galton proposed that the selective control of fertility and marriage could perfect the human species by excising undesirable mental and moral qualities. By spending ‘a twentieth part’ of the effort breeders put into improving horses and cattle, he promised to breed ‘a galaxy of genius’.
Within a couple of decades, his suggestion became one of the orthodoxies of the age. In early Soviet Russia and parts of the United States during the same period, the right of marriage was denied to people officially classed as feeble-minded, criminal, and even (in some cases) alcoholic. By 1926, half the states of the USA compulsorily sterilized some of these people. Nazi Germany brought the eugenic idea to its logical conclusion: the best way to prevent undesirables from breeding was to massacre them. Anyone in a category the state deemed genetically inferior, including Jews, gypsies, and homosexuals, was liable to extermination. Meanwhile Hitler tried to perfect what he thought would be a ‘master race’ by means of experimental copulation between big, strong, blue-eyed, blond-haired, human guinea pigs.
Nazi excesses made eugenics unpopular for generations, but it was lawful in Sweden to sterilize mental patients as recently as 1975. Eugenic programming has resurfaced recently in some apologetics for abortion on the grounds that it disproportionately kills off the offspring of a criminally inclined, economically feckless underclass. 9 And banks of semen donated by men of allegedly special talent or prowess are widely available to mothers willing to shop for a genetically superior source of insemination. Genetic modification, meanwhile, gives us the opportunity, if we wish, to produce genetically engineered ‘designer babies’. The isolation of particular genes associable with various inherited characteristics makes it theoretically possible to filter undesirable variations out of the genetic material that goes into a baby at conception. We can replace natural selection with cultural priorities. Most readers—I hope and trust—will find the prospect repellent. But it could work. Just as eugenics might have helped modify the appearance and enhance the health of populations it produced, so genetic modification might eliminate the genes society condemns as undesirable, and produce populations of conformists.
In any case, an enormous amount of evidence shows that culture shapes human bodies and brains in other, subtler ways, morally neutral or benign. 10 Richard Nisbett and Dov Cohen, who published the results of one of the most famous experiments, at the University of Michigan, in 1996, selected subjects from different parts of the USA and measured their hormonal responses to insults. Southerners responded with much higher releases of cortisol and testosterone than Northerners. The researchers concluded that the peculiar value Southern culture attaches to honour has a physiological effect. 11 It is possible to interpret the data differently, but other, comparable experiments have tended to confirm the conclusion. 12 Some of the ways in which human bodies change over time or from place to place are the results of genetic isolation or relative isolation; some, like variations in pigmentation, originate as adaptations to environmental variations. Some experiences can trigger heritable changes in the relationship between genes, without affecting the structure of DNA: trauma, privation, and smothering mother-love can affect successive generations of humans and rats. 13 Plenty of physical differences are traceable to the impact of culture on human breeding habits.
A compelling illustration of culture reshaping bodies is that lactose intolerance virtually disappears among populations that practise dairying. Among those that cleared land to farm starchy staples such as yams, malaria flourished, and the so-called ‘sickle-cell’ trait, which protects against the disease, spread in consequence. In India, where strong traditional prejudices discourage interbreeding between castes, measurable genetic differences are among the results. 14 Cannibals in New Guinea have alleles that counter the brain-corroding prions that would otherwise madden feeders on human brains. 15 A gene frequent among the Yanomamo of Amazonia, who value war, is almost non-existent among the peace-loving San of the Kalahari, presumably because the two cultures privilege contrasting types of potential parents. 16 The apparent convergence of male and female body shapes during the twentieth century in Europe and North America seems to reflect cultural change—the critique of gender, the elimination of most forms of economic specialization by gender, and the correspondingly revised values people apply in choosing their mates. Women no longer demand physically conspicuous masculinity in potential husbands; men can see the advantage in selecting women physically well qualified for traditionally masculine roles. The extraordinary drop in fertility rates in highly prosperous economies is probably the result mainly of social and economic change (which affects habits of breeding and rates of contraception).
A further, more problematic, possible example of the effect of culture on bodies is brain size. Vulgar error sees big brains as the cause or a cause of the multiplicity and ambition of human achievements. But the simple-minded assumption that bigger is better does not necessarily apply to brains. Beyond a certain threshold the size of our brains makes little or no difference to the potential range of our abilities. There is no determining connexion between brain size and genius: Turgenev had a large brain, but Anatole France had one of the smallest ever measured. Men and women, on average, are equally clever, despite general differences in brain size. Large-brained primates, such as cebus monkeys and apes, outperform other species in tasks that humans generally regard as tests of intelligence, but no known test is unpolluted by human standards. 17 On the whole, the skill with which some apes adapt to human-style intelligence tests is astonishing, when one takes account of brain size and the chasm of understanding that has opened since the disappearance of our last common ancestor 6 million years ago. In categorization tasks, such as sorting a random assemblage of foods and objects of different shapes, colours, and substances—metal bells, wooden cubes, red grapes, green toys—into two piles, chimpanzees, despite their relatively smaller brains, perform similarly to human children of about three and a half to five years old. 18
Homo floresiensis—a diminutive species, which the media nicknamed ‘hobbit’—emerged from a dig in Indonesia in 2003. Despite having a brain comparable in size to a chimpanzee’s, floresiensis had a tool kit very like that of our own ancestors of about 40,000 years ago, whose brains were three times as big. 19 In a sense, therefore, Homo sapiens is encumbered with more brain than we need. Big brains are costly in energy terms—they need a lot of nourishment—and do not seem to deliver proportional advantages. So they could be evolutionary aberrations, with no specific evolutionary function, or they could be the result, rather than a cause, of culture.
One of the most eloquent and unremitting advocates of the primacy of biology thinks the latter. The Oxford Professor of Evolutionary Psychology, Robin Dunbar, is famous for two attractive theories: that we can only know 150 people well, and that language originated as a substitute for grooming when the size of hominid communities crossed a critical threshold. He has also argued that humans’ relatively big brains are a consequence of the same cultural change, towards large groups with consequently unwieldy amounts of information to manage. 20 The theory seems to need some reformulation, at least, since small brains, well organized, can handle as much data as big ones. 21 Nevertheless, the theory that brains grew as a consequence of the growth of human groups represents a remarkable concession to the primacy of culture. Over the last ten or fifteen thousand years, moreover, people who live in sedentary societies have shrunk—with somewhat feebler bodies and slightly smaller brains. That those changes in physique and brain-size are the consequence of sedentarism, with corresponding shifts in diet and patterns of labour, as people abandoned the mentally demanding complexities of foraging ways of life, is an irresistible presumption; we have made no discernible progress in intelligence over that period, and seem to have remained, on average, as good or bad at thinking as ever.
The power of culture shapes our bodies and grows our brains. Michael Tomasello, an anthropologist who heads the Max Planck Institute for Evolutionary Anthropology and is a renowned defender of the notion of human uniqueness, thinks that the appearance of distinctive human elements of cognition—and therefore the very emergence of humankind in the evolutionary record—happened too quickly to be the result of unaided biological evolution; it has to be explained by self-driving cultural changes. 22 Peter Richerson and Rob Boyd, in their work (to which we shall return) on environmental science and anthropology at the University of California’s Davis and Los Angeles campuses respectively, made disinterested efforts to sort cultural from biological influences on behaviour and decided that the categories were indistinguishable at the margins. They have speculated that ‘perhaps human nature itself is substantially a product’ of culture. 23
It makes sense, therefore—however paradoxical it seems—both to treat culture and nature as distinct subjects of study and at the same time to search for a level of analysis at which we can see how they interact. The next few pages tell the story of how that search began—unpromisingly, as it turned out, among advocates of biological and environmental determinism.
* * *
Even while the nineteenth-century bifurcation recorded in Chapter 1 above was under way, some thinkers—nearly all of whom were outside or on the margins of academic establishments—were striving for reconciliation. One of the greatest of these pioneers was, for a while, interned in a madhouse. In lectures he began to publish in 1830, when he was struggling with self-diagnosed insanity and the frustrations of a stagnating academic career, Auguste Comte predicted a new synthesis of scientific and humanistic thinking. He called it ‘sociology’ or ‘social science’. He was unsure, however, about how to frame or forge the new discipline he imagined. Active seekers of a synthesis almost always proposed one (or both) of two strategies or programmes: biological and environmental determinism. Logically, exponents argued, if individual behaviour can be predicted from the size or shape of one’s head or hands, or one’s ‘life lines’, or one’s skin colour, so can culture. If the climate or ecosystem determines what we do, it also determines what we pass on to our children.
Take biological determinism first. It has a long tradition behind it in the West, but Christian orthodoxy has tended to reject it on the grounds that it is incompatible with the free will God concedes to humans. Until the nineteenth century the contexts in which it flared conspicuously concerned slavery and monstrosity.
Slavery is a hard subject to contemplate without prejudice, because our own culture demonizes it. Most human societies, however, have regarded slavery—or some very similar system of forced labour—as entirely normal and morally unchallengeable. Most practitioners have not bothered to justify it, therefore. In classical Athens, however, Aristotle was aware of the contradiction between enforced servility and the values he espoused—such as the independent worth of every human being and the moral value of happiness. He formulated the world’s first justification of slavery: some people, he proposed, are inherently inferior and, for them, the best lot in life is to serve their betters. For instance, Aristotle argued, races inherently inferior to the Greeks could be plundered for slaves; or in wars caused by the resistance of natural inferiors to conquest, captives could be enslaved. In the course of developing the idea, the philosopher also formulated a doctrine of just war: some societies regarded war as normal, or even as an obligation of nature or an act of piety enjoined by the gods. Aristotle, however, regarded war as just if the victims of aggression were inferior people who ought to be ruled by their aggressors. This teaching, though it may sound repugnant to my contemporaries, at least made war a subject of moral scrutiny in the West for ever after, though that was little consolation to its victims.
In practice, Aristotle’s doctrine of slavery was ignored for centuries, because slavery was largely unquestioned and masters could admit, without prejudice to their own interests, that their slaves were equal to themselves, except in legal status. In general, slaves in medieval Latin Christendom were regarded like prisoners today: they had forfeited their freedom (and sometimes, implicitly, that of their descendants) in exchange for some benefit conferred by their masters, or by virtue of crimes against natural law, or by taking part in just war on the wrong side, or by surrendering in war to ransom their lives. Or else they were acquired by purchase on the assumption that their status was already resolved. But Aristotle’s argument became important from the sixteenth century onwards, when it supplied the basic moral authority for slavery whenever the justice of the institution was challenged in the West. As the Scots schoolman John Mair put it, justifying the enslavement of Native Americans in 1513, ‘some men are by nature slaves and others by nature free. And it is just…and fitting that one man should be master and another obey, for the quality of superiority is also inherent in the natural master.’ 24 Because anyone who was a slave had to be classified as inferior, the doctrine stimulated racism, and the victimization of people of particular ‘races’ as slaves. From the point of view of a slave-owning society, the notion that slavery was biologically encoded had an obvious advantage: it cut costs by enabling masters to breed slaves.
The argument from biological inferiority never monopolized the case for the heritability of slavery, but it tinged most versions of it, especially—in the eighteenth- and nineteenth-century West—the justification from the Bible story of Noah, who uttered a curse on a transgressor’s son: ‘a servant of servants shall he be to his brethren’. There was no biblical authority for identifying black skin as ‘the mark of Cain’, but self-interested exegesis made the association anyway. 25
As much as disputes about slavery, debates about monstrosity inspired biological determinism. Legendary monsters seem—to most people, I suppose—the products of over-active imagination; I suspect, however, that they are really evidence of human imaginative deficiencies—in particular, people’s inability to conceive of strangers in the same terms as themselves. 26 It is probably true that in most languages no term for ‘human’ exists to comprehend those outside the group. There is, as it were, no middle term between brother and other. The word that denotes outsiders is usually close or identical in meaning to ‘beast’, or ‘demon’, or some similarly pejorative term. The inclusive doctrine of humanity—our sense of species—is a relatively recent innovation in the way we think of each other. 27
In medieval Latin Christendom, the debate over monstrosity pitched two views against each other. In the thirteenth century Albertus Magnus was the voice of one side, treating physical deformity—which might in principle include any departure from prejudicially decided norms, such as black skin or woolly hair—as evidence of mental incapacity; the other side followed the orthodoxy of St Augustine, who thought monstrosity was an illusion that merely reflected humans’ inability to appreciate the perfections of God’s creatures. The question was a serious one, because it affected the scope of salvation. In the great, hierarchically ordered schemes of creation that one can still see, for instance, in the stained glass of the windows of León Cathedral, where images of every order of beings are ranged between earth and sky, only creatures with rational souls were close enough to heaven to leap the gap on death. Humans were just below the angels—near enough to heaven to hope. Brute beasts and vegetative life were too far down the ‘chain of being’. But how much space was there in between? And who qualified for the privileges of human status? 28
Along with the monsters whom anyone might encounter in the routine of life—the dwarfish, the birth-maimed, the occasional enslaved or visiting black person, the physically freakish of all sorts—a crowd of traditional, fictional monsters jostled medieval imaginations. For those that occupied the rung immediately below humankind on the ladder of creation, the general name was similitudines hominis—likenesses of man. From classical antiquity the West inherited a long catalogue of monstrous races in this category, listed in the medieval period’s most popular and influential encyclopedia of nature, Pliny’s Natural History, completed early in the last quarter of the first century ce: Panotii, who wrapped themselves in the shelter of their enormous ears; Sciapods, each of whom reclined under the shade of his or her single, huge foot; Cynocephali, whose dog-like heads reposed on bodies of otherwise human aspect; and a host of similarly odd creatures, whose existence, though never witnessed, was attested in revered texts, including hairy folk, pygmies, ‘anthropophagi and men whose heads do grow beneath their shoulders’. In the twelfth-century carvings that adorn the portal of the abbey church of Vézelay you can still see them, streaming in procession towards salvation in the outstretched arms of Christ. In eastern Christendom, icons of St Christopher often depicted him as a cynocephalus, confirming not only that monstrous beings could get to heaven but also that they could help the physically normal along the way.
Later in the middle ages, the question of where these liminal beings belonged became urgent as European explorers opened up access to parts of the world where unfamiliar physical types abounded. With every new discovery of strange creatures, Pliny’s panorama of creation became more credible. The New World threw up naked people, real-life anthropophagi, and reputed giants and Amazons. In Africa it turned out that there really were pygmies, and people with surprisingly distorted or selectively distended physiques, like the bulbous posteriors ascribed to females of so-called Hottentots. There were also hairy creatures—gorillas and baboons—that some observers took, at first sight, to be degenerate kinds of humans. Did all these monsters qualify for their collateral share of bliss, or should they be relegated to a category of natural inferiority—subject to conquest and enslavement by their betters?
On the whole, Christian revulsion from biological determinism protected specimens of doubtful humanity (but only falteringly, fitfully, stutteringly, and slowly, because Christians, like the adherents of every religion, cannot be relied on to observe in life the principles of faith). Native Americans were the first to benefit. ‘All the peoples of humankind are human,’ said the Spanish moral reformer, Bartolomé de Las Casas, in the mid-sixteenth century. His pronouncement sounds like a truism, but it was an attempt to express one of the most novel, powerful, and contested ideas of modern times. Still, it took a Papal Bull to convince some people that Native Americans were exempt from indelibly heritable inferiority. Even then, some Protestants denied it, suggesting that there must have been a second creation of a different species or a demonic engendering of deceptively human-like creatures in America.
For it was one thing to assert the disunity of humankind, another to devise a theory that made it credible. The most obvious option was the theory of polygenesis, according to which creatures loosely classed as human had emerged separately, whether by nature’s laws or heaven’s command. The Calvinist theologian, Isaac de la Peyrère, was the first to advocate this solution, in a work published in 1655. He was not addressing directly the problem of the diversity of humankind but that of the origins of the peoples of the New World in particular. Were they the lost tribes of Israel? Had Noah settled in Brazil, as one early seventeenth-century authority argued? Or had the first settlers come from Asia, according to the theory in which the Spanish Jesuit, José de Acosta, anticipated the discoveries of modern anthropology? At the time, all these hypotheses seemed equally improbable. La Peyrère suggested that the universal paternity of Adam should be understood metaphorically, making credible the origins-myths that so many Native American peoples cherish: that they were sprung ‘from their own earth’. The theory was dismissed by no fewer than twelve respondents in its year of publication. It was as contrary to the religious orthodoxy of its day as it was to the Darwinian orthodoxy of a later age. Its periodic revivals were, on the whole, feeble and of limited appeal.
Meanwhile, after long, heartfelt equivocations among anatomists and taxonomists, a dividing line emerged between species to exclude apes from the category of Homo. The question perplexed seventeenth-century anatomists, who dissected apes in attempts to establish their relationship to humans. It flummoxed Linnaeus, when he devised his scheme for classifying all life-forms in the 1730s: he opted to class apes and humans in different genera, though only after much equivocation, and he later changed his mind. The question was hardly settled until the early nineteenth century, when the scientific consensus finally determined against the human credentials of the last ape to qualify for consideration, the orang-utan. Doubts concerning blacks, Hottentots, pygmies, and Australian aboriginals persisted for at least as long. Advocates of enslaving or massacring them were understandably unwilling to forgo the claim that their bodies condemned these creatures to an inflexibly inferior place in the world.
* * *
Nineteenth-century science produced new arguments and uncovered new evidence. Classification of humankind into races was thought to be scientific, by analogy with botanical taxonomy. William Lawrence, whose influential lectures on anatomy were delivered in London in 1817, revived the claim Albertus Magnus had made: ‘physical frame and moral and intellectual qualities’, as he put it, were mutually dependent. ‘The distinction of colour between the white and black races is not more striking than the pre-eminence of the former in moral feelings and in mental endowments.’ The Comte de Gobineau died in the same year as Darwin. Relying on what was then beginning to be called anthropology rather than on biology, he worked out a ranking of races in which ‘Aryans’ came out on top and blacks at the bottom. ‘All is race,’ concluded a character in one of Disraeli’s novels. ‘There is no other truth.’ Anthropology, phrenology, craniology, and criminology accumulated vast amounts of data to show that people were the prisoners of their physiques.
Various methods were proposed for linking physical characteristics to behaviour and ranking races accordingly—by pigmentation, hair-type, the shape of noses, blood-types (once the development of serology made this possible), and, above all, cranial measurements. This last method was devised by the late eighteenth-century Dutch anatomist, Pieter Camper, who arranged his collection of skulls ‘in regular succession’, with ‘apes, orangs and negroes’ at one end and central Asians and Europeans at the other. To readers and interpreters of Camper’s data, there was obviously an underlying agenda: a desire not only to classify races but also to justify disparities of power by ranking them in terms of superiority and inferiority. Hence the emphasis on the shape and dimensions of the skull, which were alleged to affect brain-power. 29
Degeneracy was another potential theoretical framework for understanding supposed racial inferiority. The popularity of the term among nineteenth-century anthropologists is intelligible in the context of a ‘discourse’ of degeneracy, employed to explain all sorts of exceptions to progress: criminality, psychiatric pathology, economic dislocations, national decline, and, ultimately, the supposed ‘degeneracies’ of modern art. In the late nineteenth century, says its chronicler Daniel Pick, degeneracy ‘slides over from a description of disease or degradation as such, to become a kind of self-reproducing pathological process—a causal agent in the blood, the body and the race—which engendered a cycle of historical and social decline perhaps finally beyond social determination’. 30
In 1870, Henry Maudsley, professor of medical jurisprudence at University College, London, united some of the themes of biological determinism and specified some of the effects of physical degeneracy on cultural attainments. When the development of the ‘brute brain’ within man, he reasoned, ‘remains at or below the level of an orang’s brain, it may be presumed that it will manifest its most primitive functions…We may without much difficulty, trace savagery in civilization, as we can trace animalism in savagery; and in the degeneration of insanity, in the unkinding, so to say, of human kind.’ Among supposedly degenerate groups of humans, the concept of ‘gradation’ offered an apparent means of measuring degeneracy. The term was coined by Charles White in the 1790s, when he produced an index of ‘brutal inferiority to man’, which placed monkeys only a little below blacks, and especially the group he called ‘Hottentots’, whom he ranked ‘lowest’ among those who were admissibly human. More generally, he found that ‘in whatever respect the African differs from the European, the particularity brings him nearer to the ape.’
The habit of classifying life-forms into species, and of ranking species as ‘higher’ and ‘lower’, invited speculation about who belonged to the highest one. Edward Long had justified slavery in 1774 on the grounds that blacks were distinguished from other peoples—inter alia, by a ‘narrow intellect’ and ‘bestial smell’—so as almost to constitute a different species from such humans as himself. Henry Home in the same year went further: humans constituted a genus in which there were numerous different species, of which blacks were an obvious example. According to Samuel Morton of Philadelphia, who died while Darwin was at work on The Origin of Species, Native Americans were unrelated to people in the Old World: they had originated separately in their own hemisphere. The findings à parti pris of Josiah Nott and George Gliddon—that blacks were more like gorillas than full-ranking human beings—appeared a year before Darwin’s work was published. In the 1860s, James Hunt, founder of the Anthropological Society of London, endorsed the similarity between blacks and apes and attributed cases of high attainment among blacks to exceptional instances of interfertility among separate species—admixtures of white blood (which, he thought, were nonetheless non-viable in the long run). Meanwhile, his sometime associate, John Crawfurd, revived the notion of polygenesis, while explicitly denouncing the view that distinct human species could be ranked on grounds of colour.
At first, the division of mankind into different species was generally rejected for the obvious reason that humans of all extant kinds are capable of breeding with one another; but the compulsion to find a way of characterizing the diversity of humankind consistently with the prejudices of the times was keenly felt among scientists. Louis Agassiz—the revered pioneer of geology and anthropology in mid-nineteenth-century Harvard—staked a great deal on a research trip to Brazil, where he hoped to prove that people of contrasting colours were distinct species by showing that miscegenation led to infertility. Mating blacks with whites, he thought, was like breeding mules from horses and donkeys. 31 Even Darwin, who repudiated racism and subscribed to the Anti-Slavery Society, thought races were ‘sub-species’ or potential species: blacks and whites, for example, might eventually become separate species, if kept apart from one another, by analogy with the separation of different species of gibbon, say, or tern, or of closely related felines. To many other scientists of his day, ‘human’ was a misnomer for races already divided from the human norm by unbridgeable chasms, if they were not actually products of polygenesis—the ‘separate creations’ that Darwin denied.
* * *
Environmental determinism provided no relief, in practice, to the victims of exploitation and extermination. It provided a superficially attractive explanation for variation in pigmentation—exposure to sunshine made skin dark—or body size: extreme climates favoured smallness. Boswell recorded a conversation in which Samuel Johnson explained why blacks are black: ever-deepening sun-tans had been transmitted to their progeny over many generations. But emphasis on the environment as the cause of differences between populations opened the way to another kind of irrational censure: climate might affect not only the outward appearance but also the inner moral and intellectual qualities of entire communities, condemning inhabitants of the tropics to inveterate laziness or incorrigible stupidity.
The eighteenth-century debate known to historians as ‘the Dispute of the New World’ vividly illustrates the ambiguities of environmental determinism. Georges-Louis Buffon, one of the foremost naturalists of the mid-eighteenth century, who specialized in the acclimatization of plants from around the world, launched the dispute by claiming that the Americas could be characterized in general as a horrible, corrupting hemisphere, where the extremes of climate, the exhalations of swamps, and the inferiority of the very air debilitated all life forms, restricted the variety of species, and condemned human inhabitants to puny stature, feeble physiques, and backward intellects. Where the Old World had lions and tigers, America had pumas and ocelots; to rival the camel, the best the New World could come up with was the llama; to challenge the elephant for majesty, America had only the tapir. 32
Buffon formulated these claims in the context of a broad theory of environmental determinism. He thought, for instance, that fierce sun and winds were responsible for varied pigmentation. As with so many philosophes of the Enlightenment, anticlericalism underpinned his thinking. He sought to explain the diversity of species in a way that was independent of the Bible, defiant of the Church, and even dismissive of God. He thought—as many scientists do today—that life originated spontaneously and evolved in response to environmental change, which also, a fortiori, accounted for humans’ variety of aspect and character. Readers of his day—especially those who shared his secular outlook—found his work persuasive. Voltaire endorsed much of it.
Followers and admirers added to the stock of examples. According to Cornelis de Pauw, Siberians and Canadians shared ‘natural melancholy’, which ‘the gloom of their forests’ induced. The Abbé Raynal, who was one of the most influential spokesmen of the Enlightenment and a patron and inspirer of Rousseau, thought America induced degeneracy that incapacitated its people. In all the hemisphere, he opined, there was no civilized race, no individual genius. The congenital laziness of the natives extended to erotic indifference, which evinced an ‘organic imperfection’ similar to that of pre-pubescents in the rest of the world. 33 Claims of this sort could not survive the accumulation of evidence of what the New World was really like. American partisans responded with counterclaims that the New World, governed by ‘new stars’, stimulated progress and genius. 34 Thomas Jefferson is said to have disproved the theory that the American environment had stunting effects by towering over his fellow diners at a party in Paris.
The Dispute of the New World ended in the New World’s favour; in general, the failure of the tradition Buffon founded probably helps to explain why history and natural history could part company in the nineteenth-century West. Yet environmental determinism survived. Still popular in the early nineteenth century, for instance, was the widely held eighteenth-century theory that Jean-Baptiste Lamarck reformulated in 1809. Summarizing a commonplace of the time (which, for example, as we have seen, Dr Johnson had espoused in the previous generation), Lamarck argued that organisms adapt to their environments and pass on adapted characteristics by means of heredity. Darwin—whose theory of evolution is now recognized to be incompatible, or at least in tension, with Lamarck’s—actually endorsed his predecessor’s views. In deference to Lamarck, Darwin advised young women to acquire ‘manly skills’ before starting families. The Lamarckian idea has never quite vanished from the repertoire of scientific explanation, though the arguments of Darwinism have tended to eclipse it. Experimental data do not seem to support it and common observation is against it. You may sit in the sun all your life, but your children will be no darker for it.
Even after Darwin’s critique made environment seem less decisive in determining physical characteristics of life-forms, almost everyone who thought about the subject continued to invoke environmental determinism to explain differences of culture. Early in the twentieth century, Ellen Churchill Semple—notable as one of the first women to make a major contribution to environmental science—summarized the tradition: physical geography was ‘the physical basis of history,…immutable in comparison with the other factor in the problem: shifting, plastic, progressive, retrogressive man’. 35 The superiority of Aryans over others, she argued, for instance, was the result of ‘inherited aptitudes’ and ‘traditional customs’ forged by the influence of ‘remote ancestral habitats’. 36 In general, she concluded, ‘a close correspondence exists between climate and temperament’. Hence northern Europeans are energetic, serious, and cautious, whereas sub-tropical dwellers are improvident, easy-going, and emotional, ‘all qualities which among the negroes of the equatorial belt degenerate into grave racial faults’. 37
Most proponents of environmental and biological determinism, in short, based their views on irrational prejudices, false data, and superannuated thinking. If history and natural history were to be reunited—if culture and nature were to be reintegrated in a single subject of study—at least one new starting-point was needed. In the mid-nineteenth century the world got two.
* * *
The new foundations were the work of two geniuses who, independently but roughly simultaneously, from the 1830s to the 1870s, approached nature and culture from contrasting perspectives, though both thought of themselves as scientists. Both were outsiders, however, in the academic world of their day. Charles Darwin was a scientific amateur, whose inherited prosperity enabled him to think independently and work capriciously. Karl Marx, who also inherited wealth but could not manage it profitably, was an indigent journalist, whom exclusion from the establishment liberated for radicalism. Both were theorists of change—not of change in general, but of changes of particular types: Darwin’s theory of evolution described and to some extent explained how life-forms change; Marx’s theory of class struggle tried to do the same for history.
According to Marx, every instance of progress is the ‘synthesis’ of two preceding, conflicting events or tendencies. He based his theory on a method of thinking that German philosophers devised or developed in the first two decades of the nineteenth century: everything is part of something else. So if x is part of y, you have to understand y in order to think coherently about x. You cannot know either without knowing x + y—the ‘synthesis’ that alone makes perfect sense. This seems unimpressive: a recipe for never being able to think coherently about anything in isolation. As well as ‘dialectical’, as this method came to be called, Marx’s thinking was ‘materialist’. Change was economically driven (not, as most of the German exponents of dialectic thought, by ‘spirit’ or ‘ideas’). Political power, for instance, ended up with whoever held the sources of wealth. Under feudalism, land was ‘the means of production’; so landowners ruled. Under capitalism, money counted for most; so financiers ran states. Under industrialism, as the British economist David Ricardo had shown, labour added value; so the society of the future would be under the rule of workers. A further, final synthesis remained vaguely delineated in Marx’s work: a classless society in which the state would ‘wither away’, everybody would share wealth equally, and all property would be common.
Apart from this last, perfect consummation, each transition from one type of society to the next, Marx thought, was inevitably violent: the ruling class held on to power while the rising class struggled to wrest it. So he tended to agree with the philosophers of his day who saw violence as conducive to progress. The effect of his idea was therefore generally baneful, helping to inspire revolutionary violence, which sometimes succeeded in changing society, but never seemed to bring the communist utopia into existence.
Surprisingly, perhaps, Marx, who read insatiably in history, economics, and philosophy, took little interest in biology or physiology. He assumed, however, that all human behaviour starts with instinctive imperatives. In a work he wrote with his patron, Friedrich Engels, in 1846, ‘we do not start,’ he said,
from what men say, imagine, or conceive, nor from people as narrated, thought of, imagined, or perceived in order to understand them in flesh and blood. We look at the real lives of people in action, and on the basis of the way they really behave we show how their ideologies reflect their material circumstances. The phantoms that take shape in people’s brains are also, inescapably, sublimations of the way they live, which is the product of experience and of objective determinants. In the light of our findings, morality, religion, metaphysics, and ideology of all other kinds, with the thinking that goes along with them, no longer look spontaneous or like products of free choice. They have no history, no dynamic of their own. Rather, as the means of production change, and the material ways in which people interact develop, so, in a direct chain of consequences, their real existence, their thinking, and the issue of their thoughts change. Life is not determined by consciousness, but consciousness by life. 38
Marx and Engels never succeeded in demonstrating those egregious claims, because they had no way of proving that material circumstances govern thought. It could be the other way round. When Marx read Darwin’s work, he recognized it as supplying something his own thought lacked: a scientific basis for the assumption that biological urges drive human behaviour, ‘a basis in natural science’, as he wrote, ‘for the historical class struggle’. Nowadays, most readers detect a profound antipathy between Darwin and Marx: capitalists extol the former as an apostle of a competitive approach to life and vilify the latter as an enemy of enterprise. But they were kindred spirits in some respects: both immersed human life deeply and inextricably in the struggle for survival of all biota, where ‘nature, red in tooth and claw’ tears culture to shreds.
Their characters, however, could hardly have been more different. Marx, the prophet of peace, was combative, restless, declamatory, and venomous, and he relished controversy; Darwin, the hierophant of struggle, recoiled from conflict in his personal life and scientific relationships. He was shy, retiring, deferential, and tentative; though he could treat enemies viciously, he preferred to do so secretly, behind their backs.
* * *
Whereas Marx developed his thinking in reaction to the prevailing attitudes of his day, Darwin reflected them. The air of the mid-nineteenth century was thick with comprehensive schemes for classifying the world. George Eliot satirized them in the obsessions of the characters in Middlemarch, her novel of 1871–2, in which one character sought ‘the key to all mythologies’ and another ‘the common basis of all living tissues’. Darwin was part of the second of these projects. Most scientists already believed that life had evolved from, at most, a few primitive forms. What they did not know was ‘the mystery of mysteries’: how new species arise.
In 1832, on Tierra del Fuego at the southern tip of South America, Darwin encountered ‘man in his savage state…a foul, naked, snuffling thing with no inkling of the divine’, apparently ‘bereft of human reason or at least of arts consequent to that reason…The difference between savage and civilised man’, he added, ‘is greater than between a wild and domesticated animal.’ Islanders’ language ‘scarcely deserves to be considered articulate. Captain Cook has compared it to a man clearing his throat, but certainly no European ever cleared his throat with so many hoarse, guttural and clicking sounds.’ The specimens encountered later in the voyage, on the western side of the island, were even more bestial, sleeping ‘coiled up like animals on the wet ground’, condemned by cold and poverty to a life of ‘famine, and, as a consequence, cannibalism accompanied by patricide’,
stunted in their growth, their hideous faces bedaubed with white paint, their skins filthy and greasy, their hair entangled, their gestures violent and without dignity. Viewing such men, one can hardly make oneself believe they are fellow-creatures and inhabitants of the same world…How little can the higher powers of the mind be brought into play! What is there for imagination to picture, for reason to compare, for judgement to decide upon? To knock a limpet from the rock does not even require cunning, that lowest power of the mind. Their skill in some respects may be compared to the instinct of animals; for it is not improved by experience. 39
The Fuegians taught Darwin two things: that a human is an animal like other animals and that the environment moulds us. The germ of the theory of evolution entered his head as he puzzled over how Fuegians could endure the climate in a state of near-nakedness. ‘Nature, by making habit omnipotent and its effects hereditary, has fitted the Fuegian to the climate and productions of his miserable country.’ Later, in the Galápagos Islands, he observed how small environmental differences allow marked biological variations to take hold.
When he was back home in England, among game birds, racing pigeons, and farm stock, Darwin realized that nature selects strains, as breeders do. The specimens best adapted to their environments survive to breed and pass on their characteristics. Darwin held the struggle of nature in awe, partly because his own sickly offspring were victims of it. He wrote, in effect, an epitaph for his dying children: the survivors would be the more healthy and the better able to enjoy life. ‘From the war of nature’, according to On the Origin of Species, which he published in 1859, ‘from famine and from death, the production of higher animals directly follows’. Orang-utans, whose influence on humans’ self-image has been so pervasive, were a further source of inspiration for Charles Darwin. He liked to visit London Zoo to observe little Jenny, the menagerie’s curious specimen of the species. She was, he thought, uncannily like a human child, understanding her keeper’s language, wheedling treats, and showing off her pretty dress when presented to the Duchess of Cambridge. Darwin evidently preferred her to some of the humans he knew.
The narrative of the genealogy of man that Darwin published in 1871 started with marine animalculi which he likened to larvae. From these descended fish, ‘a very small advance would carry us on to the amphibians…but no one can at present say by what line of descent the…mammals, birds and reptiles were derived from…amphibians and fishes’. Among mammals, placental animals succeeded marsupials.
We may thus ascend to the Lemuridae; and the interval is not wide from these to the Simiadae. The Simiadae then branched off into two great stems, the New World and Old World monkeys; and from the latter, at a remote period, Man, the wonder and glory of the Universe, proceeded. Thus we have given to man a pedigree of prodigious length, but not, it may be said, of noble quality…We thus learn that man is descended from a hairy quadruped, furnished with a tail and pointed ears, probably arboreal in its habits. 40
Gregor Mendel, the kind and gentle Austrian monk whose experiments with peas established the foundation of the science of genetics, died two years after Darwin. The implications of Mendel’s work were not followed up until the end of the century, but, when they were taken up, they were abused. With the contributions of Darwin and Gobineau, they helped to complete a supposedly scientific justification of racism. Genetics provided an explanation of how one man could, inherently and necessarily, be inferior to another by virtue of race alone. To the claim that this represented a new departure in the history of human self-perceptions, it might be objected that racism is timeless and universal. What the nineteenth century called ‘race’ had been covered earlier by terms like ‘lineage’ and ‘purity of blood’. No earlier idea of this kind, however, had the persuasive might of the scientific racism of the first half of the twentieth century; nor the power to cause so much oppression and so many deaths.
* * *
Evolution, meanwhile, opened up new possibilities for reintegrating the study of nature and culture. As Darwin’s theories became accepted, other thinkers proposed refinements that later came to be known as ‘social Darwinism’—broadly speaking, the idea that societies, like species, evolve or vanish according to whether they adapt successfully in mutual competition in a given environment.
Three probably misleading assumptions underpinned the move to appropriate evolution for the study of society: first, that society is quasi-organic—that it behaves, in some respects, like a beast—and could be said, for instance, to grow from infancy to maturity and senescence; second, that, like plants and animals, society tends to get ever more complex over time (which, though broadly true, is not necessarily the result of any natural law or inevitable dynamic); and finally, that what Darwin called ‘the struggle for survival’ favours what one of his most influential readers, Herbert Spencer, called ‘the survival of the fittest’. Spencer put it like this:
The forces which are working out the great scheme of human happiness, taking no account of incidental suffering, exterminate such sections of mankind as stand in their way with the same sternness that they exterminate beasts of prey and useless ruminants. Be he human being or be he brute, the hindrance must be got rid of. 41
Spencer claimed (with conscious mendacity, as his autobiography made clear) 42 to have anticipated Darwin, not to have followed him; but the very disavowal seems to align him with social Darwinism. 43 Well-disposed scholars have exempted Spencer from the charge of engendering the doctrine on the grounds that his understanding of biological evolution owed as much to Lamarck as to Darwin. He was a practitioner of compassion and an advocate of peace—but only in acknowledgement of the overwhelming power of the morally indifferent force of nature. He was ideally placed to spark and stimulate the reintegration of history and natural history, because he had little formal academic training and was never encumbered by the need to specialize. He achieved vast influence, perhaps because his confident assertions of the inevitability of progress helped restrain or dispel contemporaries’ uncomfortable doubts. He fancied himself as a scientist—his rather exiguous professional training was in engineering—and he ranged in his writings over science, sociology, and philosophy with all the assurance, and all the indiscipline, of an inveterate polymath. He hoped to bring to fruition Auguste Comte’s prediction of a synthesis of science and humanism in ‘social science’. His aim, he often said—recalling Comte’s search for a science that would ‘reorganize’ society—was to ground social policy in biological truths.
Instead, he encouraged political leaders and policymakers in dangerous extrapolations from Darwinism, including the idea that conflict is natural, therefore good; that society is well served by the elimination of antisocial or weak specimens; and that ‘inferior’ races are justly exterminated. Hitler made the last turn in this twisted tradition: ‘war is the prerequisite for the natural selection of the strong and the elimination of the weak’. 44 By advocating the unity of creation, Darwin implicitly defended the unity of humankind. But there was no clear dividing line between social Darwinism and the original ‘scientific Darwinism’. Darwin was the father of both.
The fact that the theory of evolution has been abused should not obscure the fact that it is true. Natural selection probably does not account for every fact of evolution. Random mutations happen—they are the raw material with which natural selection works, but they occur beyond its reach. Functionless adaptations survive, unsieved by struggle. Mating habits can be capricious and unsubmissive to natural selection’s supposed laws. The glaring problem of Darwin’s theory, however, was and is where and how to fit culture into it. In the twentieth century, enquirers whose work is the subject of the next chapter proposed solutions.