3

The Great Reconvergence

Restoring biology to history

Half a line from Vergil’s Georgics confronts the visitor to the main foyer of the London School of Economics. Painted elegantly on the wall that faces the door, proclaiming a motto for the scientific study of society, the words rerum cognoscere causas appear. Out of context the phrase sounds like an audacious aspiration: ‘to know the causes of things’. For Vergil, it was part of a more modest, pious aphorism, evidently to be fulfilled only rarely: the whole line reads, ‘Felix qui potuit rerum cognoscere causas’—happy is he who is able to learn the causes of things. The LSE’s motto, I think, is unrealistically ambitious. Perhaps even Vergil was excessively sanguine.

I can remember encountering his line when I was, I suppose, about 15, reading Vergil for homework and thinking to myself what a good joke of the poet’s this was. For Vergil’s world resembles the cosmos of chaos theory, where causes are untraceable and effects untrackable. The fates spin away offstage, directing history towards a pre-ordained goal; meanwhile, the random interventions of shifty people and capricious gods keep twisting and snapping the thread. What makes the Æneid a good story is that it is impossible to know what is going to happen next. You cannot know the causes of things; therefore you cannot predict their outcome.

Yet this irony of Vergil’s has been transformed on the wall of the LSE into a solemn pronouncement that some academics take all too literally. Maybe there are no causes to know, or, at least, maybe much that happens is uncaused. Anyone who thinks that everything is explicable as the result of something else—who sees causation as the ‘cement of the universe’, making each event adhere closely to the next—may be the victim of an unwarranted assumption. ‘Just-so’ explanations may be the only true ones. Or put it as Alexander McCall Smith does in one of his canny, quaint novels of life in Edinburgh. A character worries about whether to take the initiative with the man she loves. To her father, a psychiatrist, she puts her suspicion that evolution has equipped men with filters against sexually forthcoming women. He denounces her for ‘sociobiological nonsense’ and assures her that her inhibitions are culturally induced. We behave as we do, he concludes, for much of the time, ‘for no discernible reason’. 1 Or say, as the anthropologist Robert Lowie famously said in 1920, that culture is ‘a planless hodgepodge’, a thing of shreds and patches. 2

The disciplines that we class as scientific deal in predictability and there are only two ways of making a successful prediction: you might succeed with an inspired guess; or you might set about your task systematically, assuming that the future is a consequence of the past. Scientists rarely admit to guesswork. By observation and experiment they generally establish an apparent pattern of cause and effect and expect it to be repeated. When Edwin Cannan devised the LSE’s motto in 1922, he was one of the great cohort of economists, led by Alfred Marshall, who piloted the School away from its early role as a partisan training-ground for moderate socialists, to become an independent institution in which people studied society objectively. In selecting from arts students’ favourite text, Cannan was making a bid to proclaim the LSE’s curriculum as scientific—picking a path through the complexity of causation to predictable outcomes. Therein lay the happiness of knowing causes.

He was not alone. Almost all scholars in the early twentieth century wanted their disciplines to ape or filch the prestige of science. Even the theological tendency called ‘fundamentalism’—which we now think of as being at war with science—started, according to a historian who has open-mindedly studied its origins, 3 as an attempt in the early years of the twentieth century by divines in Princeton and Chicago to root the study of God in incontrovertible facts, in imitation of the methods of the observatory and the lab.

In the course of the new century, science came to set the agenda for the world. While previously scientists had tended to respond to the demands of society, now science drove other kinds of change. The pace of discovery—with dazzling revelations about the cosmos, nature, and humankind—commanded admiration and radiated prestige. Ever larger and costlier scientific establishments in universities and research institutes served their paymasters—governments and big business—or gained enough wealth and independence to set their own objectives and pursue their own programmes. New theories shocked people into revising their images of the world and their place in it.

No wonder every academic department wanted rebaptism in these transforming waters. No wonder every art wanted to be a science. All academic disciplines became highly professionalized and specialized, with their own jargons and long training-programmes designed to exclude outsiders and amateurs. Practitioners of other kinds of learning tended to treat science as a benchmark discipline, the objectivity of which they wished to emulate, but the language and findings of which they could barely comprehend. In these conditions, the reconvergence of nature and culture in academic thinking became possible.

* * *

The way, however, was hard and fraught with frustration. Almost as soon as Marx and Darwin seemed to have discovered the means of putting science and culture back together, critics tried to drive them apart, or to keep them in the distinct spheres to which the nineteenth-century curriculum assigned them.

In the early twentieth century, the two disciplines that supplied the most effective critics were sociology and anthropology. This may seem surprising, as in theory sociology matched Comte’s dream of an all-embracing discipline that would subject culture to scientific scrutiny; and, as we have seen, anthropology had produced a stream of biological and environmental determinists. The turnaround is best exemplified in the lives and work of three individuals, representatives of the new directions sociology and anthropology took: the sociologists Max Weber and Lester Ward, and the anthropologist Franz Boas.

Weber’s education in history and law focussed his attention on the unscientific side of life—the arbitrary, contingent, chaotic mess of experience. He wanted to sort the mess out—to make some sort of sense of it, in which predictable consequences flow from identifiable causes, but neither Marx’s nor Spencer’s models attracted him. His nurturing under his mother’s wing, against his hedonistic father, on the committedly Christian side of his divided household, prejudiced young Max against materialism. His own conviction of the power of thought made him recoil from the idea that instincts could chain it. Weber, who was a bourgeois Evangelical, reacted to trends of his times by extrapolating from his own circumstances—searching for an Evangelical and bourgeois answer to Marxism. Between the lines of his many citations of Marx, Weber’s real revulsion emerges. Marx was the great bogey, the dominant, malign intellectual force of the era. Marx said economics determine religion. Weber stood Marxism on its head and said that religion determines economics. Marx said religion was the opium of the workers. Max tried to show that, on the contrary, it stimulated work. It was Max versus Marx.

Weber, who was very active in Evangelical politics, wanted to make values, especially religiously inspired values, the cause of everything else—the motor of civilization, in the way that evolution is the motor of organic life. You can see why this sort of thinking is misleading by considering a modern instance. People who say nowadays that Confucianism explains the stagnation of China in the nineteenth century and its rise in the twentieth make the same mistake as Weber. Something so elastic that it can explain everything cannot be exact enough to explain anything. It would be wonderful if people really did behave as their religion teaches. The world would be so much better if Christians practised universal love and Buddhists actively strove for enlightenment. But practically nobody does. Religion is unhappily over-rated as a source of influence in society. Still, Weber’s influence helped to convince historians and sociologists that they could continue to study their subjects without having to trouble themselves with scientific knowledge. Culture could be treated—to borrow a term from science—as ‘autocatalytic’, changing from within, according to a dynamic of its own.

Rather as Weber responded to Marx and his schematic reconstruction of history, Lester Ward, at about the same time, reacted against Spencer and social Darwinism. He knew the struggle for survival too well to like it. In his early teens, he had been a frontier pioneer, travelling by wagon with a family of trail-blazers to settle in Iowa. He fought in the Civil War and sustained three wounds. He defied poverty and worked his way through college. He rated collaboration higher than competition. He denounced laissez faire—social Darwinists’ standard prescription for improving society. You could transfer to the present, without modification, his denunciation of the effects of the under-regulated business practices of his day. ‘Nothing,’ he wrote, ‘is more obvious to-day than the signal inability of capital and private enterprise to take care of themselves unaided by the state.’ He defined the ‘paternalism’ capitalists decry as ‘the claim of the defenseless laborer and artisan to a share in this lavish state protection’. He accused fat-cat bosses of ‘besieging legislatures for relief from their own incompetency’. 4 He was a typical liberal in the US sense of the word: he wanted the state to restrain the iniquities and inequalities of capitalism; but he was equally anxious to reclaim human freedom from history as the Left saw it—as the plaything of vast, impersonal forces that dwarf human wills.

Ward did not repudiate science. He claimed to work on ‘the highest of all sciences’. 5 He did, however, dismiss as simplistic Spencer’s insistence on a close analogy between the ways in which organisms and cultures develop. He rejected biological determinism on the grounds that human physiology changes insignificantly or not at all, while society changes immeasurably. ‘The artificial modification of natural phenomena’—the effects, that is, of culture on nature—greatly exceeded any changes biological evolution wrought in society. 6 History, according to Ward, was ‘not a simple extension of natural history’ but ‘the results of will, ideas, and intelligent aspirations for excellence, and hence conscious and personal’. In some respects, his rejection of the claims of nature was disturbingly uncompromising. He denied that humans are naturally gregarious—on the contrary, he thought that, if anything, humans are mildly antisocial creatures. Therefore he denied that culture is part of nature. Rather, it is a contrivance humans have thought up for themselves: ‘purely’, as he put it, ‘a product of reason’. 7 He was surely wrong on that score, partly because reason is a faculty with which nature equips us, and partly because—as we now know—humans are not the only cultural animals. In general, however, Ward’s critique of Spencer was highly effective. It is not hard to envisage the victim, for instance, of this delicious lampoon:

When a well-clothed philosopher on a bitter winter’s night sits in a warm room well lighted for his purpose and writes on paper with pen and ink in the arbitrary characters of a highly developed language the statement that civilization is the result of natural laws, and that man’s duty is to let nature alone so that untrammeled it may work out a higher civilization, he simply ignores every circumstance of his existence and deliberately closes his eyes to every fact within the range of his faculties. If man had acted upon his theory there would have been no civilization, and our philosopher would have remained a troglodyte.

Ward concluded in favour of what he called the ‘spontaneous development’ of culture, the ‘improvement of society by society’.

In anthropology, meanwhile, Franz Boas led a similar reaction in favour of the autonomy of culture. Among the supposedly scientific certainties treasured in the late nineteenth-century West was that of the superior evolutionary status of some peoples and some societies: an image of the world sliced and stacked in order of race. This picture suited Western imperialists, who treated it as a justification of their rule over other peoples. But Boas upset it. Like Darwin, he had a formative experience among people Westerners dismissed as primitive. But whereas the Fuegians disgusted Darwin, Boas admired the Inuit. When he worked on Baffin Island in the 1880s he found himself looking up to them and appreciating their practical wisdom and creative imaginations. He turned his perception into a principle of anthropological fieldwork (which also works well as a rule of life): empathy is the heart of understanding. When you work with others, you have to strive to see the world as they do. In consequence, your eye is drawn to the intriguing peculiarities of different cultures. You eschew risky generalizations. Determinism of every kind becomes unconvincing, because no single explanation seems adequate to account for the divergences you observe.

As well as being a teacher who dominated the study of anthropology in North America, Boas was a fieldworker in his youth and a museum-keeper in maturity, in touch with the people and artefacts he sought to understand. His pupils had Native American peoples to study within little more than a railway’s reach. The habit of fieldwork piled up enormous quantities of data to bury the crudely hierarchical schemes of the nineteenth century. Boas showed that no ‘race’ is superior to any other in brainpower. He made untenable the notion that societies can be ranked in terms of a developmental model of thought. People, he concluded, think differently in different cultures not because some have superior mental equipment but because all thought reflects the traditions to which it is heir, the society by which it is surrounded, and the environment to which it is exposed. Shortly after the end of the first decade of the twentieth century, he summarized his findings: ‘the mental attitude of individuals who…develop the beliefs of a tribe is exactly that of the civilized philosopher’. 8 And

there may be other civilizations, based perhaps on different traditions and on a different equilibrium of emotion and reason, which are of no less value than ours, although it may be impossible for us to appreciate their values without having grown up under their influence. The general theory of valuation of human activities, as developed by anthropological research, teaches us a higher tolerance than the one we now profess. 9

Each culture shapes itself. There is no universal pattern; therefore there are no universal determinants. The facts fieldwork disclosed, as Robert Lowie, one of Boas’s brilliant students, put it, are ‘inconsistent with the theory of linear evolution’.

The new anthropology took a long time to spread beyond Boas’s students. But it was already influencing British methods in the first decade of the century, and gradually became orthodoxy in the other major centres of anthropological research in France and Germany. Cultural relativism was among the results: the doctrine that cultures cannot be ranked in order of merit but must be judged each on its own terms. This proved problematic: should cannibals be judged on their own terms? Or cultures which license slavery or the subjection of women? Or those which practise infanticide or head-hunting or other abominations? Or even those that condone relatively milder offences against values approved in the West—offences such as the mutilation of criminals, or female circumcision? Cultural relativism had to have limits, but anthropology compelled educated people everywhere to examine their prejudices, to see merit in cultures formerly despised, and to question their own convictions of superiority.

Boas’s revolution deprived anthropology of the power of prediction by filleting determinism out of it. Cultural anthropology split from physical anthropology—reinforcing the division of academic life in uncommunicating trenches. Boas’s disciples included some of the most tenacious opponents of biological and environmental determinism, including two we have already encountered: Alfred Kroeber and Margaret Mead. Perhaps the most influential anthropological book of all time was Mead’s Coming of Age in Samoa, published in 1928. The author worked with pubescent girls in a sexually unrepressive society. She claimed to find a world liberated from the inhibitions, hang-ups, anxieties, and neuroses that psychology was busily uncovering in Western cities and suburbs. In the long run, as she rose to the top of her profession, to academic eminence and social influence, her work helped to feed fashionable educational nostrums: uncompetitive schooling, rod-sparing discipline, cheap contraception. Western educationists could learn from Samoan adolescents in a world without barbarians and ‘savages’, in which the language of comparison between societies had to be value-free. What had once been called primitive cultures and advanced civilizations came respectively to be labelled ‘elementary structures’ and ‘complex structures’. The long-standing justification for Western imperialism—the ‘civilizing mission’—lapsed, because no conquerors could any longer feel enough self-confidence to impose their own standards of civilization on their victims. Mead made naïve mistakes, but she stuck to the lessons she learned from Boas: ‘in the central concept of culture,’ she wrote, ‘as it was developed by Boas and his students, human beings were viewed as dependent neither on instinct nor on genetically transmitted specific capabilities but on learned ways of life that accumulated slowly through endless borrowing, re-adaptation, and innovation.’ Culture fought free of biology and asserted its own dynamic. For a moment, it looked as if Boas might have unchained culture from biology forever. ‘The ethnologist will do well,’ declared Boas’s student, Robert H. Lowie, in the lectures he gave as a curator at the American Museum of Natural History in New York in 1917, ‘to postulate the principle, omnis cultura ex cultura’—rendering as a Latin aphorism, in effect, Lester Ward’s principle of culture as an autonomous system. 10 ‘Sociology’, affirmed Luther Lee Bernard at the University of Minnesota in 1923, ‘is at last shaking itself free from biological dominance’. 11

While Boas and his pupils were at work, the autonomy of culture got a curious, unintended boost from the psychology of Sigmund Freud. This is surprising, because psychology aimed to explain individual behaviour scientifically, by uncovering universal urges. Crucially, however, by concentrating on universals and individuals, Freud left culture, in a gap between them, to explain itself. He was even more subversive of scientific orthodoxy than Boas, because his discoveries or claims reached beyond the relationships between societies to challenge the notions individuals had about themselves. In particular, the claim that much human motivation is sub-conscious challenged traditional notions about responsibility, identity, personality, conscience, and mentality. In an experiment Freud conducted on himself in 1896, he exposed his own ‘Oedipus Complex’, as he called it: a supposed, suppressed desire—which he believed to be universally, sub-consciously present in male children—to supplant his father. In succeeding years he developed a technique he called psycho-analysis, designed to make patients aware of their sub-conscious desires: hypnosis or, as Freud preferred, the mnemonic effects of free association, could retrieve repressed feelings and ease nervous symptoms. Patients who rose from his couch walked more freely than before.

Freud seemed able, from the evidence of a few of his patients, to illuminate the human condition. Every child—he claimed to show—experiences before puberty the same phases of sexual development; every adult represses similar fantasies or experiences. Women who only a few years previously would have been dismissed as hysterical malingerers became, in Freud’s work, case studies from whose example almost everyone could learn: this made an important, indirect contribution to the re-evaluation of the role of women in society. For some patients psycho-analysis worked, and in his own lifetime Freud was successful in representing his psychology as scientific. His ‘science’, however, failed to pass the most rigorous tests: when Karl Popper asked how to distinguish someone who does not have an Oedipus complex, the psychoanalytic fraternity had no answer. And despite its pretensions, the study of the sub-conscious tended to make society seem unscientific: if the mental features Freud claimed to discover really did occur, as he thought, at all times in all cultures, they were of no help in explaining cultural differences. Ironically, since the only people he studied directly were members of the Western bourgeoisie of his day, it may be that features he represented as universal were themselves the products of cultural divergence.

* * *

Weber, Ward, Boas, and Freud, considered from one aspect, were immersed in the intellectual priorities of their own days: part of an immense project, a loosely connected movement, among radical thinkers to unpick the complacency of nineteenth-century Western thinking. Historians have a habit of tampering with chronology: treating the twentieth century as starting in 1914, for instance, as if the trenches of the Great War were a crucible for the world. The years preceding the war become, in this tradition, a period of inertia when nothing much happened—a golden afterglow of the Romantic Age, turned blood-red by the real agent of change—the war itself. But even before the war broke out in 1914, when the worlds of thought and feeling were already alive with new hues, a scientific counter-revolution exploded inherited certainties.

When the century opened, the scientific world was in a state of self-questioning, confused by rogue results. In the 1890s, x-rays and electrons were discovered or posited, while puzzling anomalies became observable in the behaviour of light. In 1902 the French mathematician Henri Poincaré questioned what had previously been the basic assumption of scientific method: the link between hypothesis and evidence. Any number of hypotheses, he said, could fit the results of experiments. Scientists chose between them by convention—or even according to ‘the idiosyncrasies of the individual’. 12 Among examples Poincaré cited were Newton’s laws and the traditional notions of space and time. He provided reasons for doubting everything formerly regarded as demonstrable. He likened the physicist to ‘an embarrassed theologian,…chained’ to contradictory propositions. 13 His books sold in scores of thousands. He became an international celebrity, whose views were widely sought and widely reported. He frequented popular platforms, as a celebrity scientist today might haunt TV chat shows. Unsurprisingly, in consequence, he claimed to be misunderstood. Readers misinterpreted Poincaré to mean that ‘scientific fact was created by the scientist’ and that ‘science consists only of conventions…Science therefore can teach us nothing of the truth; it can only serve us as a rule of action.’ 14 But the history of science is full of fruitful misunderstandings: Poincaré was important for how people read him, not for what he failed to communicate.

Without Poincaré, Einstein would have been unthinkable. The former published his critique of traditional scientific thinking in 1902. Three years later Einstein emerged from the obscurity of his dead-end job in the Swiss Patent Office, like a burrower from a mine, to detonate a terrible charge. Relativity made absurdities credible: twins of different ages, light that is simultaneously waves and particles. Within the next few years physicists split the atom and revealed the dazzling gyrations of the quanta of which all matter is composed. While science subsided to the level of convention, Ferdinand de Saussure raised doubts about the reliability of language to capture facts. In lectures he gave in Geneva in 1907, the influence of which gradually seeped into every educated mind, de Saussure questioned whether words can match reality. He made meaning seem a construct of culture, rather than an objectively verifiable property of the world, and placed language outside the reach of scientific explanation. Common sense crumbled. Notions that had prevailed since the time of Newton turned out to be misleading. 15

The arts made confusion visible and audible. Painting, which is the mirror of science, held up shattered or distorted images of the world. Primitivism subverted racial hierarchies. Cubism distorted perceptions. After reading about the splitting of the atom, Kandinsky set out to paint ‘abstract’ pictures that were as removed as possible from anything real. The syncopations of jazz and the new noises of atonal music—released in Schönberg’s Vienna in 1908—subverted the harmonies of the past as surely as quantum mechanics began to challenge ideas of order. The period was both a graveyard and a cradle: a graveyard of certainties, the cradle of a civilization of crumbling confidence, in which it would be hard to be sure of anything.

Potentially devastating philosophical malaise eroded confidence in traditional notions about language, reality, and the links between them. By 1914, the New York Times averred, ‘the spirit of unrest’ had ‘invaded science’. 16

* * *

Still, the linear narratives of change that Marx and Darwin had proposed survived and scholars’ desire to explain cultural change scientifically kept resurfacing. In part this was because science continued to solve other kinds of problem with enviable ease. Science remoulded life—sometimes for the worse, but generally with godlike dexterity. Technology hurtled into a new phase. The twentieth century would be an electric age, much as the nineteenth had been an age of steam. In 1901, Marconi transmitted wireless signals across the Atlantic. In 1903 the Wright brothers took flight. Plastic was invented in 1907. The curiosities of late nineteenth-century inventiveness, such as the telephone, the car, and the typewriter, all became commonplace. Other essentials of technologically fulfilled twentieth-century lives—the atom-smasher, the ferro-concrete skyscraper frame, even the hamburger and Coca-cola—were all in place before the First World War. It began to look as if technology could do anything. In the rest of the century it almost did. Military technology won wars. Industrial technology multiplied food and wealth. Information systems devised in the West revolutionized communications, business, leisure, education, and methods of social and political control. Medicine saved lives.

Partly in consequence of progress in technology, practical medicine registered spectacular advances. X-rays and the successor technologies that improved on their readings made the secrets of physiology visible. Doctors could control diseases ever more effectively by imitating the body’s natural hormones and adjusting their balance: that story began with the isolation of insulin, which controls diabetes, in 1922. In 1928, penicillin was discovered: the first antibiotic—a killer of micro-organisms that cause disease inside the body. Microbes evolved with stunning rapidity, but on the whole the drugmakers kept pace with them. Preventive medicine made even bigger strides, as inoculation programmes and health education—gradually, over the course of the century—became accessible almost everywhere in the world. Doctors sometimes aggravated bad health by inventing new diseases, medicalizing social problems, and convincing healthy people that they were ill—but these things were evidence of the prestige and power of medical professionals. Despite the annoying way in which new diseases evolved, there seemed no limits to what medical science could do: prolonging life to the point at which it became conceivable to defeat death.

Meanwhile new fields of study transformed human biology, with further consequences for medicine. Beginning in 1908, T.H. Morgan at Columbia University initiated a series of experiments in animal breeding that ultimately demonstrated how some characteristics are inherited by means of the transmission of genes and led, in the second half of the century, to a new form of medicine in which doctors could treat disease directly by manipulating people’s genes. After Morgan’s famous work with fruit flies—demonstrating how chromosomes are vectors of heredity—no reasonable person could doubt the power of evolution to explain the way living organisms change. Neuroscience, increasingly, appropriated psychology for biology, making enormous progress in mapping the brain, demonstrating the distribution of mental functions, and recording how electrical impulses and releases of proteins occur, as different kinds of thinking, feeling, memorizing, and imagining take place. Even the notion of ‘mind’ distinct from brain became incredible to some observers. Among them was Charles Hockett, one of the few scholars educated in Boas’s tradition—a pupil of Boas’s pupils, formed in the kind of fieldwork Boas enjoined—to react unreservedly against the master (we shall meet another, Leslie White, soon: below, p. 160). He turned back to the project of reclassifying culture as a subject of biology. In 1948 he proposed the term ‘sociobiology’ to denote the kind of science he foresaw. 17 Later developments would make Hockett seem representative and his terminology prescient.

* * *

The science that grew most spectacularly in the twentieth century focussed on the environment. The rise of ecology, the study of the interconnectedness of all life and its interdependence with aspects of the physical environment, exposed a vast range of new practical problems arising from human overexploitation of the environment and became a major source of influence on changes in the late twentieth-century world. The context was rampant consumerism in the Western world. While global population roughly quadrupled during the twentieth century, per capita consumption increased almost twenty times over—almost all of it concentrated in the United States and a few other Western countries. As early as the 1920s the Jesuit polymath Pierre Teilhard de Chardin saw the pain of what he called the biosphere, stressed by the demands of humankind. He proposed a synthesis of evolutionary science and theology, which proved too religious for many scientists and too scientific for many theologians. But he had a convincing message: what he called the biosphere was a single, vast, fragile system, every part of which depended on others.

The effects of the ecological turn were equivocal. On one hand, growing awareness of global environmental problems gave science a new role: to confront previously unidentified dangers from climate change and microbial mutation, which threatened to shrink humans’ habitats or decimate them in a new age of plagues. On the other hand, science seemed to be adding to the problems rather than solving them. Every technology scientists devised seemed to spawn adverse consequences. Hydroelectricity supplied energy but leached moisture and nutrients from soil. Nuclear power improved on fossil fuels but generated intractable waste. The ‘Green Revolution’ fed millions who might otherwise have starved, but decimated bio-diversity and impoverished poor farmers.

People gradually became more aware of the potential exhaustion of the Earth’s resources and of the havoc arising from the growing volumes of fertilizers, pesticides, and pollutants that poisoned the Earth. 18 Insects lost their weedy habitats. The birds, reptiles, and small mammals that feed off the insects lost their food supply. By the 1960s, the effects were so marked that Rachel Carson, a former United States government marine biologist, published her immensely influential book, Silent Spring, in which she predicted an America without birdsong. An ecological movement sprang up and mobilized millions of people, especially in Europe and America, to defend the environment against pollution and overexploitation. ‘Pollution, pollution,’ sang the satirist Tom Lehrer, warning listeners to beware of two things: ‘don’t drink the water and don’t breathe the air’.

Norman E. Borlaug, the Nobel Prize-winning agronomist who helped to develop fertilizer-friendly crops, denounced ‘vicious, hysterical propaganda’ against agrochemicals by ‘scientific halfwits’, but he could not stem the tide of environmentalism at a popular level. Only the resistance of governments and big business could check it. Partly because the environment seemed too important to leave to any one body of experts, environmental studies became an interdisciplinary opportunity. Oxford is usually comfortably padded against shocks from the outside world, but even there, in the early ’seventies, a few of us, led by Alistair Crombie, started a seminar on what we called ‘historical ecology’, trying to understand humans in relation to the whole of the rest of nature: the climate that surrounds us, the landscape that enfolds us, the species with which we interact, the ecosystems in which we are bound. So although the environment was a zone in which scientists faltered or failed, the effect of the ecological movement was, on the whole, to make students of the humanities yearn to be better informed about science.

Other twentieth-century circumstances also favoured the reconvergence of science with the humanities. In the North American system of higher education undergraduates had to study both. The United States increasingly dominated the world of learning, as wars and relative economic decline undermined the former superiority of German, British, and French universities. By and large, US institutions scooped the prizes, forged the innovations, financed the research, and published the journals. The cleavage of universities into ‘two cultures’ was still strong at the level of the professorate: indeed, the chasm broadened during the course of the twentieth century, as all disciplines got increasingly specialized, and therefore increasingly introspective, while some interdisciplinary departments split—physical anthropologists, for instance, deserting their cultural colleagues, environmental scientists separating from geographers, and econometricians abandoning practitioners of social studies. Yet at least the US universities bred scientifically literate, numerate humanists.

* * *

Meanwhile, social problems opened up an opportunity for science and a battleground with the arts: first war, which dominated the first half of the century, then postwar anomie, which dominated the rest.

The First World War checked the drift to Boas’s views among students of society: his revulsion from nationalism and jingoism was out of step with the times; his pacifism and his refusal to condone the misuse of anthropologists for espionage and propaganda triggered accusations of treachery. His German origins made him an easy target for his enemies. His views remained precariously supreme among anthropologists, but their wider acceptance stalled or ceased. 19 In any case the climate of war favoured investment in technology and gigantic intellectual oversimplification, not subtle thinking or avowedly useless knowledge. War in general, rather than any war in particular, was the main source of re-evaluations of the relationship of culture and evolution.

Alike for those who made a virtue of conflict—who thought it winnowed the weak, or enabled progress, or stimulated heroism and self-sacrifice—and those who knew its vices, it was vital to know why war happened. The problem of whether it was natural or cultural brought the relationship between nature and culture to the foreground of the debate. Everyone responded according to his or her prejudices. Field Marshal Montgomery used to refer enquirers who asked about the causes of conflict to Maeterlinck’s The Life of the Ant, whereas, according to the free-thinking relativist, Margaret Mead, ‘war is an invention, not a biological necessity’. 20

Ever since classical antiquity, at the latest, the issue had divided learned opinion. According to a notion widely diffused among ancient philosophers, humans are naturally peaceful creatures, who had to be wrenched out of a golden age of universal peace by socially corrupting processes. Little evidence supported that notion; equally little supported the opposite view: that humans’ natural violence is uncontrollable except by coercive social and political institutions. After watching the death-tolls of the First World War pile up, Freud frankly proposed to fill the evidence gap with speculation, inspired by the recurrence of traumatic themes in dreams and in children’s play. A death-wish, he thought, may be embedded in the human psyche and—in conflict with the drive for life, which includes a potentially violent sex-urge—‘comes to light in the instinct of aggressiveness’.

The politics of the twentieth century exacerbated the dispute: it suited the Right to extol competition as natural, while the Left wanted to believe that naturally collaborative instincts would shape society.

Some supporters of the view that war is part of the natural order of things attempted to supply the deficiencies of proofs by appealing to analogies with various animals. Indeed, zoologists and ethologists often seem to find it hard to resist the temptation to extrapolate to humans from whatever other species they study. In the case of animals closely related to humans in evolutionary terms, such as chimpanzees and other primates, the method is often fruitful. Konrad Lorenz, however, got his inspiration from studying gulls and geese. His work before and during the Second World War inspired a generation of research into the evolutionary background of violence. He found that the birds he worked with were determinedly and increasingly aggressive in competing for food and sex. He suspected that in humans, too, these instincts would overpower any contrary tendencies. Neither the taint of Lorenz’s enthusiasm for Nazism, nor the selectivity of the data he used to support his views of human and non-human animals, could prevent him from winning a Nobel Prize or exercising enormous influence, especially when his major work became widely available in English in the 1960s.

Among his admirers, Robert Ardrey, the Chicagoan playwright and pop-anthropologist, focussed the search for the origins of war on ‘a force’ in evolution ‘perhaps older than sex’. 21 He called it ‘the territorial imperative’. He was an accomplished popularizer, who helped convince inexpert readers of the African origins of humankind. His intervention in the debate about instinct was less felicitous. He explained war as the indirect outcome of the drive for survival, which demands territory—it is tempting to retranslate Ardrey’s use of the word as Lebensraum—to secure food and water. In the case of humans, Ardrey argued, a long past spent in dependence on hunting as a food-source sharpened and deepened the aggressive instinct. Our reasons, he said, for fighting to defend land are ‘no less innate’ than those of other animals. He airily dismissed the notion that culture could contribute anything to behaviour independently of nature. One day, he ventured, science would discover that learning and instinct are both ‘based on the molecule within the cell’. 22

At the time, the relatively scanty archaeological record of intercommunal conflict in paleolithic times seemed to support the case for seeing war as an artefact of some cultures. Now, however, evidence of the ubiquity of violence has heaped up, in studies of ape warfare, of war in surviving forager-societies, of psychological aggression and of bloodshed and bone-breaking in Stone-Age archaeology. In some versions of the fate of the Neanderthals, our own ancestors wiped them out. The evidence is insufficient to support this, but the world’s earliest known full-scale battle was fought at Jebel Sahaba about 11,000 years ago, in a context where agriculture was in its infancy. The victims included women and children. Many were savaged by multiple wounds. One woman was stabbed twenty-two times. The strategy of massacre is found today among peoples who practise rudimentary agriculture. The Maring of New Guinea, for instance, normally try to wipe out the entire population of an enemy village when they raid it. ‘Advanced’ societies seem no different in this respect, except that their technologies of massacre tend to be more efficient.

Primatologists have witnessed so much warfare in the wild that many of them assume that our wars are part of a general pattern of behaviour among apes. A team led by the mould-breaking primatologist Jane Goodall, whose fieldwork uncovered numerous previously unknown aspects of chimpanzee behaviour, first saw it in Gombe early in 1974, when a group of chimpanzees sent out a party of eight warriors—including the group’s alpha male and one sterile female—against neighbours to the south. The mature males did the killing, while the female provided whoops of encouragement, and a young member of the expedition watched and learned. 23 The primatologists kept up observations of prolonged, generally seasonal raids between the hostile groups. The war lasted four years. Further outbreaks in the same region have occurred periodically. The level of violence is horrific. Encounters take the form of raids from each community into the other’s foraging grounds; when they find a lone male some of the invaders pinion him while others, yelling and leaping with frenzy, hurl rocks and batter him insensible, and usually to death, ripping at testicles, limbs, and fingers, crushing bones. The raiders typically abduct females, rather than killing them, but slaughter their young. Among chimpanzees, as among humans, warbands are staffed almost entirely by males. At Ngogo in Uganda, John Mitani of the University of Michigan has reported what looks like chimpanzee imperialism. Over a period of ten years, an exceptionally large group, some 150 strong, has made war a specialized strategy for increasing its resources. Bands of about twenty raiders infiltrate neighbouring territory, advancing in single file, cautiously and silently, picking off enemies one by one. The war ends with the extinction or absorption of the victim group and the annexation of the entire territory. 24 Robert Ardrey’s nightmare of humans’ ancestral ‘killer apes’ seems embodied in the tale.

These data do not, however, prove that humans are hard-wired for war. On the contrary, the evidence from chimpanzees puts war into the category of culturally variable behaviour, rather than an inescapable, universal, hard-wired trait. Primatologists in Côte d’Ivoire have watched for war but have not seen it. 25 The archaeological evidence suggests, at least equally, that the scale on which societies organize violence made a huge leap when people started settling in permanent villages and practising tillage.

Talk of an ‘aggression gene’ makes no better sense than any other reference to straightforward, one-to-one mapping of genes and behaviours. If there were a preponderant violence gene in most modern people’s DNA, it could be an effect of a warrior culture, not a cause of it. Not all cultures behave as if they have it, and, as we have seen (above, p. 42), the balance of probabilities is that the San, say, lack the supposed ‘aggression’ gene found among the Yanomamo because their culture eschews conflict, while the Yanomamo abound in aggression because their society exalts violence. In any case, it is doubtful whether the causes of war are best sought through the quest for an explanation of violence. The evidence from ape violence is impressive, and shows that chimpanzees can organize for bloodshed and battle in small bands, like human gangs of streetfighters and thugs; but it also suggests that violence-genes, if they exist, are not enough to make whole cultures warlike: for war, individual urges to violence have to be controlled by collaborative imperatives. War is, in one sense, more the result of collaborative than competitive tendencies. If one wants a gene to be responsible, it might be better to look for a team-sport gene than a violence-gene: war more resembles games like rugby or hockey than crimes like murder or mugging.

* * *

Though war has proved a disappointing line of enquiry, another, better opportunity arose for relating culture to biology in the context of a broader controversy about the origins of social problems in general. One of the twentieth century’s most significant scientific disputes—significant, that is, in its direct impact on people’s lives—was the ‘nature versus nurture’ debate. On one side were those who believed that character and capability are largely inherited or otherwise determined, and therefore not adjustable by ‘social engineering’. On the other were those who believed in the power of experience and who insisted that culture can therefore affect our moral qualities and achievements. Broadly speaking the conflict again pitched the Left against the Right, with supporters of social radicalism ranged against those reluctant to make things worse by ill-considered attempts at improvement.

The controversy staggered and stalled and ended in stalemate, with an undogmatic consensus that emerged among experts in the 1920s and remained more or less intact for four decades. ‘Hereditarians and environmentalists’, according to the summary by the historian of the conflict, Hamilton Cravens, ‘assumed the interaction of culture and nature…reaffirmed man’s animal ancestry, his descent from the brutes, and at the same time they explained his social behaviour in cultural terms’. 26 During the late 1960s the debate recrystallized, however, in the pages of rival academic reports. Arthur Jensen, at the University of California, Berkeley, claimed that 80 per cent of intelligence is inherited (and, incidentally, that blacks are genetically inferior to whites). 27 Christopher Jencks and others at Harvard used IQ statistics to argue that aptitude is predominantly learned. The same argument was still raging in the 1990s, when Richard Herrnstein and Charles Murray published The Bell Curve, arguing that society has a hereditary ‘cognitive elite’ and an underclass, in which blacks are disproportionately represented. 28

The IQ evidence was unconvincing: subjective tests, unreliable results. Developments in genetics, however, fed the anxieties. Genetic research in the latter half of the twentieth century seemed to confirm that more of our makeup is inherited than was previously supposed. In lectures in Dublin in 1943 the Austrian physicist Erwin Schrödinger speculated about what a gene might look like. He predicted that it would resemble a chain of basic units connected like the elements of a code. It was not yet known that the hereditary material was DNA, a nucleic acid, and Schrödinger expected a kind of protein, but the idea he outlined stimulated the search for the ‘building blocks’ of life.

A few years later, James Watson, a biology student in Chicago, read the published version of Schrödinger’s lectures. When he saw x-ray pictures of DNA, he realized that it would be possible to discover the structure Schrödinger had envisaged. He joined Francis Crick’s project at Cambridge University to identify DNA’s molecular form. They got a great deal of help (not very generously acknowledged) from a partner laboratory in London, where Rosalind Franklin suspected that DNA had a helical structure. It took a long time for the significance of the results to emerge fully: increasingly, Crick’s and Watson’s readers realized that particular genes in individuals’ genetic codes are responsible for some diseases. By analogy, behaviour, perhaps, could be regulated by changing the code. Two fundamental convictions have survived in most people’s minds: that individuals make themselves, and that society is worth improving. Still, we find it hard to resist the feeling that genes circumscribe our freedom to equalize the differences between societies and individuals.

Progress in research has been so rapid that it has raised the spectre of a world recrafted, as if by Frankenstein or Dr Moreau, with unforeseeable consequences. People now have the power to make their biggest intervention in evolution yet—selecting ‘unnaturally’, not according to what is best adapted to the environment but according to what best matches agendas of human devising. ‘Designer babies’ are already being produced in cases where genetically transmitted diseases can be prevented, and the prospect that some societies will want to engineer human beings along the lines that eugenics prescribed in former times is entirely likely. Morally dubious visionaries are already talking about a world from which disease and deviancy alike have been excised.

Meanwhile, the genetic revolution filled in a gap in Darwin’s description of evolution: genes provided what it is tempting to call a missing link in the way evolution works—explaining the means by which traits pass from parent to offspring. It became rationally impossible to doubt that Darwin’s account of the origin of species was essentially right. Evolution seemed attractive again as a theory of potentially elastic power that could stretch to cover culture. The decoding of DNA, moreover, profoundly affected human self-perceptions, nudging people towards a materialist understanding of human nature. It has become increasingly hard to find room in human nature for non-material ingredients, such as mind and soul. ‘The soul has vanished,’ Crick announced. 29 Cognitive scientists subjected the human brain to ever more searching analysis. Neurological research showed that thought is an electrochemical process in which synapses fire and proteins are released. These results made it possible, at least, to claim that everything traditionally classed as a function of mind might take place within the brain.

Artificial intelligence research reinforced this claim—or tried to, with a new version of an old hope or fear: that minds may not even be organic but merely mechanical. Francis Picabia painted an amorous machine in 1917. Automata were an old topic of romance, but after Karel Čapek introduced what he called robots in a play in 1921, mechanical humanoids featured increasingly as antiheroes of science fiction—the imaginary next stage of evolution, succeeding humankind as the inheritors of Earth. In the second half of the century, computers proved so dexterous, first in making calculations, then in responding to their environments, that they seemed capable of settling the debate over whether mind and brain were different. The debate was unsatisfactory because people on different sides were really talking about different things: AI proponents were not particularly concerned to build machines with creative, artistic imaginations, or intuitive properties, or with susceptibility to love or hatred—things opponents of the AI concept valued as indicators of a truly human mind. Questions of this kind could only be tested by working on ever more sophisticated robotics, and seeing whether robots with highly complex circuitry developed the cognitive properties humans have.

Progress in AI did, however, influence the debate about culture. Obviously the intelligent machines AI researchers strove for could hardly qualify as ‘natural’. Nevertheless, they helped undermine people’s confidence in ‘mind–body dualism’—the belief that mind operates in ways beyond the scope of the brain. Maybe humans’ thinking equipment is not merely mechanical, but if AI’s assumptions are right, it must be biological, at best, and not metaphysical. Early in the twenty-first century, some AI exponents shifted focus to biological, rather than mechanical, modelling, attracted, in particular, by the impressive brainpower of cuttlefish and octopods, which can manipulate shells to obtain shelter and communicate by radiating colour-coded signals. According to some experiments, they can imitate observed behaviour—which is a prerequisite for culture.

* * *

In significance for the debate about the origins of cultural diversity, genetics and AI pale by comparison with the big new source of data in the 1960s and 1970s: the lessons that accrued (as we shall see in the next chapter) from the study of the cultures of non-human primates. It became possible to envisage what the ingenious Harvard entomologist Edward Wilson called a ‘new synthesis’ of nature and culture. Wilson loved his ants. I recall an occasion when my wife jokingly asked his advice on how to cope with an infestation of carpenter ants in our house. ‘The important thing is,’ Wilson smilingly replied, ‘when you feed them honey, which is their favourite food, don’t forget to add a little water. It’s bad for them to eat too much if you don’t dilute it.’ In 1995, when an ill-tempered controversy with Harvard colleagues made Wilson think about taking a chair elsewhere, he decided to stay, because ‘I could not bear to leave Harvard’s ant collection’. 30

Consciously, I think, Wilson made an implicit contribution to political debate. He favoured nature over nurture in the dispute about the origins of social problems. He found cultural relativism disturbing and looked for arguments in support of the superiority of some societies over others. He helped to create a powerful scientific constituency for the view that differences between societies arise from evolutionary pressures and for the inference that some societies can be ranked accordingly as more evolved than others and therefore, in a sense, as better. He often insisted that biological and environmental constraints do not detract from human freedom, but his texts seemed bound in iron, with little spinal flexibility, and close-printed without space for freedom between the lines. He imagined a visitor from another planet cataloguing humans along with all the other species on Earth and shrinking ‘the humanities and social sciences to specialized branches of biology’. 31 Ants and bees were his models for understanding humans, as gulls and geese were Lorenz’s. Humans differ from insects, according to Wilson, mainly in being individually competitive, whereas ants and bees are more deeply social: they function for collective advantage.

The comparison led Wilson to his great insight. What he called ‘flexibility’ or variation between human cultures is, he suggested, the result of individual differences in behaviour ‘magnified at the group level’ by the multiplicity of interactions. That seems a promising line of thought, since there is, as we shall see, an observable link between the size and numbers of intercommunicating groups, the range of exchange between them, and the cultural diversity they exhibit. Wilson was on less secure ground in supposing that the mechanism that makes cultural change possible is genetic. By the time he wrote his most influential text, Sociobiology, in 1975, researchers had already discovered or confidently postulated genes for introversion, neurosis, athleticism, psychosis, and numerous other human variables. So it was theoretically possible, Wilson argued, that evolution ‘strongly selected’ genes for social flexibility, too, although there was and is no direct evidence. 32

He also reasoned that lack of competition from other species has meant that humans can occupy a wide range of possible social configurations, just as they can dominate an extraordinary range of physical environments; the argument seems fallacious and the premise false. Humans have colonized most of our habitats in defiance of competitors; and in any case there is no reason, as far as I know, why cultural diversity should not promote human success in competition with rival species. It might well be an advantage, since the more cultures construct means of coping with competitor species, the greater the likelihood that a successful strategy will emerge. Strictly speaking, if cultural diversity were not conducive to the survival of species, it would fail to match the basic criterion of a successful evolutionary adaptation.

Wilson admitted the possibility of ‘nongenetic’ cultural traits that ‘could be arrayed alongside biology’. 33 But these were only the fastest-changing kinds, such as fashions and tastes, which are too volatile to explain genetically. Universal features of culture, such as incest prohibitions, taboos, totemism, magic, religious beliefs, and rituals must, according to Wilson, be genetically encoded. Their emergence would be predictable, even in a society built from scratch in isolation. 34 That may be true, but it still leaves unexplained the practical variety of these features’ forms that different societies display.

Working in parallel with Wilson, but independently, the equally ingenious Richard Dawkins produced, the year after Sociobiology appeared, what on the face of it seemed an appealing take on the question. Like Wilson, he claimed to believe in human freedom to elude genetic inheritance, but never made it clear how. Unlike Wilson, who was conciliatory in his language about religion, Dawkins was an apostle of atheism. Though he was a zoologist by training and spent a quarter of a century as a zoology tutor at Oxford, he was more gifted as a writer and popularizer than as a researcher. He produced delightfully eloquent books, becoming a professor not of science but of the ‘public understanding’ of it, occupying a chair specially created for him.

Whereas Wilson argued that cultures are collections of evolved individuals, whose inherited characteristics determine what happens to human communities, Dawkins claimed that ‘units’, of which, he said, culture is composed, behave in ways so closely analogous to genes as to conform to evolutionary rules. Biota evolve as genes replicate. Genes, according to the standard figure of speech, are units of information or fragments of code. Dawkins thought culture is composed of similar bits of information, which he called ‘memes’. Successful memes replicate spontaneously, using and sometimes abusing their hosts, just as viruses do. They evolve, for instance, by selection of environmentally successful variations or by way of competition among units within culture, just as genes replicate within organisms. Culture spreads like a virus, colonizing minds the way microbes invade bodies. 35

In outline, there was nothing new about drawing analogies between cultural and genetic change. ‘Cultural heredity is analogous to genetic heredity,’ wrote an earlier ant-enthusiast, the Chicago zoologist Alfred Emerson, in 1965. 36 Other researchers preceded Dawkins in claiming to be able to split culture into discrete units. 37 Wilson called such units ‘culturgens’. 38 But none of Dawkins’s predecessors quite anticipated his key innovation: the division of culture into units that were not only discrete but also self-replicating. Emerson, for instance, even endorsed ‘the valid division line between social and biological sciences’, because he could see no medium, other than ideas transmitted from teacher to learner, for what he called ‘social heredity’. 39 The name Dawkins coined for the replicators he postulated was irresistibly cute. Thanks, I think, to his deftness as a wordsmith, the notion was staggeringly successful: to borrow one of the author’s own favourite similes, it spread like a virus among his readers and their readers in turn. ‘Memes’ passed instantly into the realm of popular wisdom. ‘Memetics’ became an academic sub-discipline.

Yet on close examination the whole notion seems vacuous, not least because there is no evidence for the existence of memes, in the sense of evolved units of culture, or of any mechanism analogous to heredity, by which evolution could select them for transmission to other cultures. Unlike genes, which can be transmitted intact, from one generation to another between bodies that cannot modify them, culture is transformed in the act of transmission, between active brains that sometimes modify it—by misunderstanding it, or consciously revising it, or reacting to it with some new inspiration. So, if culture could be broken up into constituent units, they would not resemble self-mutating replicators so much as mutable representations. 40

According to Dawkins, a meme is a ‘replicating entity’ and ‘a cultural trait [that] may have evolved in the way that it has, simply because it is advantageous to itself’ 41 —not to the people or society who adopt it. It would be inconsistent with Dawkins’s concept even to speak of memes being ‘adopted’ in any sense that implies conscious adoption: rather, they colonize their host societies, somewhat as parasites infest bodies. This is a doubly unsatisfactory doctrine. First, it requires another set of explanations to account for why different traits achieve different levels of social influence: it is easy to accept, for instance, that genes for brown eyes should prevail over those for blue eyes in a body where both are inherited; but the same mechanisms cannot explain why, say, Islam should prevail over Christianity in a society with access to both. 42 Second, in the imaginary world of the meme, elements of culture have no way of emerging except by a form of self-replication reminiscent of spontaneous generation: innovations occur by way of random mutation, rather than as a result of human inventiveness.

Even Dawkins finds this an unsustainable way of thinking about culture. He credits Socrates, Leonardo, Copernicus, and Marconi with ‘contributions’ of ‘meme-complexes’ commendable for their longevity. He admits, in effect, that human minds originate cultural traits—which is what everyone’s experience suggests. If that is so, it is unnecessary to endow memes with a life of their own. Humans think them up in the first place; so humans can adopt them and reject them as they wish.

Indeed, what Dawkins calls cultural traits can all be fairly represented as ideas, because everything else he includes—technologies, techniques, tunes, teachings—does not appear on earth fully formed or leap from culture to culture except, in the first instance, as purely mental facts, communicated between minds. Even in the case of an artefact that arrives by trade or chance in a milieu where it is unfamiliar, and spreads by being copied, it is not effectively transmitted from its culture of origin to its host culture unless and until a recipient conceives an idea of it. At the risk of oversimplification, we might summarize the case against memetics like this: in genetics, mutations arise randomly and spread according to evolutionary laws, whereas in culture, innovations arise consciously and spread capriciously. Unsurprisingly, despite its short-lived vogue, and the passion of some enduring partisans, 43 memetics has become part of the lumber of sociobiology, rejected even by scientists keen to maintain faith in biological models of culture change. In the second edition of their textbook on cultural evolution, the biologist Kevin Laland and the evolutionary psychologist Gillian Brown have simply dropped the chapter on memetics in its entirety. 44 The doyen of British sociology, Walter Runciman, who used his immense influence admirably to try to reconcile sociologists to science, clung to the word ‘meme’ in the distillation of his life’s work, The Theory of Cultural and Social Selection, which appeared in 2009. He defined the term, however, more strictly than Dawkins, as ‘packages of information’, leaving ‘practices’—which are what culture consists of—out of the category and shifting the explanation for the success of some such packages away from the supposedly inherent self-advantage of the unit to ‘the features of the environment which do or do not favour the reproduction and diffusion of the memes’. 45 ‘Meme’ was a term of convenience for Runciman, eluding circumlocution, without the magical, angelic, or demonic nature or function it had in mainstream memetics.

Dawkins was right, I think, to suppose that when culture begins, biology yields the driving seat and culture’s own dynamic takes over. He was wrong to think that ensuing changes are properly called ‘evolution’ and happen in ways closely analogous to changes in organic life.

* * *

In any case, we do not have to rely on speculative vapourings for a sense of how cultural changes happen. There are plenty of empirical data, which the memetics fanatics overlook, in the field known as diffusion studies, which sociologists, economists, and business students till freely and deeply in their own work, but which has still-unrealized implications for the understanding of cultural change in general. Students of diffusion focus on innovation, but all cultural change starts as innovation; so their findings are relevant and, on the whole, they subvert memetics.

The scholar with the best claim to have founded diffusion studies, or at least to have launched the diffusion studies movement, was Everett Rogers, whose 1962 book Diffusion of Innovations might have made a good case-study, attracting imitators, generating allied research, and diffusing, in its own right, around the world. His starting-point was in agricultural economics, the discipline to which his background as a farmer’s boy called him. When he was only 5 years old, Rogers found his father’s selective attitudes to new farming technologies puzzling. Dad embraced mechanical innovations, but distrusted hybrid seed corn, which the state of Iowa promoted heavily in the 1930s. Like some of his neighbours, Rogers senior feared sacrificing independence to the experts and committing himself to reliance on the suppliers of new seed, instead of selecting it himself from his previous season’s corn. The rest of young Rogers’s life was devoted to making sense of his childhood experience.

Successive editions of his magnum opus, though tedious to read, do a great job of summarizing his research and that of the followers he inspired. Rogers thought he was engaged in a scientific enterprise. He aimed to expose scientific laws that would predict which innovations would succeed, and how fast people would take them up: the illusion of predictability attracted funds from business and governments and made the diffusion discipline rich and active. Rogers devised mathematical models that are still useful to some forecasters. He also constructed a standard narrative of the spread of innovation, dividing the process into phases according to the rate of adoption, and produced descriptions of the adopters most likely to emerge at each phase. These were convincing mainly, perhaps, because they were platitudinous: young, educated, large-scale operators would likely be among early adopters, for instance, while the poor and old would bring up the rear. Some of Rogers’s nostrums were of the kind you hardly need expensive research to anticipate: if you want to promote innovation, get opinion-makers on your side, appeal to the existing prejudices of the community, advertise.
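
To fix ideas: forecasters in Rogers’s tradition typically represent cumulative adoption as an S-shaped curve. The logistic form below is a standard illustration of that kind of model, not Rogers’s own notation; here $N(t)$ is the number of adopters at time $t$, $m$ the eventual ceiling on adoption, $k$ the rate of uptake, and $t_0$ the moment at which adoption is fastest.

$$N(t) = \frac{m}{1 + e^{-k(t - t_0)}}$$

Rogers’s adopter categories (innovators, early adopters, early majority, late majority, laggards) are, in effect, labels for successive segments of such a curve.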

The most interesting findings of diffusion research, for our present purposes, were of a different kind, demonstrating the prevalence of wild cards in the pack and the importance of serendipity and misunderstanding in making some innovations catch on. Most surprising of all the revelations of diffusion research is the fact that what an innovation is hardly matters to its chances of success. The nature of an innovation—how good or bad it is, how economical, how attractive, how flexible—has far less impact than the cultural context that receives or rejects it. One of Rogers’s most engaging examples was the story of ‘Serendipity in the Discovery of Warfarin’. The research that produced the drug in the 1930s was the result of an investigation into the cause of haemorrhaging in cows; people accepted it, a generation after its discovery, with an extraordinarily counter-intuitive kind of enthusiasm, as the world’s most popular rat poison and treatment for human heart disease. 46 Similarly, the video-games player is among the most mysterious success stories in modern marketing, with propensities to amuse and entertain utterly disproportionate to the costs of the games; yet video games displaced traditional competitors of smaller cost and greater power to stimulate, such as the bat and ball or the book. The Nintendo company turned the new technology into a phenomenal world-conqueror by actually suppressing the fact that it was a kind of computer and, until well into the 1990s, inducing most purchasers to forgo most of its functions. 47 According to Everett Rogers, in what sounds like a fictitious case, ‘Dr “Chicken” Davis, a US poultry expert’ introduced millions of battery chickens into Eastern Nigeria in the 1960s. Despite the project’s evident unsuitability, it earned ‘handsome profits’ and, for Dr Davis, the award of:

a hero’s medal by the President of Nigeria. Two weeks later, a poultry epidemic swept through Eastern Nigeria, killing all the imported birds…Within a year of ‘Chicken’ Davis’s departure, only an unpleasant memory remained of his work. Not a Western Chicken survived. 48

Rogers shared a prejudice with evolutionists: the success of an innovation, as of a species, would depend on relative advantage, and among the hard-headed US farmers who were the early subjects of his research ‘the economic aspects of relative advantage’ counted for a lot. Still, even that cost-sensitive constituency was more prone to value an innovation for promoting physical comfort than for restraining cost. Relative advantage generally proved to be more ‘an important part of the message’ advertisers spread than of the product itself. 49 In the whirligig of fashion, no one stops to ask whether a shorter hemline is a superior adaptation to the environment. Over and over again, diffusion studies showed that culture is crucial. Women in a Peruvian village resist water-boiling for sanitation because their traditional medicine identifies it as a remedy appropriate only for the sick. 50 Typists resist rational keyboard layouts because of traditional investment in the QWERTYUIOP system. 51 Balinese irrigators leave fields fallow because their religious cosmology so commands, not because it makes ecological sense (though, by coincidence, it does). 52 US farmers with elementary scientific education always want to test a product for themselves before adopting it, while counterparts in Colombia will accept an innovation on good authority. 53 The Amish are notorious for resisting consumerism, but Rogers found them ‘very innovative in adopting new ideas that fit with their religious and family values’, such as sustainability strategies and organic farming techniques. 54 In his story ‘How the refrigerator got its hum’, Rogers told how the gas-powered ’fridge, which dominated the market until after the First World War, was more economical, more robust, and much quieter than its hum-crazy electric counterpart. But the electricity companies mounted the investment and mobilized the publicity to drive the gas version out of the domestic market in the USA. 55 Some Australian aboriginal peoples refuse to kindle fire, but accept it ready-kindled from neighbouring tribes. The world is full of similar examples of peoples whose cultural prejudices have mandated apparently irrational adherence to inferior technology or irrational rejection of advantageous innovations: people who have abandoned navigation, or the bow and arrow, or firearms, or blood transfusions. 56

Caprice and culture also combine to ensure that bad technologies drive out good ones. The snowmobile almost destroyed the economy of the Skolt Sami when it replaced reindeer sleds in the late 1960s. 57 David Edgerton, the renowned historian of science at Imperial College, London, has gathered many fascinating examples in a book that undermines a common form of twentieth-century self-congratulation: the myth that technology has been uniformly progressive in recent times. 58 Some of his most striking examples relate to military technology. The atom bomb, the author argues, was not cost-effective compared with the conventional arms sacrificed to the costs of research. The German V-rockets cost half a billion dollars but were grotesque failures. General Patton longed for cavalry in North Africa and Italy. The contraceptive pill almost drove the condom off the market, but the old technology has proved itself more useful, cheaper, and safer in the long run. From personal knowledge, we can all add at length to the list of examples. In selecting new elements of culture, cultures respond to calculations not of potential advantage so much as of existing coherence.

Like fashions and technical innovations, food taboos illustrate the peculiar intractability of culture to influences from outside itself. All cultures have such taboos: indeed, like incest prohibitions, they might be classed as defining features of human cultures. Typically, enquirers have tried to explain them by seeking some rational, material, scientific motive for preventing the consumption of certain resources. Cicero was first in a long line of theorists to allege economic reasons—where bovines, for instance, are too valuable to eat, elites sacralize them as a conservation measure. 59 But this must be false, since people eat beef in many places where bovines provide vital services in ploughing, transport, and dairying, whereas sacralization greatly diminishes cows’ general exploitability. Food bans are not designed primarily to be ecologically adaptive, or to promote bio-diversity, or to spare threatened species—though such consequences sometimes ensue. Nor are taboos applied for reasons of health or hygiene—that is a long-discredited bit of silliness—though they may have beneficial effects on practitioners’ bodies. There is little or no difference in cleanliness, for instance, between meats Moses categorized as forbidden and those he permitted. The great anthropologist Mary Douglas made the nearest approach to a convincingly systematic justification of the Mosaic rules, arguing that the prohibited creatures are anomalous in their own classes and that integrity, necessary for holiness, is offended by terrestrial creatures that wriggle, or airborne ones with four feet, or those, like the pig and the camel, that part the hoof or chew the cud but not both. 60 The pretence that health or ecology is at stake vanishes.

It is pointless to seek rational or material explanations for taboos because they are essentially, necessarily super-rational. The lack of any rational purpose for particular taboos makes them socially functional, because they bind those who respect them and brand those who do not. If they had any objective justification—if, for instance, they induced health, or improved nutrition, or protected threatened species—they would not work, because they would appeal as much to outsiders as to those in the group. Permitted foods feed identity, excluded foods define it. In Fiji, no man may eat the plant or creature that is his totem, though a neighbour may eat of it freely. Plants that grow near a shrine or in a graveyard are taboo, but the same plants may be eaten if harvested elsewhere. Bemba women must protect their cooking hearths from practitioners of unpurified sex. Among the Batlokwa of Botswana, pubescent boys may not have honey. Teenage girls are not allowed eggs or fish. New mothers may not eat with their hands. 61

None of these examples on its own makes culture look like an entirely autonomous system, changing from within itself without input from genetics or environment. But in sum they do suggest, first, that culture has a dynamic of its own, the power of which greatly exceeds that of other sources of change; and, second, that it operates in people’s minds: the decision to adopt an innovation or imitate some other community’s behaviour is rarely rational, but it is always conscious. Cultural traits do not replicate like genes—people accept or reject them according to criteria of their own—criteria that have nothing to do with the merits of the innovations, or their potential for survival, or for enhancing the survival of the group.

Neither sociobiology nor memetics succeeded in their day in explaining culture satisfactorily. They operated, however, in a uniquely favourable context—not just because the prestige of science in the twentieth-century world disposed people generally to seek or accept professedly scientific explanations for everything, but also because, over the last half century or so, gradually, increasingly, the effort to understand culture has benefited from a previously unknown source of new information. All earlier enquiries proceeded a priori, with no evidence to go on except human culture itself and no standards of comparison, because of the conviction that humans are a uniquely cultural species. Now, however, we can do better. We are not alone. We know that there are—and in the deep past have been—other cultures ‘out there’ and that we have an opportunity, unavailable to our predecessors, to learn about ourselves from them. My own first lesson in the comparative invocation of non-human cultures came—though I did not appreciate it at the time—when I was 5 years old. It is now time to summon up the memory.