Chapter 3
The digital ape emerges
WE WERE BORN with axes in our hands. Everyone reading this book is perfectly fashioned by evolution to make and use one. The brain bred into us by handaxe culture is what enables us to read this sentence. We were tool-using higher apes, coexisting with tool-using hominins, long before we became Homo sapiens. We became expert handaxe makers and deployers, and expert teachers of our children about handaxes. That was a precursor to, and a necessary condition for, the development of language and the complex self-aware ideation which goes with it. Our modern brains were both broadly, and then finely, tuned by our use of tools. The stone hand-tool is around 3.3 million years old, Homo sapiens about a quarter of a million. Hominin brains had evolved around, adapted to, tool use over something like 200,000 generations before we took the stage. We didn’t invent our original tools. They invented us. Our brains, our minds, our instincts, our nervous systems, the shape of our fingers, the length of our arms … all were bred into us and shaped by our use of tools.
Well, of course, it’s not nearly as simple as that, although all of it has some real force. A couple of million years in the life of the several sophisticated species which preceded us is a good chunk of stable history, 10 times as long as Homo sapiens has managed so far to avoid extinction. A lot happened in that time, much of it still opaque to our science. From early on, the use by hominins of axes, fire, shelter, and eventually clothes was beyond doubt essential to their survival as a species, and was a social and symbolic fact as much as it was a physical one. Also, beyond doubt, this required brains — and patterns of behaviour rooted in them — more complex than any that had previously evolved. Beyond doubt again, we, our brains, and our approach to life, developed directly from the hominins. Elementary Darwinism, though, says that there must have been new environmental challenges to adapt to, to require the very significant leap from hominins to Homo sapiens to be made. The short-listed candidates to be honoured as those challenges are legion. Prime amongst them is the behaviour of other members of the species. As hominins, over many millennia, lived more complicated lives, they became harder for their friends and families to live with. More rewarding, too. Shelters, as Professor Robin Dunbar notes, were a very good idea, but the sociality needed to invent and construct them needed urgently to be supplemented with other new kinds of sociality, rules for cohabitation at close quarters. The growth of brain capacity was very much entangled with new ways of grooming and socialising with those near to you. Probably laughter, probably sophisticated vocalisation, probably fire rituals, probably burial rituals, probably singing and music and art … played vital roles. Dozens of crucially important modes of behaviour, which both required smarter brains and delivered the smarter ways of life that supported the brains. And then, perhaps, a perfect storm of many of these and other factors was midwife to the birth of our species.
The debate amongst scholars about how this all fits together is intense and nowhere near conclusion. The takeaway for our narrative arc, though, is clear: that none of it can happen without tools, and that development of complex tool-using hominins preceded Homo sapiens by some millions of years.
It follows, too, that present-day psychology, present-day brain studies, present-day sociology and anthropology, should pay regard to the nature of the relationship between handaxes and humans. Perhaps the first notable academic to put forward this view was the anthropologist and geologist Kenneth P. Oakley, one of the exposers of the Piltdown Man hoax. The British Museum issued his short book Man the Toolmaker in 1949, which argued that the chief biological characteristic that drove the evolution of humans was our mental and bodily coordination, and that our production of tools was central to this. This strain of studies has ranged widely since. In the past 15 years, investigations, notably those led by the neuroscientist Professor Dietrich Stout at Emory University in Atlanta, Georgia, and colleagues around the world, have revolved around brain scans of experimental subjects as they learn to knap stone axes.
Early humans, over hundreds of millennia, gradually improved their axes. Initially, no doubt, they simply found sharp flints, and gradually realised their value in hunting, fighting, and building shelters. For thousands of centuries, over much of the planet, a handaxe was de rigueur. The archaeological evidence points to their use, and the continual improvement of their use, over all that time. Every band or tribe needed a culture organised around the handaxe. Groups without powerful axe rules would not live to pass their genes or their culture on to their grandchildren. Always carry your axe, always pick up better axe stones if you see them, always fight to protect your axe. Societies became more sophisticated and more productive, as brains gradually grew bigger, and the extra food needed to meet the considerable energy costs of the big brains could be afforded.
This ancient history is important to our thesis. The digital ape is not simply a very sophisticated biological entity which accidentally has the general capacity to do lots of things, and happens, therefore, to be smart enough to use the machinery and other devices which our industrial electronics can deliver. Our particular kind of ape grew out of a previous world suffused by and dependent on the use of tools. It would be wrong to say: see the mobile phones in the hands of all those people in the crowd over there, that is just exactly the same as our forebears carrying handaxes. The readout is much subtler than that. The brain developed as social networks and behaviours developed. In parallel, manufacture and the use of handaxes and fire and clothes developed. In parallel also, language developed. All of these fed back into each other. The improved brain became increasingly able to engage in a wide variety of tasks, many of which involved complex monitoring of activities and switching between activities and goals, tactical and strategic, physical and social. As well as the appropriate motor coordination. The ape continues to exhibit all these traits, in the digital era.
Axes began as found objects. Then hominins, the early humans, noticed the origin of natural sharp flints. Rocks break down into stones, often helped by water courses, then the stones break into the flints. They looked out for them. Groups discovered sites with natural flints in abundance — we have evidence of many such places — and passed the knowledge on to succeeding generations. Cultural patterns would have emerged: a tribe at regular intervals making the trip from the good hunting grounds where they lived to the good stone grounds. Perhaps at every full moon, or some other natural reminder. They began to do more than pick them up: they smashed stones and treasured the best resulting flints. So now they had the beginnings of conscious manufacturing techniques, in the service of the beginnings of strategy.
The subsequent stages of the progress of handaxe technology seem clear enough. Hominins began by just hammering away at cobbles with other stones, and picking amongst the debris. Then came a key breakthrough, no doubt made thousands of times in different places until it caught on generally. Smart individuals began to select whole large stones purposefully, with a clear idea about the result they wanted. Soon enough they learned to carefully choose both those cobbles, and also the stones they used to strike them with. And progressed to skilfully modulate their movements as they worked. The ability to modulate developed as part of axe manufacture and making clothes, cutting branches for shelter, aiming and throwing clubs and stones, and no doubt a hundred aspects of hominin life. It was the foundation of our ability to play tennis and drive cars, a combination of embedded motor skills and conceptual situational grasp. For about a million years it was how our predecessors made their implements. This was the so-called Oldowan technology, named after the Tanzanian gorge where the Leakeys discovered such tools in the 1930s, in which simple sharp flakes were broken off the cobble. Then the technology underwent a worldwide change. To quote Professor Stout:
About 1.7 million years ago flake-based Oldowan technology began to be replaced by Acheulean technology (named after Saint-Acheul in France), which involved the making of more sophisticated tools, such as teardrop-shaped hand axes. Some Late Acheulean handaxes — those from the English site of Boxgrove that date back 500,000 years, for instance — were very finely shaped, with thin cross sections, three-dimensional symmetry and sharp, regular edges, all indicating a high degree of knapping skill.
‘Tales of a Stone Age Neuroscientist’, Scientific American, April 2016
There are, dotted around Europe in particular, large sites where flint extraction and tool manufacture took place on an industrial scale over many generations. At Grime’s Graves in Norfolk, England, which dates from about 5,000 years ago, the remains of over 400 pits can be seen and, indeed, explored by visitors. It will have been one of many such sites, over millions of years.
It is safe to think of this whole process as the greatest ever demonstration of innovation as a massive cooperative venture, with myriad incremental steps, good ideas being invented in many places, implemented extensively, then forgotten in all of them, rediscovered, lost again, but gradually taking hold in enough minds across sufficiently large and diverse populations that they become indestructible, embedded in human culture generally. This was one of the foundations of the philosopher Karl Popper’s ‘World 3’ we mentioned earlier, the collective knowledge available to all, but far too large to be carried by one person, or even one group of people. A large enough population doesn’t need libraries, let alone hard drives and servers, for life or death information. Tautologically, they die out if they fail to develop cultural patterns of remembering and teaching sufficient to preserve the knowledge over generations.
But at the same time, there must have been many outstanding individual contributions. Charismatic, inventive knappers may have been immensely celebrated locally for new ideas and methods that startled whole communities. With no records, we can’t know. Equally, there can be little or no archaeological or fossil record of the actual changes to brains, as opposed to the flints. Brains decay very quickly after death. Preservation of significant lumps of ancient body tissue is nearly impossible. There are fascinating examples of natural mummification. Ötzi the Iceman is perhaps the best known, discovered in a glacier on the border between Austria and Italy complete with his hunting kit, and fatal arrow and bludgeon wounds from his adversaries. He seems, from third and fourth party blood on his weapons, to have wounded his enemies, or worse, before they finally killed him. He may have been on the run from them — he had wounds from a few days before. But he is the oldest known European example and dates from around 3200 BCE, which might as well be today in terms of the original development of our brains. Wooden structures and weapons, and carved animal bones, survive somewhat longer than mortal, not too solid, flesh, but in practice all the historical evidence we have of the history of our use of technology is the worked, often exquisitely fashioned, stones. This could, of course, introduce a strong bias in the evidence. It makes Stout’s work the more exciting.
Stout and his university colleagues make axes themselves, in as close to the original way as can be, spending the hundreds of hours necessary to learn at least a basic version of skills our forebears had in abundance, after even greater perseverance. From personal experience, and observation of each other, carefully monitoring generations of new graduate students as they pick up the ancient skills, they have a clear picture of what is involved: primarily, constant switching between several different learned tasks, whilst holding in mind both tactical goals (this part of the stone needs to be sharp) and the overall strategy (this axe needs to be large because it is for killing animals; the one I just made was smaller because it was for tailoring clothes).
Then they use modern neurological brain-scanning instruments. Magnetic resonance imagers (MRI scanners) require the subject to be particularly still, so can only be used for before and after analysis, which can give a good picture of changes to the brain. Stout also injects chemicals into the bodies of his admirably dedicated graduate students, which show up most in those parts of the brain used during specific tasks. After a couple of hours of the amateur knappers bruising and tearing their hands, the researchers can track exactly which part of the brain was fired up. As students become better at tasks, those parts can be seen to develop. Professor Stout, again:
In truly experienced knappers, who can approximate the documented skills of real Oldowan toolmakers, something different is seen. As shown by Bril and her colleagues, experienced tool-makers distinguish themselves by their ability to control the amount of force applied during the percussive strike to detach flakes efficiently from the core. In the experts’ brain, this skill spurred increased activity in the supramarginal gyrus in the parietal lobe, which is involved in awareness of the body’s location in its spatial environment.
In a different method, subjects lie still in the MRI scanner and watch videos of their colleagues knapping away. Since the early studies of the Italian Professor Giacomo Rizzolatti onwards, discussed in a later chapter, we have known that the watching of an activity is located close to the same parts of the brain that control both the doing of that activity and the general understanding of it. A different activity will be co-located at a different site in the brain. This was, initially, a surprise, since the common sense assumption was that sight would have its own black box, with a hotline to the eyes; limb control would emanate from a black box wired to the motor powerhouse; and so forth. Actually, the overlap is so great that some studies appear to show that simply observing other people exercising, or strongly imagining oneself doing so, can lead to the brain ordering that the relevant muscles be built up. (It is less of a surprise, though perhaps not in principle a different thing, that a human can, all alone, purely imagine themselves into sexual arousal, with no other person present.)
Stout draws radical conclusions from this:
Neural circuits, including the inferior frontal gyrus, underwent changes to adapt to the demands of Paleolithic toolmaking and then were co-opted to support primitive forms of communication using gestures and, perhaps, vocalizations. This protolinguistic communication would then have been subjected to selection, ultimately producing the specific adaptations that support modern human language.
Here, Stout is gazing at what he must feel is the direct link between knapping a stone and the Tower of Babel. A precursor of humankind exercised burgeoning physical and mental skills. Those best at it would tend to win the natural and sexual selection races. The next generation would, to a slightly greater degree, have the requisite brain shapes. And so on over the thousands of generations, as cognition and speech developed.
Again, this can only be an important part of the whole kaleidoscopic story. Fire, to pick up another element, was also a major tool on which pre-historic humans became dependent. In parallel with the gradual improvement of techniques of flint-knapping, and the adoption of clothes and shelters, heating for warmth and cooking emerged. Digestion for all mammals requires a great deal of chemistry and work, which need to be fuelled. The process consumes a significant proportion of the energy value of the food eaten. Fires caused by lightning, very frequent in the equatorial and tropical zones, accidentally left behind nutritious food. In effect, meat that was partly pre-digested. Early humans didn’t have to puzzle about what a fire might be and how to start one, before matches and Zippos had been invented. They had urgently to learn how to avoid them. They will also have noticed, many times, the species of birds and insects ahead of hominins at this game, those which had already adapted to chasing fires. As Harvard’s Professor Richard Wrangham emphasises, once our ancestors tamed the bushfires that naturally burst out around them, and learned to cook, they effectively externalised a big part of the digestive process. Meat and plants were no longer broken down into fuel solely in our guts by stomach acids, but also in pots, over fires. Consequently, over many generations, our internal organs changed; we became an animal that cooked — and benefited hugely from that dependence. We could now devote less of our energy to digestion and a larger proportion to our brains, in a virtuous circle that enabled cooperation with other people, and more effective planning to hunt big game. Fire enabled human beings to become smart — but left them ill equipped to survive without cooked food. Communal meals may well have facilitated the development of spoken language, which in turn maintained the social relationships between the cooks and the hunters.
So, to take stock, there is a well-formed skeleton of understanding here, quite some way perhaps from what may never be a final picture, unless someone invents a telescope that can see deep into Earth’s past. Our relationship with tools, whether axes or the management of fire, or the use of clothes and shelters, is ancient, and a central driver towards who we are. The acquisition of these skills took thousands of centuries. Early human species who noticed burnt foodstuffs and specialised in eating them had a big advantage over others: less of their energy had to be devoted to digestion, more could be devoted to their brains. The larger brains that developed both enabled and were caused by socialising with larger numbers of fellow hominins. And this was linked to the acquisition of collective intelligence, language, memory, and learning. All of what became humanity developed together. The physical ability to use external agencies, and the need for collective tool making, hunting, and cooking enterprises, caused larger brains just as much as they were the result of them. The growth of both required deep time to elapse.
One solid truth here is that our biology has developed to be dependent on the objects we use that amplify our capacities. For many thousands of years, human stomachs have been unable readily to function without fire, certainly not sufficiently to sustain life across the whole varied range of places we inhabit, from poles to the equator. We no longer grow enough fur over our bodies to keep us warm outdoors without clothes. Originally, in Africa, this may have been a simple upgrade, unrelated to a later invention of clothing. Our sweat system was effective for sudden or sustained exertion; shelter and shade would keep us warm at night and cool at noon. When we left Africa for colder places, the pattern changed. From then until now, unless we find or build shelters from the weather and predators, we die. For all of these we need tools. Most of all, we need other people. An important perspective on this general theory, known as the social brain hypothesis, has been expounded and elaborated by Professor Robin Dunbar. Readers will recall that Dunbar makes an estimate of how many substantial social relations an individual can effectively maintain, and places it at approximately 150, a number arrived at after considerable analysis of primate and human behaviour. In Dunbar’s view, this remarkable facility is hard-wired into our brains because the benefits to individuals and the species are greater than the considerable energy cost in acquiring it. But now the web can enable us to have 10 times that number of relations. Perhaps these are lesser, thinner, more perfunctory relationships.
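Dunbar’s 150, for the curious, is not plucked from the air: it is what a log-log regression of mean group size against neocortex ratio across primate species predicts when extrapolated to the human brain. Here is a minimal sketch in Python; the regression coefficients and the human ratio of about 4.1 are the values commonly quoted in that literature, used here purely for illustration:

```python
import math

# A minimal sketch of the social brain regression behind Dunbar's
# number. Primate data relate mean group size N to neocortex ratio CR
# (neocortex volume divided by the rest of the brain) by a log-log fit.
# The coefficients and the human ratio below are the commonly quoted
# values; treat them as illustrative assumptions.

A, B = 0.093, 3.389  # intercept and slope of the log10-log10 fit

def predicted_group_size(neocortex_ratio: float) -> float:
    """Predicted cohesive group size for a primate with this ratio."""
    return 10 ** (A + B * math.log10(neocortex_ratio))

print(round(predicted_group_size(4.1)))  # ~148: the 'Dunbar number'
print(round(predicted_group_size(3.0)))  # ~51: a chimpanzee-scale brain
```

The same line predicts a group of around fifty for a chimpanzee-scale brain, which is of the right order for chimpanzee communities observed in the wild.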
There is some evidence that brains have got smaller recently, over the last 20,000 years. This is partly because our bodies as a whole have become more gracile, lighter and thinner; the percentage of us that is brain may only have fallen a little. But domestic animals in general have smaller brains. There is also reason to believe that smartness, in brains as in machines, manifests itself as more connections; and, as we know from our gadgets, when functioning involves a lot of electrical traffic, shortening the already short distances and packing the connections more densely works better. But the very network effects made possible by better, smarter brains also lead to specialisation across a group. The adaptable brain can use its speed on tasks allotted by the community.
Our ancestors externalised some of our key animal characteristics, like fur or the early stages of digestion, and replaced them with conscious collective endeavours, like clothing or cooking. The social bonding activities necessary to cement a group of people to hunt, cook, make clothes, cut and drag branches for huts, represent a considerable investment of time. The bonding also incorporated smart efficiencies. Cooking, axes, clothes, and shelter were more efficient, too, essential catalysts promoting the investment by the species of its surplus energy in the evolution of smarter brains. Tools amplify what each individual can achieve; they are indeed what Marshall McLuhan called extensions of the self. The collective culture of a group consequently enables the other aspects of a simple economy to develop and flourish: specialism, diversification, returns to scale.
In this sense, our tools have always been part of us. They are not a recent add-on, part of some veneer of civilisation. The digital ape is a direct descendant of the axe-wielding hominin. Our biology is closely coupled with our technology. The unfortunate who can never use a hammer except to hit his thumb may not look it, but every human is designed to function in intimate partnership with technology. So is every group or network of humans, whose ability to function as such is built on skills developed alongside the capacity for technology. Augmentation is a topic of much contemporary interest: we are now able to expand nearly every sense enormously. It is not a new part of our lives. It has been with us since before we were us. Here is a great insight from the Australian performance artist Stelarc. His views on the relationship between humanity and technology set a useful frame of reference:
The body has always been a prosthetic body. Ever since we evolved as hominids and developed bipedal locomotion, two limbs became manipulators. We have become creatures that construct tools, artefacts and machines. We’ve always been augmented by our instruments, our technologies. Technology is what constructs our humanity; the trajectory of technology is what has propelled human developments. I’ve never seen the body as purely biological, so to consider technology as a kind of alien other that happens upon us at the end of the millennium is rather simplistic.
Joanna Zylinska and Gary Hall, ‘Probings: an interview with Stelarc’, The Cyborg Experiments, 2002
*
Can we learn about the emergence of the digital ape by examining the purposeful use of objects by other species? There is no simple hard and fast borderline between the capacities of animals and those of humans. What in us would be regarded as altruism or self-sacrifice is often seen in animal mothers towards their offspring, and makes perfect sense within the metaphor of selfish genes: there are circumstances in which the mother’s genes will more likely survive if she dies and her children live. Many animals make use of tools, in small ways which may nevertheless be crucial to them. Birds build nests. New Caledonian crows poke grubs out of crevices with sticks. Chimpanzees in the wild make some use of tools and can be induced further along that road.
Other animals use tools quite extensively, too. Indeed, the plain tool use, and learning by mimicry, seen in other species has been adduced to argue that tool use is not really one of the distinctive engines of human-ness. (Everybody agrees there are several, which overlap and feed each other.) Surely, it is argued, our distinctive feature is in plain sight: our social and communication abilities — not least our constant Machiavellian competition with each other, the need to outsmart other humans — are our miracle ingredient.
Professor Thibaud Gruber of the University of Neuchâtel in Switzerland studies chimpanzees and bonobos, our two closest relations amongst surviving ape species, for themselves and for what they can tell us about us.
Here is Gruber’s description of one of his early projects, from his university’s website:
I developed the ‘honey-trap experiment’, a field experiment proposed to several communities of chimpanzees in Uganda. Results showed that chimpanzees would attempt to extract honey from a hole drilled in a log using their cultural knowledge. In particular, the Sonso chimpanzees, who do not use sticks to acquire food in their natural environment, adapted a behaviour normally used to collect water, leaf-sponging, to extract honey from the hole. My subsequent studies and current work aim at understanding how chimpanzee cognition is influenced by their cultural knowledge.
There is, clearly, a lot to learn from animal tool use. Equally, and perhaps even blindingly, obvious: extensive complex tool use is now, since the extinction of the other hominin species, solely and distinctively an attribute of human beings. Our language, our tools, our knowledge and our memory form the essence of human nature. Our gadgets, our databases and our virtual economy are now lightning fast and genius clever. In the long run they might change our biology, beginning with the wiring in our brains. We have only in recent years begun to appreciate and understand the extraordinary plasticity of the human brain: young children use theirs differently to the way their parents did at the same age. For the next few centuries, children’s brains at birth, if we do not intervene with our new knowledge of genetics, will probably remain broadly the same, changing only very slowly, while the individual brain of each person will continue, as it does now, to take on such new shapes as that person’s environment enables or requires. Young gamers do — they actually do, it’s not just a prejudice of their ancient parents — develop neurology to match their hobby. Their brains grow apps, shape neural pathways, for game versions of climbing, running, killing, at high speed. They won’t pass their own neurological changes on genetically to their children. However, in the coming decades, traits that help individuals to succeed in a networked world will be more highly valued in the search for mates and will therefore be more likely to appear in succeeding generations. Everything we know from developmental neuroscience suggests that, in the coming centuries, our gene pool, our neurology at birth, our very biology, will alter to match the challenges and opportunities of the environment.
A quicker mechanism for passing information down the generations is the one we initially used. We embed information and rules for living in our cultures. Humans have a unique ability to pass on what we have learnt. This enables incredibly efficient transmission, and is the reason we have in a relatively short time become Top Species. Professor Francisco J. Ayala of the University of California is a Spanish-American philosopher and evolutionary biologist, and a one-time Dominican priest. He argues that ‘cultural adaptation is more effective than biological adaptation, because it is more rapid, because it can be directed, and because it is cumulative from generation to generation.’ At the same time, there is no reason to suppose that biological adaptation has ceased. Our sexual partners, differential birth and death rates, and in-built genetic mutation continue to mean that those most suited to succeed in specific environments are more likely to pass on their genes to future generations. A recent study of 210,000 US and UK subjects found the telltale signature of this evolutionary change. People with longer lifespans less frequently show genetic variants associated with Alzheimer’s disease and heavy smoking. The question also arises: are our brains evolving too? Will we be mentally different in generations to come?
A species-wide-web in the form of language and culture emerged with Homo sapiens in Africa a quarter of a million years ago. They had tools, but they didn’t have digital processors, HTTP, and universal connectivity. Technological change has speeded up in the 200,000 years of Homo sapiens, but we are confident our fundamental great ape biology has remained much the same, and will do so beyond the next century. If we could see 2100 on our video screens, the human environment might look as strange to us as the screens would have looked to Abraham Lincoln. But the people would look as we do now. They might be wearing interesting augmentative devices, probably not visible at first glance, extensions of those we already carry. They might have been given interesting genetic help to avoid illness, or repair damage, as we shall see in a moment. But they would be recognisable, straight away, as us.
*
We have known, at least since Darwin’s work in the 1850s, that the operating system of each human being is more or less the one developed over millions of years by vertebrates in general, and mammals in particular. It was adapted by dozens of pre-human species to meet the challenges of environments that are profoundly unlike modern cities. The operating system is still being updated in real time. Not, just to be clear, because random mutations are accidentally throwing up new forms of person fitted to meet the challenges of the twenty-first century, an evolutionary process that takes many thousands of years. But through sexual selection within the dominant cultures. If all the interesting and exciting parts of the world become associated with technical digital development, then that might encourage the healthiest and most fertile women and men to mate with … well … geeks. Or perhaps it’s all a bit more sophisticated than that. Evolution is at least in part about each population changing to meet the urgent challenges of their habitat. Today’s human environments have myriad facets, but very few situations where not being geeky enough kills the individual. The more complex the society, the more ways there are to be fitted to one’s surroundings.
Our modern understanding of how evolution works builds on Darwin’s work, but also incorporates an avalanche of later insights from biology and genetics. Every child is born slightly different to the sum of their parents and forebears because we all carry within us genetic mutations. Indeed, it appears to be the case that the older the father, the more susceptible to copying error his DNA is, so younger siblings are not quite as much their father as older ones, by a tiny margin. One of Darwin’s key insights was that a species must, through accidental mutation, have what we now call a gene (or complex of genes) that either requires the species to behave in essential ways, or at the very least facilitates the cultural maintenance of any particular essential behaviour, or the species would die out. Corn that happens to be able to bend in the wind can’t tell all the other corn stems in the meadow about it. Nor can a monkey that knows which berries are poisonous. They solve a problem because they have the right genes, which when expressed in a particular environment enable them to live long enough to pass on those same genes to their descendants. After sufficient generations, the only survivors are those with genes that meet the challenge of whatever environment the species inhabits.
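The relentlessness of that filtering is easy to state and easy to underestimate. Here is a minimal haploid-selection sketch, with purely illustrative numbers: carriers of a helpful gene out-reproduce non-carriers by a small factor each generation, and that alone is enough.

```python
# A minimal sketch of natural selection on a single helpful gene.
# Both numbers are illustrative assumptions, not measurements.

S = 0.05  # assumed survival-and-reproduction advantage of carriers (5%)
p = 0.01  # starting frequency of the gene in the population (1%)

generations = 0
while p < 0.99:
    # carriers out-reproduce non-carriers by a factor (1 + S) on average
    p = p * (1 + S) / (p * (1 + S) + (1 - p))
    generations += 1

print(f"from 1% to 99% of the gene pool in {generations} generations")
```

With these assumptions the answer is about 190 generations: even a five per cent advantage carries a rare gene to near fixation in a few thousand years, a blink on the timescales of this chapter.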
For the digital ape, on the other hand, the cultural aspect of this is overwhelming. Some well-respected academics — the eminent geneticist and writer Professor Steve Jones, for instance — believe that evolution, in what we might call the traditional sense, has effectively ended for humanity. We are no longer, in this view, a species adapting genetically to the pressures of its environment, with the carriers of less fit genes dying, whilst the carriers of fitter genes live and perpetuate the race. The modern world is replete with dangers wide and narrow, but the risk of death from unfitness has, as we say, declined greatly. This is perhaps still the minority view. Few dispute that the massive continuing changes in the human realm, the combination of us and our created surroundings, mean that what it takes to be human also changes, and is bred into the succeeding generations. Not by death of the least fit, but by sexual selection, changes in diet, and so forth. It took perhaps between 200,000 and 100,000 years to get from the earliest modern humans to here, via improvements in technology and culture, and their interaction with brain size. What anthropologists call ‘characteristically modern behaviours’ seem to have begun about 40,000 years ago. At that point, humans living in groups were able to flourish in what would previously have been inhospitable environments. Their improved neurology enabled changes in social organisation and the inter-generational retention of learning, particularly in the use of tools and fires for cooking. The causal chain goes in every direction: the investment in brain size, which entails longer foetal gestation and several years of vulnerable childhood, and the upgraded brain that needs a fifth of all energy intake to run, could not happen without the ability to use weapons to capture food, fire to cook it, and sophisticated new social mores to protect pregnant women and children. None of which, in turn, would work without that new command module inside our heads. Incremental changes in every aspect of the human were contemporary and mutually sustaining. That will continue for the digital ape, but much more quickly. We are already utterly dependent on our technology, perhaps more than most of us grasp day-to-day. We think, unlike Stephen Hawking in his more pessimistic moments, that the technology can be managed to remain dependent on us. But there are big social and political choices to be made about the quality of that symbiosis. Who ensures that algorithms are accountable? Who decides what enhancements to the human body are acceptable, in different circumstances? Was it right that citizens of Berlin should be largely unobserved by official closed circuit video, in contrast to the citizens of London, who live with around half a million cameras public and private, one for every dozen people? (Beijing’s Sky Net is the largest official observation system, but London’s, the second most extensive, is also beyond saturation coverage. No public space is unobserved.) Who makes access and usage decisions about the vast parts of modern data infrastructures owned by private corporations? We rehearse more of these questions in later chapters.
*
And then we come to what may be a watershed in the emergence of the digital ape, the defining phase. Digital science has brought us many wonders and many monstrosities. Perhaps the most significant is deceptively simple: astonishingly, we have learned how to cut and paste our own DNA. This new power derives from three things. Our grasp of mathematics. The availability of massive computing power. And our understanding of genetics. We now have the intellectual and physical tools to control our own evolution, and, indeed, that of any other species we choose to manage or farm. The practice will take decades to catch up with the simple principle. Nevertheless, this is where we pick up again the question of what humans of 2100 might look like if we could catch a glimpse of them. The digital ape’s evolution as a species can henceforth be self-conscious, purposive. It scarcely needs saying that we are the first and only species to possess this dangerous privilege and duty. And, quite rightly, every commentator goes straight to the key question. The headline on the front page of the National Geographic gets it in one: ‘With new gene-editing techniques, we can transform life — but should we?’ …
Elements of this are, in principle, not new. By selective breeding over thousands of years, we have altered the genome of dogs, horses, pigeons, and ‘farm animals’ generally. ‘Farm’ here means creatures from other species we bring into life so we can kill and eat them. We might, at any stage, have tried to breed humans. That possibility, called eugenics, began in the canon of western thought with Plato. The Spartans and others had civil codes and practices to improve the human stock:
The Spartans used exposure to the environment to kill imperfect babies (Plutarch II). Every father had to present his child to a council of elders. If it was not healthy it had to be exposed, as it would not become a good citizen or soldier. On the other hand, if a man had three sons he was relieved from military obligations, and if he had four sons, exempt of taxation.
Darryl R. J. Macer, Shaping Genes: ethics, law and science of using new genetic technology in medicine and agriculture, 1990
The Victorian scientist Francis Galton, a cousin of Darwin’s and the originator of the term, gave eugenics its modern form. At a broad level, many governments in the twentieth century sought to manipulate the gene stock, by sterilisation or murder of supposedly inferior or defective people. In the United States, the first compulsory sterilisation law was passed in Indiana in 1907. West Virginia only repealed theirs in 2013. There was widespread approval for sterilisation, particularly of women for either moral or genetic reasons, of men who exhibited criminal behaviour, and of anybody considered ‘mentally defective’. The nadir of this approach was, of course, that of the Nazi regime in Germany. Darwin himself simply can’t be blamed for every consequence of his tremendous insights over the 150 years since 1859, neither eugenics nor Social Darwinism in its many, often ugly, forms.
There is little or no reason to suppose that any of these programmes would, even if maintained for centuries, have ‘improved’ the gene pool. It is, perhaps, at an abstract level, true that, were a government to simply kill or sterilise everyone who grew over a certain height, then shorter people would begin to predominate. (Although there are many reasons for the increase of the height of humans, like reduction in childhood diseases and better diets, which would be pushing in the other direction.) It would not be impossible to breed humans for longevity, but it would take an extremely long time. The already long lifespan makes it a very difficult proposition. Laboratories use fruit flies for tests, where a generation may be less than a month; or mice, where it may be less than a year. The human generation is a little over 20 years, and if the aim is people who live past 80, then selective breeding over several generations, followed by some centuries of experiment, would be on the cards. Again, given information about the dates of death of great-grandparents, a scientist could make a start, but people die early for all kinds of non-genetic reasons, which would fog the data. It is also worth adding, to temper optimism on the future length of human lives, that there may well be biological constraints just baked in that make it impossible in practice for humans to live beyond the present, very infrequently reached, maximum of 114 years or so. Not least, the diseases of old age, including dementias, would need to be preventable or curable before longer life was worth it. Thirty extra years suffering from Alzheimer’s is an unpleasant prospect. Moreover, eugenicists have generally been in the grip of other ethical, political, or social aims, which they have convinced themselves have a scientific basis, but are in fact without foundation. All efficiency is socially defined. The muscle of Brunel’s great steam engines may apparently be measured by the amount of useful work they can do, their pulling power produced per ton of coal. But their efficiency is ultimately measured by how many passengers can be transported from London to Bristol at what price, speed, and so forth, and those goals are social. People decide what a steam engine is for. By analogy with fine racehorses, the European nobility believed that marrying within their narrow social class would ensure that their innate superior characteristics would be preserved and enhanced. They socially defined the efficiency of their marriage programme. So ‘good breeding’ for centuries involved mating dim princesses with dimmer princes. The blood got bluer and bluer right up to porphyria and the Habsburg lip. Since there was no actual superiority to begin with, the net result was not the maintenance of a better class of people, but the accumulation of the problems of inbreeding. Fine racehorses actually did run fast, so could be bred to run faster. Monarchs were not actually more divine to begin with. So, yes, like any other animal we could be consciously bred for desired characteristics, but the breeders need to be knowledgeable and careful.
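For readers who like the arithmetic, quantitative genetics compresses the difficulty into the breeder’s equation, R = h²S: the response per generation equals the heritability of the trait times the selection differential. A minimal sketch, with assumed rather than measured values, shows why ‘some centuries’ is the right order of magnitude:

```python
# A minimal sketch, using the standard breeder's equation R = h2 * S,
# of why breeding humans for longevity would take centuries. All the
# numbers are illustrative assumptions, not measured values.

H2 = 0.15           # assumed narrow-sense heritability of lifespan
S = 5.0             # assumed selection differential: the breeding pool
                    # outlives the population mean by five years
GENERATION = 25     # assumed human generation time, in years
TARGET_GAIN = 10.0  # desired increase in mean lifespan, in years

gain_per_generation = H2 * S  # response to selection, R
generations_needed = TARGET_GAIN / gain_per_generation

print(f"~{gain_per_generation:.2f} years gained per generation")
print(f"~{generations_needed:.0f} generations, "
      f"i.e. ~{generations_needed * GENERATION:.0f} years")
```

Under these assumptions, adding ten years to mean lifespan takes a dozen or so generations, well over three centuries, before allowing for all the non-genetic deaths that fog the data.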
We have been breeding variants of species for thousands of years. We now have a strong general understanding of how the genome works, and are in perhaps the early middle stage of an ability to grapple with genes at the individual level. The first mammal to be cloned from an adult cell was Dolly the Sheep, born at Edinburgh University’s Roslin Institute in 1996. It is certainly in principle possible that, by intervention at the gene level, we could create many new animal species, and strange and exotic hybrids of existing species, with sufficient sentience qualifications to win a seat at the senior mammal table. We can build any new chimerical animal we like. At least, any from the restricted range of those able to function. Pigs with horses’ heads will just fall over. We can build a blue tree if we want to. We can replace a problematic gene in an individual, perhaps a child with cystic fibrosis, with gene therapy. As we identify genes, usually coalitions of genes, associated with particular desirable characteristics, we could insert a bias or tendency to have those abilities or attributes.
And, of course, if it is a practical proposition to clone a sheep, then it is a practical proposition to clone a human being. Without doubt, the moral position is very different. Cloning sheep involves many false starts that might seem ethically unacceptable if the same techniques were applied to digital apes. Laboratory-built humans, engineered to meet a scientist’s or a politician’s or a ‘high net worth’ customer’s preferences, are just a different kettle of fish. We are unable at present to have a sufficiently sophisticated dialogue with sheep to know how they feel about the science. We do know the fears that humans have about the topic, the ethical norms and risks that may need careful re-examination.
There are now two new astonishing developments. The first is CRISPR, which we mentioned in our opening chapter. The second is the gene drive. CRISPR is a technique by which short sequences of DNA can be cut and pasted, enabling the import of characteristics from another individual or indeed species. The gene drive is well explained by the science journalist Michael Specter:
Gene drives have the power to override the traditional rules of inheritance. Ordinarily the progeny of any sexually reproductive animal receives one copy of a gene from each parent. Some genes, however, are ‘selfish’: Evolution has bestowed on them a better than 50 percent chance of being inherited. Theoretically, scientists could combine CRISPR with a gene drive to alter the genetic code of a species by attaching a desired DNA sequence onto such a favored gene before releasing the animals to mate naturally.
‘How the DNA Revolution Is Changing Us’, National Geographic, August 2016
The two tools in combination are very powerful: DNA altered by CRISPR can be piggybacked on a gene drive, and will thus quickly spread if introduced into a species. Work is in progress, for instance, to alter the DNA of insects that bring disease to crops or people. The insects are either made sterile, so males fruitlessly occupy the time of females who would otherwise breed; or they are made resistant to carrying the disease vectors. Similarly, plans have been laid to make white-footed mice immune to the bacteria which cause Lyme disease when transmitted by ticks from the mice to humans. The ‘improved’ DNA would ride on gene drives, rapidly, unstoppably, through any isolated population of the mice. In fact, there are so many of the mice and the ticks across the United States that total eradication would be uncertain or take a very long time. An experiment is currently proposed on the island of Nantucket. The power of these techniques is obvious. A programme at Imperial College London, called simply Target Malaria, has been funded by the Bill & Melinda Gates Foundation. The equally transparently named Sculpting Evolution group of scientists at MIT in Cambridge, Massachusetts, are conscious that they are learning to fundamentally alter the natural world.
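The phrase ‘rapidly, unstoppably’ can be given numbers. In a homing gene drive, heterozygotes convert the wild-type allele, and so transmit the drive with probability (1 + E)/2 rather than the Mendelian half. A minimal simulation, under our own simplifying assumptions of random mating and no fitness cost:

```python
# A minimal sketch of gene drive spread. Assumptions (ours, for
# illustration): random mating, no fitness cost, and a homing drive
# that converts the wild-type allele in heterozygotes with efficiency
# E, so heterozygotes transmit the drive with probability (1 + E) / 2.
# Averaged over genotypes, the allele frequency then follows
# p' = p + E * p * (1 - p); with E = 0 it reduces to p' = p, i.e.
# ordinary Mendelian inheritance, and the frequency never moves.

E = 0.9   # assumed conversion efficiency of the drive
p = 0.01  # drive allele frequency at release: 1% of the population

for generation in range(1, 13):
    p = p + E * p * (1 - p)
    print(f"generation {generation:2d}: drive allele at {p:.1%}")
```

With E near 1, a one per cent release approaches fixation in around ten generations, which for a fast-breeding insect is a matter of months.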
There would seem, on the face of it, to be potentially horrific dangers here. The US director of national intelligence James Clapper in 2016 described genome editing as a potential weapon of mass destruction. The science community thought that a little exaggerated. There are easier, more certain ways for terrorists to wreak mayhem already available to them, without mastering the difficult technology to build a virus that might fizzle out, or just lead to an outbreak of acne. But it is true that CRISPR kits are already available for home enthusiasts and for school laboratories, and, terrorism aside, when their use is an everyday thing then sooner or later there will be an unintended release of something noxious. Many in the field feel this argues for great openness about who is experimenting with what. That must be right. And yet … despite the still huge room for inaccuracy and misunderstanding, when the moment comes that we really can intervene on a sensible basis, why wouldn’t we?
In principle, humanity can — nearly — control our own evolution. And, to elaborate on the point we made in our first chapter, transcendence — the singularity, as others call it — the products of science overwhelming the race of scientists, were it to be realised, is surely not going to be rooted in the mathematics of machines. It is in principle already here, and it is rooted in the mathematics of biology. If a first-rate multidisciplinary laboratory was this morning given the task of building a production model of a hitherto unknown sentient being, for sale on the mass market in 2030, they would not waste time establishing a supply chain of silicon and aluminium. They would gather together farmers, stock-breeders, and geneticists. At present, it would be wise to bet against them producing a useful product in such a timescale, but that would be their productive avenue.
We don’t have to wait for evolution, and patently in some areas we just won’t. There have been good structured debates and safeguards around progress (or the road to hell, if you prefer) in genomics in several countries. Soon, we will much enhance the abilities we already have to ‘fix’ painful and life-threatening defects or diseases in much-loved and otherwise doomed children. Parents of disabled or dying children who can be brought back from the brink will rightly form a powerful lobby. It is difficult to imagine that society as a whole will wish to trample on the hopes of the family of a child with cystic fibrosis. But then, how will we respond to the tiger mothers of not very bright children? ‘School exams and SATs are vital to future careers and status, please improve my child’s cognition, you know you can do it.’ This is a matter of 20 years or less, not centuries.
Hence the final step in the emergence of the digital ape. Machines can change us by giving us the analytics to change ourselves. The human genome is huge. It has about 3 billion base pairs on the famous double helix twists, in 23 pairs of chromosomes within the nucleus of every one of our cells. Picking one’s way, as a scientist, through to the genes ‘for’ this or that needs industrial scale number-crunching over years. The over-arching science is there already: we increasingly and accurately understand the principles of the relationship between genes and health and physical disability, for instance. We have some understanding of the principles of the relationship between genes and character and subtle abilities. And, crucially, ageing. Sufficient already to start to talk about precision genomics. Getting from here to fuller understanding is a matter of the data and mathematical analytics. This is one of the greatest triumphs of our new digital phase, and one of the greatest dangers.
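The scale is worth making concrete. A back-of-the-envelope sizing sketch follows; the two bits per base and the half-million-genome cohort are our illustrative assumptions, not a description of any particular project:

```python
# A minimal sizing sketch of genomic data. The 3 billion base pairs are
# from the text; the 2-bits-per-base encoding and the cohort size are
# illustrative assumptions (real pipelines store far more per genome,
# with quality scores and read redundancy).

BASE_PAIRS = 3_000_000_000  # per human genome
BITS_PER_BASE = 2           # A, C, G, T fit in two bits

genome_bytes = BASE_PAIRS * BITS_PER_BASE / 8
print(f"one genome: ~{genome_bytes / 2**20:.0f} MiB")  # ~715 MiB

cohort = 500_000  # a hypothetical population-scale biobank
print(f"{cohort:,} genomes: ~{cohort * genome_bytes / 2**40:.0f} TiB")
```

Three-quarters of a gigabyte per person before any quality or annotation data, and hundreds of terabytes for one biobank: hence the industrial scale number-crunching, repeated across every analysis.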
*
The extraordinary Irish modernist author Flann O’Brien satirised the dependency of humans on their tools. His village policeman introduces the molecular theory of the bicycle:
The gross and net result of it is that people who spent most of their natural lives riding iron bicycles over the rocky roadsteads of this parish get their personalities mixed up with the personalities of their bicycle as a result of the interchanging of the atoms of each of them and you would be surprised at the number of people in these parts who are nearly half people and half bicycles … And you would be flabbergasted at the number of bicycles that are half-human, almost half-man, half partaking of humanity … When a man lets things go so far that he is half or more than half a bicycle, you will not see him so much because he spends a lot of his time leaning with one elbow on walls or standing propped by one foot at kerbstones … But the man-charged bicycle is a phenomenon of great charm and intensity and a very dangerous article.
The Third Policeman, 1967
We will return to the study of that very dangerous article, the machine tending to the human. We cannot quite consent to the Flann O’Brien school of philosophical engineering, but share his observation of the utter interdependence of Homo sapiens and tools, each shaping the other over millennia.
*
In summary, even though the digital ape is by nature a hyper-sociable animal, we are still members of the great ape family. We select mates, search for food, and maintain relationships with our family and friends, using devices designed initially to control moon rockets. The digital ape feeds, seeks love, and preens in public, like the early hominins in a forest clearing. We post our most trivial doings on Twitter and Facebook. Digital apes also create poetry and journalism, science and literature, music and art, in our new technological habitat, write huge encyclopaedias, discover cures for cancer, and respond to humanitarian crises through the agency of a new electronic world.
Homo sapiens can manage more social relationships than our hominin precursors, because we have smarter brains than they had. One of the things we have done with those smarter brains is develop ever smarter tools. Our tools became intertwined with our biology in deep time, yet the nature of brain and the nature of memory are different for the digital ape. Tools have always externalised evolution: blades from flint gave humans an evolutionary shortcut to becoming predators, and could not be made without significant leaps forward in brain capacity and social learning. Our new technologies are not just sparkly devices in the hand. They will change the nature of the ape. We now have the ultimate external evolutionary tool. Future changes to the human genome might come over many centuries through natural selection. They might come in decades through sexual selection, and, across the poorer half of the world, the adoption of richer lifestyles. They could come instantly through the mathematics of biological knowledge powered by digital machines. We are in charge now. And with great power comes …