6

THE VISION ANIMAL

The people themselves are friendly and intelligent,
with a good sense of humor. Though fond of relaxation,
they’re capable of hard physical work when necessary.
Otherwise, they don’t much care for it—but they
never get tired of using their brains.

THOMAS MORE

AT THE TIME THAT ANATOMICALLY modern humans evolved in Africa, the social insects were arguably the dominant form of terrestrial animal life on Earth. The ants alone are estimated to have achieved a biomass roughly equal to that of the current human population (more than 6.5 billion people). The social insects had assumed control of the most favorable nesting sites, forcing solitary insects into marginal zones. Their success is attributed to their organizational adaptations, particularly impressive among the eusocial, or “true social,” insects such as bees and ants. The ants are represented by thousands of species and a wide variety of social adaptations.1

The eusocial insects exhibit many parallels with modern humans. The honeybees are noted for their rare ability to externalize representations, which they do with body movements in analogical and iconic form. But it is the ants that reflect the most complex social organization and display extraordinary related behavior. The farming leaf-cutter ants of the Americas are said to be the most advanced. Colonies of Atta comprise more than a million individuals, subdivided into numerous specialized worker castes, which coordinate their work through a multimodal communication system. Foragers move along a road network maintained by road-worker crews, cutting and transporting leaf fragments back to garden plots, where they are planted to produce fungus, which subsequently is harvested to feed the colony. The quantity of sediment moved to excavate one of their nest complexes in Brazil was 40 tons.2

Naturalists and philosophers have long marveled at the human-like features of insect societies, but the most remarkable aspect of the parallel is that the characteristics of eusociality should have surfaced among a hominoid taxon. With their massive bodies, low population densities, slow reproductive rates, and protracted infant dependency, apes and humans are in many ways the antithesis of the social insects. Eusociality is extremely rare among vertebrates, undoubtedly due to their reproductive biology. The only examples are found among the relatively fast-breeding rodents, specifically two genera of African mole rat.3 As a result of some improbable developments in their evolutionary history, humans ended up re-creating features of the social insects, which provide an evolutionary context for humans as important as that of the primates.

The historical or genetic evolutionary relationship between social insects and humans is obviously remote. Humans belong to the phylum Chordata, which separates them from the insects by hundreds of millions of years. The similarities between the two are a classic case of convergent evolution,4 in which the same or similar features evolved independently in unrelated groups. In this case, the convergence seems to reflect the emergent properties of highly complex systems.5 Among the eusocial insects, these properties yield a super-organism.6

Although modern humans are rapidly approaching the point at which they could reproduce themselves biotechnologically in a manner similar to that of the insects, the complex social and economic hierarchies of their organizations are not based on reproductive biology and genetic relationships. They are instead based on information or, rather—because genes themselves are a form of information—a type of information that is coded symbolically in vast networks of nerve cells in the brain. Insects also store coded information in their brains, but if the organizational adaptations of humans are constrained by their reproductive biology, the storage and manipulation of nongenetic information among the insects is constrained by their small body and brain size. Humans inherited a large brain from the African apes, and it tripled in size within a few million years. Moreover, at a critical point in their later evolution, humans developed the structures (for example, an expanded prefrontal cortex) that are necessary to integrate many brains into one—a super-brain.

Humans evolved two features related to the coded information stored in neural networks that rendered them unique among all living organisms. First, they developed the ability to translate that information into various material forms outside the brain—to create phenotypes of ideas rather than genes. Second, they developed the ability to combine and recombine that information with a sufficient number of elements and hierarchical levels that the variety of creations is potentially infinite.

After the formation of the super-brain among anatomically modern humans in Africa roughly 100,000 years ago, human organizations came to exhibit properties not previously observed in organic evolution. They created a growing and ultimately immense quantity of structured information outside the brain, and the processes by which the structures were generated and selected cannot be accounted for by the principles of evolutionary biology (genetic mutation and natural selection). They accumulated knowledge—organized nongenetic information—as a superorganismal entity, unconstrained by biological space or time.

These human organizations or societies were not super-organisms, however. The reproductive biology never caught up with the neocortical integration, yielding a society largely composed of competing and cantankerous individuals without close genetic ties. The result is a freakish variant of eusociality in which the collective mind—the super-brain—coexists uncomfortably with a group of organisms.

The Mind as History

[A]ll progress depends on the unreasonable man.

GEORGE BERNARD SHAW

Humans are odd creatures. They are organisms produced by the process of biological evolution, but they also participate in the phenomenon of the mind. They lead conflicted lives.

On the one hand, humans cannot escape the realities of organic life; consciousness itself is dependent on the functioning of evolved systems for the digestion of food, circulation and oxygenation of blood, release of neurotransmitters in the brain, and other bodily functions. Much of human life revolves around issues clearly related to evolutionary biology: the acquisition of energy, control of territory, defense of resources, reproduction and provisioning of offspring, and aging and death. On the other hand, humans receive and generate mental representations as part of a collective entity that seems to operate in accordance with the emergent properties of systems more complex than those previously observed in organic evolution. The structures of the super-brain or mind are generated (and selected) by a process of recombining information in complex hierarchical form that is unknown among other organisms.

I began with a discussion of how scholars of the seventeenth century addressed the problem of the mind. They were attempting to explain the universe in mechanistic terms and to place humans somewhere in this technologically inspired view of reality. Descartes laid the foundation for a debate that continues today by excluding the mind from natural science. There was opposition to this view from the outset, for example from Thomas Hobbes, and it only increased as technology and scientific explanation continued to progress through the eighteenth and nineteenth centuries.

The discovery of a mechanism for evolutionary change in the mid-nineteenth century was a major development. Darwin’s idea was so powerful that it seems to have tipped the balance toward incorporating humanity into natural science. It significantly reinforced a trend that already was apparent in the writings of Johann Herder, Herbert Spencer, and others.7 Thus in the early twentieth century, the philosopher of history R. G. Collingwood found little support for his complaint against the “tyranny of natural science” over philosophy and his contention that history is an autonomous form of thought—as well as the key to understanding the mind.8 One of his allies was V. Gordon Childe. But Childe had been pigeon-holed as a doctrinaire Marxist by the 1940s, and his later writings were largely ignored.9

Reviewing centuries of thought about history, Collingwood praised those who had affirmed the autonomy of mind and history. He was particularly impressed with the work of his contemporary Benedetto Croce (1866–1952) in Italy.10 Collingwood also admired the historical thinking of the two most eminent German philosophers: Immanuel Kant (1724–1804) and G. W. F. Hegel (1770–1831). Both had an advantage over many of their successors: they felt no compulsion to explain the mind in the context of evolutionary biology.

Kant was not a historian and wrote only a few short pieces on the subject, but his observations and reflections are still worth reading. He was appalled by the senseless waste and folly that characterized much of history and referred disparagingly to the “idiotic course of things human.”11 But like others of his epoch, he accepted the idea that history was progress, if somewhat halting and uneven. Kant defined this progress as the development of the mind and emphasized the cumulative nature of the process. According to Collingwood, “he identified the essence of mind as … autonomy, the power to make laws for oneself. This enabled him to put forward a new interpretation of the idea of history as the education of the human race.”12 Kant went on to address the question of what drives progress. His answer was “antagonisms” or conflicts within and between societies. Because individuals constantly put their selfish desires above the common good, Kant concluded that “from such crooked wood as man is made of, nothing perfectly straight can be built.” Moreover, much of the attention of nation-states was consumed by war or “constant readiness for war.” In their efforts to overcome these evils, Kant believed, humans had made and would continue to make progress toward a more rational world.13

Although notoriously abstruse, Hegel articulated similar ideas about both the direction and the propulsion of history. These ideas, which reflected the unmistakable influence of Kant, were presented in the introduction to his Philosophy of History (1840). Hegel also endorsed the notion of progress in history (“an advance to something better, more perfect”), which he felt was lacking in nature (“only a perpetually self-repeating cycle”).14 He saw the passion of individuals as the driving force behind change and progress because “in history an additional result is commonly produced by human actions beyond that which they aim at and obtain…. They gratify their own interest; but something further is thereby accomplished, latent in the actions in question, though not present to their consciousness, and not included in their design.”15 Even more so than Kant, Hegel confined his view of historical progress to the political realm, rather than the broader growth of knowledge, and this, as Collingwood noted, led him to the unfortunate conclusion that history had achieved its culmination in the nineteenth-century nation-state of Prussia.16

The conflict that Kant and Hegel identified as the catalyst for historical progress is the dissonance between individual humans as organisms enhancing their own reproductive fitness and the organizational creations of the collective mind. This is the same distinction made earlier: modern human society is based largely on a super-brain, not a super-organism. The genetics of the super-organism yield a society composed of individuals that function like the organs or cells of an organism; everyone follows the program, and there are no runoff elections or letters to the editor. The super-brain produces something very different.

Progress in the form of technological innovation with widespread social and economic consequences is especially evident in capitalist industrial societies after 1500. Much of the creation and dissemination of novel technologies has been driven by competition in the commercial sector. The novelties cover everything from jumbo jets to disposable diapers. Some of them, such as the mass-produced automobile or the birth control pill, have clearly effected changes, large and small, in the societies and super-brains that made them. It is more difficult, however, to see how competition among individuals in earlier or simpler societies has driven innovation. Was this the source of writing or of the spear-thrower?

Competition between societies—often erupting into open conflict—is another inescapable source of innovation. Here, too, the best examples derive from nation-states of the industrial era (although not necessarily capitalist nations). International competition drove innovation in military technology across Europe after 1400, and the process became especially dynamic after the mid-nineteenth century. But in these cases, the antagonists represent separate super-brains, or at least poorly integrated components of a super-brain (for example, the French and Germans), and the dissonance between individual organisms and the collective mind would not seem to apply. In fact, it would seem to be the reverse: patriotic individuals laboring on the design of armored vehicles or microwave radar for the homeland appear to function as parts of a super-organism after all. The same principle is illustrated by individual sacrifices on the battlefield.17 Moreover, some of the technological innovations, particularly in the realm of communications, have been steadily enhancing the integration of the super-brain, including the integration of those collective minds that once represented antagonistic nation-states. The most striking example is Europe in the early twenty-first century, but the broader trend is global. The diversification of super-brains that began more than 50,000 years ago in Africa with the dispersal of modern humans has been reversed. Humans seem destined to re-create the original unified super-brain or mind of the later African Middle Stone Age.

Perhaps the critical element in historical progress is the evolved capacity of the mind to imagine the future, to create alternative realities. It may be recalled that the essential function of the metazoan brain is to acquire information about changes in the environment and that the creative powers of the human mind represent a unique variant of this function—imagining things that never were and asking why not. The imaginings range from an altered shape of an arrow point to the workers’ paradise and include an immense body of idiocy, but they clearly provide a constant stimulus for actions that may have significant consequences. Other animals may respond to the visual image of a food item or the ominous tread of a predator, but only humans risk their lives to pursue daydreams.

A History of the Future

[Y]ou can make systems that think a million times faster than a person. With AI, these systems could do engineering design. Combining this with the capability of a system to build something that is better than it, you have the possibility for a very abrupt transition.

K. ERIC DREXLER

As Mary Shelley recounted years later, it was Lord Byron who suggested that he and each of the guests at his villa on Lake Geneva in the summer of 1816 write “a ghost story.” The future wife of Percy Bysshe Shelley eventually composed a tale inspired by discussions between Percy Shelley and Byron about the principle of life and the experiments of Erasmus Darwin. She began to wonder if “perhaps the component parts of a creature might be manufactured” and endowed with life.18

Ironically, the tale that Mary Shelley wrote and later published as a novel was the antithesis of a ghost story. Frankenstein is not about the returning spirit of a dead person, but about a human produced by technology. The genre is science fiction, and it may be considered a literary variant of the speculations of Francis Bacon. Writing two centuries later, Shelley could see, quite accurately it seems, where nineteenth-century technological progress was leading. Her novel was a deeply religious work that reflected not the bright optimism of Bacon, but the conviction that the inevitable attempt to “mock the stupendous mechanism of the Creator” would produce a monster.19

Like so many projects, from agriculture to pyramids, this one has deep roots in the Upper Paleolithic. In one way or another, we have been working on the creation of life and mind for at least 50,000 years—in both robotics and biotechnology. The earliest robots were the traps and snares designed possibly as early as 40,000 years ago and almost certainly by the later Upper Paleolithic. These were simple devices built to perform a single and highly specialized human function: catching a small mammal or bird (aquatic versions were designed for fish). They were “smart machines” with a cognitive function: to respond to a stimulus received from the prey animal as it stepped into the snare or trap. In theory, they could be produced in large numbers and deployed across wide areas along traplines, creating an army of robots to perform important tasks for a small group of humans.

More broadly speaking, mechanical technology, which is unique to modern humans and the mind, mimics the functions of an organism. At first, mechanical technology operated as an extension of the body, drawing its energy entirely from human muscle power. The creation of technologies powered by other sources, initially wind and moving water, was a major step toward building a machine that could mimic a whole organism. The mechanical clock, although ultimately powered by muscle, stored that energy and controlled its release over an extended period of time—like a beating heart—through an ingenious escapement device. Clocks fascinated people: Thomas Hobbes referred to them as “artificiall life.” They had a profound effect on how their makers viewed the world, including the living world.

As to the manufacture of the “component parts” that Mary Shelley mentioned, here also the history is lengthy. It has a precursor in the Upper Paleolithic with the production of various technologies designed to enhance the function of specific body parts: tailored clothing to function as artificial skin and hair, a spear-thrower to serve as an elongated arm, and artificial memory systems to enhance brain function. It began in earnest with the manufacture of replacement parts—prosthetic limbs, wooden teeth, and wigs, among others—which apparently dates to at least 500 B.C.E. and had become common by the sixteenth century in Europe. In recent decades, impressive advances have been made with the application of electronic information technology to artificial organ function, including vision.

Key elements of the brain also have been replicated technologically. The storage of information in coded digital form dates to the early Upper Paleolithic, and with exponential increases in the size and complexity of human interaction networks in the postglacial epoch, the quantity of data stored on information technology became immense. The computational function of the brain also was re-created, at first with comparatively simple mechanical devices like the ancient Greek Antikythera Mechanism and Charles Babbage’s analytical engine of 1837.20 Powerful computing machines awaited developments in electronics, which in the 1940s yielded programmable digital computers that greatly exceeded the computational abilities of the human brain. At the same time, other technologies have replicated another key function of the animal brain: receiving sensory input in various media (chemical, tactile, sound, and light) and transforming it into coded data sets. These devices have, in turn, often been connected to a computing machine to analyze the received data—for example, a radar early-warning system that detects fast-moving objects and identifies potential incoming missiles.

For biotechnology, the Upper Paleolithic background extends back at least to the millennia following the maximum cold of the last glacial (18,000 years ago). The genetic modification of canids was succeeded by the domestication of various plants and animals in the postglacial epoch, and—as during the Upper Paleolithic—progress was tied to settlement and reduced mobility. And as with the mechanical clock, the technology provided insights into natural processes; the origin and evolution of life eventually was incorporated into the same mechanistic worldview. Recent advances in biotechnology have, of course, been extraordinary, and it is now possible to manipulate genetic material directly to produce desired phenotypes (for example, genetically modified corn).

At the beginning of the twenty-first century, it is apparent that the issue is not the creation of life (although this is a concern for many people). It is the creation of a mind, or true artificial intelligence (“strong AI”), that represents the most profound potential step. After 50,000 years of growth and development, the mind has achieved a substantial understanding of and some measure of control over organic life. We can effectively imitate life, including a higher animal brain, with robotics and information technology, while genetic engineering provides opportunities for rearranging organic forms. It is the generative properties of the mind that remain mysterious and thus far impossible to replicate with technology. Perhaps these properties will be comprehended only if and when AI can be successfully engineered, which, in turn, will probably shed light on the processes and events that underlay the emergence of the mind in anatomically modern humans.21

But how is it possible to build a machine that might, for example, write a novel like Frankenstein? Alan Turing, universally regarded as the founder of modern computing science, famously stated that a truly intelligent machine would have to be able to speak or write like a human.22 AI has yet to be created, and some have argued that it is impossible. For one thing, it implies mechanical consciousness, which is hard for most people to imagine. Skepticism increased after earlier forecasts of an imminent breakthrough proved too optimistic, although some continue to predict that AI is coming; for example, the date proposed by Ray Kurzweil is 2045.23 If so, perhaps it will be achieved, as was the digital computer, only after new developments in other areas, such as nanotechnology, are applied to the problem.24

Because AI seems remote and quite possibly unattainable, it has not been an issue of much concern. Far more attention has been paid to recent developments in biotechnology, such as genetically modified foods and cloning of animals, and the current and likely future directions of this applied science. A small number of people have nevertheless questioned the potential consequences of AI. One concern is that AI could be used for unscrupulous purposes as an ultimate weapon of sorts.25 But another worry is that AI could, and perhaps inevitably would, become an independent entity—a mind that, in Kurzweil’s words, “transcends biology.” What if the mind could free itself from its host organisms and exist in an entirely nonbiological form?26

To begin with, the mind would have to retain or acquire two characteristics from its biological antecedents, rather like Mary Shelley’s creature. The first is a pair of hands or some other means to effect changes in its environment. Otherwise, it would be reduced to persuading humans to translate its thoughts into action. These means must include the capacity to re-engineer itself and thus continue its progressive development. The second is motivation: the source of historical progress that Hegel identified as “passion” and Shaw described as “unreasonable.” Humans would have to create AI in their own image with the mechanical equivalent of hormones.

What would be the impact on humanity of an entity with computational powers vastly superior to our own, and perhaps even greater creative powers? The mind could be liberated from the biological processes that produced it. It would require an energy source, but no food, water, oxygen, or sleep; no clothing, shelter, bed, or toilet. Among other things, the mind could embark on serious space exploration; unconstrained by biological time or the need to sustain an earthly environment, a mind could drift across the galaxy and beyond for millions of years. What would be the role of humans in such a world or, for that matter, of any life-forms? What purpose would they serve? Would they not be like rats in the cellar?

In a remote future, the mind might look back on its own history in the same way that humans reconstruct and explain their evolution from less complex forms of life. It might be a little embarrassing, much as “ape” ancestry was for the Victorians who contemplated it in Charles Darwin’s time. What place would humanity occupy in the mind’s heroic account of its own origin? We might be viewed as simply the final, and relatively brief, transitional phase of a long process that began with the first information systems more than 500 million years ago. From this perspective, Homo would appear as a handy vehicle through which one of these information systems could assemble itself into the mind, before moving on to bigger and better things.