2

Spirit of the Beehive


Hermetic Resonances in Cybernetics, Artificial Intelligence, and Cyberspace

ESOTERIC MACHINES

In what now seems a prescient anticipation of a planet reticulated with computer networks, Oswald Spengler provided this description of the machine in his Decline of the West (1918):

Machines become in their forms ever less human, more ascetic, mystic, esoteric. They weave the earth over with an infinite web of subtle forces, currents, and tensions. Their bodies become ever more and more immaterial, ever less noisy. The wheels, rollers, and levers are vocal no more. All that matters withdraws itself into the interior. Man has felt the machine to be devilish, and rightly. It signifies in the eyes of the believer the deposition of God. It delivers sacred Causality over to man and by him, with a sort of foreseeing omniscience is set in motion, silent and irresistible.1

It is not at all difficult to imagine that Spengler, writing at the turn of the last century, was seeing far beyond his times and describing the metamorphosis in the nature of machines at the end of the twentieth century, when the era of “information technology” was (and is still) popularly regarded as having superseded the era of heavy industry and its attendant technologies.

What has Spengler’s description of mystic machines to do with computers or computer networks? While it is difficult to imagine a steam engine, coal furnace, or turbine as a “mystic, esoteric” object, I would say that this is certainly not the case in regard to present manifestations of information technology. It is possible to regard the dream of artificial intelligence, artificial life, or the cybernetic web of communication devices both on and off the planet as ineluctably connected with a mystic, esoteric worldview.

Entertaining for the moment the idea that Spengler was indeed a visionary (if only as a relatively harmless gedankenexperiment), the idea that the machine has become increasingly immaterial, its raison d’être “withdrawn into the interior,” is singularly apposite in regard to what may well be the most important postmodern mythologem, the computer. Although the machine I am using to write this book is undoubtedly a material object (we shall not go too deeply into the implications of the meaning of material just yet), its function, its meaning as an object, lies purely within the realm of ideas. Paralleling the ghostly cogito of Descartes, it is this invisibility of its functioning—the “transformation of information”—that distinguishes the essence of the computer from any one of its particular physical manifestations. It engenders an attendant set of ideas such that we may legitimately regard the computer as a mythologem central to a contemporaneously developing mythology. This mythology has its own special world (“cyberspace”), its guardians (the RAND Corporation, DARPA), its ne’er-do-wells (“hackers”), and a plethora of secret handshakes, protocols, heroes, and villains.*16

In this chapter I want to explore the provocative morphological shifts within the “imaginary of the machine” occasioned by the creation of the computer and consequently connect this to a central mythologem of the Hermetic tradition, the anima mundi. The essential nature of a computer—any computer—is an idea, an idea the history of which can be dated from about the mid-seventeenth century. Although the idea of the computer essentially derives from this period, few of its original proponents (Leibniz or Pascal, for example) could have imagined the great changes such an idea would occasion in the imaginary associated with the machine. From a physical object the very definition of which lay within the realm of Descartes’s res extensa, the machine in the form of the computer is now envisioned as a virtual simulacrum, an object the operations of which lie within the liminal region between the physical and the intellectual. The physicist Frank Tipler describes the computer as an exact “emulation” of an “ideal” universal computer.

An emulation is an exact simulation, an absolutely perfect copy. Everybody’s computer emulates other computers, although the average person is not aware of that. In any running computer there are several computers there. All but one of them are virtual computers, perfect imitations of other computers. Writing commands into your machine, you see the physical machine, but in reality an emulation of another computer exists inside this machine. But it only exists as bits of information.2

Although Tipler is probably not aware of it, his description is strongly reminiscent of Hermetic and Neoplatonic considerations of the relationship of the One to the many; of lesser monads to the one, supreme monad. It has become a truism of contemporary computing science to consider that while the CPU (central processing unit) and the various buffers and so on of von Neumann computer architecture (see here) exist at various physical locations on the circuit board, their functioning qua functioning is quite literally virtual and does not occupy the “physical” space of the machine. The imaginary of the contemporary computer therefore recapitulates such conceptions as Leibniz’s metaphysical monads.
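Tipler’s point can be made concrete with a minimal sketch of my own (an illustration only, not anything Tipler describes): a toy stack machine written in Python. The emulated machine that runs the little program below has no circuitry of its own; it exists, just as Tipler says, only as bits of information inside the host machine.

```python
# A minimal sketch of "a computer inside a computer": a toy stack machine
# emulated in Python. The emulated machine exists only as bits of
# information (lists, tuples, integers) inside the host machine.

def run(program, stack=None):
    """Interpret a list of (opcode, argument) pairs on a virtual stack machine."""
    stack = [] if stack is None else stack
    for op, arg in program:
        if op == "PUSH":        # place a literal value on the stack
            stack.append(arg)
        elif op == "ADD":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":       # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":     # display the top of the stack
            print(stack[-1])
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack

# A small program for the emulated machine: compute (3 + 4) * 5.
program = [
    ("PUSH", 3),
    ("PUSH", 4),
    ("ADD", None),
    ("PUSH", 5),
    ("MUL", None),
    ("PRINT", None),
]

run(program)   # prints 35
```

Nothing on the host’s circuit board corresponds one-to-one to this stack machine; it is, in Tipler’s sense, a virtual computer.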

The history of the idea of the computer contains within it a central, seemingly inexorable “slippage” concerning the very notion of computation: once considered a capability purely and incontestably human, the notion of computation seems now to have fallen squarely within the province of the machine. What would have once seemed a kind of Rylean “category mistake” has now, in terms of mathematical and computing science, become a truism. As most forcefully argued by the mathematician Martin Davis, this view goes so far as to state that prior to the idea of the “Turing machine” (the essential idea behind contemporary computing devices) there was no adequate definition of computation at all, and such a definition had to wait until Alan Turing invented his now famous mechanical conception of computation.3 For Davis, computation is essentially mechanical, with the unequivocal implication that when a human being “computes,” this activity is equivalent to the operations of a machine. According to this view a considerable part of what it means to be human has been given over to the machine, with the consequence that we were evidently just deluding ourselves in believing that this was a purely human activity. All along, in fact, the activity of computation was actually in the realm of the machine.
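Turing’s mechanical conception of computation amounts to nothing more than a finite table of rules applied, one symbol at a time, to a tape. The sketch below is my own illustration rather than Davis’s or Turing’s notation; its rule table merely adds one to a binary number, but the same skeleton, given a suitable table, is the whole of computation in the modern sense.

```python
# A minimal Turing machine: a finite rule table acting on an unbounded tape.
# This particular rule table adds 1 to a binary number written on the tape,
# with the head starting on the rightmost digit.

def turing_machine(tape, rules, state, head, blank="_"):
    tape = dict(enumerate(tape))           # sparse tape: position -> symbol
    while state != "HALT":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    # Read the tape back in order once the machine halts.
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# Rules: (state, read symbol) -> (write symbol, move, next state)
increment_rules = {
    ("inc", "1"): ("0", "L", "inc"),   # carry: turn 1 into 0, keep moving left
    ("inc", "0"): ("1", "N", "HALT"),  # no carry: turn 0 into 1 and stop
    ("inc", "_"): ("1", "N", "HALT"),  # ran off the left edge: write a new 1
}

print(turing_machine("1011", increment_rules, state="inc", head=3))  # 1100
```

On this definition it is a matter of indifference whether the table is worked through by a clerk with pencil and paper or by a circuit, which is precisely the “slippage” Davis’s argument enforces.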

RAREFACTION OF THE IMAGINARY

It was only in the mid-twentieth century that the idea, the dream of a universal computing device, was realized—in the form of ENIAC (1945) and the stored-program serial digital computer that John von Neumann described in the same year. From this machine’s first instantiation, the nineteenth-century concept of a machine the functioning of which was determined by the transformation of mechanical energy—thermodynamic, hydraulic, big-piston power—was irrevocably pushed to the side as the esoteric, immaterial nature of the transformations of information came into technological consciousness.

We can push the notion of Spengler’s imagined prescience even further. In the early twenty-first century it appears that the advent of the von Neumann architecture computer was only the larval stage, as it were, of something of greater consequence: the construction of cyberspace. The vast network of interconnected computing devices has yielded the sort of omnipresent omniscience indicated by Spengler’s vision of late Faustian machinery. How was it that he could have been so close to describing the contemporary nature of things from so far away? The explanation can be found in an examination of the imaginary that was already producing intimations of future technologies.

Already in Spengler’s era the imaginary of both machines and human communication was undergoing profound changes. Spengler’s times were witness to the adoption and/or completion of the vast network of the railway system in the developing West, a model of the “communication network” par excellence. Allied with this was the consolidation of the telephone and telegraph systems. New archival systems such as the phonograph record, piano rolls, and photography were becoming increasingly available to the general public, with the radio soon to follow. Telegraphs, telephones, and radios were a new kind of machine. They seemed to embody the very world of electrical fields, lightning-like instant communication between bodies, and invisible matter-energy exchanges that had been described by a succession of scientists, from Faraday and Maxwell in the mid-nineteenth century to Einstein in the early twentieth. The imaginary that had been so fruitful for Sigmund Freud, drawn from the world of thermodynamics and hydraulics, of forces under great pressure, of release valves, vents, and pistons, was, even with the first publication of The Interpretation of Dreams in 1900, beginning to yield to another world constitutive of an imaginary more rarefied, more elementary, increasingly invisible.

Western consciousness, in other words, was already “primed” for the appearance of the electronic computer. With the failure of the Michelson-Morley experiment in 1887 to discover any trace of the “luminiferous aether” (thought to be the physical substrate that carried Maxwell’s electromagnetic waves), hope of ridding the physical description of the world of the “occult quality” of the propagation of forces across a vacuum was finally dashed—the conclusion being that Newtonian mechanics were simply not enough to account for material effects. If we couple this event with J. J. Thomson’s description of the atom in 1897 as consisting of corpuscles (later called electrons) orbiting a space the size of which was, relatively speaking, astronomical—the atom being mostly “vacuum”—then we find that by the beginning of the twentieth century the concept of “matter” as the stuff that one could see and touch, that was palpable and “massy,” had all but disappeared, or at any rate had become so rarefied that the words used in physical descriptions necessarily acquired a considerable ambiguity.

We observe, then, the emergence of new elements in the imaginary of the machine—an imaginary that encompassed the dreams of the scientific/Enlightenment project yet cast them in a new light. The two great twentieth-century interpretations of the physical world, special and general relativity and quantum mechanics, the former the culmination of the classical worldview, the latter its eclipse, contributed greatly to this new imaginary. Unlike Einstein’s special and general relativity theories or the revelations of the subatomic world discovered in quantum physics, however, the physical manifestation of the universal computer had one thing going for it that the other revelations did not: it already had a relatively long intellectual history, such that its immediate, almost intuitive impact as a model for human cognition was all but a foregone conclusion.

And this was not all. Proof of the computer’s abilities to reflect the immaterial, geometrical evanescence of reality was quickly to come to hand. Soon after von Neumann set ENIAC to the task of working on physics equations at the Los Alamos Theoretical Physics Division in relation to the development of the first atomic bomb, the world would hear of the results of the first atomic test. Einstein’s depiction of reality as the product of the curvature of “space-time,” of the subatomic constituents of matter as being an infinite but closed field of post-Newtonian forces, had suddenly and horrifyingly become concrete.

For Spengler of course, machines were becoming “less human” because they no longer seemed to embody the concept of force the origin of which is ultimately derived from the experience of human corporeality. Following his Nietzschean outlook—the glorification of the strong, the will triumphant of the Übermensch—Spengler disdained what we might now call the neo-Cartesianism represented by the new technology, a technology defined by the ghost in the machine, the immaterial essence of humankind, rather than the exertions, tensions, and release of levers and gears, of muscle and bone. This new imaginary is still with us, yet it is an imaginary in a problematic relationship with the object around which it constellates—the computer, and by extension, cyberspace. Popularly the “cybernetic revolution” easily accommodates the notion of a machine just as Spengler described it, but at the level of the expert this imaginary suffers some interesting convulsions.

The development of one aspect of the imaginary of the universal computer—artificial intelligence—is the subject of this chapter. To begin a critique of the very notion of artificial intelligence, I will examine the ideas of a contemporary computing scientist, first to show their place within a continuity of figuration provided by the Hermetic concept of the anima mundi, and second to trace some recent morphological shifts within the related field of cybernetics.

THE SINGULARITY AND BIG NUMBERS

A recent idea in computing science has the shape of a bad joke. You see, there is good news and there is bad news. First the good news.

The good news is that by the year 2030 CE a greater-than-human intelligence will have been created such that all the needs of human society will be taken care of. All technology will be optimally improved (in fact it will surpass anything that lesser human intelligence is even capable of creating), and all problems—intellectual, societal, and (I presume) spiritual—will be answered once and for all.

The bad news is that we will all most likely become enslaved—and more likely summarily destroyed—by this same unforgiving, omnipotent cyber-intelligence.

Science fiction perhaps? Not if one computing scientist is to be believed. This future scenario has been proposed by mathematician/computing scientist and author of several influential science-fiction novels Vernor Vinge. But in regard to this particular future scenario he is deadly serious. The simultaneous apotheosis and annihilation of human culture and history will be determined by the advent of what Vinge calls the “Singularity,” a superhuman intelligence that will come about as the result of one of four possible events, or perhaps a combination of them. The term singularity is employed, no doubt, to engender resonances with its use in theoretical physics. In contemporary theoretical descriptions of a “black hole,” the singularity is the point of infinite density at the center of the black hole where all the known laws of physics irrevocably collapse and where time and space no longer have any meaning. Like this “cosmological singularity” (as it is called), Vinge’s “Technological Singularity” similarly initiates a world where all the paradigmatic modes of social, political, and technological interaction break down and are erased. It is the reification of the cliché of “the end of civilization as we know it,” just as a black hole is the destruction of reality as we know it.

The first event that will possibly lead to the Singularity is the imminent development of a computer that is intelligent and “awake,” in other words, the fulfillment of the dream of artificial-intelligence engineers. The second event may be that large computer networks, coupled with their human users, will, by reaching a sort of combinatorial catastrophe point—a density of information exchange—suddenly become “awake” as a creature greater than the sum of its parts. The third event is basically a less ambitious interpretation of the second: present human-computer interaction may become so “intimate” (Vinge’s word) in the near future that these human users must be considered superhumanly intelligent. The fourth event, somewhat unrelated to the preceding three, is that the biological sciences may find means to greatly improve human intelligence.4

Vinge states that the first three possibilities depend on improvements in computer hardware, and if we are to take the improvements in computer science over the past fifty years as any indication, then the approach of a technology powerful enough to support the Singularity is not far off (he predicts about fifteen years, i.e., 2030). He recognizes that AI scientists have been saying similar things about the exponential acceleration of the hardware/software equation necessary for the realization of artificial intelligence since at least the early 1960s—and that they have been consistently (and greatly) off the mark—so in providing his own apocalyptic “use-by” date he (inadvertently, I suppose) throws in his lot with them. Compressed into Vinge’s GOFAI*17 scenario are a number of implicit assumptions, assumptions that he acknowledges only briefly when he states that those dedicated to the idea of machine intelligence assume that intelligence does not depend on a biological substrate and that algorithms are somehow essential to the existence of minds.

Central to his argument for the future instantiation of a machine intelligence is the improvement of computing power, of the ability of a computer to “crunch” numbers at a far greater rate than is possible with present technology. Implicit in this requirement is Vinge’s assumption that it is the computational power of the human mind/brain that is primarily responsible for human intelligence. Attendant to this is the idea that computers are already like human minds, but like poor cousin Clem from the swamps, they are just too slow to pass an IQ test. Leaving aside for the moment the question as to whether computers and human minds are alike in their essential computational nature, it is important to realize that Vinge (and others who believe in the imminent creation of an artificial intelligence) sees a continuum that spans from what is basically a fancy calculator (today’s computing devices) at one end to an intelligent machine (the Singularity) at the other. This exponential curve is determined by computer hardware power (that is, more calculations in less time); or in prosaic terms, more is better. According to this view, greater computational power will inevitably lead to a mind.

What evidence is there that this is so? Why is it that the computational equivalent of “critical mass” should determine a qualitative change in a machine? We know that certain natural phenomena, at a given level of complexity, undergo what is called a “phase change”—water changes to ice, for example. A peculiar aspect of phase changes is what is called “critical slowing down”: when water changes into ice, initially the change happens very fast, but then the changes become slower and slower. Contemporary theorists in both “chaos” and information theories posit that at some level the water is calculating the matrix structure needed to complete the phase change, and the increased complexity of the calculations accounts for the slowing down of the process. When I referred earlier to a “given level of complexity,” the sense intended is the technical sense of complexity as the interface between order and disorder, the contemporary “twilight zone” of mathematical modeling where quantitative operations somehow transmogrify into qualitative change.5 It should be remembered, however, that this transmogrification is a component of the model—which might or might not account for the changes in the object it is modeling (i.e., other unknown factors may account for the phase change in the phenomenal object).
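Something of this transmogrification of quantity into quality can be seen in a standard toy model of a phase change, the two-dimensional Ising model of a magnet (my substitution here; it models magnetization, not freezing water). A single local, mechanical update rule, run at different temperatures, yields qualitatively different global states, and near the critical temperature the same rule also equilibrates very slowly, a version of the critical slowing down mentioned above. A minimal sketch in Python:

```python
import random, math

def ising_magnetization(temperature, size=20, sweeps=400):
    """Average |magnetization| of a 2D Ising lattice after Metropolis sweeps."""
    spins = [[random.choice((-1, 1)) for _ in range(size)] for _ in range(size)]
    for _ in range(sweeps * size * size):
        i, j = random.randrange(size), random.randrange(size)
        # Sum of the four neighboring spins (periodic boundaries).
        nb = (spins[(i + 1) % size][j] + spins[(i - 1) % size][j]
              + spins[i][(j + 1) % size] + spins[i][(j - 1) % size])
        dE = 2 * spins[i][j] * nb          # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-dE / temperature):
            spins[i][j] *= -1              # accept the flip
    m = sum(sum(row) for row in spins) / (size * size)
    return abs(m)

# The same local rule yields an ordered state at low temperature and a
# disordered one at high temperature (the 2D transition sits near 2.27).
# Near that value equilibration is slow, so the numbers there are noisy.
for t in (1.5, 2.0, 2.27, 3.0, 4.0):
    print(t, round(ising_magnetization(t), 2))
```

Nothing in the update rule mentions “order” or “disorder”; the qualitative difference belongs to the collective description, which is exactly the sense in which the model, and not necessarily the water, carries the transmogrification.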

One of the ways in which a phase change can be characterized is that the models/methods one uses to describe the “before” and “after” states are different—the methods used to analyze laminar flow are not the same methods used to analyze the lattice structure of ice, for example. Yet the manner in which one analyzes the operations of a computer would presumably remain unchanged when directed at the operations of the Singularity. After all, human programmers would be responsible for its early development (according to Vinge, of course, that time is now), and therefore an attentive examination of the details of its software design, plus a careful comparison of the Before Singularity (BS) and After Singularity (AS) power output of the computer, would presumably account for the advent of the Singularity. A new algorithm would then emerge, the significance of which would (literally) shatter research projects using such superseded notions in mathematical physics as Planck’s constant or Landauer’s physical minimum for informatics.

Although he does not explicitly discuss it, the concept of emergent properties is one of the key factors behind Vinge’s conception of the Singularity. Emergent properties are qualitative changes in a system that appear once a certain “critical mass” is reached. As such, the concept of emergent properties is like a “catchall” phrase that covers all phenomena for which scientists have no explanation—at least no explanation that conforms to strictly mechanical laws. The notion of emergent properties therefore marks the liminal territory where Newtonian mechanics and biological notions of the organismic meet. In apparent defiance of the second law of thermodynamics, certain systems—particularly living systems—seem to display an increasing order in proportion to their complexity. Signaling its import for biology, the idea of emergent properties was inherent in late nineteenth-century observations of bee and ant colonies. No one would grant the individual bee any great intelligence, and in fact observations of the behavior of beehives support the view that the actions of individual bees are little more than a mechanical response to the necessities of the hive. Yet this swarming mass of individual drones manages to coordinate and sustain what is in the end a very sophisticated living system. Writer Maurice Maeterlinck, in his Life of the Bee, called this sustaining coordination the “spirit of the beehive.”

It is the spirit of the hive that fixes the hour of the great annual sacrifice to the genius of the race: the hour, that is, of the swarm, when we find a whole people, who have attained the top-most pinnacle of prosperity and power, suddenly abandon to the generation to come their wealth and their palaces. . . . Where has this law been decreed? . . . Where, in what assembly, what council, what intellectual and moral sphere, does this spirit reside to whom all must submit, itself being vassal to an heroic duty, to an intelligence whose eyes are persistently fixed on the future?6

Where indeed does this systematic coordination come from? It certainly does not derive from the queen bee—she apparently just lies at the center of the hive and reproduces, and when the bees swarm and travel to another hive, she merely follows all the others. In 1911 entomologist William Morton Wheeler thought he had come up with an answer: the hive is an organism in itself; all its individual bees add up to something much greater—a huge swarming animal: “Like a cell or the person, it behaves as a unitary whole, maintaining its identity in space, resisting dissolution . . . neither a thing nor a concept, but a continual flux or process.”7

This “beingness” of the hive, its organismic nature, is something that is only present when you have a certain mass of individuals; if you try to find the “beingness” in any individual bee, you will not find it.

Even at a less complex level than that of a bee or ant colony, the idea of an emergent property is known in engineering. If a number of dynamos are set in train, each activated at a different time, the rhythms of all the machines will initially be “out of sync.” After a period of time, however, one finds that all the dynamos are following a single rhythm, a single pulse that did not derive from any one dynamo. The phenomenon is called “virtual governance,” and is seemingly a purely mechanical example of emergent “behavior.”
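The standard mathematical illustration of this pull toward a common rhythm is the Kuramoto model of coupled oscillators (a substitution of mine for the dynamo example, which I offer only as an analogy). Each oscillator keeps its own natural frequency but is nudged by all the others; above a modest coupling strength a single collective pulse emerges that belongs to no individual oscillator. A minimal sketch:

```python
import math, random

def kuramoto(n=20, coupling=2.0, dt=0.01, steps=5000):
    """Simulate n coupled phase oscillators; return final coherence r in [0, 1]."""
    random.seed(1)
    freqs = [random.gauss(1.0, 0.1) for _ in range(n)]        # natural frequencies
    phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # Each oscillator is nudged toward the phase of every other one.
            pull = sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
            new.append(phases[i] + dt * (freqs[i] + coupling * pull))
        phases = new
    # Order parameter r: near 0 = incoherent rhythms, near 1 = a single common pulse.
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

print("weak coupling:   r =", round(kuramoto(coupling=0.05), 2))
print("strong coupling: r =", round(kuramoto(coupling=2.0), 2))
```

The single rhythm measured by r is a property of the ensemble, not of any one oscillator, which is all that “emergent behavior” strictly means here.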

So it seems that, at least in fields outside computing science, the notion that some sort of qualitative change could emerge from a quantitative description of processes is well known. For a computer, however, the necessity of attaining a critical computational power threshold seems to be the key. Landauer’s concept that there are physical (that is, thermodynamic) limitations on the power of computation8 is acknowledged by Vinge himself: it may well be that in the early twenty-first century the hardware-performance curve will level out. Hardware would be extremely powerful, “giving an analog appearance even to digital operations, but nothing would ever ‘wake up’ and there would never be the intellectual runaway that is the essence of the Singularity,” maintains Vinge.9 This “power curve” that maps the birth of the Singularity has a strong intuitive appeal—an appeal, one suspects, drawn from analogy. Whatever intelligence is—and the definition varies depending on whether one asks a schoolteacher, psychologist, biologist, or computer programmer—it surely is integral to all living things. And, in the natural world at least, intelligence is not discontinuous: there is no catastrophe point beyond which intelligence begins and prior to which it does not exist. All living things, from the simplest unicellular creature through the animals of the savannah to Homo sapiens, are possessed of degrees of intelligence.

If one substitutes “psyche” for “intelligence” in the above description, one obtains a clearer perspective on the artificial intelligence project. The “power curve” of intelligence is like the continuum imagined in the panpsychic hypothesis. Everything has a “little bit” of mind, and the higher up the scale one proceeds, the greater the complexity that comes into play and consequently the more “mind” that is in evidence. In the light of this equivalence of conception, it seems quibbling to insist on a clear distinction between mind and intelligence. Whatever is alive is possessed of some degree of mind, and all minds are intelligent: they learn, they remember, they put that which was experienced in the past to use in the future. In the panpsychic hypothesis an amoeba does this as does an invisible electron. In the AI scenario, a thermometer displays intelligence to a lesser degree than, but equal in kind to, a Cray supercomputer. Like a computer or living being, a thermometer reacts to an outside stimulus (heat) and adjusts its internal equilibrium accordingly. It is this internal adjustment in response to external stimuli that is the essence of “lifelike” responses in the eyes of AI researchers.

In the AI project, just as in the panpsychic view, life and psyche become equivalent. Mind, then, becomes a transcendental principle simultaneously overarching and entirely interpenetrating the phenomenal world. This conception was adumbrated early in the Hermetica, where the divine nous was represented as the distinguishing ground of the visible world.

Who is more visible than god? This is why he made all things: so that through them all you might look on him. This is the goodness of god, this is his excellence: that he is visible through all things. For nothing is unseen, not even among the incorporeals. Mind is seen in the act of understanding, god in the act of making.10

Leaving aside the demiurgic aspects of this quotation (“god [is seen] in the act of making”), if “mind [nous] is seen in . . . understanding,” then it becomes clear that a particular concept of mind informed Hermetic philosophy: psyche was all pervasive, for it was evident to the Hermetic philosopher that “all things” display some degree of understanding. Further elaboration of this concept led to the notion of affective resonance between all things in the world: the philosophy of “correspondences” so beloved of Agrippa. In the Hermetic view there existed a global “understanding” constituted by the correspondence of all things with each other.

Preceding the initial quotation above, Corpus XI states that the entire universe is like thoughts within god. The text furthermore encourages the Hermetic philosopher to “make himself equal to god,” and provides keys to accomplishing this task: “Make yourself grow to immeasurable immensity, outleap all body, outstrip all time, become eternity and you will understand god.”11

This passage has unmistakable resonances with Vinge’s own description of his omniscient Singularity. It, too, would demonstrate an intellectual “immeasurable immensity,” immeasurable because its capacities would be far beyond the ken of lesser humankind. Vinge states that the “essence of the Singularity” is the fact that it will be an “intellectual runaway.” In this respect it is the perfect answer to the call expressed in Corpus XI.

Vinge’s Singularity resembles Ficino’s understanding of the Hermetic Adam as described in the Pymander (Corpus Hermeticum I). Ficino notes the similarity between the Pymander’s description of the creation of the first being and the account in Genesis,*18 but he goes further than an orthodox reading of Genesis would allow when he states that the Hermetic Adam is possessed of the same creative powers as his creator. As Yates describes Ficino’s conception, “This Egyptian Adam is more than human; he is divine and belongs to the race of the star demons, the divinely created governors of the lower world. He is even stated to be ‘brother’ to the creative Word-Demiurge—Son of God, the ‘second god’ who moves the stars.”12

The Singularity too would be possessed of the same creative powers as its creators (that is, computer scientists); it surely would be modeled after human cognitive capacities, but would exponentially outstrip them, becoming (as Vinge clearly imagines) a “second god.” It is this hyperbolic, apocalyptic imagery that distinguishes Vinge’s Singularity from the careful, measured ideas and hopes of his AI scientist peers. And it is his insertion into the machinic imaginary of a hyperintelligence that represents the “vanishing point” on the great curvature of psyche that marks his vision as an unheralded eruption of the Hermetic worldview into the AI research project.

What makes the AI view of the psychic continuum different from the Hermetic conception is the assumption that the curve is also a mapping of computational ability—an assumption made only by AI enthusiasts. For Vinge and his fellow AI thinkers, the challenge is to find the Cartesian point on the curve where a mind “kicks in” because of a quantifiable, digital complexity.

VIRTUAL MIND

Reading Vinge’s thoughts on the Singularity, one could easily get the impression that “hardware power” is all that is really needed for this cybernetic Frankenstein’s monster to emerge. Yet we can take it for granted that software design is a concomitant in this creation, for all GOFAI dreams have their foundation in this software/hardware operational duality, the architectonic for the artificial-intelligence research project since John von Neumann’s equating of computers and brains in the late 1940s. The essence of this duality is simple: a mind/brain, like a computer, is a program run on a physical substrate.

Software is basically a series of linked algorithms designed to sort input to appropriate output. Hardware is the physical base that supports the running of this software program. The upshot of this is that minds/brains do not have to be the product of carbon-based life-forms (animals), but can presumably be “run” like software on any appropriate “platform,” including a computer. And a computer does not have to be a machine composed of integrated circuits powered by electricity (as indeed they are now), but can in theory be created from any material base capable of mimicking the operations of a Universal Turing machine. William Gibson and Bruce Sterling in their novel The Difference Engine depict an alternative nineteenth century in which Babbage’s “Difference Engine” (a mechanical computing device) is powered by steam, for example. The importance of software design in AI is made clear by recalling the early influence of the computer metaphor in modeling the mind.

The most influential psychological project of the twentieth century, behaviorism, benefited greatly from its historical proximity to the creation of the first serial digital computer. Its methodology was essentially derived from the operations of a computational device: animal behavior was the result of sensory “input” going into a “black box” (the mind/brain) and being matched to appropriate “output” (the observable behavior of the animal). In strict positivist fashion, no debate was entered into concerning what happened to the sensory information once it was inside the black box of the mind/brain; only the correlation of input to output was significant and scientifically justifiable. The AI project was born of the realization that without a theory of what was happening inside the black box, true scientific understanding of the mind/brain was impossible. GOFAI theory eventually emerged in all its simplicity—software engineering was the theory of the mind/brain. If you could write the software for a mind, your theory of mind was the very program itself.

Proponents of GOFAI generally hold to what has been called the “functionalist” approach to modeling/making*19 a mind. In this view, software is the virtual structure that governs its material instantiation. This virtual structure is wholly algorithmic in nature and could in theory be run on any appropriate device; it is the “shape” of the software that produces the overall gestalt of a mind. A functionalist would hold, for example, that it is not the appearance of a tornado that allows us to recognize it for what it is, but that its virtual structure (its program) allows us to infer the presence of the phenomenon. The “tornado-ness” of the tornado is the result of a certain set of instructions that govern its material substrate. In the same way software, the program of the mind/computer, is the virtuality that gives the actuality of the hardware its function as a mind/computer.

The essential problem with this canonical functionalist description of the software/hardware operational distinction is that it is totally arbitrary. At the level of the mechanical device itself, the program is isomorphic with the logic gates that instantiate it. Any given “bit” of information is realized in the physical description of the device. At this level, making a distinction between what is actual (the logic gate) and what is virtual (the logical operation) is superfluous. The problem is that this distinction is too often regarded as a difference in kind rather than the operative convention it really is. Yet the degree to which this conception is defended within the AI project reveals, one suspects, the continuing force of Hermetic panpsychic notions. Like proponents of functionalist theories, the Hermetic philosopher regarded all things as being potentially capable of revealing some degree of mind. A rock or magnet may be besouled just as a more “animate” creature clearly is. A functionalist believes that it is a particular structural organization within a system that constitutes mindfulness; it is an abstract specification (like software) that may in theory be instantiated in any number of heterogeneous materials.
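Both the functionalist claim and the arbitrariness of the software/hardware distinction can be seen in a toy example of my own devising: the same abstract specification, the addition of two binary digits, “instantiated” once in the host language’s own arithmetic and once in a lattice of simulated NAND gates. At the level of the gates there is nothing left over for the “virtual” operation to be; the two descriptions are conventions laid over the same events.

```python
# One abstract specification (add two bits), two heterogeneous "substrates":
# host-language arithmetic and a network of simulated NAND gates.

def add_bits_arithmetic(a, b):
    """Substrate 1: the host language's own addition."""
    total = a + b
    return total % 2, total // 2               # (sum bit, carry bit)

def nand(x, y):
    """A single simulated gate: the only primitive in substrate 2."""
    return 0 if (x and y) else 1

def add_bits_gates(a, b):
    """Substrate 2: a half adder wired entirely from NAND gates."""
    n1 = nand(a, b)
    sum_bit = nand(nand(a, n1), nand(b, n1))   # XOR built from NANDs
    carry_bit = nand(n1, n1)                   # AND built from NANDs
    return sum_bit, carry_bit

# The two realizations are indistinguishable at the level of input/output.
for a in (0, 1):
    for b in (0, 1):
        assert add_bits_arithmetic(a, b) == add_bits_gates(a, b)
print("both substrates realize the same half adder")
```

That the gate version “really” adds, rather than merely switching values according to its wiring, is exactly the interpretive convention at issue in the paragraphs above.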

THE INTELLIGENTIAL AND THE ACTUAL

The idea that science presents a model of reality rather than explores its essence is a very recent innovation. Jacques Le Goff believes that the modern period only really began in the mid-nineteenth century; it is perhaps no coincidence that Émile Meyerson states that the final rejection of metaphoricity in scientific descriptions occurred in the same period. It is within this time frame as well that we witness the rise of the idea of scientific “modeling.” The recognition that a scientist “models” reality, rather than discovers its essential nature, is the result of the rejection of the metaphysical tradition in philosophy and the beginnings of the attempt to systematize a positivist program for the exact sciences. It is also an explicit rejection of one of the strongest characterizations of the medieval worldview, claiming, in effect, that there is no “language of nature” written into the fabric of things by God, the decoding of which is the natural scientist’s prerogative.

In the contemporary AI project, however, we unexpectedly find a generally unacknowledged return to the premodern natural scientific outlook. A number of interconnected ideas support this reading. First, there is a return to a discussion of the virtual/actual distinction in the form of the hardware/software duality. This is a distinction that, until its recent computer-science “reincarnation,” had been absent from philosophical debate since at least the time of Kant. Not only has the distinction returned as an explanatory principle, but it has returned in its original form.

Under the influence of the Platonic conception of the division of the intelligential and phenomenal worlds, Scholastic philosophers considered that the virtual was more real than the natural, phenomenal world. In fact, the virtual was the Real. Before its quite recent rise in popularity (owing to the hold on the popular imagination of the idea of “VR,” virtual reality), the word virtual meant something not at all real; something approaching the real, but never quite attaining it. This “process” aspect of the virtual was quite absent from its original use: the phenomenal world in all its transitory becoming was an illusion, and the virtual was the unchanging, incorruptible Being of reality. Functionalist theories operate on a similar level when they privilege the virtual, algorithmic software over and above the particular hardware in which the software operates.

In terms of AI, the return of the virtual/actual distinction could not be more dramatic in its striking resonance with Hermetic conceptions. The fact that AI attempts to make or “model” a mind, a psyche, significantly distinguishes it from all other “computer-driven” projects. Just as in the Hermetica, mind, or nous, has once again entered center stage.

Implicit in both the advent of Vinge’s Singularity and the AI project is the idea that the virtuality of algorithmic software is embedded in the actuality of the hardware of computational devices and that an isomorphic relationship obtains between them. This isomorphic relationship is like that obtaining between Platonic conceptions of the relationship of the intelligential to the phenomenal. Without the primary, intelligential objects—the virtual Ideal Forms—the actuality of the phenomenal would be “inoperative.”

The theory of the isomorphism between software/hardware in the AI project imagines a continuum of software complexity and hardware computational power leading inexorably to an artificial mind, a continuum in which the virtuality of computation will somehow transform into the actuality of cognition. After all, an AI researcher will say, when you play a game of chess with a computer and the computer wins, it did not simulate beating you, it actually beat you. This conception underpins the claims of “strong AI” and contrasts with the lesser, positivistic claim of “soft AI” that a machine can only mimic or model cognitive processes. It is the belief in “strong AI,” of a Platonic collapse between the intelligential/virtual and the material/hardware, that most clearly resonates with Hermetic views.

The first extended explication of the relationship between the virtual and the actual is found in the works of Plato. As is well known, Plato attempted to explain the fact of cognition through invoking the idea of an intelligential world behind the world of appearances. In this way he hoped to explain the fact that we can have “ideas” about objects/events in the world despite the fact that the data provided by the senses is always inconstant. How is it that we can have an idea about a dog—know its “dog-ness”—when every dog we see is in actuality different? The explanation is that there are paradigmatic (Plato’s work is the origin of this term) forms behind the ever-varying manifestations of phenomena, determining their “species” (as a later Platonized Scholasticism would say) and our ability to apprehend them. So what we have is a one-to-one correspondence between the intelligential Idea and any given phenomenal object, an isomorphic relationship. Furthermore, the intelligential realm is immaterial and its phenomenal forms “material.” How, therefore, can the one influence—have a causal relationship with—the other?*20 Plato’s answer was that the one “participated” in the other, and ever since scholars have been debating among themselves as to exactly what Plato meant by this term. An involved discussion of the Platonic notion of participation would considerably derail this chapter; suffice it to say that the problem of “participation” has been (usually unknowingly) revived when it is held that there is a parallel obtaining between software information and its embodiment in computer hardware. Where or when does one become the other, or more precisely, how does a logic gate represent a computation?

Allied strongly with this rejuvenation of the idea of the virtual structure behind the real (the functionalist approach to AI) is the notion that the language of the virtual is binary code. The “bits”†21 of information in a computer are really “virtual information,” just as the helical structure of the DNA molecule is a “code,” the copying of that code into the RNA molecule a “transcription,” and the production of protein from RNA a “translation”—or at least so we are led to believe. What does this plethora of codes, languages, information, and translations in computing science, AI, and molecular biology mean? As a first approximation, it might seem that it represents a return to the metaphoricity of premodern (i.e., pre-mid-nineteenth century) natural science. Are we then witnessing another “slippage”—as with the idea of computation itself—toward a redeployment of seemingly anachronistic locutions, reinstating, as a consequence, certain “confusions” of thought that late nineteenth-century scientists had taken great pains to eradicate once and forever? A strictly positivistic reading of this phenomenon might have one believe that this is the case.

But what does it mean to say that the DNA molecule is a “code”? Strictly speaking, DNA contains no code, in the very same way that a computer does not “compute.” Human beings compute—they add, subtract, multiply, and so on—but a machine “allows” differentials of voltage through electronic “logic” gates that we interpret as the process of computation. Two things are important to note here: First, one must acknowledge how difficult it is not to use anthropomorphic concepts like “allow” and “logic” in the description of mechanical phenomena. Second, a code is dependent on an anterior system—a language, or more specifically, a human language. Without doubt to speak of the “binary language” of DNA, or the “language of art,” is to speak metaphorically. Strictly speaking, language is only produced by and used by human beings. There is no “DNA code.” This is just loose metaphor—a metaphor that has become so entrenched in scientific thinking that some researchers assume its reality. Like the code breakers employed in World War II to break the codes of the opposing sides, DNA researchers see themselves involved in a battle as to who will break the code of living tissue first. In regard to this initiative (the Human Genome Project), Harvard geneticist Richard Lewontin could not be less sanguine: “First, DNA is not self-reproducing, second, it makes nothing and third, organisms are not determined by it.”13

If this is indeed the case, then why the contemporary international effort to discover—that is, fully “map”—the human genome? The notion that the DNA molecule is a “code,” that this code is binary, and that this code determines the growth of all individual cells in a body is a continuation of the dream of the mechanization of nature. With the rise in importance of the biological sciences in the late twentieth century, the mechanical description of states, events, and processes came to be seen as inadequate: the mechanical jargon—representing the “billiard-ball” determinism of the Newtonian/Laplacian worldview—was giving way to a more “organismic” system of description. The idea of a serial, binary system of orders emanating from the DNA molecule reintroduces the mechanistic/deterministic worldview into biology, making it safe once again for a strictly mechanical, positivistic description.

Even if a strict positivist must accept as a “contingent evil” the fact that anthropomorphic locutions will inevitably slip into any supposedly objective description of phenomena, it is by no means certain that what a computing machine is doing is replicating the process of human computation. This sounds like a deliberately contrary thing to say. After all, a great deal of work has gone into the idea of mechanically reproducing the “laws of thought” (the phrase is George Boole’s, whose algebra of logic underlies binary computation), but it is nonetheless true.

The attempt to reconstruct the reasoning process of human beings is fraught with a crippling difficulty: the evident multiplicity of possible logics. A computer scientist may claim to model the process of deduction in a computer, but how may we know that this is in fact the process of our thought? There really is no way of knowing, despite the claims by AI enthusiasts like Marvin Minsky to the effect that he and others have already accomplished this task. We can certainly produce a computational system that will give us verifiable answers every time we use it (2 + 2 = 4 is a simple example of such an algorithmic process), and indeed the historical construction of the logical process from Aristotle to today is a fine example of humankind’s desire to do just that, but the number of possible logics debated in this century alone demonstrates that we have several competing explanations for the same determinations. The “logic” instantiated in the logic boards of computer hardware represents one manner of acquiring determinable results, but it is by no means certain that this configuration of electronic circuits mirrors the “laws of thought.”

VORTICES OF THE HERMETIC IMAGINARY

Can the entire project of developing a machine intelligence be undermined by an extended critique of its metaphorical confusions, by pulling apart its dis-analogies? Although this has been the intention of many thinkers, particularly those following Wittgenstein’s lead in questioning whether a machine can intelligibly be said to “follow a rule,” these critiques have had little effect on a project largely funded by the U.S. military.*22 No amount of analysis will halt what Vinge himself recognizes as the “technological imperative” to create bigger and better computing devices.

I am certainly not advocating that such a Wittgensteinian critique should be abandoned. It is just that the terms of this pursuit seem to be missing the most interesting aspect of the entire AI project. The fact that these anthropomorphic “contaminations” seem so prevalent in the discourse of AI research is a sure sign that the researchers are unwittingly engaged in a project that is inextricably linked with an imaginary that includes such seemingly disparate elements as the “electrical myths” of Mary Shelley’s Frankenstein and eccentric eighteenth-century theologian Oetinger’s “electrical theology,” as well as the pneumatic anthropologies of Kabbalistic, Neoplatonic, and Hermetic thought. At the very least the emphasis on psyche or mind at the center of the AI project seems to indicate an unacknowledged desire to transcend the boundaries of a strictly mechanical/material science.

Vinge imagines an apocalyptic catastrophe point on the computational curve that signals the emergence of the Singularity. This point is actually a break, a disjunctive moment when all that has gone before is irretrievably jettisoned. The Singularity, as the name implies, will be utterly unique and unprecedented. Within the AI project the disjunctive moment of the Singularity signals the idea of, and desire for, the form of psychic transport particularly associated with the ecstatic. Deriving from the original Greek ek-stasis, ecstasy literally refers to the experience of “standing outside oneself.” The term is linked with a lexical family of words such as displacement, change, deviation, alienation, délire.14 Couliano suggests that the semantic key to its function is perhaps found in the single term disjunction.15 The ecstatic state implies a radical discontinuity of perception, a breaking off from one world to another. In terms of ecstatic religion, this radical break was between prosaic, quotidian consciousness (termed “Apollonian” by Nietzsche) and that induced by the ecstatic ritual itself. There is, as both Nietzsche and Spengler have averred, a world of difference between the Apollonian and Dionysian consciousness, yet the “moment” when the one becomes the other—in terms of the ecstatic ritual or practice—oscillates around something like the “specious present,” the non-moment*23 where the two fields are united.

It may at first seem quite odd to think of the studious demeanor of the computer scientist as being in some way equivalent to the aspirations of a participant in the Dionysian cults. I agree that it is a monstrous conceit, but it evokes powerful and useful analogies. The ecstatic state can in part be characterized by the experience of the loss of the self, of subjective consciousness. Implicit within modernist science we observe the same desire: the search for an instrumentality that will erase the subjectivity of the observer and reveal the Real. For both the ecstatic and the scientist, reality is that which is revealed when there is no observer. What is missing from the computer scientist or laboratory boffin’s pursuit, however—but ever inheres in the Dionysian’s quest—is simply the experience of the “open” relationship to the phenomenal that characterizes the ecstatic state: the bodily and intellectual sense of “flow” or “streaming” within a sensory economy that makes little distinction between self and world. At a certain level of description the embodied experience is primarily one of moving through a streaming space, attenuated with sensations/communications produced by an essentially open relationship to the phenomenal field. Accordingly, the boundedness of the human bodily experience—the Cartesian paradigm—is really only a secondary, after-the-fact reconstruction. I suggest that the metaphorical contaminations of scientific discourses are the necessarily vestigial traces of this primary experience of the body; they are sites of disturbance that reveal supposedly abandoned territory.

The “smuggling in” of incoherent echoes of this premodern, pre-Faustian worldview is a signal characteristic of contemporary AI research. Much of the bad thinking associated with this project is the result of computer scientists’ ignorance of the proximity their ideas and aspirations bear to the way of thinking that could be of most help in illuminating their work: the Hermetic tradition. An examination of two of Vinge’s tropes will reveal what I mean. Both provide sites of disturbance, small vortices of délire that lead one to the “other side” of the AI discourse.

The key term in Vinge’s dream of the Singularity, a word that simultaneously defines and undermines his visions, is the word awake. According to Vinge, one day a computing machine will simply “wake up.” This implies, of course, that all machines like it had previously only been asleep. Clearly for Vinge all contemporary computing devices are sleeping, their potential powers lying dormant. This is the metaphorical space of virtuality (revived/revised as functionalism) itself: the intelligential realm of paradigmatic patterns sleeping as potentiality within matter.

What is interesting about the word awake is its initiatic character. The mystery cults, particularly those most influential on Western thought*24—the Mysteries of Eleusis and the Chaldean Mysteries—had as their central symbol the idea that the initiate would undergo some sort of symbolic death and resurrection into a second life. Previous to this death and rebirth, the initiate was considered to be “asleep.”

In the early twentieth century the teacher G. I. Gurdjieff was particularly fond of the idea that most human beings were—despite appearances to the contrary—“asleep” and that the most urgent spiritual task that could confront the seeker was to endeavor to become “awake”: “A man may be born, but in order to be born he must first die, and in order to die he must first awake. . . . When a man awakes he can die; when he dies he can be born.”16 These words of Gurdjieff summarize a long tradition that sees spiritual transformation as depending upon a pivotal experience of awakening or rebirth. Rather infamously, Gurdjieff tried many tactics to shock his followers into this state of wakefulness. According to Gurdjieff, most people had no central, supervenient principle that could be called a “soul”; most people were simply a succession of discontinuous processes (impressions, desires, activities) in time. A central soul could be constructed, however, and the key to this construction was first to become awake: “Attachment to things, identification with things, keep alive a thousand useless I’s in a man. These I’s must die in order that the big I may be born.”17

He qualifies his conception of being “awake” in this manner:

It is impossible to awaken completely all at once. One must first awaken for short moments. But one must die all at once and forever after having made a certain effort . . . [after] a certain decision from which there is no going back. This would be difficult, even impossible, for a man, were it not for the slow and gradual awakening that precedes it.18

Gurdjieff’s model of becoming spiritually awake is clearly modeled on phenomenal experience. Before fully waking up, individuals usually experience a period of hypnopompic activity; before falling fully asleep, they usually pass through the hypnagogic state. Both of these transitional states are modifications of the waking state, and may be characterized as being a “little bit awake” as opposed to being fully awake. We gradually accede to consciousness, as Gurdjieff says—we do not abruptly attain instant consciousness under any circumstances (the alarm clock may abruptly bring us out of sleep, but no one would honestly say that directly after this event they are “fully conscious”).

Vinge gives no direct indication that his Singularity would go through a similar process of awakening, yet his implicit assumption that present-day computing devices are lesser instances of mindful entities within the Great Chain of Computation, and that their computational abilities are increasingly on the ascent, approximates Gurdjieff’s model of a graduated progress toward awakened consciousness. All present aspects of the AI project—investigations into linguistic and logical structures, analysis of vision and perception, the creation of heuristic “search spaces” for particular problems—are, according to Vinge’s logic, already producing a (very) sleepy intelligent being; all it requires is the initiatory “break” or disjunctive moment that will reveal the presence of an active intelligence.

This deus in machina can be more fully characterized in terms of the Hermetic tradition by recalling one of the key ideas implicit in Vinge’s model: that of emergent behavior. The concepts of emergent behavior and virtual governance, far from being a decided break with the mysticism (or, at least, mystery) of Maeterlinck’s “spirit of the beehive,” are really only morphological mutations of the underlying mythologem that allows each to be formulated—that of the anima mundi. A hyper-intelligence supervenient on lesser intelligences, all of which both participate in and are modeled on it, the anima mundi is one of the central motifs and operational principles of both Hermeticism and late Neoplatonic thought. From an AI perspective, the operational tenet that an exact simulation of a process becomes that process (after the logic of Leibniz’s “identity of indiscernibles” postulate) can be used to reinterpret the Hermetic outlook: all sublunary intelligences becoming exact simulations of the one hyper-intelligence of the anima mundi.

The AI project, in this light, begins to look remarkably like the twentieth-century equivalent of the alchemical quest, the search for the lapis philosophorum.

Orthelius, a commentator on the works of sixteenth-century alchemist Michael Sendivogius, describes the anima mundi and its importance in the alchemical work in this manner:

The spiritus mundi, that lay upon the waters of old, impregnated them and hatched a seed within them, like a hen upon an egg. It is the virtue that dwells in the inward parts of the earth, and especially in the metals; and it is the task of the art to separate the Archaeus, the spiritus mundi, from matter, and to produce a quintessence whose action may be compared with that of Christ upon mankind.19

Orthelius mentions also that many regard the world spirit as the “third person of the Godhead”20—the Holy Ghost or Spirit. Traditionally the Holy Ghost was seen as the intermediary power between God and Christ; in other words, it was the liminal field of play where one world could be described in terms of the other, the communicative borderland between the discarnate (deity) and incarnate (Christ) aspects of the world. For the Hermeticist, “spirit” and “matter” were not two distinct categories of being—in this they are distinguished from both Plato’s and Descartes’s extreme separation of soma and psyche, res extensa and res cogitans—but rather two complementary modes of being. Yet to make the operation of these complementary modes tractable they needed a tertiary function that could negotiate the field of play. This third term, I would say, is the equivalent of the contemporary idea of complexity—the intersection of chaos and order, matter (chaos*25) and logos (reason, order). Niels Bohr’s famous “complementarity” reading of quantum states is very much in the same spirit as the Hermetic.

HOUR OF THE SWARM

Neurophysiologist Peter Arhem, in discussing the relationship of artificial-intelligence research to neurophysiology, states:

The principal question is really a conglomeration of questions. Is it possible in principle to study biological intelligence by studying artificial intelligence? Is it possible in principle to study the human mind by studying the human brain? And at the base of these questions: How is the mind related to the brain; how is the mental related to matter?21

If there is a fundamental question behind the AI project, Arhem’s delineation surely hits the mark. Adopting this perspective, it seems to me that what the AI project unequivocally demonstrates is the surprising longevity of certain (primarily Western) philosophical distinctions and their attendant questions. Each of them attempts to answer a more fundamental question: what is the relationship between thinking and the world, between mind and matter?

The traditional intermediary between mind and world is language. AI research has wrestled with the seemingly intractable problem of constructing not only a mind, but also the “rules and representations” that might make up a human language. So far the project has been driven by the assumption that language is in principle amenable to formalization, that is, that it has a syntactic structure that enables the production of individual speech acts, and this syntax itself resembles logical formalizations. Hence language, thought (logic), and mind are, in the GOFAI scenario, ineluctably linked together.

Jesper Hoffmeyer

While this conception of the AI research project has not exactly come to a standstill, the expected results have certainly not been forthcoming. A recent reconceptualization of the entire relationship between minds, language, and thought has been suggested by molecular biologist Jesper Hoffmeyer in his paper “The Swarming Cyberspace of the Body,” offering perhaps a new approach to the set of interlinked problems encountered by AI research. His reconceptualization can be summed up in one sentence: “If signs (rather than molecules) are taken as fundamental units for the study of life, biology becomes a semiotic discipline.”22

For Hoffmeyer, bodies are “swarming entities regulated through the distributed problem-solving capacity of billions of communicating agents” (that is, cells, tissues, organs, and so on) such that the body can be regarded as a “bio-cyberspace,” similar in form to the cyberspace experienced by millions of users connected to the Internet. As a consequence, Hoffmeyer says that from this viewpoint, “Intelligence . . . is a virtual reality constructed by an animal body.”23

The structure of this virtual reality is language itself, or more specifically, semiotic activity. What is unique to Hoffmeyer’s viewpoint is that semiosis is not regarded as the privileged activity only of human beings. The basic unit, the sign, is no longer that of the Saussurian tradition, in which the triadic relationship of signifier, signified, and referent (the latter considered almost insignificant in terms of theoretical value) is predicated on a human percipient; it becomes instead a distinction drawn among orders of sound, odor, movement, color, electrical fields, waves, chemical signals, and all manner of possible significant activity among living beings.24 All this activity constitutes what Hoffmeyer calls the semio-sphere, and the semio-sphere is as much a function of the body as it is of the mind.

In particular Hoffmeyer suggests that intelligence is a product of the “swarm activity” that is the body—the body understood as a swarm of cells and tissues, the activity of which is the constant interchange of signals (chemical, electrical, mechanical, and molecular) among all “units” (cells, tissues, organs) of the body.

Hoffmeyer states that a body is in reality a “swarm of swarms,”

a huge swarm of more or less overlapping swarms of very different kinds. And the minor swarms again are swarm entities, so we get a hierarchy of swarms. These swarms are engaged at all levels in distributed problem solving based on an infinitely complicated web of semetic [Hoffmeyer’s term for semiotic interactions that are determined by evolutionary history] interaction patterns.25

This hierarchical swarm-being constitutes a sort of bio-cyberspace: just as the millions of users of the Internet are connected, forming a “swarm” of intelligences, so the body is a sort of collectively constituted reality, “a virtual reality from the point of view of the individual entity. From this point of view intelligence is a virtual reality constructed by an animal body!”26
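To make the nested structure concrete, here is a minimal toy sketch in Python (my own illustration, with invented names such as Cell and Swarm; nothing of the sort appears in Hoffmeyer’s paper): every unit responds to a signal, a swarm’s members may themselves be swarms, and the swarm’s response is simply the pooled responses of its members, with no central controller.

class Cell:
    """A minimal unit: responds to a signal with a local transformation."""
    def __init__(self, name: str) -> None:
        self.name = name

    def respond(self, signal: float) -> float:
        # Hypothetical local rule: the cell damps and re-emits the signal.
        return 0.9 * signal


class Swarm:
    """A swarm whose members may be cells or other swarms: a hierarchy of swarms."""
    def __init__(self, name: str, members: list) -> None:
        self.name = name
        self.members = members

    def respond(self, signal: float) -> float:
        # "Distributed problem solving": the response emerges from the pooled
        # responses of the members, not from any supervising unit.
        responses = [member.respond(signal) for member in self.members]
        return sum(responses) / len(responses)


# A body sketched as a swarm of swarms: tissues of cells, organs of tissues.
tissue_a = Swarm("tissue_a", [Cell(f"cell_{i}") for i in range(3)])
tissue_b = Swarm("tissue_b", [Cell("cell_3"), Cell("cell_4")])
organ = Swarm("organ", [tissue_a, tissue_b])
body = Swarm("body", [organ])

print(body.respond(1.0))  # one signal filtered through the whole hierarchy

The point of the sketch is only structural: the “intelligence” of the body, on Hoffmeyer’s account, lives in the traffic between levels rather than in any single unit.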

It should be remembered that one of the possible births of Vinge’s Singularity is that the global Internet and its millions of users may someday become an intelligence greater than the sum of its parts, a “swarm of swarms” as Hoffmeyer has imagined. The notion that “intelligence is a virtual reality constructed by an animal body” again places mind or psyche at the pivotal center of discussion, and aligns Hoffmeyer’s conception with the functionalist revival of the Platonic virtual/actual distinction. The notion that the constitution of a living body is a swarm of lesser swarms, these lesser swarms themselves examples of yet other swarms (presumably down to the molecular level), is of the essence of the panpsychic hypothesis, where intelligence is hierarchically distributed among every actuality. In effect Hoffmeyer has further rarefied the imaginary of the machine by suggesting that von Neumann’s serial stored-memory digital computer is an example of a lesser-swarm entity the creation of which was necessary for the appearance of the greater whole—the “cyberspace” of the Internet. This inadvertent revival of the essentially Hermetic notion that sublunary intelligences are all virtual simulacra of the one great hyper-intelligence of the anima mundi is clearly significant. We are observing, in other words, a contemporary morphological shift, a necessary deformation in terms of ideas and imagery concerning the machinic, of the particular ideal object constitutive of the anima mundi.

Hermetic Bio-Semiosis

Although Hoffmeyer demonstrates the same sort of rhetorical sleight of hand that plagues discussions in AI research—for him DNA is a “text,” the DNA molecule is “digital,” living cells are “analog,” and so on—his revision of the semiotic economy (or ecology, as he would have it) is a positive step toward reviewing the field of mind/body questions. Of particular interest to me is the (inadvertent, I am sure) recovery, in part, of the Hermetic attitude to semiosis. The Hermetic weltanschauung (strongly influenced by the Stoic view) regarded the phenomenal world as being a sort of general semiosis. Distinctions between bodies, selves, and environment were to a certain extent fluid and open, and were characterized by a constant exchange of signs and inscriptions across what we now would generally regard as the bordered worlds of mind, body, and environment.

Resonances of the Hermetic worldview are uncovered when Hoffmeyer uses a term created early in the last century by Jacob von Uexküll in one of the first works of theoretical biology, Umwelt und Innenwelt der Tiere (1909). The term is Umwelt, which Hoffmeyer interprets to mean “subjective universe.”27 Unfortunately, I think Hoffmeyer has misinterpreted Uexküll’s term. Umwelt refers to the particular surrounding world of an organism—all animals live in the same world, but each interacts with a particular environment of its own. The ways in which the animal interacts with its Umwelt, the responses elicited by this exchange, create the special inner world of the organism, the Innenwelt. It is this latter term that comes closest to Hoffmeyer’s “subjective universe.” Uexküll proposed a further term, Gegenwelt, to cover the manner in which higher animals (those possessed of a central nervous system) are able to “mirror” or maintain a “counterworld” of representations of the environment, representations that differ with the constitution of different nervous systems.

The importance of Uexküll’s definitions is that they are not hard distinctions, but interacting worlds that should be read as modes or ways of seeing determined only by the relative situation of the organism. Interestingly, Uexküll saw organisms as cognitively limited by their respective situations, the situation of each allowing the creation of a world of possible experience and “bracketing out” others. By a process of logical extension Uexküll proposed that it was likely that there were higher-dimensional worlds that human beings are constitutionally prevented from seeing, just as the unicellular organism is unaware of the stars above. This latter aspect of his thought is in many ways reminiscent of the worldview that held sway before the advent of the idea of Darwinian natural selection in the mid-nineteenth century—that which Lovejoy has called the “Great Chain of Being.” In this view all living things are arranged in a hierarchical system with the deity at the top, followed by angels, human beings, and so on, down to the lowliest bacterium. In contemporary “communication science” parlance we might say that each creature is locked in its position in the Great Chain according to its ability to process information: the more information a creature is able to process, the higher up the chain its position.

The notion that there is a triadic relationship between worlds of experience is evidently very ancient. The Hermetic schema envisioned as a base model, as it were, the three interpenetrating worlds of the celestial, the terrestrial, and the Gegenwelt of human beings, the latter known as the sublunary mens. In traditional Chinese philosophy we find the distinction between the worlds of tian (heaven), ren (humankind), and di (earth). The most important of these loci is ren, the microcosmic site of interaction between the heavenly and earthly forces. The recognition of the importance of this Gegenwelt was a central mystery of the Gnostic tradition, especially as manifested in later Hermeticism. An early Gnostic text summed up the mystery in this way: “this is the great and abstruse mystery, namely that the power which is above all others and contains the whole in his embrace is termed man.”28 This passage is an early indication of what would become a central tenet of Hermeticism—that the divine mens and the terrestrial mens somehow reflect, interpenetrate, and guarantee the veracity of each other.

Gregory Bateson

The ideas of Gregory Bateson are a strong influence on Hoffmeyer’s work. Hoffmeyer specifically mentions a paper of Bateson’s first delivered in 1970 under the auspices of the Institute of General Semantics. In this paper, “Form, Substance and Difference,”29 Bateson proposes that it is possible to define a “Pythagorean evolutionary theory” predicated on the idea of informatics/cybernetics rather than the vagaries of “substance.” Rather than placing mind at the top of the Great Chain of Being, says Bateson, we should, following the pioneering efforts of Lamarck in his Philosophie Zoologique (1809), invert this order and assume that mind (mens) is the basic ground of life. Of course, Bateson has a particular conception of mind, a conception influenced by cybernetic theory. He first asks what it is in the world (the “territory”) that gets into the mind (the “map”).*26 For Bateson the “thing” that gets into the mind is difference—in terms of a map this could be a difference in altitude, surface, or what have you—and difference is a prototypically abstract matter.

He notes that in the “hard” sciences effects are attributed to forces—impacts, collisions, and the array of mechanist causal explanations that boil down to an exchange of energy. The world of communication, on the other hand, cannot be so characterized. “The whole energy relation is different. In the world of mind, nothing—that which is not—can be a cause.”30

He provides an example: if you fail to complete your tax return, then the taxation department will be spurred into action even though you did nothing. “The letter which never existed is no source of energy,”31 as Bateson says, yet it can cause an energy system (the tax inspector) to be galvanized into activity. Any psychological theory, therefore, that hopes to demonstrate its validity through analogies drawn from the exact sciences (that is, through the exchange, at some point, of energy) is nonsense. For Bateson the phenomenal world is an infinitude of abstract, non-energetic differences, only a limited number of which are “selected” by the mind: these are what we call “facts” or “ideas.”

In fact, what we mean by information—the elementary unit of information—is a difference which makes a difference, and it is able to make a difference because the neural pathways along which it travels and is continually transformed are themselves provided with energy. . . . We may even say that the question is already implicit in them.32

Bateson claims that it is the contrast in coding and transmission of difference inside and outside of the body that impels us to speak of an “external” and “internal” world. These two modes are not mutually exclusive, however: “The mental world—the mind—the world of information processing—is not limited by the skin.”33 It is a world, however, that has “jumped loose” (as Bateson says) from the conventional understanding of the physical world.

Bateson’s development of this position is very interesting. Like Hoffmeyer and Vinge, he works from within the cybernetic tradition, a tradition that began with extended meditations on the mind/machine analogy. Norbert Wiener’s 1948 book Cybernetics,34 which inaugurated that tradition, was subtitled Control and Communication in the Animal and the Machine, signaling both Wiener’s conflation of machinic and organismic models and his interest in using these as examples of communication or information exchange. But Bateson goes further than Wiener when he places his analysis in the context of traditions seemingly far removed from twentieth-century mind/machine models. These traditions include both alchemy and Gnosticism.

Bateson recounts how Carl Jung, undergoing a period of “epistemological confusion” (Bateson’s description), sat down and wrote his Septem Sermones ad Mortuos, an event from which Jung dated all his later insights into the psyche and “collective unconscious.” In what amounts to an extended piece of psychic automatism, Jung invokes the notion that there are two worlds, two “worlds of explanation” as Bateson describes them: the pleroma and the creatura. Jung’s conception of the pleroma is essentially derived from the originally Gnostic notion of the deus absconditus—that a paradoxically “full” (pleroma literally means “fullness”) void is the ground of our being: “Nothingness is the same as fullness. In infinity full is no better than empty. Nothingness is both empty and full. . . . This nothingness or fullness we name the PLEROMA.”35

Bateson recontextualizes Jung’s Gnostic-inspired distinctions in terms of his informatic/cybernetic reading of the mind-body problem. He therefore equates the pleroma with the Real, a world where events are caused by “forces and impacts,” a world without distinctions or “differences”—the world the description and investigation of which is supposedly the object of the hard sciences. Even so, Bateson states that it is oversimplifying matters to say that the hard sciences are therefore exclusively concerned with the pleroma and that in contrast the sciences of the mind deal only with the world of the creatura, a world characterized by the play of difference. It is a matter of how one chooses to look at phenomena. He uses the example of Carnot’s*27 heat engine: from one point of view (the pleromatic), the operations of the engine can be classically described in terms of the increasing entropy incurred with the transformation of heat energy into mechanical work. From another point of view (that of the creatura), the system may be described as a sense organ the operations of which are determined by temperature differences—differences that we may call “information” or “negative entropy.”36 In the case of Carnot’s engine, it “is only a special case in which the effective difference happens to be a matter of energetics.”37 It is possible, in other words, to look at the world as if it were a living organism, a giant system of systems, each of which can be described in terms of information exchange rather than simply the exchange of energy: “The creatura is thus the world seen as mind, wherever such a view is appropriate. And wherever this view is appropriate, there arises a species of complexity which is absent from pleromatic description: creatural description is always hierarchic.”38
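In elementary textbook terms (my gloss, not Bateson’s own formulas), the two descriptions can be set side by side: the thermodynamic one turns entirely on the temperature difference the engine straddles, while the informational one reads that same difference, after Brillouin, as negentropy, a stock of available “information”:

\[
\eta_{\text{Carnot}} \;=\; 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}
\qquad\qquad
N \;=\; S_{\max} - S
\]

When the temperatures equalize, both quantities vanish together: no work can be extracted, and no difference remains to “make a difference.” And it is in the creatural register, as the quotation above insists, that description becomes hierarchic.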

This hierarchical ordering has to do with the classification of differences, the difference between differences. Bateson is quite aware of the vertiginous road down which this conception could lead; he mentions in passing that the theory of logical types devised by Bertrand Russell early in the last century to “contain” (as much as is possible) the paradoxes of infinite regression might help in this regard—but some sort of notion of interlinked systems is always necessary when invoking the thermodynamic/cybernetic model. Thermodynamics is, after all, concerned with the dynamics of closed or semi-open systems. A completely open system, one that has dissipated to maximum entropy, exhibits no information at all; in fact, such a state of affairs cannot be described as a system at all.

Michel Serres

In what may be regarded as the extreme amplification of Bateson’s ideas (although he makes no reference to Bateson’s work), Michel Serres has examined the notion of systems, cybernetics, and levels of description (the hierarchies of difference, in Bateson’s conception) in his Origin of Language.39 For Serres, “matter, life and sign are nothing but properties of a system,”40 and an organism, situated inextricably within this nexus, is characterized not by homeostasis (since stability, in organismic terms, means death), but by that which he calls homeorrhesis. A neologism derived from the combination of the Greek words for “same” and “flow,” homeorrhesis is used by Serres to emphasize the idea of the organism as being in continual movement, constantly exchanging information within and without its “system”—a term that Serres, like Bateson, considers to be misleading.

Both a syrrhesis*28 (rather than a system) and a diarrhesis, the organism is henceforth defined from a global perspective. Not actually defined (the word means in effect the opposite of open), but assessed, described, evaluated, and understood . . . [an organism] is the quasi-stable turbulence that a flow produces, the eddy closed upon itself for an instant.41

Serres proposes, in a manner similar to Bateson, that the differing levels of information exchange operating between cells, tissues, organs—all homeorrhetic systems—are like a series of interlocking Chinese boxes, “and this series is the organism, the body.”42 More importantly, “Each level of information functions as the unconscious for the global level bordering it, as a closed or relatively isolated system in relationship to which the noise-information couple, when it crosses the edge, is reversed and which the subsequent system decodes or deciphers.”43

This series, the units of which have as their “outside edge” the chaotic din of energy/signal transformations, is filtered, level following level, by the “subtle transformer” of the organism until, finally, we reach the level of “eros and death.” For Serres the Freudian unconscious is this last “black box,” “the clearest box for us since it has its own language in the full sense. Beyond it we plunge into the cloud of meaningless signals.”44

The idea of an organism being a “quasi-stable turbulence” (Serres) temporarily “coming together” within the flow of semiotic differences (Bateson) recalls a fundamental conception of the creator of cybernetic theory, Norbert Wiener: “We are not stuff that abides, but patterns that perpetuate themselves.”45

If we substitute the term “self” for “organism” in the above determinations, then we also get a pretty good approximation of the central tenet of Buddhism: that there is no abiding self, but only a temporary aggregate of sensations and perceptions subject to the laws of karma. None of the above-mentioned theorists (to my knowledge) explicitly acknowledge the influence of Buddhist thought on their work, so I will assume that this particular manner of describing the relationship of beings to the phenomenal world is but further evidence of the periodic return of certain cognitive schemas across cultures and epochs.

HERMETIC INFORMATICS

The final upshot of the envisioning of the organismic within the thought of these contemporary thinkers is that observer and observed become varying “nodal points” within the same field of information exchange. Observer and “dispatcher”*29 are merely two functions supporting a single equation.

The observer as object, the subject as the observed, are affected by a division more stable and potent than their antique separation: they are both order and disorder. . . . I do not need to know who or what the dispatcher is: whatever it is, it is an island in an ocean of noise, just like me, no matter where I am.46

In the thought of Serres, Bateson, and Hoffmeyer the distinction between the semiotic level of description (the classical province of the mens) and the description of the physical world in terms of nineteenth-century thermodynamics collapses. By insisting on the foundational structure of communication and information exchange in any description of subject-object interactions, their determinations recapitulate the interpenetrative communicative order operating between the divine and the terrestrial foregrounded in Hermetic thought. Furthermore, this is no historical accident, no anomalous atavism in the progressivist account of the accumulation of scientific knowledge. Bateson, Hoffmeyer, Vinge, and Serres all emphasize imagery and structural relationships that display clear resonances with Hermetic notions of the interrelationship between individual minds and the anima mundi.

There is a strong ethical drive behind Bateson’s conception, something that he makes no effort to conceal. The combination of technological mastery and the Darwinian concept of “survival of the fittest” has brought the world to the brink of ecological catastrophe. If the organism and its environment are not seen to be ineluctably connected as a single site of exchange, a place of “virtually stable turbulence within the flow,”47 and if the organism instead attempts to dominate its environment according to the Baconian command/obedience couplet, then the scientistic ideology of mastery will see its closure in the inevitable destruction of all negentropic systems.

While it would be going too far to credit Hermetic thinking with an understanding of the destructive effects of technology and Darwinian evolutionary theory so long before their historical arising, it is certainly true that a kind of “global heuristic” structured much of its thought. The idea that the world was a gigantic living organism was central to Hermeticism: trees and vegetation were the hair of this great being, rivers and streams the arteries and veins that carried its blood. It was a worldview celebrated by poets and natural scientists alike. And it was an envisioning that was clearly aware of the twin modes of description open to an enquirer: the one in terms of signs and words (information, difference), the other in terms of physics. “And indeed what are the heavens, the earth, nay every creature, but Hieroglyphics and Emblems of [God’s] Glory?” asked Francis Quarles, his sentiment echoed by Donne: “The World is a great Volume, and man the Index of that Booke; even in the Body of Man, you may turne to the whole world.”48

In this light we can readily discern the reclaiming of the idea of the Hermetic microcosm in Bateson’s and Serres’s conceptions. The province of signs, language, and difference, squarely placed on the other side of the Cartesian divide in terms of the investigative territory of the physical sciences (at least since the formation of the Royal Society), has been reinjected into the realm of the scientific imagination. The book of nature (pleroma) and the body of humankind (creatura) become one enfolded network. Far from being a new and radical approach, I suggest that this is really only a reclamation of an abandoned worldview. Perhaps it was inevitable that a reconnection should be made between “energy” and “mind” in cybernetic theory, considering that the coupling of the two was an idea inherent at the beginnings of mechanical science, as the ideas of Johannes Kepler demonstrate.

In his Harmonice Mundi Kepler devoted an entire section to “The Earth as a Living Being,” a section doubly significant in that it contains notions that directly lead to his statements describing the “dynamic power” (vis) of matter as “energy.” Conventional accounts of the history of scientific conceptualization credit Kepler with replacing the concept of matter as being besouled with that of the notion of physical energy. But as M. H. Nicolson points out, this new notion is intimately connected with Kepler’s belief that the world was a living being. According to Kepler, the Earth itself was possessed of a soul, and this world soul reflected the anima mundi, the cosmic soul: “The earth-soul reflects in itself the image of the zodiac and of the firmament, evidence of the interrelation of the homogeneity of terrestrial and celestial things.”49

Kepler himself cast horoscopes while simultaneously engaged in natural-scientific theorizing. The concept of the zodiac is one of the most ancient of sign systems, a system with variants across many different cultures and epochs. I believe that it is not too much of a conceit to maintain that the system of astrology is akin to the modern conception of cybernetics: both attempt to describe the interrelationship of the pleromatic and creatural worlds in terms of a systemic set of differences and relations, finally positing the “homogeneity of terrestrial and celestial things.” On the level of information theory, Kepler held that one can formalize the activity of the deity with recourse to a set of (geometric, arithmetical) calculations that map the relations between the stars and terrestrial events. In this regard Kepler is, in spirit, the contemporary of Bateson. Both see a set of complementary descriptions operating at the thermodynamic and informatic levels. In terms of the thermodynamic world, God is the essence of energy, says Kepler, and as “the essence of the flame is in its burning, so the essence of the image of God lies in its activity, its energy.”50 Rather surprisingly, centuries later the “father” of cybernetics, Norbert Wiener, would use an analogy strikingly similar to that of Kepler: “The individuality of the body is that of a flame rather than that of a stone, of a form rather than a bit of substance.”51

Anachronistically glossing Kepler’s analogy through contemporary cybernetic theory, one may say that what was significant for Kepler (what revealed the essence of God) was energetic difference, just as it would be for Sadi Carnot and Wiener in the modern scientific tradition centuries later. It is difference that marks the presence of a communicative order of description over and above one that relies solely on descriptions of mechanical and material interaction.

It would be possible, one should imagine, to make explicit the morphology of the concept of energy from Kepler’s time until our own. This morphology would include, naturally enough, Kepler, Galileo, Newton, Carnot, Clausius, Lord Kelvin, Norbert Wiener, and Claude Shannon. It would end (with the proviso that this “is the story so far”) with Bateson, Serres, and Hoffmeyer’s ideas. It is of course far beyond the scope of this chapter to attempt such a description, but this point needs to be stressed: there is a continuity that takes the historical form of the occasional appearance and disappearance of certain ideas that we may call Hermetic. They constitute many-sided forms that have their own objective existence as ideal objects.

Negentropy and the Intelligential World

Serres’s conception of bio-cyberspace as a hierarchy of levels, each of which represents the unconscious of the one anterior to it (a system, therefore, of serial unconsciouses), finds an extension in the work of theoretical physicist Olivier Costa de Beauregard. On his account, there is a negentropic psychic “underside” of the physical universe—the unconscious, as it were, of the pleroma. This parallel universe, coextensive with the Minkowski-Einsteinian space of four dimensions, is a “source of information,” in fact the source of all possible knowledge—an anima mundi, no less. Indeed Costa de Beauregard regards the individual consciousnesses of animals and humans as crystallizations of this hyperconsciousness, just as in the Hermetic view.52 In regard to this theory, Marie-Louise von Franz hypothesizes that the time may not be far off when physics and depth psychology (of the Jungian persuasion) will “join hands” and the full significance of Jung’s alchemical studies will become apparent.53

What does Costa de Beauregard intend by his inadvertent revival of Hermetic ideas? It is clear, not only in Costa de Beauregard’s work but in that of many others as well, that a decided ambiguity surrounds such terms as entropy, probability, and information. I think it fair to say that this ambiguity is keenly exploited by many cybernetic theorists, as well as by scientists whose work relies on the linked concepts of entropy and probability. Shannon’s definition of information as a probabilistic function of a transmission system is the perfect example. Following von Neumann’s advice, he equated this probabilistic function with the thermodynamic concept of entropy. As to the wisdom of this equivalence, von Neumann evidently assured Shannon that “no one knows what entropy is, so in a debate you will always have the advantage.”54
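The formal basis of the equation is simple to state: Shannon’s measure of information and the Gibbs form of thermodynamic entropy (both given here in their standard textbook notation) have the same mathematical shape, differing only by Boltzmann’s constant and the base of the logarithm:

\[
H \;=\; -\sum_i p_i \log_2 p_i \quad\text{(Shannon, in bits)}
\qquad
S \;=\; -\,k_B \sum_i p_i \ln p_i \quad\text{(Gibbs)}
\]

Von Neumann’s quip trades precisely on this purely formal identity, which settles nothing about how the two quantities are to be interpreted.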

The problem with the concept of entropy is that it cannot, on the surface, be an objective quality. Disorder is really in the eye of the beholder—what may seem chaotic to you may not appear to be quite so chaotic to me. A late work by Jackson Pollock exhibits a great deal of “order” to me, but to another it may well appear to be a totally chaotic field of drips, splashes, and blotches. Is entropy then an objective state of the world or a subjective reading of it? Costa de Beauregard’s answer is that entropy, read as an aspect of probability, is both. For him probability “operates as the hinge between matter and mind, where one is knotted to the other, and reacts on the other.”55 It should be remembered that “probability” in Costa de Beauregard’s use of the term is equivalent to Shannon’s characterization of “information” as a probabilistic function. Although expressed in slightly different terms (terms that are, as I read them, conceptually equivalent), Costa de Beauregard’s notion is strikingly similar to that of Bateson: probability (information) operates within the field of difference or the semio-sphere.

Costa de Beauregard further aligns his depiction of a parallel negentropic intelligential world with the apparent “atemporality” of the unconscious. This subliminal world demonstrates characteristics similar to those of the unconscious (incorporating, as it must, the experimental confirmation of Bell’s theorem of non-locality, the “telepathy” theorem of quantum mechanics) in that it operates on the threshold below observable phenomena, that locale where the molecular world of classical physics phases into the subatomic, quantum world. We should note the positioning of this threshold: it demarcates the macroscopic world of “classical” Newtonian physics from the probabilistic, indeterminate, and invisible world of quantum mechanics.*30 One world is aligned with that of macroscopic matter, the other (the quantum world) with mind. Yet Costa de Beauregard breaks the tension between these two worlds when he states that probability—that is, the laws of Shannon’s entropy/information couple†31—is the “hinge” that in itself inclines us to interpret reality either way. Similarly for Bateson the choice is simple: describe the phenomenal in terms of the creatura, in terms of differences/information, and we not only “save the appearances” but gain a more wholistic understanding of the structural integration of mind in nature.

As Bateson demonstrates, information has no energetic component. Yet it can have causal effects, just as forces do at the pleromatic level of description. A scientist committed to an all-embracing mechanistic weltanschauung will say that at some level there must be a causal property to thought (difference) that takes the form of an energy exchange between “material” components. (This, incidentally, is the essence of Landauer’s concept of the thermodynamic limits to computation.) Both Bateson and Hoffmeyer re-vision mind-body dualism by imagining bio-cyberspace as a hierarchy of relationships, and rather than a causal, mechanical process, this constitutes something like a process of symbolic/logical entailment, inasmuch as logic can be loosely defined as a systemic set of relations.
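Landauer’s principle, invoked parenthetically above, is usually stated as a lower bound on the heat that must be dissipated whenever one bit of information is erased at temperature T (the standard formulation, cited here only to unpack that parenthesis):

\[
E_{\min} \;=\; k_B\, T \ln 2 \quad\text{per bit erased}
\]

On the mechanist reading, in other words, even the “differences” of the creatura must at some point pay this small energetic toll.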

For Serres, both the subject and its environment/world are but modes of the one object, itself a moment of density in the entropic stream: “Nothing distinguishes me ontologically from a crystal, a plant, an animal, or the order of the world; we are drifting together toward the noise and the black depths of the universe, and our diverse systemic complexions are flowing up the entropic stream, toward the solar origin, itself adrift.”56

An essential element in Hermetic philosophy, as clearly demonstrated by Kepler, was the realization that everything was interconnected and, to a certain extent (recalling Bateson’s “wherever such a view is appropriate”), penetrable by the coursing energies of all other objects, both sublunary and celestial. In contemporary terms, the Hermeticist saw what we would regard as inanimate objects as possessing “information processing” capabilities: rocks, metals, and precious stones were transmitters of complex celestial influences (recalling Serres’s “diverse systemic complexions”).

The only thing really missing from Bateson, Hoffmeyer, and Serres’s conceptions is what may be described as the essentially “erotic edge” to considerations of the phenomenal world as being mindful and somehow “alive.” This erotic undertow was well acknowledged by many Hermetic thinkers, particularly Ficino and Bruno. We find a trace of this, however, in Vinge’s idea that the mutual interaction of human operators and their cyberspace counterparts (both human and computer) may produce a new besouled human/machine—the Singularity—an entity like Hoffmeyer’s “swarm of swarms.” Vinge’s conception suggests an act of intimacy that is quite out of place in regard to his other much more sober alternatives for the birth of the Singularity. We may interpret this scenario even more hyperbolically perhaps: it presents an “erotic encounter” between the network and its human users. This should be seen as nothing less than an (unintentionally) transgressive metaphor that reveals the dream of the essentially erotic and intimate interconnectivity operating between the Hermetic philosopher and the world.

CONCLUSION

In this chapter I have attempted to examine certain specific ideas produced by both cybernetic theory and its later “offshoot,” artificial-intelligence research. I have endeavored to demonstrate that part of the dream of AI—and specifically Vinge’s conception of the Singularity—seems to exhibit an atavistic yearning for the Hermetic idea of the anima mundi. The transgressivity of Hoffmeyer, Bateson, and Serres’s interpretations of cybernetic theory—by which I mean the fact that they all attempt to erase the “classical” border between “mind” and “matter”—furthermore reflects what appears to be a return to premodern conceptualizations of the micro/macrocosmic couplet. Recognition that, at the very least, similar conceptualizations were fundamental to the Hermetic worldview would perhaps prevent contemporary explorations from being too chauvinistic about their “radical innovations.”

Isaac Newton’s use of the representative theory to underpin the modern scientific project, in which mathematical entities stand in a supposed strict relation to forces and bodies in the phenomenal world, has become the canonical method by which scientific research authenticates itself. Although this manner of theorizing has been universally accepted within the scientific community, the implications of extending its purview into the domain of the mind were staunchly avoided—that is, until the development of the cybernetic and AI projects. No doubt Newton, “the last of the Magi,”57 would have greatly approved of this use of his method, but it is of course far from certain that more recent scientists—further removed from the eclipse of the Hermetic worldview—would similarly approve.

Ernst Mach held to the conviction that all great contributions to science, all great theories, were not so much closer approximations to a final description of reality as they were insights into the psychology of the scientists who produced them. In this view, the history of science is really the history of a succession of individual psychologies writ large, as it were, a notion suggesting perhaps that some scientific theories at least might be considered psychopathological. A worldview that idealizes the “pleromatic” as composed of dead, mindless matter has become reified into a world in which the reality of the death of ecosystems has become all too apparent. One hypothesizes that the thinking of Hoffmeyer, Bateson, and Serres represents a reaction to what they see as the psychopathology inherent in the excesses of modern science. In opposition to that way of thinking, each of the foregoing thinkers has attempted to provide a rational (in the Platonic sense) redescription of the grounds on which future scientific theorizing should develop. This perhaps represents the beginnings of an (unknowingly) rediscovered imaginary, one in which the picture of actants set over and against objects, with its ideal of algorithmic certainty of outcome, gives way to a picture of intersubjective exchanges governed by the mystery of probability, negentropy, and difference.