Søren Brier
Come, let us hasten to a higher plane,
Where dyads tread the fairy fields of Venn,
Their indices bedecked from one to n,
Commingled in an endless Markov chain!
(Lem 1985: 52)
In Stanislaw Lem’s Cyberiad, the cybernetic super-engineer Trurl, robotic inventor and robot builder, constructs a robot-computer that can produce poetry in all its forms. He realizes that in order to produce the first-ever specimen of such a machine, he needs to model the whole of human mythological, social, and cultural history. So, he creates a cybernetic model of the Muse, a Cyberbard that can produce an infinite number of heart-touching poems. Writing day and night, this immense poetic force ends up disturbing social as well as galactic order. The verse above – the first in a mathematical love poem with cybernetic feeling made by Trurl’s electronic bard – is a fitting place to start a discussion of cybernetics and literature.
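The “endless Markov chain” of the epigraph is more than a rhyme: a first-order Markov chain, which chooses each word only from those observed to follow its predecessor, is the simplest mechanical verse-generator. As a minimal, purely illustrative sketch in Python – far below the Cyberbard’s powers, and of course no part of Lem’s text – one might write:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=12):
    """Walk the chain: each next word is drawn only from the successors
    of the current one, with no memory of anything earlier."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

# A toy corpus built from the epigraph itself (any text would do).
corpus = ("come let us hasten to a higher plane "
          "where dyads tread the fairy fields of venn").split()
print(babble(build_chain(corpus), "come"))
```

With so small a corpus the output merely parrots its source; the point is only that the generator has no access to meaning – precisely the gap on which Lem’s fable turns.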
The mathematician Norbert Wiener, who coined the term, described “cybernetics” as the science of control and communication in the animal and the machine (Wiener 1948), a definition taken as foundational by one of cybernetics’ major developers, Ross Ashby (1956). The etymology of cybernetics goes back to the art of steersmanship. In Greek, kybernetes means pilot, steersman, and cybernetics is a theory of the control of the behavior of machines, organisms, and organizations by way of feedback circuits. Most of all, cybernetics studied machines with built-in devices, such as regulators and thermostats, for seeking and maintaining set goals. Thus, fundamentally, the science of cybernetics focuses not on being but on behavior: it does not ask, “what is this thing?” but instead, “what does it do?” Or, “how can we make a thing that does this?” Especially in its initial or first-order form, cybernetics is a transdisciplinary form of engineering thinking. It is not about subjective individuals or any form of individual consciousness, because one of the basic requirements for being an autonomous individual is to be autopoietic. As developed in second-order cybernetics, autopoiesis refers to systems, such as living cells, that are self-referential in that they are self-maintaining: they are themselves the product of their own operation (see Clarke 2008, and his article on Systems Theory, Chapter 19, this volume). In contrast, the machines studied in the first cybernetics are allopoietic, that is, created and made by something else, some other system.
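Wiener’s and Ashby’s behavioral definition can be made concrete with cybernetics’ stock example, the thermostat. The sketch below is a hypothetical illustration of deviation-counteracting (negative) feedback, not anything drawn from Wiener or Ashby themselves: the system repeatedly measures the difference between goal and observed state and acts to reduce it.

```python
def thermostat_step(current_temp, set_point, gain=0.1):
    """Negative feedback: corrective action is proportional to the
    deviation between the goal (set point) and the observed state."""
    error = set_point - current_temp
    return current_temp + gain * error  # counteract the deviation

temp = 15.0
for _ in range(50):
    temp = thermostat_step(temp, set_point=21.0)
print(round(temp, 2))  # has converged close to the goal of 21.0
```

Note that the loop never asks what temperature is, only what to do about its deviation: behavior, not being.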
The combination of a cybernetic machine and an organism, human or otherwise, is called a cyborg, from which term derives the fantastic cybernetic vision of the Borg in Star Trek: The Next Generation (TNG). When Captain Jean-Luc Picard is captured by the Borg, he is transformed into a cyborg. The indeterminate status and behavior of this combination of machine and organism may be why, in all the movies and TV episodes of TNG, Data, an android constructed by the chief cybernetician Dr. Noonien Soong, is such a mystery. Nobody knows exactly what it/he is or how to construct another. The Borg manifests as a juggernaut of cybernetically enhanced humanoid drones of multiple species, an interconnected collective communicatively linked through a subspace domain and integrated into a hive or group mind/ego, a single consciousness occupying many bodies. In this type of organization, inspired by bees and ants, the individual is sacrificed to the social whole, as in a perfect dictatorship. The Borg captures the uneasy social affect in many of the fantastic narratives cybernetics has inspired.
The cyborg fantasy of the Borg combines the autopoietic and sign-producing abilities of living systems with machines’ powers of memory and computation, making all members of the Borg far stronger than humans. Furthermore, as a massive cybernetic and homeostatic system, the Borg is always adapting and consuming new technology and information, making organic agencies into cyborgs with implants, and integrating those cyborgs and their spaceship into a sentient autopoietic system. The Borg travels through space, assimilating into its system those races and technologies that can improve its efficiency and survival: “We are the Borg. You will be assimilated. Resistance is futile!” In “The Best of Both Worlds,” it captures Captain Picard and surgically alters him into Locutus of Borg, a weapon to be used in the battle against the Enterprise. One wonders if his recapture by the Enterprise crew and the operative restoration of his former individual self is as realistic as his submission to the Borg system!
Lem’s genius in the Cyberiad is that, while he sees the immense possibilities of cybernetics, at the same time, with tongue-in-cheek irony, his exquisite science-fiction fables explore its dubious relations to human consciousness. In a fable on the problem of will, “Trurl’s Machine,” a stupid and stubborn robot refuses to learn anything beyond its original program. Unfortunately, that program had some elementary flaws, such as holding that two and two equals seven, and Trurl’s stupid machine prefers to kill the messenger of its faults rather than to correct them. A fundamentalist robot! Isaac Asimov has a comparable story in which robots on a spaceship will not accept that they were created by humans, pointing out how much more perfect they themselves are. The movie I, Robot uses the same kind of irony: the machines realize how irrational human beings can be, and so interpret Asimov’s laws about protecting their creators as a warrant to take power, thus bringing themselves and the whole globe into danger. As too often happens in these science fictions, the robots are given self-conscious minds, without due consideration of the flaws in cybernetic thinking – its blind spots when it comes to matters of will, emotion, the qualia and agency of first-person consciousness.
Cybernetics began in close association with physics, in particular thermodynamics and statistical mechanics, but it depends in no essential way on the laws of physics or on the properties of matter. It gets its transdisciplinary scope by viewing the materiality of a given system as irrelevant to its organizational properties. Rather, it works with those circulating differences and relations that we have come to call information. Cybernetics is concerned with the scientific investigation of all varieties of goal-oriented systemic processes, including such phenomena as regulation, information processing, information storage, adaptation, self-organization, self-reproduction, and strategic behavior. It deals with all forms of behavior insofar as they are regular, determinate, or reproducible.
Thus, at its origin cybernetics lays down its own foundations as a science of self-regulating and goal-seeking systems. Lem underscores this special independence in his Cyberiad again and again by showing how Trurl and his friend and competitor, Klapaucius, are at once inventors and solvers of the problems their inventions create. For instance, they invent for King Krool a prey robot so perfect that neither his predatory Saint Cybernards and cyberman pinschers nor even the king’s high-fidelity cybersteed could follow it! In essence, Trurl and Klapaucius introduced self-organizing principles into their invention and, as such, attempted to move the behavior of robots up to the next step in cybernetic development, namely Heinz von Foerster’s second-order cybernetics, whose goal is not only to observe the behavior of systems but also to observe the way that observing systems observe. Thus the construction recurs on the constructor itself.
Lem’s refined ironic tales, then, play on how cybernetic devices – from thermostats, physiological mechanisms for the regulation of body temperature, and automatic steering devices, to economic and political processes – are goal-seeking systems studied under a general mathematical model of deviation-counteracting feedback networks. Cybernetics is transdisciplinary: it requires some knowledge of neurophysiology, mathematics, philosophy, and psychology, but proposes on this basis a general theory of information processing and decision making based on a computational framework. Laying the foundation for what we now call cognitive science, cybernetics’ algebraic information thinking permeated even its attempts to model linguistics as well as emotions and consciousness.
This algebraic-computational orientation makes it clear why, in The Cyberiad, Klapaucius tests Trurl’s Cyberbard by asking for “a love poem, lyrical, pastoral, and expressed in the language of pure mathematics. Tensor algebra mainly, with a little topology and higher calculus, if need be. But with feeling, you understand, in the cybernetic spirit” (Lem 1985: 51–52). As cybernetics, especially in Wiener’s hands, was also the development of a special interdisciplinary mathematical apparatus, the love poem given by Trurl’s electronic bard, in Michael Kandel’s astonishing translation, provides such a unique ode to mathematical beauty that I have to cite a few more lines.
Come, every frustum longs to be a cone,
And every vector dreams of matrices.
Hark to the gentle gradient of the breeze:
It whispers of a more ergodic zone.
In Riemann, Hilbert or in Banach space
Let superscripts and subscripts go their ways.
(Lem 1985: 52)
Because numerous systems in the living, social, and technological world may be translated into these mathematical and behavioral idioms, cybernetics cuts across many traditional disciplinary boundaries. It developed a metadisciplinary language of information and goal-oriented, self-organized behavior through negative feedback, which works on differences and uses feedback/feed-forward mechanisms to home in on the target. Cybernetic theory was derived to some extent from the new findings in the 1930s and 1940s regarding the role of electric signals in biological systems, including the human nervous system. Wiener (1948) connected information and entropy with organization and therefore evolution. He defined information in terms of probability, measuring the amount of information mathematically as an integral over a probability distribution. Specifically, he modeled information on negentropy, as inspired by Erwin Schrödinger – the amount of entropy a system exports to keep its own entropy low (see Chapter 13 on Information Theory).
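In the discrete case, the Wiener–Shannon measure of the amount of information reduces to a simple sum, H = −Σ pᵢ log₂ pᵢ; Wiener’s continuous version replaces the sum with an integral over a probability density. A minimal sketch of the discrete version, offered only to illustrate the formula:

```python
import math

def entropy(probs):
    """Shannon/Wiener entropy in bits: H = -sum(p * log2(p)).
    Low entropy = high order (negentropy), i.e. more 'organization'."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: maximal uncertainty over two outcomes
print(entropy([0.99, 0.01]))  # ~0.08 bits: a highly ordered, low-entropy source
```

The link to negentropy is visible here: the more skewed, that is, the more ordered, the distribution, the lower its entropy.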
Wiener’s view of information is thus that it contains some form of structure, order, or pattern, and many researchers have attempted to build a concept of meaning on this foundation. But such a concept of meaning does not have much to do with how living systems and human beings actually operate semiotically in cognition and communication. Lem underlines this gap when his hero, Trurl, realizes that, to be able to make poetry that actually touches human emotions, you have to possess some knowledge of the history of human culture, some sense of humanity’s world horizon. In the most humorous way, Trurl works hard to squeeze the whole history of biological evolution and human development into logical programming and hardware construction. After that comes, of course, a little programming of semantics, grammar, pragmatics, and all the forms of poetry previously developed, plus an anti-cliché program – no problem for a great cybernetician! Clearly, Lem is mocking the technological hubris of some cybernetically inspired engineers.
Star Trek: TNG repeatedly brings up Data’s inability to understand jokes, including practical jokes. Finally, he gets an “emotion chip.” But this throws him into a chaotic reality, a life-world of feelings he cannot control, and in one episode he actually requests to have it removed. Trurl’s Cyberbard, on the other hand, learns to manipulate human feelings by the composition of words and meanings. Thus it is able to prevent its own destruction by the policemen and soldiers sent to pull its plug, overwhelming them with sentimental emotions and making them unable to do their grim deed. But of course the great question underlying Lem’s short story is whether it is possible to teach a machine that lacks emotions itself to improvise the manipulation of human feelings. Even psychopaths, in order to manipulate others, operate from a minimum of feelings themselves. However, the therapeutic AI program Eliza and Paro, the robotic baby seal used to comfort elderly patients with dementia, seem to indicate that such interactions are possible to a certain extent. Taking cybermanipulation to the next level, the Cyberiad presents a “femfatalatron, an erotifying device, stochastic, elastic and orgiastic, and with plenty of feedback” (Lem 1985: 108). While, in Lem, the description is thick with irony, sex with female androids is featured in both Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968) and its movie adaptation, Blade Runner (1982). Today a whole industry is working on developing cybernetic sex machines.
Inspired by the original cybernetic work on neuropsychology and the brain as a logical programmer, the theory of a universal “language of mind” has been developed. Such a language of mind, it is hypothesized, would be what the brain computes in and then later translates into humans’ culturally influenced natural languages. This seems to be what the TNG writers were inspired by when they imagined the Borg using an alphanumeric code as their written language – circular symbols with geometric shapes cut out of them, written in horizontal and vertical lines – for encoding and transmitting data throughout the Collective. Some parts of cybernetic cognitive science take it to be their role to unravel this brain-internal language in order to build artificial-intelligence and automatic decision-making systems, as well as an automatic universal language translator. One prototype is now called Babelfish, a term coined by Douglas Adams in The Hitchhiker’s Guide to the Galaxy. There the Babelfish is a small, fishlike creature from another galaxy which you insert into your ear, with the result that you understand all other languages as if they were your own, no matter from which galaxy in the universe they come. These days, however, it is the name of a free automatic translation program on the Net, amazing in itself, but also, in its deficiencies, demonstrating how far machine language processing falls short of human meaning and understanding. The Babelfish has not yet reached the level of Trurl’s Cyberbard.
The ideas that our brains are organic hardware or wetware – that our intelligence and cognitive abilities, as well as our personalities, are the informational programs that run on that wetware – have given rise to the vivid fantasy of transferring the human self out of its body and into other media. Most famous is William Gibson’s groundbreaking cyberpunk trilogy Neuromancer, Count Zero, and Mona Lisa Overdrive (the so-called Sprawl trilogy), starting in 1984, which promotes the idea of “jacking in” to the Net. As you fasten the electrodes of your Web-connected computer directly onto brain switches in your skull, your mind travels into the three-dimensional informational world of cyberspace. In this virtual reality, the mind of a deceased important person – for instance, the previous leader of a huge international concern – is present, together with huge artificial intelligences that evolve and expand their domination of this cyber-world, intricately connected to our material world through all the computers linked to real-world machines. Here is a new form of earthly paradise, since you can live forever on the Net without your body. This is in contrast to the Matrix movies: to be present in the Matrix, your mind has to have its base in a sleeping body kept alive by machines.
The Belgian comic series Yoko Tsuno plays, in several volumes, with the idea of transferring the human mind to robotic systems. The famous Japanese anime film Ghost in the Shell further explores this gray territory between living systems and independent artificial intelligences similar to human subjects but without a body. But most surprisingly, in what one supposes to be his alternative to those grand narratives of modernity he had previously mocked, Jean-François Lyotard, in his Postmodern Fables (1999), imagines the human species escaping when the sun burns out, departing from the Earth in space rockets as pure informational intelligences.
As Winograd and Flores (1987) pointed out, cybernetic developers in the 1980s expected to have intelligent robots managing human social tasks within a decade. The body was considered a machine in which the brain functioned by algorithmic computation; the modern computer was seen as a perfectly rational being, and all we needed was to develop it into a still more powerful machine with more computing power. The funniest, most sarcastic, and most penetrating critical evaluation of AI’s relation to the human world of existential meaning ever written, in my opinion, is Douglas Adams’s The Hitchhiker’s Guide to the Galaxy. After seven and a half million years’ cybermeditating on “the ultimate question of the meaning of life, the universe and everything,” a supercomputer by the name of “Deep Thought” comes up with the answer: “42” (Adams 1996: 120). Computers, having no sense of human existential meaning, function in a mathematical universe.
Filled with scientific knowledge about the universe on one hand, and reflections – in the form of humorous events – on the role of emotion and meaning in human rationality on the other, this unique book purveys humorous and sarcastic scenarios about what could happen if developments in technology made it possible for us to install these “features” in computers and robots. We have seen similar discussions in Star Trek: TNG around the android Data, who receives an “emotion chip.” A number of scenarios play out the attempts of this rational android to deal with these new sensual aspects of reality – including the affect of humor. A comparable theme takes a different turn in Dick (1968) and Blade Runner (1982). The novel is deeply psychological, asking questions such as: What is empathy and human emotion? Is consciousness an emergent quality? What defines intelligence? The movie is a little cruder, of course, but its focus is the question of when the artificial life of androids stops being merely mechanical and becomes the feeling and willing of subjects with existential and aesthetic needs.
Dick’s novel imagines that our culture has developed a cybernetic emotion-regulating machine capable of securing its user a good mood all the time. But people start to get bored with happiness. To render happiness more enjoyable, they begin to play with inducing depressive states for longer and longer periods – some, unfortunately, for so long that they commit suicide. The Star Trek writers are aware of the related problem of the need for experiential embodiment in order to produce and sustain consciousness. In the Star Trek movie First Contact, the Borg Queen tries to persuade Data to join the Borg by tempting him with the addition of living flesh to his previously insensitive android body, thus transferring the feeling of a living body to his “brain.” The experience is as overwhelming for him as when he had the emotion chip inserted. But after all, the question ignored by these fantasies is how he could have experiences of any sort with a computer for a brain. We do not even know how living brains produce experience, nor whether that is what they do.
In The Hitchhiker’s Guide, Adams’s deeply depressed super-robot, Marvin, deals in a sarcastic way with unrealistic techno-optimist expectations by showing some of the absurd consequences of emotions in supercomputer robots. Marvin can cause any cybernetic control system to break down completely just by connecting to it and sharing his melancholy view of the world and of his own situation, and he actually saves the book’s young hero once by doing just that. But I also cherish the part where Marvin causes the cybernetically regulated high-speed elevators of 100-storey buildings to develop anxiety neuroses and hide in the cellar, not daring to go up into the heights!
In the novel This Perfect Day (1970), Ira Levin plays out the question of the superiority of artificial to human intelligence, and the consequences of improving nature and culture through the control of a central computer. In contrast, Adams’s Hitchhiker’s Guide portrays the ruler of the universe as a radical skeptical antirealist who disbelieves in the control of complex systems. Where Levin’s novel is a heroic science fiction of rebellion against a technocratic dystopia, Adams’s ruler of the universe is located in a little shack on a remote, insignificant planet and, when asked if he rules the universe, says, “I try not to.” This attitude of relinquishing control for autonomy is more in line with the development of second-order cybernetics and autopoiesis theory, constructivist developments that followed classical cybernetics and encompassed it as only a special case of systems theory. Where Levin sees humanity making a violent revolution against the all-benevolent but tyrannical central intelligence that rules the length and form of people’s lives based on an overall computation of what the system can sustain, Adams makes people complain that there is not a more consistent homeostasis and cybernetic equilibrium in the government of the world.
In the 1980s, building on cybernetic foundations, new theories of dynamical systems – chaos theory with its strange attractors, fractal geometries, and self-similar iterations – and complex adaptive systems (CAS) were developed and introduced. Popularizing some of these developments, Michael Crichton’s novel Jurassic Park – and to a lesser degree the movie made from it – dramatized the consequences of dispensing with cybernetic thinking in the effort to control complex non-linear systems by computers. Reflecting the contrast between classical deterministic science and the new cybernetics of complexity and unpredictability, Jurassic Park also addresses the limits of scientific knowledge and focuses on how the complexity and self-organization paradigms change our views on scientific prediction and control.
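Chaos theory’s central lesson, sensitive dependence on initial conditions, can be shown with the logistic map, a textbook self-similar iteration; the sketch below is a generic illustration rather than any model from Crichton’s novel. Two trajectories that start almost identically diverge until prediction is worthless:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # two nearly identical initial conditions
for _ in range(25):
    a, b = logistic(a), logistic(b)
print(abs(a - b))  # the millionth-part difference has grown to order one
```

This is Malcolm’s point in the novel: a deterministic rule plus finite measurement precision still yields unpredictable behavior.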
Set on an island, Jurassic Park, which exhibits live dinosaurs grown from prehistoric remnants of their genes, illustrates the problems bound up with the attempt to gain full control of an ecosystem. Controlled by a huge computer system, the park should be perfectly safe, but little by little the irregularities and inadequacies of the system are revealed, until its final collapse and catastrophe. It is an extremely powerful renunciation of the deterministic control paradigm: simplicity, linearity, determinism, and control are contrasted with a new realization that nature is non-linear, fractal, and complex. The chaos researcher Malcolm plays an important role as the all-knowing reporter of complexity and chaos in this most dramatic setting of wild dinosaurs hunting children. His experience urges us to understand that we are at the end of an epoch, and so we had better hurry up and learn some new tricks if we wish to survive. In a later novel, Prey, Crichton imagines what can happen when a self-organizing, genetically engineered swarm intelligence is combined with communicating nano-computers. A new, lethal, infectious super-organism emerges, completely out of control, competing with an initially benign one that turns out, in the long run, to be much more dangerous. The novel gives a superb illustration of the discrepancy between current cybernetic knowledge of complexity and self-organization and the failure of control engineers to grasp the dynamics of chaotic systems.
Within the field of cybernetic anthropology, this tension was especially expressed in Gregory Bateson’s Steps to an Ecology of Mind (1972). Here Bateson described modern technoscience as a culture of hubris. His major project was to explain the relation of mind and nature – or more precisely, mind in nature – from a modern scientific basis, avoiding the metaphysical dualism of Descartes as well as the mechanism of Laplace. Bateson provided a new delimitation of the concept of information: “In fact, what we mean by information – the elementary unit of information – is a difference which makes a difference” (Bateson 1972: 453).
For Bateson, even when a system does not include living organisms, “The elementary cybernetic system with its messages in circuit is, in fact, the simplest unit of mind” (459). Matter and energy are already imbued with informational circular processes of differences. Mind is synonymous with a cybernetic system: a total, self-correcting unit that processes information. Mind is immanent in this wholeness, because mind is essentially the informational and logical “pattern that connects” through a virtual recursive dynamics of differences in circuit. He sees life and mind as coexisting in an ecological and evolutionary dynamic that integrates the whole biosphere. In sum, Bateson explained mind as a function of complex cybernetic organization and incorporated his concept of information into a universal cybernetic philosophy. Bateson believed that his version of cybernetics provides an understanding of mind that is neither subjectively idealistic nor mechanically materialistic.
This cybernetic mind also rules our emotions as a relational logic. It shows up in our perception as aesthetics. It is the learning pattern in evolution. Wisdom is to know and live the pattern of evolutionary and ecological wholeness in cultures as well as in individual awareness. The pattern that connects can be understood as a metaphor for what many nature-religious or spiritual types of ecological thinking see as the sacred or the immanent divine. In Bateson a holistic cybernetic science verges on the sacred:
The cybernetic epistemology which I have offered you would suggest a new approach. The individual mind is immanent but not only in the body. It is immanent also in the pathways and messages outside the body; and there is a larger Mind of which the individual mind is only a sub-system. This larger Mind is comparable to God and is perhaps what some people mean by “God,” but it is still immanent in the total interconnected social system and planetary ecology.
(Bateson 1972: 461)
In The Cyberiad Lem puts Bateson’s vision into literary praxis in “The First Sally, or The Trap of Gargantius.” Trurl and Klapaucius land on a planet with two countries planning to wage war against each other; they go to opposite sides of the conflict. Anticipating the consequences of Bateson’s theory – that all systems in which bits of information or transforms of difference circulate have cybernetic minds, and that the more systems are coupled together, the greater the mind will be – Trurl and Klapaucius each suggest to the kings and generals on their side to connect their soldiers into closely organized systems by way of specially constructed armor. When they reach the level of companies, however, the greater mind that is created starts to absorb the soldiers’ individual minds:
they took to chatting, and later, through the open windows of the barracks one could hear voices booming in chorus, disputing such matters as absolute truth, analytic versus synthetic a priori propositions, and the Thing-in-itself, for their collective minds had already attained that level.
(Lem 1985: 40)
When they reached the level of battalions, they developed a higher aesthetic sense, and some became sidetracked from warfare into chasing after butterflies: “Among the artillery corps the weightiest metaphysical questions were considered, and, with an absentmindedness characteristic of great genius, these large units lost their weapons, misplaced their equipment and completely forgot that there was a war on” (1985: 41). When the officers attempted to bring the soldiers back to common sense, they too got absorbed in the collective mind or corporate identity and forgot their original mission. “Consciousness, it seemed, formed a deadly trap, in that one could enter it, but never leave” (41).
Finally, on either side totally united, two armies with one mind apiece attack each other, but as soon as they touch are brought into a single system:
There was absolute silence. That famous culmination of consciousness that the great Gargantius had predicted with mathematical precision was now reached on both sides. For beyond a certain point militarism, a purely local phenomenon, becomes civil, and this is because the Cosmos Itself is by nature wholly civilian, and indeed, the minds of both armies had assumed truly cosmic proportions! … Both armies went off hand in hand, picking flowers beneath the fluffy white clouds, on the field of the battle that never was.
(Lem 1985: 42)
I do not think that cybernetic theory, be it of first or second order, could imagine a more beautiful outcome.
Adams, D. (1996) The Ultimate Hitchhiker’s Guide to the Galaxy, New York: Wing Books.
Ashby, W.R. (1956) An Introduction to Cybernetics, London: Chapman & Hall.
Bateson, G. (1972) Steps to an Ecology of Mind, New York: Ballantine.
Blade Runner (1982) dir. Ridley Scott, Warner Brothers.
Clarke, B. (2008) Posthuman Metamorphosis: narrative and systems, New York: Fordham University Press.
Crichton, M. (1990) Jurassic Park, New York: Random House.
——(1995) The Lost World, New York: Random House.
——(2002) Prey, New York: HarperCollins.
Dick, P.K. (1968) Do Androids Dream of Electric Sheep?, New York: Del Rey, 1996.
Gibson, W. (1984) Neuromancer, New York: Ace Books.
——(1986) Count Zero, London: Victor Gollancz.
——(1988) Mona Lisa Overdrive, London: Victor Gollancz.
Lem, S. (1985) The Cyberiad: fables for the cybernetic age, trans. M. Kandel, New York: Harvest.
Levin, I. (1970) This Perfect Day, New York: Random House.
Lyotard, J.-F. (1999) Postmodern Fables, trans. G. Van Den Abbeele, Minneapolis: University of Minnesota Press.
Star Trek: The Next Generation (1987–94) prod. G. Roddenberry, Paramount Television.
Wiener, N. (1948) Cybernetics: the science of control and communication in the animal and the machine, Cambridge, Mass.: MIT Press, 1961.
Winograd, T. and Flores, F. (1987) Understanding Computers and Cognition: a new foundation for design, Reading, Mass.: Addison-Wesley.