VI

PHILONOUS Hello, Hylas. A lovely day, isn’t it?

HYLAS Oh, it’s you. I was reading and did not hear you approach.

PHILONOUS What book has managed to engross you like that?

HYLAS Notes from Underground by Dostoevsky.

PHILONOUS Just the right reading for someone who is immersing himself in the mysteries of cybernetics.

HYLAS You think? It has in fact raised many doubts in my mind, and I would like to share them with you. Those complex networks of yours obviously are intelligent, can reason, possess goals and freedom of choice, yet what do they have to do with the world of human desires? We human beings, created, as we maintain, by evolution, which has built into us many safeguards and skills, should avoid suffering, use induction, pursue progress, and exercise our abilities to the maximum—but the reality is much more complicated. We deceive ourselves. We take a perverse pleasure in suffering, even our own. We like to destroy. We make thousands of excuses; are full of tricks, locked doors, secret chambers, and odd corners; fall victim to mind-numbing addictions; become slaves of desire, emptiness, and the dark craving for significance, pretense, and dominance. Can you draw a schematic of a self-deceptive network? A network that sacrifices others to the Moloch of “obligation”? A network that finds pleasure in the inflicting of pain? My constructor friend, can there be networks that are blind and fanatic, networks whose magnificent, complicated structure has no other purpose but to abuse and defile itself and the world? If cybernetics cannot help us here, we had better dismiss it.

PHILONOUS I see, Hylas, that you have indeed been reading Dostoevsky. I understand that your anger is mixed with sorrow, that you are fighting the despair that tends to overcome us when we ponder the human species. But cybernetics is not the right target for your reproach. It is strange to use the language of physics or engineering to discuss the tragic or noble aspects of human existence and psyche. This has not been done before and may well border on the ridiculous. But I rise to the challenge.

Today we were to talk about the kind of immortality that cybernetics might provide in the future. It is not an immortality that we would want; it may be grotesque, awkward, and disagreeable, but there may be no other. First, let us return to the networks. Consciousness, as you know, is only one of many processes in the brain. What we call conscious is injected by internal feedback into the network’s special circuits. But the contents of what becomes conscious only partly depend on the will, that is, on an arbitrarily selected system of preferences, and both consciousness and the preference system are carried by the flow of all cerebral processes, as the Earth’s globe is supported by Atlas’s shoulders.1 But not all information is admitted to the gate of consciousness. Some processes are fished from their ocean, others not. Some achieve more than a fair representation in consciousness (with respect to their share in the total of the network processes), others are inhibited and suppressed. Some preference systems can be modified during their admission into consciousness. It is easy to change a food preference when we are given information that the food is harmful; it is not easy to part with a worldview when it is shattered before our eyes by a proof of its logical inconsistency. A network can “fake”—not only its own contents and proficiency but also its relation to the world. An appropriately “raised” automaton will manifest “irrational faith” and “superstition,” will perform symbolic gestures to ward off “bad luck” when it sees a black cat, and will engage in mystical and metaphysical disputes, for its behavior is determined by its past. If we place a twentieth-century newborn in the middle of a Neanderthal tribe, the child will grow up not into an engineer or a pilot but a mammoth-hunting raw-meat eater. So much on the subject of a network’s “self-deception.”

As for “perversity,” this feature may be explained to some degree by the pursuit of a surrogate goal, which we already mentioned. Also, it is the price paid for having great ability and talent. Regardless of the number of safeguards, a network with sufficiently high complexity is subject to “deviations” with varying mechanisms. A human being tends to revolt against the society that formed it. Destructiveness, cruelty, masochism, and other vices were not “planned” by evolution, but neither was our love of beauty, music, and art. Keep in mind that in networks there is nothing but co-oscillating sets of processes that reflect and interpret the world. Certain processes or certain frequencies of biochemical transformations, whose task is, say, to look for congruence between shapes, can unexpectedly, in response to some particular signals, undergo a change which causes a sudden drop in a network’s internal imbalance. From the cybernetic perspective, bliss, peace, and satisfaction—all those things that art provides—represent a lowering of this imbalance in the course of processes related to the inflow of such information as music or the sight of mountains in the snow. Unfortunately, other combinations are possible too: a network may also create configurations in which the decrease in imbalance is accompanied, for example, by killing. Rashevsky mathematically predicted which geometric shapes a human observer would consider beautiful.2 One might design a system of processes, a formal network connectivity pattern, that leads to pleasure derived from destruction. That design would be an explanation, not a justification, of course, because, as we mentioned, a network is basically free in its behavior. I have responded to you not as an engineer but as a very inexperienced apprentice of network science who is no match for Dostoevsky either in language or the ability to make an impression.

HYLAS You are right that we should not blame cybernetics, and it should not be a target of our reproaches (if we should have any in the first place). But what of the prospect of achieving immortality?

PHILONOUS Well, the continuation of conscious existence, you already know, is inseparable from the processes occurring in the network. Therefore, it would be impossible to “isolate someone’s sorrow” so that we could put it into a glass jar—this is as impossible today as in the most distant future, because the feeling of sadness results from a constellation of processes taking place in a given system, and to obtain it we would need to create the entire system. Please bear in mind that the system can be built from any material and its energy transformations can occur over a much wider range of temperatures than those in the human brain. We will seek a solution to the problem of continuation by going through a series of stages or consecutive experiments. In the first experiment, we surgically connect the peripheral nerves of two people. This has already been done on lower animals. It will enable one person to experience what another’s sensory organs perceive. So one person will see through the eyes of another if the peripheral part of the optic nerves of the first person is connected to the afferent nerves of the second. The next experiment, far more difficult to realize, is joining the neural paths of two brains through a link, which can be either biological (a bridge of living nerve fibers) or any device that can collect the stimuli running through one brain and pass them on to the respective neural paths of the other.

HYLAS Are you sure that even if this experiment were successful, the other person would make any sense of what he feels? I am afraid that the result might be an impression of total chaos and confusion.

PHILONOUS You are absolutely right. Specific stimuli have a “meaning” only for a given network and even there only for the parts of the network to which they are addressed. Simply injecting a random train of stimuli from one brain into another would surely result in “mental cacophony.” This is one of the biggest obstacles on the path to the functional joining of two brains. Yet a brain can cope with procedures that are much more drastic, such as the excision of entire slabs of the cortex or even of a whole hemisphere. This kind of brutal surgery does not inevitably result in the breakdown of mental functions: the brain’s ability to restore them, even in networks that have been significantly damaged, is enormous. No doubt experiments of the kind I am describing will be cautious for a long time—they will be performed on animals, whose behavior and reactions after the joining will be carefully monitored. But since there are no fundamental obstacles of principle in sight, success will eventually come.

Initially, two joined brains (joined in one or more neural paths, subcortical associative bundles, etc.) will only interfere with each other. But observations that have been made in neurological clinics point to the next stages of our experiment. We know the symptoms of massive brain damage. In the majority of cases, even the most severe perturbations recede with time and are compensated for, provided the irreplaceable regions of the cortex remain intact. Functional recovery sometimes happens spontaneously, but more often it occurs after long, conscious efforts under the guidance of expert trainers. The damaged brain replaces the lost functions by repurposing its other parts. Functions lost due to damage in one region of the cortex are taken over by cortical regions that previously had little or no role in them. For example, after a loss of muscle proprioception, a person loses the feedback informing him about the position of his limbs, which renders movement, especially walking, impossible. But once he learns how to replace the muscle proprioception with visual cues, far-reaching restitution of kinetic ability ensues.

A nice example of a more subtle transfer to a new mechanism is that of a person with brain damage, exhibiting symptoms of motor aphasia, who could not pronounce simple words, like “neighbor”: having articulated the “neigh,” he could not automatically block the innervation of the muscles of speech and started perseverating, mechanically repeating the syllable “neigh.” What was for him absolutely impossible when he was supposed to say the word that denotes a person living next door became easy after he was instructed to think first of the sound of a horse and then of the sound of a sheep (“neigh-baa”). The problem was solved because different network mechanisms, serving different purposes, had been activated. Damage to large parts of the brain causes impairment, helplessness, and confusion in a person, but in time a lost function can be learned from scratch, and the symptoms diminish or disappear. Therefore, we can expect that if a brain mechanism is not lost but, on the contrary, added—by attaching one brain to another—after the initial chaos a new modus operandi will develop, coordination of processes will emerge through adaptive learning, and after some time the complete functional union of the two brains will be achieved. Obviously success will depend on what is connected to what. Joining lower-level parts of a network, those that only transmit information from the sensory organs (e.g., the optic nerve fibers with radiatio optica), will cause less disturbance than joining higher-level systems. Joining systems at the highest level of organization, those that integrate and form consciousness, will elicit the greatest and longest-lasting disruption (perhaps even madness), because each of the joined parts operates with its own unique “coding method,” in which different frequencies correspond to different symbols, different processes influence one another in different ways, and information synthesis is accomplished by different means.
Even so, I believe that a unified functioning of joined brains will be possible. It goes without saying that we should join only equivalent systems, the anatomical and physiological units (fibers, paths, brain regions) that correspond with each other. And we should avoid joining just one part of a brain to an entire other, because then, after the initial period of mutual disturbance, the whole would functionally dominate the part. But when we sever the great commissure in both brains and join left hemisphere with left and right hemisphere with right, we can expect that both brains will eventually fuse into a new functional unit. This functional union will be based on emotions and experiences that we cannot imagine, since the subjective perception will be that of a functionally single brain having two separate bodies joined only by a bridge that carries the nerve impulses between them.

Consequently, “plugging into someone else’s consciousness” with the aim of subjectively and directly observing its processes is impossible, because the plugging itself at first severely disrupts consciousness (in both the “plugger” and the “pluggee”), and later, when mutual adaptation has taken place, a unified consciousness arises that is not a mechanical sum of two parts but an entirely new functional unit. Direct observation is therefore out of the question; the only possibility is a “participation” in the other consciousness by becoming its “functional part.” It follows that the successful functional union of two brains will amount to an end of their previous individual existences: consciousnesses A and B disappear, and the new AB is qualitatively different from either. This rather gloomy statement forewarns us that some kind or some degree of “personality destruction” cannot be avoided here. And we should realize that this applies to the reverse procedure too: renewed separation of the joined brains back into two would mean the end of that newly emerged functional personality. It would entail a new period of disturbances and the subsequent phase of learning or rather “unlearning” of what had been gained in the joining; the two separate brains resulting from this complicated and risky operation would most probably be different from the original brains A and B, because the intranetwork changes and in-depth process reconstructions could not be mechanically reversed: the new recoveries would take place through internal modifications, made now for a second time, which is why we would end up with brains Ax and By, not the original A and B.

The best chance of success should be expected in the case of a unification undertaken in earlier developmental stages, when the processes of network formation are still under way and the plasticity and adaptive abilities of the cortex are at their peak. The brains of children would be the easiest to join, and most definitely the brains of fetuses. The first experiments, no doubt, will be conducted on monkey fetuses.

HYLAS How macabre! But what would be the purpose of such terrifying operations which bring us no gain? To produce monsters? Or to prove the impossibility of the “direct connection into someone else’s brain”? And what does this have to do with “cybernetic resurrection”?

PHILONOUS The proof of impossibility that we obtained was incidental, not the purpose of my argument. The goal we are after is to go beyond the individual boundary of life. You will understand this when I tell you that the next, crucial step is to graft or transfer a human mind onto a brain prosthesis.

HYLAS Oh my! How would that work?

PHILONOUS Essentially we connect a living brain, that is, a neuronal network, to a network of a different kind—electronic (or electrochemical). Obviously people will first need to learn how to construct networks with a complexity on the order of 10 billion functional elements, which is that of the human brain. A general theory of feedback networks will facilitate this. The grafting itself would be done in a great many consecutive stages.

HYLAS Why?

PHILONOUS Because connecting a brain to an electronic network at once would lead to a complete collapse of its processes. The primary circuits of the neuronal network must first be equipped with “shunts,” because each of them must be represented in a corresponding circuit in the prosthesis. Attaching all the shunts to all the circuits at the same time would perturb the entire network, and the consequences could be fatal. The brain’s network is a closed and integrated functional unit; opening it to an outflow of impulses to another, empty network would be equivalent to shorting it out. I may be overstating the danger, but in an operation of this kind caution is wise. Because the “personality” of the human brain must remain intact, a prudent procedure will be to connect the neuronal network to the prosthesis step by step, region by region, so that the living brain can “functionally absorb” or “assimilate” the electronic network. The objective is for the attached network to take over a significant part of the mental processes in the living brain. The next stage, once that has happened, is to gradually reduce the neuronal network. We are not destroying it but “unplugging” it, as is done, for example, in a lobotomy, where the fibers connecting the frontal lobes with the rest of the brain are severed. If we do this in sufficiently small steps, sequentially unplugging only small regions of the neuronal network and taking care not to act prematurely, that is, before the prosthesis network has taken over the respective function, our functional unit, the combined neuro-electronic network, will maintain its functions without any significant disturbance, while those functions will gradually disappear from the neuronal side. Eventually, when the neuronal network has been dispossessed, the electronic network will carry the full burden of mental processes, and we will have transferred a human personality into our prosthesis.
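The staged procedure Philonous describes amounts to a simple invariant: never unplug a neuronal region before the prosthesis has taken over its function. A toy Python sketch makes the invariant explicit (the region names are invented for illustration; nothing here is a claim about real neurophysiology):

```python
# Toy model of the staged transfer: functions move to the prosthesis one
# region at a time, and a neuronal region is unplugged only after its
# function has been taken over, so the combined unit never loses a function.
regions = ["vision", "speech", "memory", "motor"]  # hypothetical labels
brain = set(regions)   # functions still carried by the neuronal network
prosthesis = set()     # functions assumed by the electronic network

for region in regions:
    prosthesis.add(region)       # shunt attached: prosthesis absorbs the function
    assert region in prosthesis  # never unplug before the takeover
    brain.discard(region)        # only now is the neuronal region unplugged
    # at every stage, the combined unit still covers every function
    assert brain | prosthesis == set(regions)

print(sorted(prosthesis))  # all functions now carried by the prosthesis
print(brain)               # the neuronal network has been dispossessed
```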
The electronic network will now contain all the memories, preference systems, impulse traffic rules, and internal feedbacks that previously constituted the personality of the living brain. This “electronic graft” of a living consciousness will be able to exist for an arbitrarily long period, as the material of the prosthesis is thousands of times more durable than the substance of a living brain. Also, any parts that wear out can be replaced. This is the prospect of “eternal life” in electronic or electrochemical brain prostheses.

HYLAS Hold on. What about the body, the living organism to which the living brain “belonged”?

PHILONOUS The problem is significant but not on the technological level: there may be moral opposition to the next, and last, procedure. Having replaced the brain, we need to replace the body too . . .

HYLAS I see. A prosthesis again?

PHILONOUS To secure longevity, it seems unavoidable.

HYLAS So the “immortality” that you offer means transferring a person’s mind into an apparatus of dead metal? If I were to take your proposition seriously just for a second (which surely does not come easily), I would never agree to that. To exist forever in the form of a thinking metal cupboard? Maybe you’re joking, Philonous.

PHILONOUS I am rarely more serious than I am now, my friend. The totality of mental processes can be excised, extracted, separated from the short-lived, impermanent living body only through its slow transfer to another substrate that will endure.

HYLAS All right, if we put aside for a moment the moral objections here, what is the guarantee that a mind transferred from a living neuronal network to a bunch of metal wires will not be deformed, mutilated, and dehumanized? Can this be considered with any seriousness at all? The prospect is ridiculous, insane—a world in which people are replaced by metal boxes equipped with electronic sensory organs . . .

PHILONOUS You were supposed to withhold the emotional judgment of the issue for a moment, if I understood correctly. My task was to show you the only real, or at least probable (as of today), path to immortality in the future, not to make value judgments about that path.

HYLAS Fine. Still, what is the guarantee that this procedure, even when done as gradually as you say, will not damage the living brain (wires stuck into its living tissue?) and turn it into something nonhuman?

PHILONOUS The procedure need not be bloody at all. Replacing the 10 billion neurons of the cerebral cortex with vacuum tubes is impossible, of course. Even with transistors, solid-state devices, which are 90 percent smaller and 90 percent more efficient in energy consumption, it would be impossible. To support the operation of an apparatus equivalent to a brain, about 100 million watts would be needed, whereas a living brain uses barely 100 watts and thus is a million times more efficient—as well as almost a million times smaller than a hypothetical “solid-state brain.” On the other hand, owing to the enormous difference in the speed of signal propagation between the electrical and nerve impulses, the thought processes in a crystalline brain would be about 100,000 times faster.
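The ratio Philonous quotes can be checked directly; this back-of-envelope Python sketch merely recomputes it from the dialogue’s own figures (the wattages are the text’s estimates, not measurements):

```python
# Figures quoted in the dialogue: a hypothetical "solid-state brain" would
# draw about 100 million watts, while a living brain uses about 100 watts.
solid_state_brain_watts = 100_000_000
living_brain_watts = 100

efficiency_ratio = solid_state_brain_watts / living_brain_watts
print(f"{efficiency_ratio:,.0f}x")  # the living brain is a million times more efficient
```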

HYLAS Are you saying that during the “grafting” operation a person would be chained for years to a machine the size of a building?

PHILONOUS Von Neumann has calculated that in theory an artificial brain could use 100 billion times less power than it does now. The first step on the path to improving its efficiency, from a vacuum tube to a transistor, has already been made. This step will undoubtedly be followed by others. An artificial brain of the future will certainly be smaller and more efficient; the theoretical limit even allows for artificial brains that are hundreds of times smaller than the human brain, though possessing an equivalent number of functional elements. Science, then, considers it possible to bound “Hamlet’s personality” in a nutshell.
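Taking von Neumann’s theoretical factor at face value, the projected power draw is a single division; the inputs below are the dialogue’s own figures, and the milliwatt result is simply their quotient:

```python
# The dialogue's vacuum-tube estimate (~100 MW) divided by von Neumann's
# theoretical improvement factor (100 billion) gives the projected draw.
current_estimate_watts = 1e8
improvement_factor = 1e11

projected_watts = current_estimate_watts / improvement_factor
print(projected_watts, "W")  # about one milliwatt
```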

HYLAS But what about that awful “grafting” procedure itself? It sounds like vivisection.

PHILONOUS Today it is thought that the number of central neuronal groups, that is, closed circuits in the brain that play an essential role in the emergence of consciousness, does not exceed 10,000. Each group contains a number of circuits, closed loops of impulse circulation, which were discovered by the brilliant researcher Lorente de Nó.3 It is entirely possible that the functional joining of a neuronal network with a nonneuronal network can be accomplished without the subject’s discomfort. Keep in mind that such an operation will not be available for another thousand years; by then, medicine, neurophysiology, and neurosurgery will have the means at their disposal. Also, this operation is in one respect not that different from what occurs in a living brain all the time: its building material is regularly replaced through metabolism. Except that our replacement of the material substrate is much more radical—from protein-based to nonproteinaceous. In any case, continuity and integrity of the transferred processes should be preserved in every step of the transfer.

HYLAS Even if everything goes smoothly, the idea of “thinking metal boxes” as the next stage of human development is unacceptable to me. But we might avoid this macabre vision by transferring the mental processes of one living brain to another, equally neuronal, protein-based, and alive, only created synthetically. What do you think of that?

PHILONOUS I see no impossibility in principle there, but, paradoxical as this may sound, it would be much more difficult to do than grafting a mind onto a nonliving prosthesis. Constructing an entirely passive receptacle without any trace of memory and “personality disposition” would be straightforward compared to creating an artificial, fully developed, alive, but at the same time “empty” brain. And there is another important issue: a person’s new living brain would begin to experience, soon after the operation, various ailments and defects, and would quickly come to the end of its existence.

HYLAS Why is that?

PHILONOUS Every network has a limit in “information capacity,” which includes both its memory and the total amount of information, from outside and inside, that can circulate in it. Experimental and clinical data indicate that the human brain is not far from this limit, particularly as it ages. (This is one of the reasons why older people cannot remember recent events but have no trouble remembering the distant past.) In an overloaded brain, even a small hormonal perturbation that lowers the neurons’ excitation threshold just a little could totally block the transmission of impulses, causing insanity, personality disintegration, irreversible damage. Not long ago a substance was discovered that appears to inhibit synaptic excitability, and when administered to healthy people, causes symptoms of schizophrenia. The substance was isolated from the blood of schizophrenics.4 So we may conclude that after the mental processes of an old man have been transferred to a new brain, it will be close to the limit of “information capacity” and be able to function for only a relatively short time.

HYLAS Very well, but what about the electronic brain prosthesis?

PHILONOUS We can build one with additional “stores” or “functional reserves.” But as you can guess, that still does not promise any kind of “immortality,” since only an infinitely large and infinitely complex brain would be able to store an infinite (or at least enormous) number of memories, not to mention anything else.

HYLAS So this whole project of “transferring” a mind is a fantasy?

PHILONOUS No. Nothing in science rules it out. As I have already said, using electric current and improved functional equivalents of neurons, we should be able to build a network that is ten or a hundred times more capable than the human one.

HYLAS Great. Creating a “synthetic genius” then?

PHILONOUS A general theory of networks, once we have it, will enable us to construct networks with whatever characteristics we like, as long as they are allowed by the laws of nature. The great English mathematician Turing provided a theoretical proof of a network that can “do anything that is possible.”5 Thus in the future it will be possible to construct a network that can compose a symphony or figure out all possible paths of evolution on other planets.

HYLAS Philonous, you’re laughing at me!

PHILONOUS What, is your human dignity offended? If you are not bothered by the sight of a crane 10,000 times stronger than you, why resent a machine 1,000 times smarter than you? As an energy machine augments human power, so an information machine augments human knowledge! Scientific progress makes us face more and more difficult problems. Twentieth-century mathematics is far more complicated and requires much more mental effort than tenth-century mathematics, yet our brains today are the same as they were in the year 1000, because evolution in mathematics is a million times faster than the evolution of the human brain (i.e., its structure and function). If we cannot lift a weight, we build a machine that can. If we cannot solve a theoretical problem, we build a machine that can. Where is the insult to human dignity here? After all, if we ever put together a “synthetic Einstein,” we will have put it together, not it us!

HYLAS I guess it is the sense of the superfluousness of human beings that concerns me. A “synthetic genius” does not need us, our cooperation, or our control the way cranes and steam hammers do.

PHILONOUS What is the problem as long as they work for us?

HYLAS So you believe that someday machines will exceed human beings in all respects?

PHILONOUS Someday? It is already happening, Hylas. Every electronic calculator solves problems that the best mathematician cannot in a lifetime. There is a silly and naive myth about factories of the future as bright halls full of automata among potted palm trees, with a man in a white coat at the central console supervising the production. But it is nonsense. Take a chemical plant today, in which reactions in hot gases occur at lightning speeds. To harvest a valuable compound that appears in a gas stream for a fraction of a second, one needs to maintain its source reactions. The steering and control must be on a timescale of milliseconds, which is impossible for a human being to do because our nerve impulses are not quick enough. The man in the white coat therefore has nothing to do in the factory; the production is run by an electronic brain.

HYLAS But when that brain breaks down, he is there to fix it.

PHILONOUS Another electronic brain, connected for that purpose, will fix it faster.

HYLAS And when the other brain breaks down too?

PHILONOUS And when the man falls ill? There is no regressus ad infinitum here, just a hierarchy of automata that control one another, a closed circle. Obviously anything may break down. Today it is people who fix; tomorrow it will be machines.

HYLAS Yet your argument shows that so far an electronic brain beats a human being only in speed.

PHILONOUS True. But let us consider an example where the issue is not speed but a higher-level, integrating characteristic of the network. As you know, a thought process grows more difficult the more elements (notions) need to be taken into account at the same time. It is easy to perform elementary arithmetic operations from memory, but difficult to calculate the fourth root of a ten-digit number. Yet that is just an issue of “short-term memory,” that is, keeping track of the results of each partial step we make in the mathematical reasoning where the operation instructions (multiply, store the result, then divide, etc.) are fixed from beginning to end. But when a task is one of generalizing many facts into a theory, in the course of the generalization process each consecutive stage also modifies the instructions, which are not predetermined and fixed but are the outcomes of consecutive transformations. If we attempt to develop a theory of gravity that is more general than Newton’s from the data of astronomy, physics, and mathematics, so many factors must be considered at the same time that only an extraordinarily capable network can manage it. Einstein was in possession of such a network. Of course, not everyone can be an Einstein, but in the future everyone will have at hand a thinking machine with unlimited abilities.
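The “fixed instructions” case can be made concrete: extracting a fourth root is a predetermined sequence of identical steps, with only a partial result carried between them. A small Python sketch illustrates this (Newton’s method is my choice of procedure here, not the dialogue’s):

```python
# Fourth root by Newton's method: the instruction sequence is fixed in
# advance; only the partial result x — the "short-term memory" of the
# computation — changes from step to step.
def fourth_root(n, tolerance=1e-9):
    x = float(n)
    for _ in range(200):
        nxt = (3 * x + n / x**3) / 4  # Newton's step for x**4 = n
        if abs(nxt - x) < tolerance:
            return nxt
        x = nxt
    return x

print(round(fourth_root(1_234_567_890), 3))  # fourth root of a ten-digit number
```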

HYLAS This does not bode well. For a while, increasingly powerful electronic brains will work on tasks that people can still understand, at least roughly. But then a gap will open and widen: our thinking machines will provide us with solutions to problems, solutions that we will be able to use but not understand. Automata will spread their dominance until people shrink to the level of thoughtless servants and begin worshiping the iron geniuses like gods . . .

PHILONOUS Just think, my friend, your prophecy regarding the human species has already come true, and in the far past at that.

HYLAS What are you saying? I don’t follow.

PHILONOUS The emergence of electronic brains has indeed started the evolution of the tools for artificial thinking. And machines can potentially gain independence from the human race, just as other consequences of human social and manufacturing activity did in the past. Take division of labor, which resulted from the emergence of society, or novel tools and methods of production: all of this created machinery that, having gained independence from the human will, started to influence the lives of individuals so much that sometimes this machine—the state—has become an object of worship. This analogy is neither accidental nor superficial, Hylas. People ought not, now or ever, lose control over the work of their hands and brains. They ought not surrender to placid thoughtlessness, intellectual laziness, and rosy optimism, eager to believe that this or that invention or this or that social organization automatically guarantees the coming of the golden age. No indignant “Man is still the crown of creation” will change the facts—and the rise of ever better electronic brains, which no one can banish from our lives once they have entered them, is an undeniable fact. Unless people consider carefully all—the good, the bad, and even the worst—consequences of the development of electronic brains, the computer evolution may be more ravaging than crises, economic catastrophes, joblessness, and the chaos of the capitalistic free market. For this very reason we are talking so much about cybernetics, trying, often in vain, to understand what it has to say about phenomena that are apparently so distant from one another, such as evolutionary biology and psychology, or the general theory of information and sociology.

HYLAS Don’t forget eschatology, the science of final things, since its subject is life eternal and therefore includes your grafting of a living human brain onto an inanimate prosthesis. Do you believe that people will ever attempt to realize such a transfer of the psyche from a living human body to the dead metal of a machine?

PHILONOUS The system of privileged rules of thinking, that is, the system of preferences, applies not only to individual neural networks, my friend, but also to societies. In this sense culture is a system of historically formed preferences that channel people’s responses to external and internal stimuli. Today we are well aware of the conventional (that is, history-dependent) and therefore relative nature of most ethical norms, moral imperatives, and established rules. The idea of transferring a living human psyche into a dead machine appears to violate a number of our fundamental habits of thought; it appears humiliating, improper, inhumane, and unacceptable. But we cannot rule out a profound shift in our norms and preference systems in the future, after which our view of this issue may drastically change. Keep in mind that we are talking about an operation that will become possible only thousands of years from now. The evolution of electronic brains is hiding in its bosom many powerful challenges to contemporary worldviews. Suppose electronic brains equal to us in intelligence are brought up as religious believers or even bigots. Can you see what terrifying opponents they would be for all religions? What sophistries would theologians have to spin when confronted with the manifestation of the “spirit” in electrical wires and vacuum tubes? However, a problem far more significant and difficult to solve than the conflict between spirituality and cybernetics is, how should people spend their time in a society where absolutely all the production of goods is automated? In an exclusively consumption-oriented, passive society, how will people who live in great material luxury and great mental sloth face the fact that every human activity will be rendered absurd, since thinking machines will always be able to perform it better? These are the problems for the human mind to address! You are demanding from me answers that I do not have, Hylas.
In our history it has always been that the unknown—occasionally even a product of our hands—appears first, that questions are raised first, and only later, in sweat and labor, answers emerge. Often spread over many generations, the answers are imperfect or partial, but while the problem is being clarified and solved, new unknowns and new question marks rise on the horizon. Let us end today’s discourse with this. We have just one more problem to discuss, but it is most complicated: sociology cybernetically understood.