V

HYLAS My friend, I spent all night thinking about what we have considered so far and would like to hear your answers to a few questions before you continue to show me how the discoveries of cybernetics have relevance to the philosopher. If I understand you correctly, you make the distinction between “subjective” and “objective” according to the way information is “inputted” into the nervous system. Because my stomach is “plugged” into my brain by my nerves, I can experience hunger directly. Because objects outside me are not “plugged” into my brain, I can perceive them but not “experience” them directly. But things get complicated when my own brain is the perceived object. I can experience it both “from inside” (directly) and observe it in a mirror after a hole is drilled in my skull (under local anesthesia). Only I have the first kind of access to my brain; everyone else has only the second kind. How to reconcile this duality?

PHILONOUS There is no duality because your brain “experienced from inside,” as you put it, does not exist. Thinking and feeling take place in your brain, but they are not your brain. Your brain “is,” that is, it “exists” only in the way that outsiders observe it.

HYLAS Could you be more precise and define consciousness in objective terms?

PHILONOUS But of course. Consciousness is a characteristic of a system that one can recognize if and only if one is that system. This statement contains both necessary and sufficient conditions for calling something consciousness in an entirely objective manner. If you listen to another person on the phone, it is only you who hear the person, unless we plug another listener into the line. In the same way, if you could plug yourself into another person’s brain, you would directly participate in that person’s circulation of information, that is, his mental life. We will talk later about how such an experiment might be done. What is your next question?

HYLAS Despite all your explaining, I still don’t know what consciousness really is.

PHILONOUS To tell someone what something is means to equate that “something” to “something else” and build a model of that “something else,” whether mathematical, mechanical, or of another kind, and that’s all. We have no other way of knowing or understanding in this vale of ours. Therefore, it is not clear to me what you mean by “real” understanding.

HYLAS I suppose an extraordinarily complex electronic brain could manifest behavior indistinguishable from that of a human being: it could perceive and study not only its environment but also itself, it could think, it could express its thoughts, it could reason—but we still would not know if all this activity was accompanied by consciousness. As you say, the only way to find out would be to become that electronic brain.

PHILONOUS You are in bondage to superstitions and obsolete prejudices to a degree that drives me to despair, Hylas. You keep saying that you don’t know what consciousness is, and then suddenly you claim expert knowledge of the topic, which has undoubtedly sprung from some kind of heavenly revelation.

HYLAS What are you talking about?

PHILONOUS What you said indicates that you still believe that consciousness is not a generalization of processes like thinking, perceiving, and feeling but a kind of absolute that oversees and nurtures all those processes but cannot be reduced to them, and therefore is a superphenomenon or epiphenomenon that hovers over mental activities like the Spirit over the waters. You have turned out to be an epiphenomenalist, Hylas. In vain have I repeated that consciousness is the seeing, is the hearing, is the feeling, perceiving, remembering, learning—and nothing more. You yourself reached that conclusion when you agreed that consciousness consists of mental processes as an army consists of soldiers, but now a mystical revelation returns with a fresh blush of metaphysics on its cheeks, in full strength and health.

HYLAS You are right. I did not express myself well. But . . . if you equate consciousness with an organism’s (or electronic brain’s) reaction to stimuli, you are erasing any difference that may (and in my opinion must) exist between a lifeless thinking machine and a living human being. My perceiving, my thoughts—that is my consciousness, fine. But does it necessarily mean that an electronic brain’s perceptions and thoughts are its consciousness? In the jump from a human to a machine, my mental processes somehow lose their inner quality.

PHILONOUS The answer to this quandary lies only on the path of experimentation and empiricism. You acknowledge the inner quality of your own mental processes and do not deny other people the same quality, because they are constructed like you and from the same material. I fully understand your resistance in the case of an electronic brain, but perhaps that can be overcome if I show you how you (or any other person) can become an electronic brain. Then we will see whether or not the inner quality of mental processes disappears during that transition.

HYLAS But this is absurd, impossible!

PHILONOUS Form that opinion only after we have collected enough data and have sufficiently penetrated cybernetic science to propose an appropriate experiment.

HYLAS Very well. Are you going to speak about the set of systems called networks?

PHILONOUS Exactly. This set contains systems whose degree of complexity is equal to or greater than w, which signifies the minimum complexity that a system must have to belong to the set.

HYLAS And all these systems possess consciousness?

PHILONOUS If we define consciousness as a system’s feature that we can directly recognize only when we are the system, then we would have to ascribe consciousness to the brains of reptiles, birds, fish, and even to the “abdominal brains” that are the ganglia of insects. However, such broadening of the meaning of consciousness is inappropriate.

HYLAS So your definition fails?

PHILONOUS No. We just amend it: consciousness is a system’s feature that we can recognize only when we are the system, and the system’s complexity approaches that of the human brain. This judiciously narrows the scope of the term. As for the brains of other animals and the networks that are not brains of living organisms, we can only assume that an equivalent of human consciousness appears to various degrees: the higher the network’s organization, the “higher” or “clearer” its consciousness. The fuzziness of this formulation stems from the fact that as yet we cannot measure consciousness by physical means. Such measurement is possible in principle or in theory, but its practical realization is still far off.

HYLAS How do you envision it?

PHILONOUS It will undoubtedly measure the amount of information, that is, the “opposite of entropy,” but it must also take into account all the transformations that this information may undergo in the given network, as well as the network’s effects on its environment and vice versa. If the transformability of information is a function of a network’s complexity, and there are many indications that this is true, then we will be able to derive mathematical equations for the relation between the complexity of a system and the degree of consciousness that it can possess. In our set there is a hierarchy of networks, from the simplest, which barely reach w and manifest very “weak” consciousness, to the most complex, with a complexity of wⁿ, whose consciousness is the “highest” and “clearest.” With such mathematical tools we will not have to resort to such imprecise terms for consciousness as “clear,” “dim,” “low,” or “high.”

HYLAS Wait. Your statement has one curious implication. You are saying that in this set of networks, the simplest are at the bottom of its hierarchy, while the most complex are at the top. But complexity can be arbitrarily increased, therefore consciousness has no limit. An infinitely complex network would have “infinite” consciousness. The mathematical expression of this thesis would be equivalent, I believe, to a formula for God, a being with an “infinite” consciousness.

PHILONOUS What you say is amusing, but it is not like that. In addition to the threshold of the minimum complexity there must exist a limit of the maximum complexity.

HYLAS What determines this limit?

PHILONOUS When complexity increases beyond a certain point, it will most likely cause regress and degeneration.

HYLAS How so?

PHILONOUS Exceeding the optimum complexity, a network begins to functionally deteriorate: its separate parts begin to liberate themselves from the central integrating forces; autonomization tendencies lead to internal conflicts among the processes; and finally, the excessively complex network breaks up into quasi-independent units engaged in mutual battles, to the detriment of the whole.

HYLAS This sounds like a fantasy.

PHILONOUS It is not. Of course, we do not know if the human brain approaches this limiting value, that is, if it has already passed the level of optimum complexity, but under certain conditions it exhibits clear tendencies toward autonomization of its parts, in the functional, not material, sense.

HYLAS What are the symptoms of that, and why do you say functional and not material?

PHILONOUS The disintegration of processes and the related degrading of a network’s functional integrity (that is, its “personality”) do not require a physical division in the system. A personality split in a person may reach an advanced stage without any morphologically or anatomically detectable changes. There are probably many factors involved in the determination of the optimum complexity, such as the speed of impulse transmission or the number of “degrees of freedom,” which, in turn, is a function of the number of possible transmission pathways.

With the functional autonomization that complex networks exhibit, we enter the vast realm of mental phenomena related to the so-called subconscious. It is a field of psychology that, more than any other, is plagued by murky terminology and a flood of unverifiable hypotheses whose originators are typically methodologically incompetent or untrained followers of Freud. For this reason, the cybernetic analysis of the subconscious and research in this field based on the directives of information theory are especially needed.

Attacking the problem of the subconscious with the tools of cybernetics, I will address only human mental development, not comparing its neuronal network dynamics with that of other network types (e.g., electronic), saving what I would call the engineer’s approach for later. Psychoanalysts say our mental life contains two parts and is a resultant of two kinds of processes: conscious (personified by the ego, that is, the conscious “I”) and subconscious (the so-called id, a collection of mental phenomena that a person normally cannot access).

Conscious processes are distinctly and obviously adaptive and purposeful, that is, they are manifestations of the causatively expressible and biologically rational adaptation of the human organism to its environment; they arise on the path of learning all the network activities necessary for survival. The processes that are ineffective or do not lead to the achievement of this goal are, thanks to negative feedback, inhibited and eliminated from the organism’s behavior.

The subconscious processes lack such an objective- or goal-oriented, rationally causative character. They appear oriented toward goals that are unattainable in principle and, moreover, irrational and lacking any biological purpose. They are typically expressed by the perseveration (a circular repetition) of activities characterized as obsessions, phobias, neuroses, and so on, which do appear in the behavior of so-called normal people but are manifested with greater force and clarity in the behavior of mentally disturbed people.

What is the origin and mechanism of these processes? A newborn has at its disposition a network in which the majority of processes are scattered, random, and aimless. The result is disordered muscle movements, chaotic variability of responses, and the inability to act in an integral, purposeful way. As the baby gathers experience, it starts eliminating aimless activities (i.e., those that do not lead to the achievement of any goal). The initially chaotic system whose behavior reflected “statistical randomness” of its network processes begins to organize itself in distinct functional sets corresponding to particular tasks. The child learns how to look, that is, to turn the eyes in a desired direction, to walk, to speak, and so on. The “statistical randomness” of the “newborn network” must be understood cum grano salis lest we fall into a kind of “physical absolutism” and treat the functionally unorganized network like a random collection of atoms, since clearly the network has some “centers of functional crystallization” from its inception and its activity is not as disordered as the motion of a swirl of Brownian particles in a drop of water.

The progress from random activities to those that exhibit a group character, from thoughts that are foggy and equivocal to concepts that can be precisely expressed in words, and from newborn behavior to that of an adult takes place through learning and the elimination of purposeless processes, through specialization, organization, and dynamic structuralization, in which the selection criterion is the adaptive success of a given mental activity or its effects when transposed into action.

Each new skill requires the full concentration of consciousness, that is, all those higher-level network processes that the psychologist calls the “personality” or “ego.” Once a skill has been mastered, it moves from the conscious to the subconscious region as an automatism. No longer must the entire network build a dynamic model of the activity to achieve the adaptive effect, no longer is it necessary to focus the attention at each step of the new action—because a special functional subsystem has been formed that is always ready to initiate it whenever it is called for. This manner of automatization and transfer to the subconscious applies to all functions without exception, from the aiming of the eyeballs (not easy for a newborn!) to the most complex activities, whether kinetic (acrobatics and juggling) or mental (abstract mathematical reasoning may be beyond a layman but is executed automatically by a professional mathematician).

All automatisms of this kind share one characteristic: they can be retrieved into consciousness at will. For example, the automatism of breathing, riding a bicycle, or acrobatics can be moved to the center of the subject’s attention, which then allows for an introspective study of parts of the activity.

The automatisms of the subconscious, however, differ from those of the unconscious in that they are not freely accessible, and any attempts to trace their sources meet with considerable difficulties. The reason is that a special dynamic barrier separates them from consciousness.

Both conscious and subconscious phenomena are based on a symbolizing (symbol-generating) function of the network, but the application of this function is fundamentally different. Conscious symbol formation uses symbols as abbreviations or “call codes” of large sets of impulses to create “situational models” of the external world or of particular states of the network itself. The creation of such models and operating on them (e.g., the transformation of a thought into words or into a mathematical formula) are necessary for the human organism’s adaptive functions. Such symbols mainly address the surrounding world and serve the purpose of communicating with other people; they also help create a “model” of the external world within the network. The biologically regulated, rational, causatively conditioned, and adaptively indispensable function of this group of network processes—note that all this is conscious—is self-evident and well understood.

A substantially similar symbol-generating ability exists in the subconscious, which encompasses mental processes that do not constitute automatisms in the sense described above (because of the “dynamic barrier,” there is no access to them) and whose adaptive value is questionable. Granted, their inaccessibility is not absolute, since we know of them. They manifest themselves in dreams, hypnosis, and many disease states of the mind. They also can be uncovered by tests, particularly by the method of “free association.”

The symbolic functions of the subconscious are “forced” onto consciousness when the network is impaired; they take the form of obsessions, tics, and phobias and display great persistence, along with perseveration tendencies and a resistance to arguments of experience or rational persuasion, whether they come from the subject himself or from other people. These symbolic functions have all the characteristics of “normal,” purposeful network activities, namely: (1) an initial motivation, (2) an ensemble of activities, and (3) a goal. But the elements, when taken together, form a completely irrational whole that does not serve the subject or anyone else and is the very opposite of adaptive—it torments the person who is driven to such actions by the inner imperative.

We have here a reversal of the hierarchy of the mental phenomena that we consider rational. We know that consciousness is to some extent conditioned by many unconscious mental processes—for example, an articulation of a thought is enabled by a considerable number of mental automatisms, such as the internal memory feedback connections that supply the necessary words, maintain the articulation’s directional gradient (we always speak “about something,” but at the same time our mind is moving “from somewhere” to “somewhere else”), eliminate from the field of perception and thinking all stimuli that might interfere with this articulation, and so on.

The phenomena of the unconscious (which can always be retrieved into the conscious) are, so to say, the foundation of the edifice of consciousness. One expects the relation between the two domains will be one-way, that consciousness will retain full control over the unconscious automatisms that support its functioning and that those automatisms will be retrievable as needed. The reverse, that is, the control of consciousness by the unconscious automatisms, would at first glance appear to be impossible. But the relation between them is much more complicated.

First, let us recall the random behavior of the “newborn network.” The network is an “apparatus for generating hypotheses” about the surrounding world and about the organism’s relation to that world. It generates—by “trial and error” and elimination, as we know—models of the environment and dynamic models of purposeful behavior. The main point is that not all “incorrect models” are eliminated completely; some leave traces that can last a long time. The factor that enters the game at this point is the organism’s (the network’s) emotional engagement with situations that, because they are incorrectly modeled by the network, should be removed from the conscious. Yet they remain in the network as subconscious processes, hidden behind the “dynamic barrier,” from where they can affect in a harmful way the course of other, conscious processes for years.

A phenomenon that I am going to introduce next may shed some light on these issues. I mean memory and the mechanisms of remembering and, more importantly, of recalling at will. The human neural network is not as efficient, reliable, and subordinate to consciousness as some think. It has been shown experimentally that human memory contains about a million times more stored information than the recollection mechanism can access. Under hypnosis a person can remember things that he could not while awake. American psychologists have shown that hypnotized bricklayers can describe in detail a few bricks, out of the tens of thousands, that they laid at a building site eight to ten years ago. A man described a particular brick in the wall of a building that he put up—the sixth from the end in the eighth row on the third floor—as having a reddish spot at its edge and a small chip at the left corner, and this was confirmed! These amazing results show that the memory of an average person contains about 10¹⁵ elements, of which no more than a millionth can be consciously accessed when the person is awake.
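
Restated as a rough calculation, taking the numbers quoted above at face value:

```latex
N_{\text{stored}} \approx 10^{15}\ \text{elements}, \qquad
N_{\text{recallable awake}} \approx 10^{-6} \times 10^{15} = 10^{9}\ \text{elements}.
```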

Today we can only guess what the connection is between these facts and the phenomena of the subconscious. It is possible that the subconscious represents a set of phenomena that inevitably appear in a network that has crossed the complexity threshold. It is a side-effect that evolution did not intend, so to speak, yet it is indispensable for the functioning of sufficiently complex systems that possess consciousness.

As we can see from the experimental data, we have very limited control over the stores of our memory. In those stores, various processes of which we know little—for example, the spontaneous formation of connections and the transformations of specific engrams (memory traces)—take place. This gives rise to certain sets that, though hidden behind the “dynamic barrier” of the subconscious, affect the totality of consciousness in complex ways. Today we can sketch hypothetical maps of neural connections that seem to correspond to the movement of engrams from the retrievable region of recollections to the sets that are the substrate of the subconscious. These are the first steps of cybernetic analysis in this difficult terrain. The method of free association allows us, so to speak, to “send a probe” into the subconscious and obtain “samples” of its raw material, whereas the analysis of the conscious content is limited to what has been already filtered through the dynamic barrier and undergone organization, for which reason it tells us little, if anything at all, about the phenomena occurring outside consciousness.

To conclude, consciousness would not be possible without the unconscious, automatized processes. And without the symbol-generating function there would be no subconscious that manifests itself in our heads in dreams, in hypnosis, and in neurasthenic but also normal states—in the form of mistaken actions, the forgetting of a name or word, and so on.

HYLAS So we don’t understand the symbol-generating function of the subconscious or the purpose it serves but suspect that it may be a side-effect of the functioning of a very complex network-type structure?

PHILONOUS Correct. With this, let us finish our digression on the topic of psychoanalysis. We used it to underscore the fact that the relative freedom or autonomy of certain mental phenomena is the fundamental functional characteristic of a large neural network. Nevertheless, cybernetics cannot tackle this problem with sufficient rigor, because as a science it is still in an embryonic stage, and a general mathematical theory of automata that are arbitrarily complex networks does not yet exist.

HYLAS What theory do you mean?

PHILONOUS The most complex networks that we construct today are computers. They contain 3,000 to 4,000 elements (crystalline or vacuum relays). The largest networks that we can assemble using the current technology and knowledge are limited to about 10,000 functional elements. The number 10,000, or 10⁴, is thus their complexity coefficient. In contrast, the network of the human central nervous system contains 10¹⁰ elements (neurons), which makes it a million times more complex than the largest “artificial” network. Why are we unable to create automata larger than 10⁴ elements? The first difficulty is technical, the size of the vacuum tubes and their relatively high power consumption: a vacuum-tube electronic brain 100,000 times larger than the existing ones would require the entire Niagara Falls to cool it.1

But there is progress on this front. Using transistors instead of vacuum tubes, we could cut the size and power consumption of an electronic brain by ninety percent. The second difficulty is our ignorance of a general theory of automata. The first airplanes were built empirically, by the method of trial and error; but the further development of aircraft would not have been possible without the theories of flight, aerodynamics, material strength, induced vibration (flutter), and so on. We still construct networks essentially by trial and error today, as there are no generalizing theories of how they function. The theory of digital automata, such as computers, must become a part of formal logic. The first problem is that formal logic does not tell us how many elementary operations are needed to solve a task; it only says whether or not the problem is solvable. For formal logic it is irrelevant if a solution would require so many operations that performing them would take ten billion or a quadrillion years. But in automaton construction one must know the number of operations needed to complete a task. The second problem is that a network can make a mistake in any of its elementary operations. When the number of operations becomes enormous, the probability of error increases. To minimize the effects of errors, an organism employs corrections that we cannot replicate: its tissues are self-repairing, but technological systems do not have this ability. A future theory of automata therefore must consider the length of the reasoning chain, which in turn must take into account the factor of time, and it also must consider and predict the rate of production of reasoning errors. In order to do that, the theory must, first, join logic with thermodynamics (which includes the time factor in the processes of entropy increase, for the opposite of entropy is information), and second, include the data from biophysics.
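
A worked illustration of why the rate of reasoning errors must enter such a theory; it assumes, purely for simplicity, that each elementary operation fails independently with a small probability p:

```latex
P(\text{no error in } N \text{ operations}) = (1 - p)^{N} \approx e^{-pN},
\qquad
P(\text{at least one error}) = 1 - (1 - p)^{N} \approx pN \quad \text{for } pN \ll 1.
```

Even a very small p makes a long chain unreliable once the number of operations N grows large, which is why the length of the reasoning chain and the error rate have to be treated together.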

HYLAS I do believe that such a theory will enable us to determine the optimum complexity, beyond which the wholesome functioning of a network begins to break down. But I still cannot see how a theory that is physical in nature and therefore expressed in the language of mathematics can measure the degree of consciousness in a given network.

PHILONOUS It will determine the efficiency, scale, and rates of all those processes that constitute consciousness, but nothing more. I will give you an example of what such a theory might do. On the basis of their analysis of certain neural networks, McCulloch and Pitts concluded that if one briefly touches a person’s skin with a cold object, the person will sense heat.2 Their prediction was confirmed by an experiment. We already can find mathematical equivalents of the functioning of simple networks; for very complex ones, this is still impossible.

HYLAS Yet none of this addresses the issue of the “inner quality” of mental processes.

PHILONOUS Why do you think so? Surely you know that human consciousness is variable and not always equally “clear.” Certain chemical compounds may “sharpen” it, while others blunt it. The consciousness of a person who is awake differs from that of one falling asleep, and differs again when illness or fatigue affects it; there are “states of obtundation,”3 and so on. The theory will predict all these possibilities. But perhaps this is not your point? Perhaps, by “inner quality,” you mean what the brain of a fish or the ganglia of an insect exhibit, that is, metaphorically speaking, “what it feels like to be a carp or an ant.” A theory of automata will, of course, be unable to put us in the shoes of a carp or an ant.

HYLAS Here’s another objection, which concerns all of cybernetics, but especially its philosophical foundation. “Feedback networks,” which are the subject of this science, are mechanisms. So cybernetics, reducing neural and even mental phenomena to mechanical phenomena, is merely a new reincarnation of the nineteenth-century mechanistic materialism, which held that everything, including all the processes of life, could be expressed in the language of mechanics. But haven’t advances in biology and physics torn down the edifice of that naive idea?

PHILONOUS You are saying that cybernetics is a continuation of the old mechanical philosophy. But consider this. Philosophy is always a reflection, an abstraction, of the practical activity of human beings. At early stages of their existence, people, having already coalesced into groups, the nuclei of society, and having acquired language, attempted to affect and explain the surrounding world in the same way that they dealt with other people. Their personification of natural phenomena, heavenly bodies, stars, and so on, was the first general model of those phenomena. It was both anthropomorphic and animistic. At a much later stage, in the seventeenth and eighteenth centuries, a new model of phenomena started to form. Its basis was a mechanism—a human artifact in the form of a clockwork, a machinery—together with its theoretical underpinning in Newton’s celestial mechanics. Physics started to treat matter as a collection of tiny elastic particles subject to laws of mechanics. The laws of mechanics succeeded in solving the mystery of the heart and the circulation of blood. Mechanics helped to create the first steam engine. The underlying concept of a mechanism deduced from all these fields had the following characteristic properties: the whole could be reduced to the sum of its parts, every process could run equally forward and backward, and the mechanism was ahistorical, that is, unaffected by its history. You can take a mechanism apart and put it together again without changing its function. You can reverse its operation and it will return to its starting point. When you know the position of all its particles and the forces acting on it, you can predict its arbitrarily distant future. The problem is that these statements are true only when applied to systems like clockwork or a steam engine; they do not hold when applied to biological or quantum phenomena. Experience has refuted all the postulates of mechanical philosophy: an organism is more than the sum of its parts; the processes occurring within its boundaries are irreversible; its history does affect its future, and its future states cannot be completely predicted by knowing its past states. Mechanism therefore had limited value as a model of the phenomena that take place in nature, especially those in living systems (or more generally, in organisms, whether living or nonliving, as defined by our threshold of the minimum complexity). This, of course, does not mean that this concept did not play a positive role in scientific progress in its time. We must only take care not to fall into the trap of mechanical philosophy. Cybernetics rejects that model and offers a new one: a system that is not reducible to its parts but instead represents a distinct, unified whole; a system that is shaped by its past developmental history; a system that actively coexists with its environment; a system whose future behavior cannot be precisely predicted from its structure. If you insist on calling this new system a mechanism, then you must apply that term to living beings as well.

HYLAS So you are saying that a so-called complex network is not a mechanism?

PHILONOUS It is a matter of word choice. We shall not go into details of network technology or network evolution, that is, the causes of their emergence. We only say that regarding their material aspects, networks can be made of tissues, electrical conductors, mechanical blocks, or coupled chemical reactions. Thus we can have networks that are called neuronal, electronic, mechanical, chemical, or even combined, whose individual parts are made of different materials. As for their evolution, let us note that the networks of living organisms emerged in biological evolution, but all others that we know of are the results of human engineering.

HYLAS In your opinion, then, an electronic network could form consciousness? But how can we prove this if consciousness is an inner quality of a system that we can perceive only when we are that system?

PHILONOUS The question of whether or not a network possesses the “inner quality” of conscious perceiving shall be left open. This is irrelevant to cybernetics. The purpose of cybernetics is to construct networks that manifest phenomena such as “memory,” “learning,” pursuing “goals,” “recognition,” satisfying “needs” (which can be either “real” or “substitute”), forming “habits,” having “obsessions,” and establishing “values” such as “free will,” “personality,” “freedom of choice,” “initiative,” “creativity,” “character,” “temperament,” but also “neurosis” and “addiction,” and others.

HYLAS These properties would emerge in the devices that you call networks?

PHILONOUS Obviously not in all of them. We will consider several different kinds of networks.

HYLAS Go on. I am all ears.

PHILONOUS The fundamental feature of all living beings is their teleological or goal-oriented character. Every life function of every organism is subordinate to a goal, which is the continuation of life, of both the individual and the species. This statement is obvious and banal, because by living, human beings, lions, hippopotamuses, or flies serve no purpose beyond themselves. It is a different story with machines, the products of human beings. The goal of a machine’s existence relates not to the machine but to the realm of human activities. Thus a microscope is an amplified human eye and a steam engine or atomic reactor an amplified human arm. A machine also enables human beings to perform actions that they could not perform without it, such as flight, by an airplane. In each case, a temporary or permanent human intervention to correct and regulate the machine’s action is necessary. All machines are connected to the human nervous system, which steers them (the pilot in the airplane, the engineer in the train, the physicist at the atomic reactor). Human beings can also make use of the life functions and properties of a variety of other organisms (both animals and plants) for their purposes. It is no different when they build feedback networks. The networks created by constructors have no goals of their own that would relate exclusively to themselves: an electronic brain helps people do calculations; an autopilot helps steer an airplane; and so on. Every such network, being a creation of its constructors, imitates the activity of the human nervous system in a specific, very narrow field, not the activity of the system as a whole but only one of its parts, separated from the rest. For this reason, analogies drawn between a computer or other network of that kind and the human brain will not take us very far. But that may not apply to the artificial networks of the future. Here I intend to preempt the accusation of paying too little attention to the networks that exist and operate today. The reason is that we are interested not in the practical uses of specialized networks but in the properties that we may find in networks in the future, properties that today we detect only in the activities and operations of the human brain.

After this clarification, we can finally get to the point. Every network must be in contact with its environment, and for this purpose it is equipped with devices that allow information input and output (which also can include an effector that acts upon the environment, as we will see below). The information leaving a network may or may not be translated into physical action. The electronic brain designed for calculation receives information from its environment (the operational instructions and the assignment), and its output releases that information “transformed,” as the result of mathematical operations. But a device for shooting down airplanes, consisting of a radar (input), a network (the steering unit), and cannons (output), also possesses an effector (the cannons).

Since the computer is like an isolated part of the brain into which we must feed information and then take it out, transformed, and since the computer performs no physical activity directed at the environment, let us focus on the operational principles of the antiaircraft apparatus. It can “perceive” the airplane through the radar, “recognize” it as an airplane (as opposed to a leaf in the wind), then, on the basis of previous experience stored in its “memory,” predict the probable position of the airplane several seconds later, and finally, aim the cannons at that position and shoot the airplane down. The network can make a mistake—in “perception” (taking another flying object, such as a kite, for an airplane), and it also may “miscalculate” when the airplane performs an improbable maneuver to change the direction of its flight abruptly. If the first round does not hit the airplane, the network recalculates and shoots again, repeating this cycle until the goal is achieved (the airplane is hit). When two airplanes appear at the same time, a “conflict” ensues, and the network must “decide” which airplane to shoot at first. If it works as expected, the network will “decide” and start firing in the described manner. For each subsequent shot, the error (the deviation between the line of fire and the airplane’s position) is corrected by feedback and is smaller and smaller until the target is hit. If the network does not have a device for making decisions in the event of a conflict, it will not be able to “decide,” will “hesitate,” and there will be a series of alternating processes (making a decision, changing its mind, changing its mind again, and so on). So I ask: What is the fundamental difference between such an apparatus and a living organism?
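
A minimal sketch, in Python, of the feedback cycle just described: predict the target’s position from the “memory” of its recent motion, fire, measure the miss, and let negative feedback shrink the error until the target is hit. The one-dimensional setting and all names and numbers are illustrative assumptions.

```python
def track_and_fire(observe, fire, hit_radius=1.0, max_rounds=20):
    """Crude fire-control loop: predict, shoot, measure the error, correct.

    observe() -> current target position (the "radar")
    fire(aim) -> position of the shell burst (the "cannons")
    """
    history = [observe(), observe()]           # "memory" of past positions
    correction = 0.0                           # feedback term
    for _ in range(max_rounds):
        # Predict the next position by linear extrapolation of past motion.
        predicted = history[-1] + (history[-1] - history[-2])
        burst = fire(predicted + correction)
        target = observe()
        history.append(target)
        error = target - burst                 # deviation between line of fire and target
        if abs(error) < hit_radius:
            return True                        # goal achieved: the target is hit
        correction += 0.5 * error              # negative feedback shrinks the error
    return False                               # goal not reached within the allotted rounds
```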

HYLAS The organism is living; the apparatus is not.

PHILONOUS This difference is indisputable but for us unimportant.

HYLAS The organism’s activity aims at continuing its own existence, while the activity of the apparatus that you describe does not.

PHILONOUS Precisely. Feedback enables the network to approach the target by continuously correcting its behavior on the path toward the goal, but it does not set that goal. The goal is given: for an organism, it is set by evolution, and for our network, by its constructor. One can say that in general the “primary cause” for directed action is provided to a network by its inner imbalance. The imbalance can be biochemical, electrical, or mechanical. The goal for the network to achieve is to reach equilibrium. Giving an electronic brain a task, starving an animal, or flying an airplane through a radar field disturbs the inner equilibrium of the network (computer, animal, antiaircraft automaton), and this disturbance initiates certain patterns of activity. When the electronic brain has solved the problem, the animal has obtained food, and our automaton has shot down the airplane, they all reach equilibrium, which will last until a new stimulus, external or internal, disturbs it again.
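
A toy sketch of the idea that inner imbalance is the “primary cause” of directed action: a disturbance raises the imbalance, activity reduces it step by step, and the network rests once equilibrium is restored. The class, the halving rule, and the tolerance value are assumptions made only for illustration.

```python
class Network:
    """Caricature of a network whose only 'goal' is to remove its inner imbalance."""

    def __init__(self, tolerance=0.01):
        self.imbalance = 0.0
        self.tolerance = tolerance   # equilibrium threshold (illustrative value)

    def disturb(self, amount):
        """A task, hunger, or a radar contact raises the imbalance."""
        self.imbalance += amount

    def act(self):
        """Directed activity continues until equilibrium is approximately restored."""
        steps = 0
        while self.imbalance > self.tolerance:
            self.imbalance *= 0.5    # each action removes part of the disturbance
            steps += 1
        return steps                 # activity ceases at equilibrium

net = Network()
net.disturb(8.0)          # an external stimulus disturbs the equilibrium
print(net.act())          # number of corrective actions before the network rests
```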

HYLAS You are talking about objective goals, not about those that only subjectively exist in consciousness, right?

PHILONOUS Yes. I am talking only about teleological processes that are objective. Our antiaircraft network, the computer, and the animal are not “aware,” in the human meaning of the word, of the goal they are pursuing. But there always exists an event or series of events that eliminate or minimize the imbalance and restore equilibrium. Such an event is usually external to the network (the animal finds food or a sexual partner, the antiaircraft system shoots down the airplane) but also can originate in the network itself, as when the network recombines its own elements to lessen its imbalance.

HYLAS Can you give an example of such a pursuit of a goal that only exists within the network?

PHILONOUS Such regrouping of a network’s internal elements occurs when a poet writes a poem. The goal is “internal,” as is the action—putting word symbols into a new structure which lessens the network’s imbalance.

Sometimes achieving the goal is beyond a network’s ability. It can happen that a network, failing, will start to pursue a substitute goal. This phenomenon shows in an electronic brain as a short circuit and in a human as drug addiction; in other words, a surrogate activity takes over. These are pathological states. In the extreme, they lead to the self-destruction (a suicide) of the network that cannot solve the tasks it faces.

HYLAS Since you are not using moral values in your analysis, why do you call the poet’s poem an achievement of the goal and drug abuse a pathological phenomenon?

PHILONOUS A very good question. The drug’s effect will lead to an internal recombination of the network that accomplishes a decrease in imbalance. But the poem represents an increase in information, the drug use does not. In general, one can call pathological any internal transformation of a network that lessens its ability to achieve real goals.

Let us now consider the second fundamental feature of a network: the ability to learn. This ability presupposes the possession of memory. Learning is a modification of behavior based on data from previous experience—thanks to feedback. In the language of cybernetics, it is an internal regrouping of a network’s elements for the purpose of pursuing a goal more effectively. The simplest mechanism of learning is the formation of temporal associations (conditioned reflexes). Each network has “rules of internal traffic” or “a system of preferences.” At a street intersection, certain vehicles have the right of way, and the total traffic is conditioned by the configuration of the intersection, vehicle density, and the state of the traffic lights. The circulation of impulses in a network is likewise regulated by the “right of way” of certain stimuli, the network’s structure (configuration of the intersection), and its current state (the traffic lights). Which impulses (that is, which information) have preference over others and also where they go is decided by the network’s “system of preferences.” A network that cannot alter its system of preferences under the pressure of new experiences is unable to learn. A change in the preference rules means that a new conditioned reflex has formed. Feedback either amplifies the links between specific stimuli (the bell always rings when the dog is fed) or weakens them (there is no bell before the dog is fed).
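
A small sketch of a modifiable “system of preferences,” assuming a simple additive update: feedback after each trial strengthens the link between a stimulus and a response when the pairing succeeds (the bell precedes food) and weakens it when it does not. The class and the learning rate are illustrative assumptions.

```python
from collections import defaultdict

class PreferenceSystem:
    """Right-of-way weights for stimuli; learning = changing these weights."""

    def __init__(self, rate=0.2):
        self.weights = defaultdict(float)   # (stimulus, response) -> preference
        self.rate = rate                    # illustrative learning rate

    def reinforce(self, stimulus, response, success):
        # Feedback amplifies the link on success and weakens it on failure.
        self.weights[(stimulus, response)] += self.rate if success else -self.rate

    def choose(self, stimulus, responses):
        # The response with the highest current preference has the right of way.
        return max(responses, key=lambda r: self.weights[(stimulus, r)])

prefs = PreferenceSystem()
for _ in range(5):
    prefs.reinforce("bell", "salivate", success=True)   # the bell always precedes food
print(prefs.choose("bell", ["salivate", "ignore"]))     # -> "salivate"
```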

An overload of stimuli causes crowding, hinders the circulation of information, and thereby lowers a network’s effectiveness. In left-handed people (whose right brain hemisphere is dominant), the speech center develops in the left hemisphere, and speech-related impulses must run from the left hemisphere to the right and then back again, which leads to “congestion” in the subcortical and commissural pathways.4 Therefore these people often suffer speech impairments (such as stuttering). When the opposite happens, when there are fewer stimuli than available pathways, “hesitation” occurs, because of the necessity to choose.

A network capable of learning can become conflicted when an old preference system clashes with a new one created in response to new experiences. The simpler the network, the more easily it solves the conflict by an arbitrary choice of which stimuli should have priority. The more complex a network, the more preference systems it contains, and therefore the more possibilities exist for inner conflict. Because a network assembles its preferences without comparing them in terms of logical consistency, preferences formed at various times in its past may be inconsistent and cause a vicious circle in which trapped stimuli circulate in vain. Such a network may manifest signs of neurosis (e.g., compulsive thoughts). A simple example of conflict: a man from a country where people drive on the right side of the road finds himself in a country where people drive on the left side of the road. A higher level of conflict might be a clash between a scientific worldview and a religious worldview. The question of preferences is a question of values because deciding which signal or which information is more important than another means making a value judgment. In the language of cybernetics, the question of “value” reduces to the question of “sorting,” that is, how to differentiate between impulses and direct them to different pathways.

So far we have discussed two kinds of networks. The first are simple ones that have feedback but are incapable of learning. These are found in devices such as an “automatic pilot” or a command center in a self-guiding torpedo. Their preference systems are permanent. Networks of the second kind can learn because they possess memory (an antiaircraft radar system). All living organisms have memory. Even unicellular organisms can form conditioned reflexes and thus modify their preference systems in response to changes in environmental conditions. Both kinds of networks have feedback between the network proper and its organs of input and output.

The third kind of network contains not only this external feedback but also internal feedback, which makes it capable of symbolic activity, a fundamental condition for the emergence of consciousness. Simpler networks circulate information about their environment or about limited parts of themselves (sensory organs) that are located at a periphery of the network’s body proper and can therefore be treated more as parts of the environment, except that they are permanently attached to the network. In more complex, higher-level networks, besides these “primary” signals there are “secondary” signals that circulate “information about information.”

A set of potential internal feedback links contains everything of which a given network may become “aware” or “conscious.” The secondary signals, the information about information, are “abbreviated” labels (symbols) denoting the states of specific parts of the network. What is abbreviated here are not electric impulses but a particular current state of a set of the network elements. When a given symbol appears in the network, it will be acted upon, and internal feedback puts parts of the network into corresponding states. Consciousness is not symbols but processes that “extract” symbols from the network and put them back in for operating purposes.

The “meaning” of a symbol is the set of signals that are potentially connected to the symbol. Such symbolization allows various processes of the network to be generalized. A symbol can denote a set of primary signals originating in the environment (“a tree”) or a set of signals originating in the network itself (“sadness”). The more a network can utilize its previous experience (stored in its memory), the more effective it is. This depends on the utilization range of memory data. Psychologists call this utilization a “transfer” of an acquired skill from one field of activity to another.
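
One way to picture a symbol as a “call code”: its “meaning” is the set of signals, external or internal, potentially connected to it. The particular signal names are invented for illustration.

```python
# A symbol is an abbreviated label; its "meaning" is the set of signals
# (primary, from the environment, or internal) potentially connected to it.
symbols = {
    "tree":    {"green patch", "vertical trunk", "rustling", "shade"},
    "sadness": {"low energy", "slowed speech", "withdrawal"},
}

def expand(symbol):
    """Internal feedback: the call code reinstates the connected states."""
    return symbols.get(symbol, set())

print(expand("tree"))
```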

An animal can be trained so that when it is shown two black dots, it eats just two pieces of offered food, and three pieces when there are three dots. But it cannot transfer this skill from the visual field to another. It will not react to two sounds of a whistle or two touches of a hand; it will react only after it is trained in the field of the other sense. But a human being, understanding the meaning of the signal “at once,” has no problem making that transfer. This ability to symbolize or generalize can be expressed by the formula, “If you perceive n signals through any of the five senses, perform n actions.” Of course, the formula need not be verbal. Words are attached to the symbolizing process only when the task is so difficult that it requires operating with large sets of signals, both primary and secondary. A simple transfer of an acquired skill (without symbolization) occurs directly through forming links between all sensory fields and the generalized directive, that is, the system of preferences that applies to any kind of stimuli. This system then becomes the network’s operating rule until new experiences inactivate it.
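
A sketch of the transfer rule just quoted (“if you perceive n signals through any of the five senses, perform n actions”): the generalized directive counts signals regardless of the sensory field through which they arrive. The modality labels are illustrative.

```python
def generalized_directive(signals):
    """Count the signals, whatever sense they arrive through, and act that many times."""
    n = len(signals)
    return ["take one piece of food"] * n

# The same rule transfers across sensory fields without retraining:
print(generalized_directive([("visual", "dot"), ("visual", "dot")]))           # 2 actions
print(generalized_directive([("auditory", "whistle")] * 3))                    # 3 actions
print(generalized_directive([("tactile", "touch"), ("tactile", "touch")]))     # 2 actions
```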

A network’s “intelligence” is a function of the maximum transfer rate of which the network is capable and not of its size. A network can have an enormous memory and still be unable to utilize it in the pursuit of a goal. In that case, the memory becomes useless ballast. By studying its own behavior, analyzing its past, and assigning symbols to responses, a network can “recognize” its operational rules, whereby it becomes “aware” of them and so can modify them “at will” (by recombining or supplementing their elements or by establishing altogether different directives). This “willed” change takes place quickly, unlike the change of a preference system due to external stimuli, which requires replacing one group of conditioned reflexes with new ones, which always happens in stages. Psychologists usually consider a quick change in a preference system a sign of intelligence, in contrast with the gradual change that is observed during training. But under some conditions a sudden change may take place that is not a result of the network’s “conscious” activity based on symbolization. It happens when the network attempts to reach a goal without any plan, by trial and error, and accidentally comes across the right combination of stimuli that, because of the success, is immediately preserved. A good example is a monkey given the difficult task of reaching a banana that lies outside its cage and provided with two sticks inside the cage, each too short for the purpose: by accident it inserts one stick into an opening at the end of the other and so obtains a tool long enough to reach the banana.

HYLAS What is a symbol that is not a word?

PHILONOUS A network usually works with operational units well before it can use verbal symbols. For a predator, for example, such an operational unit is the entire sequence of behavior that culminates in securing the food (catching the prey).

HYLAS I see what an “operational unit” means in animal behavior, but what is its equivalent in a network?

PHILONOUS A sequence of consecutive motor directives that form a higher-level whole, as notes form a melody. It is even called “a kinetic melody of movement.” This sequence is projected onto a changing background of external stimuli that continuously affect the network and are systematized by its preference systems. Thus a “kinetic melody” is not a constant, given once and for all, but is constantly shaped by feedback and the stimuli that originate from inside the network and form hierarchical sets that create the “dynamic spatial map” and the “map of temporal links” between consecutive actions. Even in a highly organized network like the human nervous system, similar units of object action (action performed with tools) appear well before the development of language. A symbol here may be a pose, a gesture, or an entire situation (pantomime). We should keep in mind that the role of language in an individual’s life, which is to facilitate adaptation through high levels of generalization and systematization of both internal and external information, is secondary to its primary role, which is to communicate with members of the same species. In this latter sense, language is a system of abbreviated stimuli containing sufficient information for one network to send to another clear “action directives” so that processes in both networks become similar (getting “in the same mood”).

HYLAS Why do you use this extremely complicated way of describing things that psychology already knows very well?

PHILONOUS Because I am forced to describe in words instead of providing mathematical formulas, which do not yet exist for the high-level organizational forms of network processes. Note that any behavior of an organism can be described either in words or by a corresponding neural network (a connection map, a “formal scheme”). Theoretically the two are equivalent. But if we want to describe all possible actions of a network—for example, how it can figure out that different triangles are all triangles—our verbal explanation will become extraordinarily long. The recognition of what makes a triangle a triangle, whether large or small, equilateral or scalene, is a tiny part of the larger issue of the “congruity of geometric forms,” which in turn is a tiny part of the still more general issue of shape similarity (visual congruence). Psychology’s description of animal behavior works well, because the responses there are relatively simple, but with the kind of tasks we have been discussing this method fails, because the outcome, that is, the organism’s behavior, is a very complex resultant of n processes taking place simultaneously in the network. For “visual congruence” to be explained, a description of the connection pattern in the visual center of the brain would probably serve better than any attempt at a verbal definition. It thus appears that the most practical description of an object is not the catalog of all its possible states, for that could take forever to assemble, but the object itself—in our case, the network itself. For this reason, the search for a logical definition of visual congruence may be futile. A map of neural connections in the network would be fully adequate and equivalent. This method is completely new, never seen in science before.

HYLAS How so?

PHILONOUS Logical reasoning is one of the functions of some networks. We can build models of networks that reason logically. A network whose function is to formulate a relation of the type “If p, then q” is more complex than the verbal function “If p, then q,” its formal-logic equivalent. In simple cases like this, a logical description (the sentence “If p, then q”) is always simpler than the equivalent network; in complex cases, it is the opposite: the logical description becomes much more complicated and incomparably longer than what is being described, that is, the network itself. Thus we find ourselves in a curious situation: the simplest logical description of a network is the network itself, and logic begins to turn into—or accrete with—neurology.
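
A minimal model, in the spirit of a McCulloch–Pitts threshold element, of a unit whose function is the relation “If p, then q”: it fires in every case except when p holds and q does not. The particular weights and threshold are one possible choice, not a prescribed one.

```python
def implication_unit(p, q):
    """Threshold element computing 'if p then q' (equivalently: not p, or q)."""
    # Weighted sum with an inhibitory weight on p and an excitatory weight on q.
    activation = -1 * p + 1 * q
    return 1 if activation >= 0 else 0

for p in (0, 1):
    for q in (0, 1):
        print(p, q, implication_unit(p, q))   # output is 0 only for p=1, q=0
```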

HYLAS Why neurology?

PHILONOUS Because neurology is, or at least thus far has been, the study of the neural networks of the brain.

HYLAS So you think that by the mathematization and mapping of network processes we can find an answer to the question regarding the difference between conscious and unconscious processes?

PHILONOUS That is the only rigorous way to proceed. I have already mentioned how difficult it is to study the mechanism of “visual congruence.” But that is child’s play in comparison with the processes of symbolization.

HYLAS What do we already, and really, know about this visual congruence, that is, about how we recognize shapes, objects, or letters to be the same when they appear in a great variety of sizes and forms and can be affected by perspective, lighting, and so on?

PHILONOUS I can only offer a hypothesis. The number of fibers in the optic nerve is smaller than the number of elements that the nerve is connecting, that is, the light receptors in the retina and the cells in the visual cortex (area striata). Similarly, there are fewer sensory fibers that project to the corresponding cortex analyzers than there are cells (acceptor neurons) in them. So a relatively large amount of information must be transmitted through a relatively small number of channels. How is this possible? There is an analogy with a television set, where we have only a single electron beam, which is so narrow that if it fell on the screen without movement, it would draw a point. The beam moves across the screen at high speed, covering its entire surface in a fraction of a second, zigzagging horizontally through every point of the screen’s surface. Because our eye cannot perceive changes occurring in less than one-sixteenth of a second, we see the TV picture “all at once.”5 The spatial receptor of the brain “scans” with a circulating “beam” the afferent sensory fields.6 In this way it is possible to transmit a large amount of information through a relatively small number of channels (to transmit two stimuli simultaneously we need two channels; to transmit two stimuli consecutively, one is enough). The amplitude of the rotating “beam” is largest when there are no signals at all. This corresponds to the so-called “alpha” rhythm in an electroencephalogram, the regular, sinusoidal peaks and valleys in the electric potential of the cortex, much as the electron beam of the television set fills the empty screen with white noise when no information is being sent. When a figure appears in the visual field, its spatial elements that remain fixed within one period of the scanning beam (one period of the “alpha” wave) are transformed into a time series, and in this way a spatial series of dots (the figure elements) is transmitted as a time series of impulses following one another. Thanks to this mechanism, a one-dimensional channel can transmit, via pulsating signals, an image of arbitrary spatial complexity.

However, such transmission has its drawbacks. First, the speed of reception is limited by the cycle time of the scanning beam. Signals shorter than this period give the illusion of movement, because they fall on different sections of the “alpha” sinusoid. Second, the scanning process requires constant activity of the cortex, an endless circular wave of processes. This spontaneous activity is detected as the essentially constant “alpha” rhythm of the brain’s biocurrents. If a signal lasts less than one-tenth of a second, it may fall into the “insensitivity” phase (similar to the refractory phase of a nerve, the temporary loss of excitability in a nerve fiber after transmitting an impulse). When perception is taking place, currents with various frequencies are summed, the “alpha” rhythm disappears, and fast-changing (“beta”) rhythms appear. This is confirmed by the observation that the time interval between stimulus and response in the visual cortex is not always the same, since the impulse arrives at a random moment: if it arrives just before the scanning beam reaches the cortical reception field, the interval is short; if it arrives a moment after the beam has left, it must “wait” until the beam returns after completing the entire “round.”
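
The multiplexing idea can be put in present-day terms with a minimal sketch, assuming nothing beyond serial scanning itself: a two-dimensional field is pushed through a one-dimensional channel by a sweep, and a change registers only as a difference between complete sweeps. The names and figures below are purely illustrative, not part of any neurophysiological model.

```python
# A sketch of serial scanning: a 2-D field is serialized into a time series
# through a single channel; a change is visible only as a difference between
# two complete sweeps. All names here are illustrative, not a neural model.

def scan(field):
    """Serialize a 2-D field (a list of rows) into a 1-D series of impulses,
    the way a single 'beam' sweeps every point once per period."""
    return [cell for row in field for cell in row]

def changed_between_sweeps(field_before, field_after):
    """A change registers only if two complete sweeps differ."""
    return scan(field_before) != scan(field_after)

field_t0 = [[0, 0, 0],
            [0, 1, 0],
            [0, 0, 0]]
field_t1 = [[0, 0, 0],
            [0, 0, 1],   # the element shifted between sweeps
            [0, 0, 0]]

print(scan(field_t0))                              # one spatial pattern, as a time series
print(changed_between_sweeps(field_t0, field_t1))  # True
```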

HYLAS But what precisely is this beam? In the television set it is a real stream of electrons in a vacuum tube.

PHILONOUS Obviously there is no such physical beam in the brain. The point is that the number of neurons is higher than the number of afferent fibers, that is, the channels through which information enters. What circulates in the brain is the pattern of sequential connections between the afferent fibers and the analyzer and the related changes in the excitability threshold. Imagine a man in a circular room who must read instrument displays on the wall. He cannot read all of them at once, so he walks from one instrument to another. He can notice a change in any display’s reading only if the change persists longer than the time it takes him to make a circuit of the room.

HYLAS Who is the reader in the brain?

PHILONOUS The higher-order processes. They are documented by changes in higher-frequency potentials in the cortex during perceiving and thinking. But it is difficult to isolate these processes, because each of them involves large areas of the cerebral cortex. There is “a little bit” of each process everywhere, whereas the more elementary processes tend to concentrate in the region of the sensory analyzers. So far we have described the transmission of impulses to the cortex, but transforming the train of stimuli into a perception requires a number of other processes. When we shine a light into a person’s eye, we first notice a jump in potential and a perturbation of the “alpha” rhythm in the ocular center (area striata), which then spreads into the surrounding secondary sensory fields of the cortex. This is where the network elements are located that enable visual “recognition” based on “visual congruence.” It is an extremely complicated process. Consider the recognition of a hexahedron. We can observe it from various angles and distances. From a single perspective projection of it, we can easily extract a system of equations that shows how it would look when viewed from any other angle. The network stores this system in its memory. The arriving impulses are then compared with the system stored in memory, and the moment they coincide, a “resonance” occurs, and the network “sees the hexahedron.”
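
Recognition by “resonance” can likewise be sketched minimally, assuming only a stored template and a threshold of agreement; the figures and names are illustrative inventions, not drawn from the text.

```python
# A sketch of recognition by "resonance": arriving impulses are compared with
# a stored template, and recognition fires once the agreement passes a
# threshold. Figures and names are illustrative only.

def match_score(arriving, template):
    """Fraction of positions at which the arriving series agrees with the template."""
    agreements = sum(1 for a, b in zip(arriving, template) if a == b)
    return agreements / len(template)

stored_hexahedron = [1, 0, 1, 1, 0, 1, 0, 0]   # a remembered impulse series
arriving          = [1, 0, 1, 1, 0, 1, 0, 1]   # a slightly different view

if match_score(arriving, stored_hexahedron) > 0.8:
    print("resonance: the network 'sees the hexahedron'")
```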

HYLAS First you speak about the appearance of a hexahedron, then about some equations. . . . What exactly is stored in the memory?

PHILONOUS Nothing except the ability to transmit a specific series of impulses. An equivalent of this ability is a set of connections that, when activated, reproduces the series of impulses. The mathematical expression of this ability is an equation, which can be written down, but logically the one is equivalent to the other, as we mentioned when we were talking about “visual congruence.”

HYLAS But can such a serial comparison between afferent impulses and memory data work in reality? Recognition that way would take an awfully long time.

PHILONOUS What I described was very much simplified. In reality, a small, partial match is already sufficient to initiate the organization of the visual field according to the directives of the memory. The visual memory is active and always tries to “impose” its “concept” on the processes in the area striata. Or you might say that it is “guessing” what is being seen. This is clearly visible in optical illusions, when the incoming information is insufficient, for example at dusk. The visual memory, attempting to organize the field of vision, keeps “proposing” various “possible alternatives” to the area striata, which is why, during an evening walk in the fields, you may be startled by what looks like a person but turns out to be a bush. What takes place in the neural network is not just a simple comparison of impulses but a comparative overlaying of extensive processes with a strong tendency to self-organize into specific dynamic structures. These processes occur rhythmically, in the form of neuron firings, but the electroencephalogram registers only the resultant of all the overlapping biopotentials.

Specific processes have characteristic frequencies. In the brain, some rhythms (such as the “alpha”) are privileged, which is why stimuli with a certain frequency may resonate with the cortical rhythms and increase their amplitude to such a degree that the person suffers an epileptic attack. Luckily, normal physiological stimuli do not have such a periodic character. Before the onset of convulsions, the person experiences emotions, most often unpleasant. This demonstrates a connection between emotion and the frequency of oscillations in the neural network. Because aural stimuli are uniquely capable of affecting the periodic processes in a network, music is important to human beings. During perception or spontaneous thinking, the main processes in a network are usually accompanied by others with higher frequencies, like harmonics. The tendency to form these harmonic frequencies depends on the “subjectively perceived situation” (the emotional mental state). When we stimulate the neural network with impulses at a frequency of twelve per second, frequencies of twenty-four per second may appear in the frontal lobes and frequencies of six per second in the lateral lobes. If a person’s mood changes, the ratio of these harmonics changes too. When the frequency of six per second dominates, the person experiences feelings so unpleasant that they are difficult to endure; at twenty-four per second, calm intellectual analysis of the optical illusion that the stimulus elicits becomes possible. When the subject of the experiment is told not to resist the light impulses, the lower harmonic of six per second begins to rise, and the subject is soon unable to continue the experiment. But if an analytical, intellectual atmosphere is maintained during the experiment (with the same periodic light stimulus), then the high-frequency harmonics increase. The slow rhythm usually appears only in dangerous or threatening situations and is associated with the corresponding feelings. It is important to note that the same stimulus frequencies evoke different emotions in different people, but always the same emotions in the same person. This demonstrates a marked individuality of the network processes, which is explained by the historical formation of the network’s “personality”—its preference systems, memory data, habits, and so on. One hypothesis on the meaning of these rhythms suggests that the “alpha” rhythm searches for visual perceptions and the “theta” rhythm (six per second) for “pleasant” feelings.
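
The harmonic ratios can be illustrated with a small worked example, assuming a recorded signal mixed from a twelve-per-second driving rhythm and its twenty-four- and six-per-second components; the amplitudes are invented for illustration, and the ratio is simply read off a Fourier spectrum.

```python
# A sketch of the harmonic-ratio observation: a recorded signal is treated as
# a mixture of a 12/s driving rhythm with 24/s and 6/s components, and the
# ratio of the two harmonics is read off a Fourier spectrum.
# The amplitudes below are invented for illustration only.

import numpy as np

fs = 256                        # samples per second
t = np.arange(0, 2.0, 1 / fs)   # two seconds of "recording"

signal = (1.0 * np.sin(2 * np.pi * 12 * t)    # driving stimulus frequency
          + 0.6 * np.sin(2 * np.pi * 24 * t)  # higher harmonic ("analytic" mood)
          + 0.2 * np.sin(2 * np.pi * 6 * t))  # lower harmonic ("unpleasant" mood)

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amplitude(f):
    """Spectral amplitude at the bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

ratio = amplitude(24) / amplitude(6)
print(f"24/s to 6/s amplitude ratio: {ratio:.1f}")  # > 1 here: the fast harmonic dominates
```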

HYLAS Do we know the purpose of the harmonic frequencies?

PHILONOUS In principle, yes. The higher and lower harmonic frequencies establish “long range” connectivity—between distant parts of the cerebral cortex. That is why they appear during thinking, especially hard thinking. They are a component of that process, just as the signals from the optical regions secondary to the area striata are a component of visual perception. We cannot say that they “mean” anything or have an equivalent in a person’s mental life. If we consider a mental process to be a whole, then the harmonic rhythms are part of that whole. We can record them but do not understand their significance, nor do we know what information or what directives they carry from one part of the brain network to another. Yet experiments might shed light on the meaning of these signals. If we intercept an encrypted message that one part of an enemy army sends to another, and we cannot decipher it, we can send it to its addressee and observe the reaction. The reaction will reveal to us the content of the message. Similarly, researchers are trying to record the “cipher” that one part of the brain uses for communicating with another; then they will send that “text” into the neural network at some other time and observe the network’s reaction to it.

HYLAS Have such experiments been conducted?

PHILONOUS Not yet. It is extremely difficult to isolate a single “encrypted message” between parts of the brain from the huge number of processes taking place at the same time.

HYLAS I see an even bigger challenge here: the “brain cipher” must be sent to a particular part of the network—and we do not know the addressee.

PHILONOUS That is not a problem, because, as far as we know, the parts of a neural network are not connected like telephones but rather like radio stations: the impulses of a message are propagated in all directions, but only the addressee of the message can make use of them. The “encryption” itself, together with its harmonic structure, unequivocally determines not only the content of the message but also its sender and destination.
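
A minimal sketch of such “radio-station” addressing, assuming only that every message is broadcast and that each part responds solely to messages whose pattern it accepts; all names are illustrative.

```python
# A sketch of broadcast addressing: every message reaches every part of the
# network, but only the part whose filter matches the message's structure
# acts on it. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class Message:
    pattern: str       # stands in for the "encryption" and its harmonic structure
    content: str

class NetworkPart:
    def __init__(self, name, accepts):
        self.name = name
        self.accepts = accepts           # the only pattern this part responds to

    def receive(self, msg):
        if msg.pattern == self.accepts:  # everyone hears it; only the addressee uses it
            print(f"{self.name} reacts to: {msg.content}")

parts = [NetworkPart("visual memory", "theta-coded"),
         NetworkPart("motor planning", "beta-coded")]

broadcast = Message(pattern="beta-coded", content="prepare articulation")
for part in parts:            # the broadcast reaches every part...
    part.receive(broadcast)   # ...but only "motor planning" responds
```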

I think that this should end our excursion into the field of cybernetically interpreted neurophysiology. The phenomena here are so enormously complex that even listing them would require many extensive, multidisciplinary investigations. Let us proceed to the next characteristic of a synthetically (holistically) considered network: free will and selfhood, issues so fundamental in philosophy. By “free will” we commonly mean the subjective feeling of freedom of action in response to a stimulus or a set of stimuli. In this process it is only the act of making a decision, of choosing a behavior, that is always conscious. Once a decision has been made, the resulting action can unfold automatically. It is enough just to have an intention, and its “initiation” is already a task for certain subsystems of the network, which are grouped into operational units. To pronounce a word, we do not need to consciously move the muscles of the tongue, larynx, and lips. It is enough to make the decision—“to push a button mentally,” so to speak—and the articulation takes place on its own. With such hierarchic “centralism” the field of consciousness does not need to control its subordinate network processes rigidly; it intervenes only when it is informed by internal feedback that an action is not unfolding as expected.

“Will” is therefore a combination of decisions with their anticipated (expected) results. These decisions, “proposed” by the network and its memory, represent an act of “comparing” the current situation with past ones (again, not in a literal and static sense but a dynamic one, as we saw in the cooperation between the primary and secondary visual cortices). A choice is represented by the “initiation” of a specific operational unit when that unit is injected into the network as a traffic rule (a preference system). It is equivalent to suppressing or blocking all information that clashes with the decision, thus steering the organism’s behavior. The decision begins at the moment when the cumulative charge of previous information in the network begins to interfere with the inflow of information that clashes with it. After the act of choice, this accumulated information serves to prop up the selected preference system against the incoming information. Even the simplest networks manifest “will” thus defined when they, like an automatic pilot, compensate for whatever diverts them from their goal. However, in such networks “will” is built in once and for all; complex networks can choose among various preference systems. It is possible for them to behave like a person who, “out of a sense of duty,” overcomes every pain, fear, and doubt in pursuit of a goal.
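
The simplest case, the automatic pilot, can be sketched as a bare feedback loop, assuming arbitrary numbers chosen only for illustration: whatever the disturbance, the built-in preference keeps pulling the system back toward its goal.

```python
import random

# A sketch of the simplest "will": a feedback loop that, like an automatic
# pilot, compensates for whatever diverts it from its goal. Numbers arbitrary.

def autopilot(heading, goal, gain=0.5, steps=20):
    """Repeatedly correct the heading toward the goal despite disturbances."""
    for _ in range(steps):
        heading += random.uniform(-2.0, 2.0)   # the pressure of the environment
        heading += gain * (goal - heading)     # the built-in preference: reduce the error
    return heading

print(autopilot(heading=0.0, goal=90.0))       # ends near 90 regardless of the disturbances
```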

In what sense is “will” free? First of all, it is free from the immediate pressure of the environment, because that pressure is countered with previously accumulated data and personal experience, introduced into the decision making by internal feedback. Without feedback from its past, the network’s behavior would be determined solely by the present situation and its external pressures. Such a network would be unable to choose an “independent” course and steer “against the flow” of events; it would drift with every current. Only inanimate objects drift, passively yielding to the influences of the environment. The ability to choose goals autonomously and to pursue them is considered a value in the “moral” sense. The material premise of such autonomy is the continued possession of one’s past, a past that can be activated as needed. This is why the inviolability of the internal feedback links, understood as the full functionality of the systems of connections with one’s memory stores, is so important. If a network becomes overloaded with an excessive inflow of external information and a sudden increase in the rate of learning to accommodate it, the network’s past becomes negligible compared with its present. It follows that a network acquires “personality” only when the learning process is spread out in time. When there are too many new experiences at once, a network begins to behave like an inanimate object: its past ceases to play an important role in its decisions. It loses the ability to make choices and begins to drift passively in the stream of events.

Clearly, the critical rate of learning, the amount of information that a network can absorb per unit of time without being overwhelmed, is an individual variable that depends mainly on what we termed the network’s wholesomeness. When previous life experience becomes unusable, an internal functional breakdown may follow unless the network can counter it with robust and resistant preference structures. People in the death camps were in this situation. Forced by circumstances to adapt and having lost the ability to counter those circumstances with past experience, people are capable of committing acts that are, in the terminology of various systems of ethics, “immoral,” “sinful,” or “inhumane.” This leads to the formation of new habits and behavioral patterns that may contradict a network’s entire personal history. This is “pathological learning,” in that it occurs at the cost of a network’s internal continuity or wholesomeness. Similar phenomena appear, with less intensity, when a network loses its ability to learn freely. Freedom of learning is characterized by the rate of assimilation of new information, which must be such that a network can preserve, at every stage, its past and its wholesomeness. When the inflow of new information is too large and the network loses its freedom of learning, it becomes incapable of assimilating new experiences and connecting them with previous ones, and so is no longer able to respond adequately to changes in the environment. Then we often say that such a person “lags behind” or “cannot keep up with” the times.
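
The dependence on the rate of learning can be sketched with a toy blending rule, assuming invented numbers: with a low rate the accumulated past keeps its weight, while with a rate near one the past becomes negligible and the state merely tracks the latest input.

```python
# A sketch of the rate of learning: the state is a running blend of the past
# and each new experience. With a small rate the past keeps its weight; with
# a rate near 1 it becomes negligible ("drifting"). Numbers are invented.

def assimilate(past_state, new_experience, learning_rate):
    """Blend a new experience into the accumulated state."""
    return (1 - learning_rate) * past_state + learning_rate * new_experience

experiences = [5.0, -3.0, 4.0, -4.0, 6.0]        # a volatile stream of events

for rate, label in [(0.1, "slow learning, past dominates"),
                    (0.9, "overloaded, present dominates")]:
    state = 10.0                                 # an established "personality"
    for x in experiences:
        state = assimilate(state, x, rate)
    print(f"{label}: final state {state:.2f}")
```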

The set of preference systems that decide a network’s behavior constitutes its “personality.” “Personality” may remain intact even when, as a result of an external force, a network’s effectors are temporarily blocked. Enslaved by external forces, such a network maintains its internal wholesomeness or personality, like a ship that, carried off course by the storm, still has a rudder, or like Hamlet, who, imprisoned in a nutshell, nevertheless “counts himself a king of infinite space.”7

A network’s personality changes a little with each new experience and each new decision. Thanks to its past, the network is not absolutely subordinate to the current, transient situation; thanks to the preserved possibility of further learning, it is not entirely dependent on its past either. Its internal transformations in response to new external needs thus occur through the push and pull between its present and its past. “Internal freedom” manifests itself precisely in this relationship. A network that has formed a rigid preference system and does not confront it with new information exhibits in its behavior “rigidity,” “fanaticism,” and “obstinacy.” It too loses its freedom, like a network subjected to excessive external force, except that the constraining force comes from within rather than from without. Every network typically passes through a period of optimum learning and assimilation of new information, after which that ability diminishes: there is less memory capacity, and the preference structures that have dominated most often tend to ossify. This is the origin of “conservatism.”

HYLAS What about the predictability of the network’s behavior? Do you think that, having at hand your proposed mathematical network theory and knowing a network’s present state (that is, having data on all its past experiences and preference systems), we could predict how it will act in a new situation?

PHILONOUS Not in the case of a neural network.

HYLAS Why? Heisenberg again?

PHILONOUS It is true that incalculable (in the Heisenbergian sense) atomic fluctuations may in some circumstances affect behavior (through a momentary change in the transmissibility of a particular stimulus), but we need not look here for a quantum cause of this indeterminism. Keep in mind that the factors that influence processes in a neural network are innumerable. The excitability thresholds of the synapses, that is, the rates of impulse transmission, depend on, among other things, body temperature and blood chemistry, both of which fluctuate constantly. These negligibly small oscillations can together exert a nonnegligible effect on the course of a mental process. A microscopic deviation at the beginning of an impulse’s path may grow significant by the end of the path, which may mean a different decision, an unexpected association, a “spontaneous” reaction. This is impossible to predict. Moreover, a network’s decision may depend on the “time when the idea occurred,” on an accidental, “random” memory record that suddenly “resurfaces in consciousness,” and so on. And some networks can manifest an unexpected violation of their habitual preference system. In some fields a network’s “value” may actually depend on such violations, for example in the creative work of an artist, engineer, or musician. But spontaneity, understood as the ability to abandon old preferences and create new ones, is not the only criterion of creativity. Spontaneity must also serve to maximize the richness of new configurations of internal elements. Unexpected behavior in itself does not make an artist—the person may simply be a crank. Someone with a vast memory may be erudite and nothing more. The breaking of an old preference structure, plus “internal richness,” plus the ability to organize the internal elements into an entirely new structure in a wholesome way—all three conditions must be met to call a network creative. In any case, a new configuration can arise only from elements that are already present in the network. Information is thus acquired in two ways: first, externally, and second, through the internal recombination of symbols into a configuration that has never before appeared in the network. “Internal richness,” “wholesomeness,” and “free will” together manifest “character.”
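
The amplification of microscopic fluctuations can be sketched with a toy chain of elements, assuming invented parameters: each stage adds a minute drift and amplifies the deviation, so the same stimulus need not yield the same decision twice.

```python
import random

# A sketch of the unpredictability argument: a chain of elements adds a minute
# fluctuation at every stage and amplifies the deviation, so the same stimulus
# need not yield the same decision. Parameters are invented for illustration.

def decision(stimulus, noise=0.001, depth=20, gain=2.0):
    """Pass a stimulus through a chain of drifting, amplifying elements."""
    x = stimulus
    for _ in range(depth):
        x += random.uniform(-noise, noise)   # temperature, blood chemistry, ...
        x = 0.5 + gain * (x - 0.5)           # amplification along the impulse's path
        x = min(max(x, 0.0), 1.0)
    return "acts" if x > 0.5 else "refrains"

print([decision(0.5) for _ in range(5)])     # the same stimulus need not repeat its outcome
```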

Being “free” in the course of its decision making, a network is responsible for all its actions and for its personality, which has been formed by all its past decisions since the network’s inception. No decision was fully predetermined, and each could have been different (among other reasons, because of “chance”). In each case, the probability of such “chance” is small but nonzero. Because a network consists of a huge number of elements (about 10 billion in the case of a human being),8 it never happens that all the processes and elements participating in two instances of the same situation are exactly the same. This increases the element of randomness. Tradition holds that an actor is responsible for each of his “free acts,” yet absolute freedom does not exist. It is a limiting value and therefore unattainable. What is constant in the network is its past, its personal history. Every decision, every step on its life trajectory, arises from the friction between its history and its present. Its history is the accumulation of all its previous decisions, and it constitutes its individuality. Its present is a choice determined partly by that individuality and partly by chance, owing to the statistical nature of the network processes. In this sense, every decision contains some randomness and may be more or less predictable but never certain, which makes it free. The longer a network exists, the greater the pressure of its past and the less the freedom of its decisions. Yet its individuality is frozen only in that final experience, death. Before then, a network is always free in its behavior, albeit to an ever-diminishing degree, and loses this freedom only at the moment of death. And this, my friend, is all I have to say about life and death observed from the cybernetic point of view.

HYLAS You have given me much food for thought. But what about the prospect of cybernetic resurrection that you mentioned?

PHILONOUS I meant not resurrection but the continuation of personal existence after an organism’s death.

HYLAS Aren’t they the same?

PHILONOUS You will see that they are not, Hylas—but not today.