Biology and Values*

Introduction: Values and Aims

When we speak about values, we understand them either as facts or as parts of relations. In the first case, we say that something is a value; in the second, something has value. The first way of speaking makes the value absolute. If I say that X is a fact, and X is the distance from the Earth to the Sun, I am claiming a truth that is independent of circumstances. Similarly, if I say that X is a value, my claim does not depend on anything else. If a value is a fact in this sense, its confirmation is nonempirical. There is no procedure that can determine whether the statement “X is a value” is true or false. An assertion that establishes a value in absolute terms therefore does not belong to the language of empiricism because it is undecidable: we can neither falsify nor verify a statement like “Virtue is a value” or “The human is a value.” On the other hand, if I say that X has a value, I am uttering not a whole sentence but only part of one, because X then stands in some relation—of subordination, suitability, usefulness—to what is not X. The value that X has is given by the extent of its connection to the occurrence of some Y.

Values are autonomous and absolute in the first sense but not in the second, when they are usually called instrumental. Many statements about autonomous values can be reformulated such that they become falsifiable statements about instrumental values (in considering how society benefits from the virtue of its members, we can reformulate the statement “Virtue is a value” to “Virtue has a value”—for example, the value of stabilizing social relationships). Both kinds of statements imply duty: the first does it categorically, peremptorily, in the form of commands regarding attitudes or behavior consistent with a given axiological thesis; the second does it conditionally and relatively, because only someone who believes that social processes ought to be stabilized will want to implement virtue in communal life, and only someone who intends to build a house will want to scrutinize the instrumental values of various construction plans.

As can be seen from the above, autonomous values are treated as if they were ahistorical facts or even eternal truths. No axiologist would say, “Virtue has been a value since the year 1456” or “Justice was a value between 1366 and 1890.” But if someone did say that, he would actually mean “Some people behaved virtuously at that particular time,” and it is immediately clear that he committed hypostasis when he objectified behavior that occurred in a specific, closed time interval in the past.1 Instrumental values are time-dependent and historical: the instrumental value of flint quartz used to be considerable, while today it is zero. If some things seem to have an instrumental value that is unrelated to any technology (e.g., the atmosphere), it is because they have been “permanently included in the technology” of the human organism and are necessary to support life. Yet it is clearly an instrumental value: if all life on Earth were annihilated, the atmosphere would lose its instrumental value, since there would be nothing and nobody to whom it would serve as a supporting medium.

Because the values of the first kind are established and those of the second kind are uncovered, the former, being dependent on resolutions or agreements, are firmly anchored in ontological viewpoints, whereas the latter can be reduced to a form of pure ontological neutrality, which is typical for empiricism. Ultimately, autonomous values are also relative—but only with respect to a given ontology: they are what it says they are.

Every axiological argument is in some way connected to teleology. By teleological behavior we mean that a system inevitably (or only facultatively) proceeds from the current state to certain future states, which thereby become aims. Every description of teleological behavior translates into the language of regular causal description, assuming complete determinism. The reason is that there is no empirical method that can differentiate between what occurs inevitably because it is a causal consequence of a past cause and what is an aim or final state at the end of a defined path that also had a specific starting point. We can say either that material systems proceed to states of higher entropy because disorder increases or that the aim of every system is a state of maximum entropy. Yet using the term “teleology” in the latter case is not appropriate because of the principle (nota bene empirical) that teleological terms are used only in the former sense, and only an aim whose achievement is not tautologically identical with causal determination can be an empirical term. In other words, an aim is real only when the possibility exists that it will not be achieved. If there are processes that seem to progress toward specific final states even against opposing forces of the environment, and reaching those final states can be confirmed by experiments, then the physically measurable deviations from the path that leads to the goal and the distance from the goal are subject to instrumental evaluation. What increases deviation from the path acquires a negative instrumental value; what helps the process stay on the path acquires a positive instrumental value.

Measurable values thus appear where real aims exist. By real aims, I mean only those that are not always achieved. A shooting target is a real aim, but the entropic finis mundi is not.2 Axiology can be empiricized only to the extent it can be ontologically neutralized.

I. Axiology and Physics

From Laplace’s deterministic point of view, there is no informational difference between knowing the past and knowing the future: both knowledges can be equally perfect, that is, complete. Laplace’s demon, possessing such knowledge in terms of the actual states of all the atoms, can describe in the language of physics the behavior of stars, amoebas, and human beings, whether retrospectively or predictively, while totally omitting all axiological terms. The shooter may not know that he will hit the flying pigeon, and Romeo may not know that he will meet Juliet, so for the shooter and Romeo the pigeon and Juliet are real goals, and it is the difficulty of achieving them that gives them value. But Laplace’s demon knows that the shooter will hit the target even before he takes aim and also knows what fate awaits the two lovers even before they set eyes on each other. The values inherent in human choices and the goals that human beings strive to achieve are fictitious for the demon. The difference that the demon sees between the goals of the shooter and those of Romeo is just that relatively little knowledge about the atomic distributions is needed to predict the outcome of the shot, but a large amount of knowledge is needed to predict the outcome of the star-crossed love. Yet in reality whether the target is hit and whether the lovers meet is not determined by any specific sequence of events that was initiated at the shooting range or on the balcony. The cause of these and all other events in the universe is the original atomic distribution in the primeval nebula; everything that happens afterward is the completely predetermined consequence of that distribution. For the demon, possessing knowledge that is ultimate because no greater can exist, there are no goals and no values other than fictitious ones, because, as I have already said, a real goal is nothing but a future state, the achieving of which is indeterminate. The pigeon is a real aim for the shooter because the shooter does not possess the optimal knowledge. If full determinism rules, then everyone who believes he is taking aim is like a streetcar passenger who thinks he is making decisions about the car’s destination. Of course, the destination does not depend on his “decisions” at all, and so his chosen goal is not real. When we say that Mr. X has chosen a certain goal, we understand that he has selected it out of all possible goals because of its value. Laplace’s demon uses no teleological or axiological terms in his description of Mr. X. Mr. X is thus like the streetcar running on the rails to all his future states, totally predetermined, except he is not aware of that. The values he is talking about are nothing but a manifestation of his lack of knowledge about states of himself and of the environment. Values are then, from the demon’s viewpoint, false hypotheses about the causes of Mr. X’s behavior. The physicist will know that Mr. X must behave as he does, that Mr. X does not really have a choice, and that his decisions are as fictitious as those of a streetcar going through a preset switch, although Mr. X believes that he has a certain amount of freedom to make decisions according to his system of values. In extreme determinism, goals and values are just putty filling the gaps between the consecutive states that constitute a diachronic trajectory. Values and aims are epiphenomenal illusions, like the daydreams of the streetcar that it is not a streetcar at all and does not run on tracks.
Physical knowledge is therefore the only possible kind, and at the same time is ultimate, because by filling the gaps in the description of a system’s successive states it erases every trace of value or valuation.

But this “streetcar” determinism is itself an illusion, with no equivalent in reality, even for physics itself. Following modern physics, I posit that complete determinism is just an ideal limit of real states and that the links between those states have a stochastic character, which may often be disregarded in practice. It can be omitted in the prediction of a lunar eclipse but not if we study electron diffraction. Mr. X’s behavior, like that of any stochastic system, such as a die in a game of dice, can be described using the mathematical language of probability (e.g., a Markov chain).
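
To fix the idea, the probabilistic description mentioned here can be sketched in a few lines; the two states and the transition probabilities below are invented purely for illustration, and the point is only that such a description records frequencies of transitions, not values or goals.

```python
import random

# A toy sketch: "Mr. X" reduced to a two-state Markov chain. The states and
# the transition probabilities are invented for illustration only.
states = ["stays home", "goes out"]
# transition[i][j] = probability of moving from state i to state j
transition = [
    [0.7, 0.3],   # from "stays home"
    [0.4, 0.6],   # from "goes out"
]

def simulate(steps, start=0, seed=42):
    """Generate one trajectory; each step depends only on the current state."""
    random.seed(seed)
    state = start
    trajectory = [states[state]]
    for _ in range(steps):
        state = 0 if random.random() < transition[state][0] else 1
        trajectory.append(states[state])
    return trajectory

print(simulate(10))
```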

A homeostat is a system that maintains its stability despite perturbations. It is therefore a system whose equilibrium is stable, although it does not have the highest thermodynamic or statistical-mechanical probability. The difference between a homeostat and a nonhomeostat is of the same kind as the difference between the deterministic and indeterministic behavior of an object. Descriptions of atomic particles need to take into account their quantum-indeterministic properties to be of any predictive value. As the number of particles increases, such that they finally constitute a macroscopic object, we are increasingly justified in omitting the quantum aspects and utilizing the apparatus of classical physics. The extent to which we can omit an object’s quantum properties depends on its dimensions, which, in turn, depend on the number of elementary particles participating in the “event” of their variably stable “encounter” that constitutes the object, and on the degree of order the particles exhibit while forming the object. The set of all possible arrangements of the particles contains a subset of systems that we call homeostats. But that subset does not have clear boundaries, because, depending on what parameter coupling arises as a result of a given order, the object will exhibit various degrees of stability with respect to various kinds of effects. The more stable its equilibrium and the higher the number of perturbations that will not push the system out of balance, the higher the extent to which the system is a homeostat. Stability in itself is a necessary condition but not a sufficient one. An isolated, cold celestial body is in an equilibrium that is stable both mechanically and thermodynamically, but it is not a homeostat; any mechanical or thermodynamic action will push it out of that equilibrium, because it does not resist perturbations but “embraces” them. A planet with oceans and an atmosphere will behave like a homeostat in terms of surface temperature, because an increase in insolation causes an increase in water evaporation, and the resulting clouds increase the planet’s albedo.3 As a result, more of the incoming radiation is reflected into space, and the surface temperature does not rise to the degree it would if this parameter coupling did not exist. In astrophysics, we could, in principle, speak of planets as better or worse homeostats; that it is not done is not because it is physically incorrect but only because such thermal compensation work of a planet is inconsequential for its physical evolution. (Unless we study planets with regard to their ecospheric adaptation to produce life: in that case it makes sense to say that some planets are better or worse in their suitability for becoming cradles of biogenesis.)
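
The planetary example admits an equally toy numerical form. In the sketch below all constants are arbitrary and the linear "radiative balance" is a crude stand-in, but it shows how the evaporation-cloud-albedo coupling damps the warming that the same increase in insolation would otherwise produce.

```python
# A toy sketch of the parameter coupling described above: higher insolation ->
# more evaporation -> more clouds -> higher albedo -> more radiation reflected ->
# a damped temperature rise. All constants are arbitrary.

def equilibrium_temperature(insolation, coupling, base_albedo=0.3, steps=200):
    temperature = 280.0                       # starting surface temperature, K (arbitrary)
    for _ in range(steps):
        # albedo grows with temperature, standing in for increased cloud cover
        albedo = min(0.9, base_albedo + coupling * max(0.0, temperature - 280.0))
        absorbed = insolation * (1.0 - albedo)
        target = absorbed * (280.0 / 240.0)   # crude linear stand-in for radiative balance
        temperature += 0.1 * (target - temperature)
    return temperature

for coupling in (0.0, 0.01):
    low = equilibrium_temperature(343.0, coupling)
    high = equilibrium_temperature(400.0, coupling)
    print(f"coupling {coupling}: the same rise in insolation warms the surface by {high - low:.1f} K")
```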

In a given environment at given values of pressure, temperature, chemical composition, and so on, there exist states of material ensembles that can persist without significant changes with no energy input, but there also exist states that do require energy for that purpose. The ensembles that manifest permanence or “self-preservation” without input of energy are not considered homeostats. A stone block, a cast-iron sphere, a stool, or a diamond are not homeostats even though they can maintain their structural and material identity for a long time. An oil droplet suspended in alcohol is also not a homeostat. The energy input that a homeostat requires indicates that maintaining an invariant structure, that is, keeping the values of a group of parameters within a specific range, requires work. Yet a steam engine is not a homeostat either, because it requires energy that it cannot obtain on its own. A steam engine that could search for fuel and repair itself when it breaks would be a homeostat. It would have to have various information sensors, since searching for fuel requires good orientation in the environment. If instead of the steam engine we built a photosynthetic machine, most of the sensors would be superfluous, and the machine could be stationary, because the sun shines everywhere on Earth, albeit intermittently. Any green plant is such a homeostat.

Artificially constructed homeostat models differ from natural homeostats in that they do not function in the full self-preservation range that a regular terrestrial environment requires. The models are narrow-range homeostats, perhaps just quarter-homeostats. These apparatuses model only selected functions of the systemic parameter stabilization, like Ashby’s homeostat, whose equilibrium is not truly self-preserving. A truly self-preserving homeostat will respond to attempts to damage it as a dog does, either by fleeing or biting its attacker. Ashby’s homeostat can be destroyed in a number of ways, and it will not defend itself, lacking the appropriate structural-functional features.
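
The ultrastable mechanism behind such a device can be sketched very simply. The following toy model is a simplification of the principle, not a reproduction of Ashby's four-unit machine: one "essential variable" is kept within limits by blind, random step-changes in the system's own parameter whenever the variable escapes those limits.

```python
import random

# A minimal sketch of ultrastability (all numbers invented): while the essential
# variable stays within its limits, the system keeps its current parameter; when
# the variable drifts out of bounds, the system makes a random step-change in
# that parameter and tries again.

random.seed(1)

def run(steps=500, limit=10.0):
    essential = 1.0
    parameter = 0.2            # deliberately destabilizing at the start
    step_changes = 0
    for _ in range(steps):
        disturbance = random.uniform(-1.0, 1.0)
        essential = (1.0 + parameter) * essential + disturbance
        if abs(essential) > limit:                  # essential variable out of bounds
            parameter = random.uniform(-1.5, 0.5)   # blind, random reconfiguration
            step_changes += 1
    return step_changes, essential

changes, final = run()
print(f"random step-changes: {changes}, final value of the essential variable: {final:.2f}")
```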

If a system is a homeostat, some states of the environment will facilitate its maintenance and others will make that maintenance more difficult or even impossible. Being beneficial, the former environmental states will have a positive value for the system, and the harmful states will have a negative value. Here is where the theory of homeostasis diverges from physics. A physicist does not care whether the system being studied is destroyed or not; he observes and describes its physical states—with stable or unstable equilibria—but does not assign them any value. He will not say, for example, that stars that spend their nuclear energy more economically and therefore shine longer are better than stars that exhaust their energy quickly. Instrumental axiology arises in the description when the processes taking place in the homeostat start to be compared with a state that is set as an ideal that ought to be preserved.

When a comet’s head enters the Earth’s atmosphere and, as a result of the increasing temperature, the ice in it forms a gas pillow that has a braking effect so that the rocks from the comet’s nucleus land on the ground unshattered, we do not say that the comet acted “appropriately” and “saved itself.” But when an astronaut in free fall, before he enters the atmosphere, uses foam plastic to create a shield that allows him to land safely, we do say that he acted “appropriately” and “saved himself”—because we know that the comet could not have behaved differently while the astronaut could have. What is the operational difference between the necessity of behavior and the possibility of behavior? The “soft landing” of a comet is highly improbable. To increase this probability the comet would have to be modified. All its frozen gas would have to be shifted to the front. Except that we do not know the comet’s orientation at the moment of entry. The rate at which the comet’s ice turns into gas is also important for a soft landing. Taking all of this into account, specific pieces of the cometary matter would have to be rearranged in a nonrandom manner. In other words, the comet would have to be turned into a system with higher order than it had previously. The chance of a soft landing could be increased further if we equipped the comet with a sensor, a radar that could compute its distance from the atmospheric envelope, so that the comet could orient itself based on the data from the sensor. The computation could be done by a computer on the ground and transmitted to a receiver on the comet by radio. Or the computer could be on the comet itself. This is how we could restructure the comet into a self-orienting apparatus that optimizes its flight trajectory.

But it is still a deterministic system, albeit one with a probabilistically selected range of behavior. After the comet’s soft landing, nothing would prevent people from smashing it to pieces, say, with a hammer. To protect it against that destruction, we would need to amend its landing program and equip it with additional sensors and effectors. Since not everything around the landing site would present a threat to it, the information from the sensors would need a discrimination filter, which means that the “comet” would need a kind of perceptron. After many additions like this, we would eventually obtain an apparatus able to make decisions, on the basis of preprogrammed instructions and personal experiences, to preserve itself. The system would gradually, step by step, change from deterministic into probabilistic and ultrastable. We have just described a series of transformations that change the comet into an object that is increasingly more like the astronaut. Obviously, unlike him, it could not talk or have children—but this is just a matter of additional reconstructions to bring what used to be a comet to still higher levels of complexity.

The questions of when and how instrumental values arise can be answered like this: the difference between the presence and absence of axiology, and equivalently, between the presence and absence of a real goal, is none other than the difference between a bald head and a mane. When a rock falls in a gravitational field, we do not say that it has chosen to increase the speed of its fall over time. But when a virus approaches a cell, our interpretation is not as straightforward. On one hand, the virus’s behavior is nothing more than ordinary catalytic reactions between large protein polymers, yet we say that the virus is “attacking” the cell, parasitically harnessing its energy and structural material for its own reproduction. We may accept that the virus definitely does not make decisions in an axiological sense, and that a bacterium doesn’t either, but we may hesitate at the level of an amoeba or, if not there, at the level of an annelid, or still further up. Because basically it comes down to this: if we understand the model of a homeostat’s functioning as completely as we understand a functional scheme of, say, an electric doorbell, then for us the “making of decisions” will be replaced by feedback and causal relationships, and “functional aims” will be replaced by probabilistic chains, which in limiting cases (a mouse, a monkey, a human) will acquire the status of models of the homeostat’s environment. “Values” will then be specific relations between physical states that statistically determine a system’s behavior. What are these relations? They are not material or energy relations, such as the relation between a supporting foundation and a supported wall, but informational relations, which are not objects but facts (just as the distance between a prey and its predator is a fact but not an object).

Information in the physical sense, as a measurable quantity, belongs to subjects studied by thermodynamics and statistical mechanics. But how does physics view information in the logical sense? As a representation. And what is, physically, the operation of representation? Suppose tribesman A eats spoiled goat meat and dies. Tribesman B eats unspoiled goat meat—and dies too. We can describe what happened to the first person in a completely physical way. But what happened to the second cannot be described in the same language, because that language does not have terms permitting the statement that goat meat is taboo for tribesman B, and when he discovered that he had accidentally eaten goat meat, he died from the shock. An objective, physical description would note shock as the cause of death but would not include the relationship between the eating of the meat and the death.

Tribesman B died because he assigned an extremely negative value to the consumption of goat meat. What in the physical description could correspond with the assignment of a negative value—a taboo? It would seem at first that nothing could, but in fact this assignment corresponds with a series of physical events that resulted in tribesman B’s being suitably preprogrammed during his personal history. It is a question of coupling semantics with physics. It may be that no available physical test can find any difference between tribesman A and tribesman B. Yet the two are not truly isomorphic, since one died because the consumed meat was poisonous and the other died although the meat was not.

The propensity of dying from the consumption of the taboo meat is a trait of tribesman B that we could predict with a certain probability if we knew all his past states and, in addition, all the past states of his cultural group, going back to the first anthropogenic transformations that led to the rise of language. The idea that improvements in the physical description of the brain of a person who spoke a certain language would ever enable us to uncover the semantics of that language is wrong in principle. A perceptron is a much simpler device than a brain, but no matter the level of detail to which we disassemble or otherwise dissect it, it will never tell us what geometric figures it has learned to recognize. At the start of the learning, the perceptron’s elements were connected randomly and only chance determined which elements in what configuration were excited by which stimulus; the excitation pattern was random at first and only after a series of repetitions (“learning”) gradually became an invariant behavioral feature of the device. Yet there is nothing either in the device or in the geometric shapes it is learning to discriminate that would make all perceptrons that recognize the same shape assume an identical structure. Similarly, there is nothing in objects or in the names given to them that could tell us why a particular word represents a particular thing. The reason is that linguistic representation is logical, not physical. As we saw in the perceptron example, however, the process that actualizes this logical representation is a totally ordinary physical process. The phenomenon is always the same, although at different levels of complexity—in perceptrons, in the communication of bees, in the linguistic anthropogenesis. At the beginning, we have unconnected distributions of possible “designates” and possible “names” for them; through multiple stochastic processes of “reaching out” and “mutual fitting,” nonrepresentational randomness turns into nonrandom representations. Because these are probabilistic phenomena that eventually arrive at states that are stable (a working language or a perceptron), this is an ergodic process.
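
The passage from randomly connected elements to an invariant discrimination can itself be sketched in a few lines. The two 3 x 3 "shapes," the learning rate, and every other detail below are invented for illustration and have nothing to do with Rosenblatt's actual hardware; different random initial weights end in different final configurations that nevertheless make the same discrimination.

```python
import random

# A toy perceptron sketch (invented patterns and constants): the weights begin as
# random connections, and only repeated presentation with corrections ("learning")
# turns the initially random excitation pattern into an invariant response.

random.seed(0)

vertical   = [0, 1, 0, 0, 1, 0, 0, 1, 0]   # a 3x3 vertical bar, target output +1
horizontal = [0, 0, 0, 1, 1, 1, 0, 0, 0]   # a 3x3 horizontal bar, target output -1

weights = [random.uniform(-1, 1) for _ in range(9)]
bias = 0.0

def respond(pattern):
    activation = sum(w * x for w, x in zip(weights, pattern)) + bias
    return 1 if activation >= 0 else -1

# classic perceptron rule: adjust the weights only when the response is wrong
for _ in range(100):
    for pattern, target in ((vertical, 1), (horizontal, -1)):
        error = target - respond(pattern)
        if error:
            for i, x in enumerate(pattern):
                weights[i] += 0.1 * error * x
            bias += 0.1 * error

print(respond(vertical), respond(horizontal))   # expected after training: 1 -1
```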

We thus arrive at the following picture: As long as the “mutual fitting” between the ergodic of the facts and the ergodic of the behavior (of a perceptron, a bee, or a human) has a purely physical character, it is not a logical representation. When eventually the representation becomes logical, the physical link that actualized it disappears. Meaning as a physical state is determined logically, but it arises through successive filtering of a set of possible names—as specific configurations “fitted” to the set of possible designates. What “supplies energy” to this mutual fitting of the two ergodics and narrows the initial random distributions of events (behaviors) to names is a derivative of the process of adaptation. (The naming is only simulated in the perceptron, since the device “is not interested” in recognizing geometric patterns, which is precisely why such a device must be constructed artificially. But the bees are definitely “interested” in having a signalization system for communicating the localization of a food source, and the human group that has undergone socialization is interested in having a code of societal regulation.)

When randomness becomes regularity, semantics arises as an invariant. It follows that the meaning of a “taboo,” as with any linguistic meaning in general, cannot be uncovered by dissecting a brain, because it is a search for something that has not had the form of a physical phenomenon (a linguogenic ergodics) since prehistoric times. What we can observe is only the dynamically stabilized effects of causes that are long gone; it would be as if a physicist claimed that a rock falls from his hand not because he released it in the gravitational field but because the rock fell like this before and is just “remembering” that behavior now.

So it appears that “physicalization of culture” will forever remain a utopia. But if it were ever somehow accomplished, values would become “superfluous” entities (entia praeter necessitatem) in the understanding of Laplace’s demon.4

The following statement by Rosenblatt, the perceptron’s creator, is well known: a perceptron whose elements are connected totally at random can start functioning (discriminating shapes) right away only if the number of its elements is infinite.5 Thus only an “infinite perceptron” can dispense with the process of organization. The higher the degree of its initial organization, the fewer elements it will need, and therefore the simpler it may be. (This is called the self-organization theorem.) This enormously important assertion, however, cannot be directly used in biology because neuronal systems are neither simple nor hierarchic perceptrons, and the (genetic) preprogramming of their subsystems is subject to more than a single rule. But the theorem, if suitably reformulated, can be used for the classification of typical evolutionary decisions regarding the construction of systems through the narrowing of their “informational preregularity” into the “limiting range of patterns” that either an individual—using his brain—or a species—using gene populations—can learn to recognize. Speciation too is a kind of “learning.”6 The self-organization theorem can shed light on the limits of information resolution of a brainlike system. I am hinting here at the possibility of using perceptron technology in the realm of epistemology.

The axiological evolution as a correlate of biological evolution is hard to simulate, since what we are currently able to model is usually trivial. One might say that we get from our models only what we put into them, so epistemologically such experiments smack of tautology. We are unable to reproduce self-organization that would turn an axiologically neutral system into an axiogenic one. A homeostat that searches for electrical outlets when its batteries run low has not yet emerged in a kind of electrical evolution; its behavior, determined by the preprogramming built into its structure (the schematic of its circuit connections), is therefore as deterministic as that of the streetcar, except the homeostat carries the “restricting tracks” within itself. A homeostat that would be a true axiogenerator is one that could proceed from a state of axiological zero, which only recognizes ordinary physical objects, to a state of a specific, “self-centered,” self-preserving knowledge (of what “needs to be done” to maintain activity). A homeostat that would find out “on its own” and “quickly” what it needs to do to survive is precisely Rosenblatt’s infinite perceptron. But for a living homeostat the entire world is the screen on which various shapes are displayed, and so its environment is extraordinarily complex. (It is a matter of complete chance which one of the infinite perceptrons learns to distinguish triangles from squares, which one the green of grass from the red of wild poppies, and which one self-preserving behavior from self-destructive behavior.) But it is enough if a single one of those perceptrons out of a quadrillion or sextillion survives—and with it the principle of its functioning—and we have the beginning of a regular, natural evolution understood as homeostasis. As we can see from this limited example (which does not attempt to re-create the whole biogenesis), all the “half,” “quarter,” and even “three-quarters” homeostats quickly die out in random environmental perturbations; only the truly “self-centered” and universally surviving systems remain, thanks to the “learned” procedures of self-preservation. This is the essence of the principial gap between physical objects, which are neither value-producing nor subject to valuation, and homeostats, which are “doubly axiological”—doubly because they demonstrate values (in their selection behavior) and because they themselves are subject to valuation (as better or worse homeostats) according to the criteria for self-preserving instrumental efficiency.

Organisms that would have to learn from scratch a certain minimum of functions to survive cannot exist in nature. The biosphere learned this important lesson through its emergence. Species differ mostly in how their information capacity is distributed throughout their bodies. All physiologically active tissues in insects are informationally almost invariant, which means that at birth their hemocytes and neural ganglia already “know what to do.” In contrast, mammals are anisotropic in the distribution of this knowledge: a suckling’s white blood cells are as saturated with information as those of a philosopher, but a suckling’s brain is practically a vacuum when compared with that of a philosopher (or any adult, for that matter). Yet all organisms are homeostats from the moment of their formation—even a suckling’s ignorance is not so great that it would fail to distinguish milk from gasoline as a source of nutrition. In this sense the axiological minimum, identical with the homeostatic minimum, is built into every organism. The survival directive is preprogrammed in every element of the set that biology studies.

The wide popularity of the term feedback should not overshadow the fact that feedback coupling is the main pillar of a system’s self-preservation and the dynamic determinant of learning processes—but principially only in fully matured, true homeostats, not in systems that are still on the probabilistic path toward homeostatic equilibrium. If a system is exposed to perturbations that occur periodically in a sequence of repetitions that are identical, like playing a gramophone record, that is, if the environment behaves according to a “predetermined pattern,” a system without feedback, having only sensors that trigger unidirectional reactions, can function as a homeostat in this environment. The only requirement is that the environment sends a single signal marking the beginning of the series of perturbations, to which the system responds appropriately. Obviously such a “deductively invariant” environment is possible only in special circumstances, for example, in a computer, which is why natural homeostats—living organisms—must utilize corrective feedback mechanisms even when their behavioral program is totally determined by genes (e.g., insects). Every stage of the realized program must fit the concrete and a priori unforeseeable variations in the environmental conditions—both initial and boundary. The great progress that cybernetics has made in studying animals, and its lesser success with plants, which lack a nervous system, suggest—precisely because of this difference—that the stability of homeostasis cannot be reduced to the informationally maximized effectiveness of regulation, if by this effectiveness we mean the universalism of the typically “orientational” behavior manifested as the network of connections between the homeostat and its habitat. Plant systems utilize the strategy of a slow, generalized response to large classes of stimuli (which do not necessarily have to be specific), lacking any tactical generalizations, especially short-term ones. As with animals, the strategy is minimax, allowing the environment to change some parameters of the system while maintaining the values of parameters essential for survival within a range required for life. One could call it a “fuzzy” or indeterminate axiology. Not much more can be said on this topic, since the plant variant of homeostasis has not been as extensively studied as the animal one. We do not even know why “mixed” strategies are not possible. (Theoretically a system alternating between being plant and animal could gain a significant survival benefit; but such a “hybridization of principles” may not be possible construction-wise.)
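
The contrast between the "gramophone record" environment and an irregular one can also be put in a toy numerical form; the perturbation values below are invented, and the feedback-free "program" is simply a prerecorded sequence of counteractions released, as described above, by a single start signal.

```python
import random

# A toy sketch of the contrast drawn above (all numbers invented): in a repeating,
# record-like environment a feedback-free system that merely plays back prepared
# counteractions stays in balance; in an irregular environment the same open-loop
# program fails, while corrective feedback keeps the deviation bounded.

random.seed(3)
recorded = [1.0, -2.0, 0.5, 1.5, -1.0] * 8         # the fixed "record" of perturbations
preprogrammed = [-p for p in recorded]              # counteractions prepared in advance

def worst_deviation(perturbations, use_feedback):
    state, worst = 0.0, 0.0
    for step, p in enumerate(perturbations):
        if use_feedback:
            correction = -state                     # react only to the deviation already measured
        else:
            correction = preprogrammed[step]        # play back the prepared counteraction
        state += p + correction
        worst = max(worst, abs(state))
    return worst

irregular = [random.uniform(-2.0, 2.0) for _ in range(40)]
print("record, open loop    :", round(worst_deviation(recorded, use_feedback=False), 2))
print("irregular, open loop :", round(worst_deviation(irregular, use_feedback=False), 2))
print("irregular, feedback  :", round(worst_deviation(irregular, use_feedback=True), 2))
```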

In summary, there are two kinds of values. The first, instrumental, defines the degree of suitability of specific means to achieve a given goal, which, once achieved, may become the means to achieve the next goal, and so on. At the apex of this pyramid there is usually a value of the second kind. The value of bridge construction is instrumental, and the usefulness of a bridge can also be measured instrumentally; but if, observing that people travel on it, we ask why they do not stay at home instead, we get into an endless circle, because one instrumentalism leads to another, unless we accept an answer based on a value of the second kind: namely, that people do various things for survival, which is a value that cannot be reduced to any other. When we transpose this reasoning to the terrain of biology, instrumental values correspond to the measurable ability of systems to pursue the goal of survival, an ability embodied in their physical structure. One could of course argue that the acceptance of life by humans as the supreme value is also part of the human structure. These are just two formulations, in different languages, of the same thing.

A value of the second kind cannot exist if there is no subject to recognize it. (The recognition does not have to be conscious; it could be just a system of reactions marked with regularity in behavior.) In contrast, instrumental values are, one might say, objective, in that they directly depend on the features of the material world. The value of iron as a material for bridge construction existed before there were people on Earth. We do not create these values but only discover them in the world according to the scientific and technological knowledge that we have accumulated. This knowledge itself therefore has instrumental value because we do not create it by fiat but rather “extract” it from nature. But can we really claim that instrumental values exist in the absence of any homeostat?

It might be better to say that they do not exist if there are no goals pursued by anyone or anything, since these values usually mark an extent to which a given means is suitable for achieving a specified state. It is not necessarily incorrect to claim that the value of iron as a bridge material was established by the physical properties of the element before humans appeared on the planet, but we should realize that such a statement can easily lead to equating the existence of an instrumental value with the existence of a material object itself, which is false. Concrete conditions must be present for an axiological measurement to make empirical sense (to be intersubjectively verifiable, like any experiment). Instrumental value is the suitability to achieve a goal, and the more precisely we determine the goal that the valuated object or procedure is expected to serve, the easier it is to measure it. Some products of technology, like a bridge or a camera, have their goal built in. “Monotelic” systems are usually easier to valuate than “polytelic” ones: the more goals an object can serve, the more difficult it is to establish its instrumental value without any goal-related qualifications. If an object appears to serve no goal at all, its instrumental value is either immeasurable, that is, approaching the infinite (and then it becomes a value of the second kind), or it is zero. A star has no instrumental value for most of us, serving no purpose, but for a sailor it has the value of an orienting signpost. A human being, who has no “built-in goals” and can “do anything,” that is, create any number of arbitrary goals for himself, also is not subject to instrumental axiology. So if I say that iron has an instrumental value, I implicitly place that element into the sphere of human technological practice.

“To be an instrumental value” is a relationship, as is “to be a disease agent.” Such terms owe their measurability and general sense to the second, silent part of the relationship: value—with respect to what? the ability to cause a disease—in whom? Humans adopted iron for their purposes, but its properties that prompted them to do that existed before humans arose. Bacteria adapted to higher organisms, making them ill, but most of the features that enable bacteria to do this existed before there were higher organisms. So the more precisely we define the goal, the more exactly we can determine the instrumental value of anything that serves to achieve that goal—within the scope of a given technology. If we do not specify the scope, we may find in practice that a given goal can be achieved by infinitely many techniques. For this reason, instrumental values are always system-dependent. The instrumental value of food can be expressed with respect to everything that lives; the instrumental value of meat—to every carnivore.

Values of the second kind exist only when someone recognizes them. The consumption of goat meat elicits shock only if a person accepts the taboo against it, that is, if that person is properly informed. If the effects of what is recognized are incorporated into the system of adaptive functions of an organism and hence increase the chances for its survival (as a species or a person), then the value becomes instrumental. But when these effects are superfluous for all homeostatic functions—and at the same time form a nonrandom set—the organism manifests in its behavior (through its preferences) the presence of a value of the second kind. Values of both kinds can mix in behavior such that making a clear distinction may be practically impossible. (It is not necessary to know what the organism experiences subjectively; we treat it as a black box and search only for correlations between the sum of stimuli at the input and the sum of responses at the output.) Values of the second kind are revealed either through reasoning based on calculation (the sum of information usable for adaptation and information that is adaptationally useless but still systematically favored by the organism, both entering the global balance of the informational-transformative work of the system) or through our systematic ability to recognize (without which we would be unable to discover the link between the death of the tribesman who ate the unspoiled meat and the taboo value of the latter).

The last general issue to consider is whether a homeostat itself is an object with a value of the first or second kind. This question is not about the homeostat’s value-generating power, which it exercises by the choices it makes. When we declare X a homeostat, we are saying that X favors certain values and therefore lacks the indifference that characterizes a lump of dead matter. But how do we know that X (as a homeostat) is an axiogenic, teleological thing while Y is not—if, say, X is a lizard or snail and Y a planet surrounded with an atmosphere or a music box with dancing figures on its top? As I have already said, our decision will depend on a series of investigations and not a single observation. The presence of three kinds of factors in X will indicate that it is a natural (living) homeostat.

First, there must be a set of parameters of X that stay within a certain interval for X to continue being X. Second, there must be a set of parameters of X such that, when they reach certain values, X “ceases” to exist, but it creates, either at the same time or earlier, “offspring Xs.” This second set has higher cardinality than the first.7 And finally, for the whole class of Xs (for the whole species of organisms) there must be parameters, of still higher cardinality, such that when certain values are reached, all Xs “cease” to exist, together with their offspring, but their ceasing occurs through a transformation into Z, a different species, which means that speciation occurred. This is how new species of homeostats come into being.
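
These three kinds of factors can be laid out schematically as a data structure. The parameter names and thresholds below are invented and serve only to keep the three distinctions in one place; nothing in the sketch is claimed about any real organism.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# A schematic sketch of the three kinds of factors just listed (all names and
# thresholds invented): ranges X must stay within to remain X; values whose
# attainment ends the individual X but yields offspring Xs; and values, defined
# for the whole class of Xs, whose attainment transforms the class into a new
# species Z.

@dataclass
class NaturalHomeostat:
    viability_ranges: Dict[str, Tuple[float, float]] = field(default_factory=lambda: {
        "body_temperature": (0.0, 45.0),
        "hydration": (0.4, 1.0),
    })
    reproduction_values: Dict[str, float] = field(default_factory=lambda: {
        "stored_energy": 100.0,          # reaching this ends X but produces offspring Xs
    })
    speciation_values: Dict[str, float] = field(default_factory=lambda: {
        "accumulated_variation": 0.8,    # reaching this, class-wide, turns the Xs into species Z
    })

    def remains_itself(self, state: Dict[str, float]) -> bool:
        """X continues to be X only while every viability parameter stays in its interval."""
        return all(lo <= state[name] <= hi for name, (lo, hi) in self.viability_ranges.items())

x = NaturalHomeostat()
print(x.remains_itself({"body_temperature": 37.0, "hydration": 0.7}))   # True
```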

According to modern biology, the set of Xs that we are studying now emerged from a sequence of previous sets, and there was a set of Bs or As that had no parental set but constituted itself from dead matter through a process of self-organization. Thus self-creation, self-preservation, self-reproduction, and the ability to evolve are the four characteristics of natural homeostats. When we model them, we do not have to include all four at the same time; these characteristics go together biologically but can be separated technologically. It is possible to build an apparatus capable of self-preservation but unable to reproduce, an apparatus capable of evolution but incapable of self-creation, and, possibly, an apparatus capable of self-creation but incapable of stable homeostasis. (If an object does not exhibit some of these characteristics, we may be confused, not knowing if we are dealing with a true homeostat or only an imitation of one, for example, a wind-up doll.)

All products of our technology are equipped with self-preserving characteristics. Some of those features may be an unavoidable component in their construction, but often they are designed to protect a device and thus serve its purpose only indirectly. Even a device constructed to destroy itself, such as a self-guided missile, must protect itself—before it reaches its target. The idea of a machine as a tool that not only performs specific tasks that serve a purpose external to it but also preserves itself leads to the idea of a machine that has no external purpose but only functions to maintain its existence for as long as possible. A machine is an object that carries out defined transformations, and since a given class of transformations determines the machine’s purpose (if we know this class, we know the machine’s purpose), the machine cannot be a subject of the transformations it carries out—unless it has a single use and its purpose is to destroy itself along with its surroundings, as with a bomb. For obvious reasons, no one builds machines whose sole purpose is to destroy themselves. (According to some interpretations of the second law of thermodynamics, the world is such a machine.) The idea of a machine implies an invariant group of parameters, and what determines the use and the purpose of a machine must lie outside that group. Because the preservation of a device can be treated separately from its purpose, which is documented by the existence of specialized disciplines (e.g., the study of material strength), technology can provide criteria for measuring the performance of a homeostat as a “machine for nothing except self-preservation.”

Empirical measurability notwithstanding, we face a vicious circle: an instrumental value is the suitability of a means to achieve a goal, but the goal is to achieve the highest possible instrumental value. If we consider the parts of a natural homeostat separately, the purpose of the whole will seem predetermined. We can explain exactly and rationally what purpose is served by the organs of locomotion—for example, to gather food. Digesting that food is the purpose of another organ of the system, and we can examine how effectively this organ transforms food into energy which is then supplied, among other things, to the organs of locomotion. The circle closes: the organism moves to find food, which enables it to move. The engineer, although he provides the biologist with the scale and tools to measure the efficiency of each organ or even its overall fitness (given by the program of homeostasis), cannot—being an engineer—accept that such a device is rational. This troublesome circularity can be brought to the next level: if each generation serves the next, what is “the ultimate purpose”? We follow the food chain, plants serving animals, animals serving other animals, until we reach the level of the biosphere, of which we are certain that it serves no purpose and about which we can say nothing in an axiological sense. But much of this difficulty stems from the way we describe systems at various levels that we consider principially separate, which they are not. Because we focus on self-preservation at the level of an individual, then at the level of a species, then at the level of global evolution, organisms sometimes appear to be “autotelic mechanisms,” sometimes appear to serve an external purpose, and what’s more, various purposes in their teleology can be graded. Grass serves herbivores but also produces oxygen for all animals.

Let us avoid falling into the logical error of equating the sentence “This is a basket of big apples” with the sentence “This is a big basket of apples.” That is, let us not go from the sentence “This is a set of purposeful devices” to “This is a purposeful set of devices.” Individual devices may behave purposefully, but it does not follow that the whole biospheric set of them is purposeful. The search for the “purpose of evolution” is devoid of empirical sense if it goes beyond attempts to find the limiting distributions of the bioevolutionary process in time and space, that is, to find how the planetary development of life satisfies the conditions of ergodic theorems. We can, for example, treat individual systems, individual organs, as predictive devices, but we cannot treat the Earth’s biosphere as a predictor. The integral behavior of the biosphere does manifest functions that can be identified as homeostatic or predictive, but as a homeostat or predictor it crucially differs from its organisms.

If in the future an active focus on values becomes equivalent to the optimization of ultrastable states, biology will meet physics somewhere in the middle. In so doing, biology will get rid of the ballast of axiological notions that have grown anachronistic in their meta-instrumental reach, and physics will incorporate the instrumentalized issue of values into the theory of antientropic systems, a part of the general theory of physical systems.

II. Biology and Engineering

In what follows I introduce two basic ideas, that of a minimal homeostat and that of an ideal homeostat. A minimal homeostat is the realization of a self-preserving system that is reliable and at the same time energetically and materially the most economical in an environment with a specific amplitude of perturbation. An ideal homeostat exhibits the maximum self-preservation in the maximum number of perturbationally different environments. These are not definitions that a biological axiometrist would find useful, since they do not specify the many conditions that must be met. For example, “self-preservation” can be interpreted in a structural-material way, in just a structural way, or in a way that includes only those parameters that are invariant both materially and structurally.

Homeostasis, as behavior that resists perturbation, denotes a situation of conflict, which is why it can be described in the language of game theory. But this language may be misused. For example, pebbles rolling from scree into a stream might be treated as a kind of “evolutionary game.” A pebble that has acquired a round shape experiences less friction when rolling and therefore goes farther into the water than others. The winner in such “topological evolution” is the pebble whose shape is closest to a sphere; the “goal” of the game is to get the round pebbles to the sea. This description may reflect the facts, but giving it a teleological dimension lacks validity, because the boundaries of the “system” and the events in it were defined arbitrarily (at variance with Ockham’s razor: the facts do not provide sufficient justification for calling the pebbles in the stream an “evolving system”).

If an axiometry of homeostats is permitted, there will exist very good, good, mediocre, and worthless homeostats, where the latter are homeostats no longer, just as an ax is not an object that can float.

I have been describing the value of homeostats in terms of self-preservation, an ability that is determined by a fitness test. The propensity of a homeostat to be valuated (axiometric) is coupled with its function of value-creation (axiogenic). In an ideal homeostat, this duality (“being valued” and “valuing”) merges into one. When any perturbation in any environment can be coped with, no event is “harmful” and therefore “bad.” For an ideal homeostat, it does not matter where it finds itself, because no external influence can either disturb or improve its internal state. The further a homeostat is from that end of the spectrum, the more axiogenic it is, because it shows “interests”: events that have a negative value it will avoid and events that have a positive value it will pursue. At the opposite end of the spectrum, where homeostasis ceases, values disappear again, this time because for a thing unable to make use of any event for its own benefit, all events are equally worthless. This is the natural state of a pebble or handful of dirt: “inertness,” “axiological neutrality.”

Living systems, being continually sifted by the filter of evolution, are “good,” that is, efficient homeostats. They are not ideal, because we can find perturbations that will destroy them. Devices that we build—computers, perceptrons—simulate some parameters of homeostasis but not its operational whole, which confounds us when we want to determine whether or not they produce values (by their choices).

With his constructions, the modern engineer fills the gap between a passive physical object and a fully functional organic homeostat. But this gap was not devoid of entities even before there were people: it is hard to say whether a virus makes a decision to invade a cell or whether its action is merely the result of a specific reaction of chemical catalysis. The cyberneticists who build logical decision-making systems might argue that a virus does make a decision, but then chlorine molecules must do that too when they join with sodium to form table salt, and an ordinary doorbell will be a logical device that oscillates between the decisions “yes” and “no.” The logicians are right to relate certain physical phenomena to the realm of logic; the selective isomorphism of physics and logic permits that. Only thanks to a similar kind of relationship can a perceptron constructor assert that an apparatus, by recognizing certain shapes, assigns to them a positive value—and a negative value to other shapes. Such an assertion is allowed as long as we remember that we are dealing with derivatives of phenomena that were extracted from homeostasis but can reach full expression only within its boundaries. A constructor might eventually succeed in creating a true homeostat. He would have to choose between making a system that uses solar energy for self-preservation (the apparatus could then be stationary, because sunlight is everywhere on Earth, albeit periodically) and making a system that seeks sources of energy by moving around. Without any reference to the dichotomy of organic forms on our planet, just by pure reasoning, the constructor arrives at two alternative projects, functional homologues of plants and animals.

Validation of the teleological and, consequently, axiological approach in biology as an empirical discipline points to a research paradigm, but the modern researcher may not yet have the theory and technology to realize the program of objective axiological analysis in practice. It may not be an exaggeration to say that if biology had any expectation of help from other disciplines, engineering was at the bottom of the list. At least that was the situation forty years ago. Today it is precisely the aid of engineering that is responsible for the success of biology, including its theoretical branch. The differences between an engineer and a biologist are obvious. A biologist studies systems that are given, that he did not construct, so he knows neither their “goal,” in the functional sense, nor particular characteristics of their subsystems. An engineer always has a goal that guides him in his projects when he makes consecutive variants of devices, and yet—and I stress this—his predictive knowledge about the behavior of his products is usually incomplete. That is because an engineer, especially when a project is complex, works less on the basis of a theory and more by successive approximation, by testing a series of prototypes. This is the fundamental difference between the two approaches. Another difference between a biologist and an engineer is that each field, like every developed discipline, has its own shell of paradigms and rules, its own “cultural norm,” as it were, which defines the limits of what is no longer in dispute, within which specific instrumental values may be determined. The point is that in engineering there is nothing that tells us whether the safety coefficient should be two or three times the normal (predicted) operational load of a structure. The rules in this technological shell, which cannot be derived from the shell itself, are known to every expert, since they are used in everyday practice. An engineer therefore knows how to balance economy in making a particular product with the redundancy in its safety reserves. A biologist rarely possesses such knowledge. What’s worse, the leading discipline in the natural sciences, physics, does not even recognize those categories, since atoms have nothing to do with either economy or safety. In this respect, then, the objects of biological research may be more similar to the products of an engineer than to the bodies of a physicist.

Although engineering technology is slowly becoming a resource for biology in research methods, modeling, and formal structures, the devices it produces are still inadequate to put the biologist, who studies organisms, on an equal footing with the engineer, who studies machines. The formal analytical apparatus that biologists adopt from the discipline of techniques that can be called “cybernetic” is too simple for the complexity of biological systems. Not surprisingly, the apparatus has already become too simple for the engineers themselves.

Engineers cannot always tell—let alone measure and confirm—how close a construction is to the highest perfection that can be accomplished in a given branch of technology. Because of the progress in all branches, the bar is set higher and higher. The best combustion engine in 1940 is no longer the best in 1968; the best computer in 1949 cannot compete with the best in 1970. Yet this dual stream of progress—improvements of prototypes at a given time, a synchronic cross-section of technology, and improvements throughout the path that a particular technology spans diachronically from its birth to its demise (for example, large-sail ships are now extinct)—which compels instrumental axiometrists to constantly modify their criteria, does not lead to chaos. At any given time, the instrumental criteria that make instrumental axiometry possible are well defined. That the engineer does not know at the birth of a new technology the whole domain of its theory (which is the only thing that would allow him to make predictions both technological and axiometrical) is not a problem, because he learns as he goes, by trial and error; in the creation of the next generations of a device his knowledge increases as the device improves, thanks to the feedback between creation and creator.

Nothing like this exists in biology. The biology of the “simplest” organisms (e.g., some ciliates) is not theoretically or formally simple at all, nor is there a gradual transition from “elementary” organisms to those that are more complex, because even the most elementary are several orders of magnitude more complex than the theoretical models that we apply to them. For example, the regulatory systems of all organisms are nonlinear, not to spite the researcher but because this type of regulator is more effective, and there is little chance of finding an algorithm for a nonlinear system. We must resort to approximations, crude simplifications (e.g., by assuming that only one steering block is nonlinear while the rest are linear, which may help but is not true), or numerical simulations (which seem to work the best). This suggests that our knowledge is still incredibly primitive when compared with the information that living organisms have accumulated, starting with bacteria. How, then, is a biologist to practice instrumental axiometry instead of speaking about it in generalities? The scientist’s handicap is best illustrated with an example. If an energy engineer from the mid-nineteenth century were asked to determine the instrumental value of a modern nuclear power plant, he would not be able to do it. Some subsystems of the plant might look merely strange to him, but the operating principle of others would be completely beyond his grasp. If one knows neither the fundamental principles of a technology nor its limits (which requires a mastery of its theoretical basis), one can make no judgment about the efficiency or the degree of perfection of a given construction. If the nineteenth-century engineer were to apply the safety criteria established for a steam boiler to a nuclear reactor, I doubt that anything sensible would come of it. Preventing an explosion caused by high pressure has little to do with preventing a nuclear reaction from running out of control.
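
To make the last point concrete, here is a minimal, purely illustrative sketch (the cubic feedback law, the disturbance, and all numbers are assumptions chosen for illustration, not a model of any real organism). It shows why the numerical route is usually the only workable one: the exact nonlinear trajectory and a linear stand-in for it disagree, and only step-by-step computation tells us by how much.

    import math

    def simulate(nonlinear: bool, k: float = 1.0, x0: float = 2.0,
                 dt: float = 0.01, steps: int = 500) -> list:
        """Euler integration of a one-variable regulator returning toward its set point 0."""
        x, path = x0, []
        for n in range(steps):
            d = 0.3 * math.sin(0.05 * n)            # small periodic disturbance (arbitrary)
            rate = -k * x ** 3 if nonlinear else -k * x
            x += dt * (rate + d)
            path.append(x)
        return path

    true_path = simulate(nonlinear=True)             # the "real" nonlinear regulator
    linear_guess = simulate(nonlinear=False)         # the simplified linear stand-in
    for t in (0, 100, 250, 499):
        print(f"step {t:3d}: nonlinear {true_path[t]:+.3f}   linear {linear_guess[t]:+.3f}")

Run as written, the linear stand-in over- and underestimates how quickly the deviation decays at different stages; the simplification “may help but is not true,” exactly as stated above.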

The biologist is in an even worse situation with respect to the objects he studies, and this results in very different axiomatics of valuation in different branches of biology. Works by the “synchronists,” researchers studying organisms presently living, take as a given the optimality of biological solutions. Insects may look primitive to a human anatomist but not to an entomologist. This is not meant as a criticism of construction premises in different types of organismal organization: we cannot criticize what we do not sufficiently understand. For example, we cannot question the need to sleep that all “cephalized” animals have, because we do not know what its purpose is. Works by “diachronists,” that is, evolutionists, carrying out paleontological comparisons, argue that some forms were constructed “worse,” that is, were anatomically and physiologically less effective than others. But it is easy to notice that these arguments tend to follow the model post hoc, ergo propter hoc: first they show that a species with a certain construction died out and then they feel compelled to seek—by speculation—the “construction flaw” that caused the extinction. Few refuse to advance such hypotheses and admit that looking for the cause of extinction within the organism itself is problematic. If a supernova wiped out the dinosaurs of the Mesozoic, no “construction flaw” was responsible.

These two lines of reasoning contradict each other. There were attempts to make them compatible, with the argument that better and worse organisms existed in the past but that evolution has come to an end and there never can be anything better than what we have now. This argument, popular in its time, has hardly any supporters today. The process of evolution continues (albeit with the perturbation of human culture superimposed on it). Even today, some species are being filtered out and others are thriving. How, then, do we make compatible “synchronic” biology, which only praises, with “diachronic” biology, which only finds fault?

The diachronic and synchronic can be combined only in this way: each consecutive stage of a species is perfectly adapted to its environment, but changes in the environment produce evolutionary gradients; what used to be the optimal solution to the adaptation problem ceases to be such, and the species adjusts. The total of taxonomic hierarchies therefore represents something like a vast tracking system that follows environmental parameters, and the coupling is regulatory. Yet this model is evidently wrong. As Stafford Beer noted in his book Cybernetics and Management,8 the first vertebrates began flying not because their environment changed into air, even if, earlier, fish had given rise to amphibians because areas of water dried out. But reducing the rise of amphibians to that cause is just as weak a hypothesis. The idea that expansion into new niches can be explained by “following environmental changes” is naive. We must recognize that the generator of diversity in organismal forms, given by genotypic variability, is a creative process, not just a passive following of changes. If we revise the evolutionary-selection schematic accordingly, “optimal adaptation” appears to be a simplification, as there exist various kinds of responses, at different levels, to an identical adaptation challenge. It is precisely this constant mutational inventiveness that reveals how the finitistic9 picture of the evolutionary process, implied by the idea of natural selection acting as a mere stabilizer, contradicts the basic tenet of bioevolution.

Let us, then, admit our ignorance and accept the possibility that among the life forms that appear to us to be equally well adapted there are some that are being filtered out because they are inadequate and some that are undergoing transformation—and we cannot distinguish between the two. It is impossible to do so by studying the diachronic order, summarizing and integrating the results of a long series of selections, because our lifespan is too short; moreover, two centuries of biology’s existence are a blink of an eye when compared with the time scale of evolution, too short for noticing these kinds of changes. And it is equally impossible to make the distinction by studying the synchronic situation, because we do not know all the intrataxonomic links. In early aeronautics, the diachronic approach would correspond to a series of observations that would inform us about the history of the lighter-than-air flying machines; the synchronic approach would correspond to the knowledge of their kinematics, steerability, and reliability in comparison with those of airplanes. The diachronic approach dispenses with the knowledge of the “immanent” properties of the flying machines, satisfied with knowing how the development took place in fact (i.e., that the balloons, blimps, and dirigibles lost to the airplanes in that “battle for survival”). In the synchronic approach, we do not have to wait for the end of the race but make predictions based on our knowledge of those devices’ “immanence,” that is, theoretical comparisons of the flight characteristics of airships versus airplanes. Our position in biology is more or less like that of the nineteenth-century engineer who knows that steamboats have triumphed over sailboats but who, seeing the first zeppelins and the first airplanes, with no option to wait for the outcome of their competition and not knowing the flight theory of either, is unable to tell which is headed for a bright future and which for extinction.

The difference between the engineer and the biologist is that the engineer faces two forms that are equally primitive and imperfect and does not know their future, while the biologist faces two forms that seem equally perfect but does not know which of their structural-dynamic characteristics is the key to their survival in the future.

III. Evolution Punctuated and Gradual

The parallels between techno- and bioevolution have their limits. An axiometric analysis of natural homeostats can be carried out in at least two ways: by considering the conditions at the beginning of life to have been either necessary or random. If they were necessary, we must accept them, and then we can question only the decisional sequences of the evolutionary process, because after the beginning, the process acquired a distinctly random character. This can be seen in the fact that where many populations participated in the “evolutionary game with nature,” for example, on large continents, placental mammals appeared, but where the number of “players” was much smaller, as in isolated Australia, “only” marsupials evolved. This shows that the winning strategy in the game with nature depends on, among other things, how many “playing partners” form a “coalition” to counteract the “mutational moves” of nature, represented by the habitat. The more partners there are, the higher the chance that one of them will draw from the mutational raffle box a rare gene configuration that is the “main prize” of the game.
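
The arithmetic behind the “raffle” image is elementary (the figures below are purely illustrative assumptions, not biological estimates). If a particular rare gene configuration turns up in any one lineage with probability p per unit of time, the chance that at least one of n independently “playing” lineages draws it is

    1 - (1 - p)^n.

With p = 10^-6, a thousand players have roughly one chance in a thousand of a hit, while a million players have a chance of about 0.63; the size of the “coalition,” not the skill of any single player, dominates the outcome.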

The most general formulation of the question is whether all the genotypic configurations that ever existed on Earth have exhausted the set of optimal homeostatic constructions. This question could be called “the problem of missed opportunities,” that is, the winning tickets in the evolutionary lottery that no one ever drew. Evolution is a learning process taking place in a cruel school, where the punishment for failing is death and the reward for passing is life. I use the term “cruel” not in the moral sense but to stress the extremity of its measures, as it is hard to imagine a stricter method of teaching. Evolutionary selection is thus a method of trial and error, in which the penalty for an error is death and the reward for a successful trial is—death’s deferral. One might say that this strictness compensates for the slowness of the process, which is Markovian in principle and therefore has no cumulative memory: the best mutational inventions were erased from the stream of evolutionary improvement if the species that carried them was eliminated from the game because it made a mistake. This is why the same invention was arduously assembled from the elementary gene combinations many times from scratch.
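
The sense in which such a process lacks cumulative memory can be stated exactly. In the standard formulation, a process is Markovian when the probability of the next state depends only on the present one:

    P(X_{n+1} | X_n, X_{n-1}, ..., X_0) = P(X_{n+1} | X_n),

so whatever “inventions” earlier states contained count for nothing unless they are still encoded in the present state of the game.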

In this light especially, two phenomena in evolution appear particularly important and, at the same time, surprising. (When we are surprised in science, it simply means that we do not know the causal mechanism.) The first is the construction universality of the genetic code, which shows that the information system that arose at an early, unicellular stage of evolution later exhibited a plasticity that could meet the construction requirements of all the multicellular forms of plants and animals that we know. This fitness redundancy is surprising since it established itself hundreds of millions of years before its actual use in evolution. Because the genetic code, both its “lexicography” and syntax, was originally—for a long time, perhaps a few billion years—an information tool for making systems with the complexity of an amoeba but later proved capable of producing organisms with much higher complexity, such as insects and vertebrates. One would have expected instead an exhaustion of the combinatorial power, and the entrapment in an absorbing subset of forms not very different from those that invented the DNA code.10 We cannot explain this universality, acquired so early, unless we accept that there are deep, nonaccidental connections between the genetic code and language, namely, that both are in principle open information systems with a set-theoretic character and similar numbers of degrees of freedom, owing to which the huge dissimilarity of their material substrates appears to be inconsequential. It follows that the rise of the genetic code and that of natural language represent two particular cases of the evolution of informational dynamic structures. The opinion that the hereditary code is a form of language has not yet been generally accepted, and most of the scientists who express it do so metaphorically. But I do not think that this is a metaphor; I believe that structural-linguistic research may finally enable us to understand the supreme laws that govern the emergence of all (not only natural) languages. Only then, with newly gained understanding, will we lose the sense of awe that we have vis-à-vis the chromosomal phenomenon, which will thereby find its place in the general theory of informational systems.

Today we know absolutely nothing about the generator that produced the language of heredity except that it must have been an enormously complex apparatus. The elements of the genetic code are not similar to any specific technology, because all our technologies have historically been finite, closed, unexpandable systems. That is the reason why they always reach the limits of their possibilities, and we make progress only through our ability to abandon an old technology and turn to a new one. For example, the technology of converting thermal energy into electrical energy will have to be abandoned at some point in time, because there exists an efficiency limit of heat engines that cannot be crossed. We will switch to a nuclear method of generating electricity without using heat exchangers or to the direct conversion of the energy of chemical bonds into the energy of electrical current. During such an industrial revolution, a massive amount of knowledge, both theoretical and practical, stored in various constructions (e.g., the steam engine), is simply thrown away. Had evolution encountered such limits, it would have come to a stop, incapable of a total reorganization in which some solutions are stockpiled and others abandoned. Evolution can proceed only in a continuous way. (It is not continuous at the “quantum” level of genes, but this graininess is completely concealed in phenotypes by the compensatory-regulatory activity of ontogenetic buffers.) While biology is always the result of a summation of small changes, engineering, especially at significant turning points, proceeds in steps. We might argue about the scale of the bioevolutionary changes, but they can never compare in magnitude with the replacement of the steam engine by the nuclear reactor, for example.

But this difference is superficial, and the comparison is not rigorous. The basic principle of technological innovation is sequential change in energy sources, building materials, production tools, and the means of regulating those tools. In evolution, the energy sources, building materials, and production tools, along with their control, are the same today as they were at the beginning: the energetics, the substrate, and the regulation have remained unchanged. What is more, any significant change in them appears impossible. There is no gene reshuffling that could make a newly emerging organism abandon chemical energetics for another type (e.g., nuclear) or change its building material or the rules of its transformations. It is only within this invariant triad that we can ask whether a combination of genes is possible that would allow an organism to organize itself in a way that is nontrivially distinct from the set that has been realized by evolution.

The magnitude of our ignorance means that any answer will be grossly simplified. Crude calculations suggest that, under the assumption of a complete lack of directionality in mutations and in the selection of their outcomes, it would not be possible to generate a set of structures that could compete with the organizational diversity that evolution has achieved. Besides, the currently accepted theory implies things that are strange if not plain wrong. For example, the number of people living today already equals the sum of all their ancestors, starting with Paleopithecus. If Homo sapiens arose from that predecessor because of a mutation, then today’s human population should exhibit no less variability than its parent populations. Consequently, any day now we should expect the emergence of a form of Homo that would be at least as new as the Cro-Magnons were with respect to the Neanderthals. But this is not happening.

After the death of entelechy,11 there were attempts to embody its remains in genes—by bestowing on them a kind of omnipotence, the chromosomally located responsibility for everything that happens to an organism’s phenotype. Some believe that there are genes that determine the chance of getting cancer, and that natural mortality is due to evolution’s failure to eliminate lethal genes, which it merely shifts carelessly from one corner of life to another, from the reproductive phase to the postreproductive phase, that is, the period of age-related decline. But in an automobile the destructive effects of its material’s fatigue are not caused by an equivalent of a lethal gene: the fatigue is not a mistake of the design engineer. The logical model of embryogenesis, as transformations of original elements into their final organized set (determined by cell division), is a sequence of enumerable, ordered stepwise operations. The logical depth of such transformations can be arbitrary, because deductive operations, whether informational or material, are error-proof.12 But the logical depth of the embryogenetic process cannot be arbitrary, that is, there are construction limits given by the initial instruction of the genotype: after a certain level of complexity or a certain number of steps (the number of cell divisions in the embryo), the instruction loses its causative power and gradually drowns in “noise.” The engineering concept that embryogenesis utilizes is that the total of causative information that creates the final product is ready at the beginning, in “a single packet”—the fertilized egg—and the process of creation needs no further regulatory “help down the road.” In other words, a strong principle of engineering autarky applies, the same autarky that later gives trouble to doctors—helpers “unwanted by evolution”—who try to replace a damaged organ with a healthy one from another member of the species.

So lengthening the text of the original instruction and thereby increasing the depth of embryogenesis will not result in “completely new solutions” to the problem of homeostasis in the form of new organisms. A genotype, as a prognosis that realizes itself, has its causative limits. They were imposed on the genetic alphabet from the start and come in at least two kinds: limits “in width” and limits “in length.” Limits in width are barriers that do not allow access to new energetics, materials, and regulation strategies; limits in length are barriers caused by “noise” that at some point exceeds the regulatory power of the structure. If genotypic engineering is helpless vis-à-vis limits in width, it may not be vis-à-vis limits in length: an embryonic process that because of its exceedingly high complexity begins to “trip on itself” can in principle obtain “supplemental” regulation from outside. In this way, genetic engineering could achieve states that natural evolution cannot. Such an achievement would be meaningless only if it turned out that beyond the space of constructions to which embryogenesis leads naturally there are no other solutions to the problem of homeostasis, that is, somatic regulators of the phenotype can stabilize only what “creative” regulators of the genotype create.

The combinatorial set of genotypes (artificial ones, made by the “genetic engineer”) that can be constructed within such limits with the given DNA alphabet and its “syntax” has a cardinality significantly higher than that of the set of all the electrons in the universe. Even its subset whose members are only those reshuffling results that are true homeostats here on Earth has a cardinality that is not much smaller. However, a great majority of this subset’s members will be trivial variations of actual organisms (a horse with a cloven hoof, for example).
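
A rough count shows why the comparison is not mere rhetoric (the genome length is a deliberately modest, illustrative figure). A sequence of N positions over the four-letter DNA alphabet admits 4^N variants; already for N = 1000 this is

    4^1000 ≈ 10^602,

whereas the commonly cited estimate of the number of elementary particles in the observable universe is on the order of 10^80. Strictly speaking, both collections are finite, so “cardinality” here means nothing more exotic than an astronomically large finite count.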

The building blocks of all those constructions are the cells. As for their properties, we cannot expect any revolution there: despite appearances, undifferentiated cells do not differ that much from specialized ones—the parametric width is at most like that between the contraction of an amoeba and the contraction of a muscle or the stimulus transmission in an amoeba and in a neuron. These property differences are of course important for the organisms but quantitatively they have the same order of magnitude. A stimulus in an amoeba is transmitted with a speed of several tens of centimeters per second; in a muscle, between ten and twenty meters per second. Organisms on various evolutionary levels differ for structural-integral reasons, not because they are “squeezing” out of their building blocks some new abilities that were not present from the start. The physical parameters of the structure are given with an inviolable condition—that they are given once and for all.

The novelty level of homeostatic invention, then, is a function of the cardinality of all the configurations understood as structures created by the shuffling of genes. But this novelty, so limited in scope and variability, turns out to be questionable. With the gene alphabet it is not possible to build any energetics other than the current one, any locomotion apparatus other than the one that has been adopted (given the energetic restrictions) within the boundaries set by the skeletomuscular type of movement (the skeleton can be either internal, lever-axial, or external, plated, but nothing else), or any regulatory structures in information transmission other than those that exist.

With these limits and restrictions, at both micro and macro levels, what synthetic invention is still possible? Has evolution already realized all the options worth trying? That is doubtful: evolution “forgets,” through the extinction of forms, various construction solutions, and we can “propose” them again to the genetic instruction. What’s more, some radical systemic reorganizations seem to be possible—for example, replacing hemodynamics with another way of supplying electrons to the tissues instead of circulating oxygen.13 It is doubtful that we can completely depart from the circulatory solution (i.e., pumping a liquid through a network in the body), but we might at least improve the pump. All animals use a pump, which is, from the engineering point of view, primitive (it is the principle of the pump that is primitive, not its realization; in its evolutionary realization this solution approaches the limit of what is possible, but the limit itself, given by the physical properties of this type of pump, cannot be moved). Replacing a mechanical pump with an electrodynamic pump would not pose any serious problem, because, as we know, gene instructions are capable of making very competent electric organs. But equipping the moving corpuscles—the blood cells—with magnetic or electric polarity would be more difficult. No genotype can produce a magnet or, in general, a metallic part, so the solution would inevitably involve ions (nothing else seems feasible in a liquid environment). It might be necessary to concentrate ions above the tissue safety limit. If so, we could dispense with the localized pump that is the heart muscle and instead make the walls of all the arteries into an electrodynamic pump with no moving parts.

We have come to understand not only that an electrodynamic pump is possible but also, what is more important, why evolution has not realized it. The prototype of the heart was a small contractile tube in small animals, and this solution was subsequently “dragged” through all branches of the evolutionary tree. It is a kind of solution that works the better, the smaller the animal in which it is “tried.” For example, the tracheae in insects14 make lungs and blood circulation superfluous but appear to limit body size, so insects could not “become smart”: because the ratio of information-processing power to the capacity of the neural system is roughly constant, a neuronal brain cannot be miniaturized to such a degree that a moth or ant would reach the “intellectual level” of a rat. Had insects not stumbled on the tracheal solution, we would probably not be here. This is an example of how, on the evolutionary path, a solution to the problem of homeostasis becomes irreversible. Once the tracheae or the heart as a mechanical pump have arisen, it is impossible to retreat from them through natural evolution. Yet a transition from discontinuous to continuous pumping of blood would bring advantages. Blood pressure would be stabilized, the problem of distributing the blood supply to different parts of the body would be simplified, and so on. The “instrumental axiometry” of homeostasis would definitely welcome such an “improvement.”

Then why did evolution not “get this idea”? Regarding the opportunities that evolution “missed,” I point to the possibility that a nontrivial innovation like this depends on the simultaneous (synchronous) occurrence of a number of mutations that are independent of one another. The higher that number must be for the “innovation” to take place in an organism, the lower the probability of the occurrence. Above a certain value, we could be speaking of an astronomically rare event, like tossing a coin a thousand times and getting a thousand heads. The evolutionary game would then be condemned to failure, were it not for a clever maneuver, “an ace up the sleeve,” in the form of recessive alleles.15 A recessive gene is something like a trump card: while “hidden,” it has no power, but in certain situations of a game that is under way it can determine the outcome. A bridge player must wait a long time to receive a grand slam hand from a card distribution made random by shuffling. But with one trump card up his sleeve he can have a grand slam even if he is dealt all the necessary cards but one. The more trump cards he has hidden, the less time he must wait for the lucky hand. But just as a player cannot have a whole deck up his sleeve, an organism cannot keep in reserve an arbitrary number of recessive genes—especially since such genes are generally “not good for anything,” that is, they do not find “instrumental values” in any of the possible population distributions. Besides, an organism cannot determine its own inventory of recessive genes, deciding which are “good to retain” and which are not. But the “chromosome engineer” will be able to do this someday.
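
The arithmetic behind the coin image is worth spelling out (the symbols are illustrative, not measured quantities). If each of k required mutations arises independently with probability p per generation, their simultaneous occurrence has probability p^k; a thousand heads in a thousand fair tosses likewise has probability 2^-1000, about 10^-301. Roughly speaking, a reserve of recessive alleles changes the exponent rather than the base: if m of the required changes are already present, hidden, in the population, only the remaining k - m must still co-occur, and the waiting time shrinks accordingly.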

Now we are getting to the point that is of special interest to the evolutionary axiometrist. Evolution is often “blamed” for the Markovian character of the regulator that governs speciation. Being Markovian, this regulator is uneconomical and extraordinarily slow in its learning. For this reason, many biologists considered that inheriting acquired traits—a non-Markovian type of chromosomal “learning”—was an evolutionary necessity. But evolution’s method, for all its wastefulness, turns out to be better in the long run, because the equilibrium that a Markov chain reaches is not final. The impracticality of the natural regulator, evident in the seemingly “meaningless” combinations that selection must constantly keep pruning away, is in fact a valuable source of variability, because only a high degree of variability guarantees that any change can be successfully withstood. Inheriting acquired traits is no doubt much more effective in the short run than a Markovian process, but it could easily lead the species into a dead end. A Markovian regulator allows the game to continue, although the price that evolution must pay for this freedom is huge: the wasted lives of billions of beings. I am not saying that this type of regulation is the best of all that are evolutionarily attainable. A Markovian regulator may keep redealing the cards—in the “round” of amphibians, reptiles, and mammals—but its new decisions, governed by randomness, cannot be completely independent of the previous ones. The permanent supremacy of tactical solutions over long-term strategies results in limiting all future states, perhaps even billions of years from now, by the decision made for just one state in the present. In other words, the Markovian process is not immune to the possibility that future states that are homeostatically superior will be blocked by previous inferior solutions owing to the simple fact that those solutions occurred. A Markovian random generator cannot change the fact that the ancient amoeba had a much wider choice of evolutionary alternatives than the mammal. Evolution is thus a true game in the game-theory sense, in which good luck can beat any employed strategy and bad luck can bring an inescapable defeat.

The second way in which the evolution of life differs from the evolution of technology is the absence in the former of what is called moral obsolescence in the latter (obsolescence without physical wear). Or it at least appears absent at first, as ancient corals coexist with the “modern” dolphin, the snail coexists with the human, and primitive lichens coexist with the latest botanical product of evolutionary inventiveness. This suggests that the set of evolutionary realizations is not ordered on a single axiometric axis. There are evolutionary tasks that may be solved in many engineering ways, but these solutions cannot be compared on the same scale. In reality, however, this is no different in engineering. In both biology and technology, problem-solving cannot avoid compromise, that is, the dilemmas that could be called engineering antinomies. These occur whenever a state that is optimal or maximal in one function represents, at the same time, a suboptimum or less than the maximum in another function that is equally important. An example is the relation between the correction of deviations and the tendency to oscillate, which always exists in an optimally regulated system. Such conflicts are usually more complex, having more than two parameters. It is an old truth that a whale cannot be as agile as a flea. Evolution works in an environment brimming with dilemmas like this.
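
The first of these antinomies can be shown in a few lines (a minimal, hypothetical sketch; the delayed-feedback rule and the gains are arbitrary assumptions, not drawn from this essay). A regulator that corrects a deviation through a one-step measurement delay illustrates the trade: raising the corrective gain speeds up the correction but, past a point, buys that speed with oscillation and finally with instability.

    def deviation_trace(gain, steps=25):
        """e[n+1] = e[n] - gain * e[n-1]: correction applied to a delayed measurement."""
        e = [1.0, 1.0]                       # initial deviation from the set point
        for _ in range(steps):
            e.append(e[-1] - gain * e[-2])
        return e

    for g in (0.1, 0.9, 1.2):                # gentle, aggressive, excessive correction
        trace = deviation_trace(g)
        print(f"gain {g}: " + "  ".join(f"{x:+.2f}" for x in trace[:10]))

With the small gain the deviation dies away slowly but monotonically; with the larger gain it is corrected quickly yet overshoots and rings; past a critical value the “correction” itself destroys the stability it was meant to secure.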

I would add that an engineer’s advantage over evolution is not as great as one might think from the fact that evolution lacks the luxury of prediction that an engineer enjoys. The thing is that an engineer’s information is incomplete and he cannot wait forever to achieve perfect knowledge about what he is constructing; in a way, every one of his inventions is “premature.” The price to pay for decisions that are “premature” (in quotation marks because they cannot be other than that) is a high failure rate in technology (consider the first airplanes), and in evolution a high mortality rate (in a “novelty” radiation). A compromise is therefore unavoidable, whether the process is being carried out by a person or, as in evolution, “impersonally.”

In any case the instrumental axiometry of biology must learn how to employ the methods that engineering has developed. Measures of value make sense only in the context of a particular technology: multidimensional comparisons can be drawn between a jet and a propeller plane, but one cannot compare a plane with a pair of roller skates. Intrasystem values of a particular technology have an equivalent in values found in a particular schematic of body organization in biology. Someday we might axiometrically order all insects or all land mammals, but we cannot ask (and even less so answer) whether the organization of insects is better than that of the mammals. The less particular and more general the thing that we are trying to measure is, the more obviously the measures of value lose their applicability. No axiometry can possibly exist that would objectively justify the claim that the human being is “the crowning achievement of creation.” It may be an adaptation solution that is radically different from all others in the animal kingdom, but this “being different” can never become, in a purely instrumental sense, “being better.” To receive this compliment we would have to supplement our criteria with noninstrumental values.

A “random-start” axiometry is mentioned here only briefly, because it goes beyond the limits of biology. The “inevitable” version, sketched above, accepts that the gene alphabet is a “given” and cannot be questioned. Yet we can imagine that all the forks and zigzags of the evolutionary process were not inevitable in every detail of every speciation or every event, and that the “lexicographic” and “syntactic” features of evolution’s “articulation” apparatus—DNA—resulted from random events and therefore could have been totally different. This might be just a fantasy and nothing different could have occurred in reality, but until that is proven, a hypothesis of different contents is permissible.

As I have shown, the causative “articulation power” of the genes made from DNA is—from the technological point of view—not only far from being infinite, but it may not even surpass the capabilities of human engineering. For example, it cannot realize an energetics that is not chemical and does not rely on proteins. It cannot access processes whose realization requires high pressures, high temperatures, or high radiation densities. So the field of gene causality is clearly bounded and closed. We may still question the cosmic uniqueness of the initial alphabet and syntax. It is possible that different initial conditions, geological and chemical, could facilitate the emergence of a different “articulation apparatus,” one not based, or at least not exclusively based, on proteins. If support for this is found either in a terrestrial laboratory or in the exploration of extraterrestrial bodies, biology will become a science that studies only one particular form of life processes. Which will obviously bring a new relativism into its axiometry, because what cannot be meaningfully valuated by an instrumental approach when it belongs to the set of genetically unrealizable constructions may be so valuated when it is created by another generator of heredity codes, offering us a broader configurational space for solutions to the problem of homeostasis. But today we can only think of this as a possibility because we know nothing of its concrete instantiation.

IV. Biology and Noninstrumental Values

Values “of the second kind” are typically a cultural phenomenon,16** which is well known to anthropologists, who dedicate all their efforts precisely to uncovering and comparing them. The question arises whether the presence of such values in biology can be detected in any objective way. I believe it can. But certain initial assumptions must be made, as in any kind of research. If any nonadaptive trait in an organism were immediately eliminated by natural selection, values of the second kind could not have emerged through evolution. If the environment had shown no “neutrality” with respect to at least some such traits, it would be impossible to fit anthropogenesis into the evolutionary scheme.

Here too ex nihilo nihil fit.17 Nonadaptive mechanisms, which in later stages of anthropogenesis manifested their axiogenic character (in the second-kind, noninstrumental sense), could not have arisen out of nothing, just as the human brain or a bird’s wing could not have come out of nowhere: the predecessors of these organs must have already contained certain traits that were augmented in selection. It is therefore likely that not all traits in organisms are necessarily adaptive; some are additional, superfluous. These “add-ons” keep passing through the eye of the evolutionary needle because the environmental filter does not sift them out and because something facilitates their passage. Such facilitation can at first, in a small population, take the form of an ordinary “passive-statistical” genetic drift. Since the traits realized by the drift are generally not a substrate of any functional “meaning,” they are instrumentally harmless but serve no purpose. If endorsed as a result of sexual selection, they might have participated as an “aesthetic” criterion in that selection. Discovering them is equivalent to discovering “autonomous values” in biology. Such hypotheses have been put forward, if in a slightly different form. Employing instrumental axiometry, we would conclude that a fancy courtship feather display in birds “is not good for anything” because a signalization promoting contact between the sexes can be achieved by much less showy and “aestheticized” feather patterns. If we definitively prove that the display is informationally redundant, that it does not combat environmental noise or augment the signal’s species specificity, we must accept that it is the result only of the sexual partners’ “aesthetic decisions.” In that case a given feather pattern is preferred for reasons that go beyond signaling—the partner birds simply “like it” better.

Of course, biology is not permitted to speak in such terms; all it can do is note the nonadaptive redundancy of certain information. Most likely the reasons why that information has been privileged in sexual selection will remain a mystery forever. Only reasoning by analogy, which in a methodologically cumbersome and convoluted manner extrapolates from humans to other organisms, allows us to ascribe this redundancy to aesthetic criteria used by some animals (birds, lizards, etc.) for mating. But to determine what in that information is instrumentally superfluous, we must know in complete detail the utility boundaries of all information in an organism. Therefore, “the discovery of values of the second kind” must be preceded—and even that is no guarantee—by a thorough grasp of the instrumental axiometry of homeostasis. Unfortunately, this goal lies an unknown number of generations of biologists away.

V. The Axiometry of Progress

The term “progress,” when used in biology, denotes an increase in specific abilities. Not always, of course: we also say that a disease is “progressing” in an organism. But in general, progress means that the next state is in some way better than the previous one. The path that evolution has covered from amoeba to human being seems to be clear evidence that great progress was achieved. But when we subject this progress to axiometric tests, we encounter difficulties.

These difficulties do not appear when we compare the elements and functions of organisms in isolation. Different evolutionary solutions to the “problems” of seeing, hearing, blood circulation, locomotion, or formation of the “image of the world” in a nervous system can be ranked according to the level of their effectiveness. As usually happens, when we have an aim specified in an articulated assignment, technological valuation is fairly straightforward. It might seem that to determine how far each solution is from the optimal point on our scale it is sufficient to transfer the results of our comparison to other forms of life plucked from the evolutionary tree. But this will quickly turn out to be nonsensical, because as the microscope is not the best watchmaker’s loupe or the lighthouse the best car headlight, so the eye of an eagle is not “better” than that of a fly, nor does a flatworm “improve” when given the organs of sight. The failure of this method prompts us to compare whole organisms as homeostats. But the best homeostat is not the biggest, not the one with the largest number of sensors, not the one that has the highest degree of harmony or complexity, and not the one that is thermodynamically the least probable. The best homeostat is simply the one that pursues self-preservation with the greatest success.

The task of homeostasis is not equally difficult in every environment; this difficulty depends on the level and quality of the environment’s “noise.” Ranking all the Earth’s environments by difficulty is probably not possible, however, because a perturbation in one environment poses a different homeostatic task to solve in another. A good point of departure in our comparison would therefore be to introduce the notion of a homeostatic minimum for a particular environment. We could then proceed to create the notion of a “generalized minimum” as the ability to survive in two, three, or even more different environments (on land, in water, in air). Naturally, ecological classification distinguishes many local niches in each environment: adaptation in shallow waters is a completely different task from adaptation in oceanic depths. This fact notwithstanding, the notion of either a single-environment “minimal homeostat” or one “generalized” for several environments would be tenable if the course of evolution ran along either of those lines. But it doesn’t: organisms with radically different structural blueprints can occupy the same ecological niche. Nor can the information theorem of heterogeneity (which says that a regulator must have heterogeneity sufficient to represent the environmental states) serve as a criterion. This theorem assumes that the regulation process—in our case, homeostasis—must act continuously and that the suspension of life functions is due to homeostatic failure. But such a suspension is often reversible. A homeostat, subminimal in a given environment, may not pit its heterogeneity against that of the environment but instead cease to be a homeostat temporarily. An engineer would appreciate such a trick and would love to be able to use it. When an airplane is about to crash, it would be wonderful if it could temporarily turn itself into a parachute. That this does not happen in reality is due to technical reasons, not any engineering principles. A person in an awful situation, instead of struggling to extricate himself, could hide in a freezer and in a state of “reversible death” wait for the situation to improve. Such behavior might be culturally judged as cowardice, but it is a value judgment distinct from instrumental valuation. As Scripture says, better a living dog than a dead lion.18 Because no organism can withstand all perturbations, the ability to die reversibly in an emergency would benefit all living creatures. If evolution did not make this strategy universal, it must be due to enormous difficulties on the path to that solution, which modern medicine is trying to achieve in collaboration with biology. Artificial hypothermia has its equivalent in the natural hibernation of some mammals, but mammalian reversible death by “vitreous” freezing, of which biotechnologists dream, has no natural equivalent.19
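
The “theorem of heterogeneity” invoked here is Ashby’s law of requisite variety. In its simplest logarithmic form it states that the variety of outcomes a regulator can hold within bounds is limited by

    H(outcome) ≥ H(disturbance) - H(regulator),

so only a regulator at least as varied as the disturbances acting on it can keep the essential variables steady. The argument above turns on the tacit premise behind that bound: that regulation must run continuously, with no option of being suspended and resumed.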

Evolution creates “subminimal” homeostats, but it creates “redundant” homeostats as well, which are usually considered “progressive” forms. Evolution is a game that complicates itself over time. Its rules, initially limited to interaction between an organism and its environment, become enriched with rules for interaction between organisms, at first between organisms of the same species, then between organisms of different species. Locomotion solutions are simple when a predator ciliate chases a vegetarian ciliate but grow complex when a lion chases an antelope. Yet these two solutions are incomparable, and they remain incomparable if, for example, we replace the lion and antelope with a pike and a bream. All address the same homeostasis problem but for homeostases at different levels of redundancy.

Evolutionary progress may be real but is not rational from any technological point of view regarding homeostasis—because an engineer believes that a solution must be as simple as possible, that “constructed entities” should not be multiplied beyond what is needed. If we are in Europe and our destination is America, then any means of transportation that carries us from Europe to America faster than the one used before is a solution as rational as it is progressive. Such solutions are compared with the in-principle unattainable locomotion ideal: an instantaneous transfer, one that takes no time at all and therefore marks the end of the scale of possible improvements. But if the task is the construction of a homeostat, there is no point in making the homeostat more complex. If an increase in complexity could bring us closer to the desired end of the scale, the ideal homeostat, then complexification, justified by increasing homeostatic capabilities, would earn technology’s approbation and would become subject to an instrumental axiometry deducible from technology. But this is not the case. Higher organisms do not function better than lower organisms as homeostats, and an increase in complexity does not mean progress toward “a perfect homeostat.” The difference between an ant and an antelope is like the difference between shooting craps and playing chess, not like the difference between a bicycle and an automobile. The latter difference is between two things that essentially have the same purpose but operate at different levels of complexity.

Evolutionary progress in organisms viewed globally amounts to the increasing investment of elements in a living system for the sake of better stabilization. The tasks that need to be solved become more complex in all aspects—in materials, in energetics, and in both local and integral regulation—but our knowledge of the theories of regulation and dynamic programming, which is still negligible, suggests that although the difference between the information invested in an ant and an antelope may be physically measurable, it is not a sufficient basis for applying technological axiometry here. Neither an ant enlarged to the size of an antelope nor an antelope “microminiaturized” to an ant would be able to function. A hummingbird and an albatross may have the same construction blueprint, but they appear to be two different solutions to the problem of flight: an albatross is a long-distance glider, while the flight parameters of a hummingbird are more similar to those of larger insects. The baselessness of value judgments delivered when different organisms are compared causes the apologist of progress in evolution to prop his thesis—no doubt unintentionally—with ad hominem arguments. Julian Huxley, for example, compares the eagle to the flatworm and asks readers to picture the tremendous “progress” that has occurred between the two forms.20 But what critical judging instance does this appeal address? Only our aesthetic criteria make us imagine that an eagle’s existence is wonderful and heroic, whereas that of a flatworm is opportunistic and ugly. These are not instrumental values, however. What’s more, Huxley’s words also imply an assumption that cannot be made in engineering. No engineer will say that the electrification of a city of one million is more progressive than the electrification of a hamlet, or that train dispatching in a large railroad network is more progressive than that in a small one. In each case, the former task is more difficult, but that’s all. It is inconceivable that someone would ask us, “Which would you prefer to be, a small railroad network or a large one?” Yet it is possible to ask, “Which would you prefer to be, an eagle or a flatworm?” Empirically it is the same nonsense, but people will choose the eagle, not because “the eagle’s situation” is “objectively better” than that of the flatworm—no one knows that—but because they feel that it is somehow existentially closer to them. The subtext of many an argument on the matter of evolutionary progress is, “The evolutionarily higher an animal we are, the nicer (more appropriate, more interesting, experientially richer, etc.) it feels.” We should not be surprised at the power of an argument like that.

A change in the environment means a change in the rules of the game. The factor that caused the increasing cephalization of organisms, or more generally their “individual ability to learn,” was that the genotype’s rate of learning could not keep up with the pace of environmental changes. Obviously, the longer the individual life cycle, the slower the “chromosomal learning” will be, since the “quantum” of learning is the individual passage of an organism through the environment. In a simplified form the adaptation dilemma is: how can a regulator be built that performs well in the current version of the game but at the same time is prepared for any significant and possibly rapid change in the game’s rules? Evolution has answered this question in a variety of ways. First, if the change is significant and rapid, withdraw from the game temporarily by “turning into a spore.” But this tactic presupposes an eventual return of the original conditions (the game that the organism knows how to play). Second, maximize the preprogrammed heterogeneity of the regulator and make the preprogramming itself evolutionarily flexible. Finally, if possible, replace the preprogramming of the regulator with self-programming, that is, a capability for learning.

In practice, the “answers,” represented by organisms, are “mixed.” The first belongs to the category of “spore” solutions, and it came first because reversible homeostasis is easier to realize the simpler the homeostat. The second answer makes up the class of solutions given by insects. Insects play an enormous number of preprogrammed games, with the preprogramming stabilized in the forms that passed through the environmental sieve many millions of times. The third answer, constituting the class of cephalizing solutions, is better than the second, but compromises between them are possible. The first solution retains a perfection of a kind that the other two have not achieved. The hibernation of some mammals is an attempt in that direction.

It is often said that insects are a lower form of animal that has shown a certain evolutionary success. Success and progress are not the same thing, however. Insects have existed for several hundred million years. Some build underground cities, others farm plants for food, and still others “domesticate” animals (other insects). Certain insects (bees) are the only animal form that has created a simple but effective language for instrumental communication that is hereditary. There are four times as many insect species as all the other species taken together. Insects are found in every environment on Earth except the oceans. None of the higher forms, which appeared considerably later, pushed the insects out of their ecological niches. In their time, the planet went through many random oscillations—mountains rose and fell; deserts turned into jungles and jungles into deserts; vast marshlands dried up, which finished off the saurians; ice ages froze once-subtropical regions; the flora of whole continents changed—and insects passed through all those environmental sieves. When humans eventually appeared with insecticides, insects quickly adapted to them and to their poisons. But this success is not an argument for the “progressiveness” of insects’ construction. And if progress in construction is something other than the optimization of survival in multiple environments, there are no homeostatic criteria to measure it. Insects indeed fail to meet the requirement of “progressiveness,” which according to Huxley is the power to evolve further: a progressive form should be not only optimally adapted but also able to change into the next, higher form.

It is easy to notice the problem with this definition, which implies a “duty” to change. In science, an analog of this evolutionary rule would be an empirical theory that not only predicts things but also is falsifiable, that is, it must be possible for data to disprove it. But if it turns out that a theory is always predictively effective, that data repeatedly confirm it instead of contradicting it, we accept in time that the theory is good. We do not think that it is the worse for not needing modification. Just the opposite: the larger the area of facts it covers and the longer it lasts without change, the better it is.

One could say that these are a theory’s “external” aspects, that is, its relations to the real world, the environment, and that we valuate it accordingly. We can also valuate a theory in another way, by its internal properties: the criteria of structural simplicity, logical consistency, and so on. If someone were to show us a new theory that was as successful in prediction as an old one but more complex internally, we would never say that the new theory represented progress simply because it was cleverer in its structure or had more mathematical finesse.

In principle, theories are valuated according to their predictive effectiveness; and what predictive effectiveness is in science, the effectiveness of survival, fitness, is in evolution. Neither the “higher pleasures” of life that a larger brain may provide, nor the enormous complexity of the brain’s structure, nor the regulatory ingeniousness of the body housing the brain justifies instrumental valuation of a mammal over an insect. Of course, we can employ another kind of valuation—by degree of complexity, for example. But then the higher value (the higher complexity) is a consequence of our assumptions, which are not instrumental. We would have to accept first that what is more difficult to make automatically merits higher valuation. But why should the probability of a state be inversely proportional to its value? No one knows.

Insects play the game against nature no less effectively than we do. A species that has survived several hundred million years does not need to “prove” anything, unlike a species that has been around for a mere six hundred thousand years. Insects passed the test; humans are just preparing to take it. No doubt insects are closer to the “homeostatic minimum” than we are. Their simpler solution has proved extraordinarily stable. Whether the cephalization solution is better, more progressive, is an open question worth considering.

The postulate of permanent plasticity is not absolute. It is relative with respect to the set of possible perturbations to the homeostatic process. If a form arises that can handle any perturbational eventuality, there is nothing “better” than that; there can only be a different form that solves essentially the same task in a different way. But is the task truly the same? Does not civilization introduce new conditions to be met, conditions that may contradict those created by the evolutionary process? When a civilization strives to minimize individual mortality, it exhibits a parallel with “engineering frugality” that contradicts evolution’s approach, which does not minimize mortality at all. For evolution, preserving a species seems to be a task more important than that of preserving its individuals. And preserving the biospheric process itself is the most important task of all. Civilization attempts to reverse this hierarchy. But comparisons can be made only when the tasks are analogous. A flying species is not automatically better than a land one just because it can take to the air, and a species that conquers space is not automatically better because it can do what so far no other species has done. Organisms should be compared not according to the success of one particular solution but according to how they handle the “minimum task.”

A recurrent theme of evolutionary progress is the thesis of the superiority of the cephalization solution over all others. The increase in neural mass noted in the diachrony of paleontological reports appears to confirm the universal benefit of having a brain and also the higher adaptivity of animals that have larger brains. Yet what exactly has a large brain, comparable in size to those of the primates, given to dolphins? A stable presence in the ecological niche of the shark, an undisputed “dimwit”—and not much else. In evolution, it is necessary “first to make the fish,” then “turn it into the amphibian,” and “through the reptiles” one can get to the mammals. Since a large part of the evolving organisms gets “trapped” on the way in the absorbing “sinks” of evolutionary immobilization, the solution that peaks in the neuronal approach—the anthropogenic—belongs to those that are the most difficult to achieve and are the least probable. But the maximum homeostatic value of this solution demands a separate justification; otherwise we are committing a circulus in explicando: intelligence is the best because the path to it is the longest, and the path to it is the longest because intelligence is the best. That intelligence has created culture is a separate issue. We cannot valuate it in its immanence; we can only use the biological criterion of effectiveness. One of two things must obtain: either intelligence is the pinnacle of homeostatic self-preservation among all evolutionary solutions, or it is just one of many solutions, having no universal value.

Plants would survive without animals, but animals would not without plants. And this includes humans. At present, the maximum perturbation that the lower forms could survive would be too much for the higher forms to handle. As spores, the lower forms could escape the destruction of a nuclear war, a nearby supernova, and other planetary or cosmic catastrophes to which civilization would succumb. Intelligence does enable us to note this vulnerability and thus make civilizational efforts to protect ourselves—but what measure do we use to mark the value of this diagnostic talent displayed by Pascal’s reed? Especially since the optimization of evolutionary success always involves the overall equilibrium achieved by the biocenosis. In the past there was so much talk about evolution’s “tooth and claw” that people forgot that a predator that hunts too effectively, eliminating all its prey, dies out. Consider the evolution of parasites: the evolutionarily young parasites can be distinguished precisely by the fact that they are “too effective,” killing their hosts and so endangering themselves. The evolutionarily old parasites work out a nonsuicidal equilibrium with their hosts. Evolution eliminates forms that are too exclusive in their self-preservation, making their narrow success temporary. Hence a form that defeats all other forms in the competition disturbs the ecological balance and turns into a self-destructive homeostat instead of a perfect one. Some “evolutionarily experienced” parasites show so much “care” for their host that, when there are too many of them in one host, they curb their “exploitation.” Yet what in a long temporal series appears to be, in a purely physical sense, an inevitable outcome (an equilibrium that is simultaneously a criterion for the singular homeostatic effectiveness of the species and a dynamic state capable of further “complicating random walk” only when the stability of the entire system is preserved) is in reality a vast regulatory automatism, which our so-called intelligent activities undermine instead of supporting. Saving the biosphere is in our best interest, instrumentally, but we have not been very successful on this front. So civilizationally it is a long path from the emergence of intelligence as a homeostatic tool to making it safe, that is, eliminating its self-endangering potential. Intelligence may eventually become independent of the processes of the biosphere, at which point the biosphere will survive only if intelligence wants it to, but that decision will be based on humanitarian, not instrumental, concerns. But of course, this refers to an unimaginably distant future, whose facultative state cannot be a measure of what is taking place now.

An opinion, sometimes formulated implicitly and sometimes explicitly in the context of the thesis of evolutionary progress, is that anthropogenesis is not a passing phase. The criterion for evolutionary progress, as Huxley clearly put it, is two-pronged: optimal adaptation in the present and the possibility for the evolutionary process to go “further” in the future. If the human being is the ultimate form, if there is no better adaptation tool than human intelligence, then we are a sink from which evolution cannot escape. But from purely organizational, statistical, and also adaptational points of view, machines that replace biological forms and create a global, autonomous planetary homeostat are a more robust solution than human civilization. It is futile to search the arsenal of evolutionary criteria for an argument that would make the value of civilization greater than that of planetary machine homeostasis: no such instrumental criteria exist. What are we to do, then, if some day cybernetic machines inform us that they are “the next stage of evolution”? If the principle of “transmissibility” as the determinant of progress is nonlocal, we must pack our bags. If it is local, we have merely articulated an apologia for ourselves, for the system that created us. As we can see, granting intelligence the highest, ultimate value is a self-aggrandizing gesture that may turn out to be double-edged. We may lose the race in purely instrumental abilities, and therefore we should not consider the evolutionary-instrumental justification of culture as sufficient. Let us instead give civilization an autonomous value that is not deducible from anything. Of course, the machines that pretend to the position of our more perfect successors will consider that merely a desperate human trick—but this dispute with them is definitely beyond the limits of our topic here.

The strictness of my valuation of intelligence may be seen as a disregard for the values that have led to the “psychozoic culmination,” but the rigor is just a consequence of the initial assumption—that of a “falsifiable,” experimental axiometry. From its point of view, life is a self-supporting systemic response that is as indestructible as physically possible. This indestructibility is the homeostatic minimum, environmentally bounded by the amplitude of biospheric conditions. In valuation, the survival of individual homeostats, that is, their personal longevity, apparently aiming for the principially unattainable goal of immortality, may be indistinguishable from the variant in which the system is stabilized by frequent renewal (the “mayfly method”). Similarly, the expansive nature of life, radiating from a single environment in all directions, should be valuated only by its resistance to perturbation; the expansiveness is not a value in itself but merely a means of increasing the number of tests for durability. As in engineering, the instrumental value of an object is determined by the result of the object’s durability test, and the test in its technical aspects is subject to an axiometry established by measurement theory. The test, as a set of procedures, can thus be valuated according to its ability to uncover those features of the object that are of interest. In engineering, the test is relative with respect to the purpose that it serves; in biology, it is an end in itself, because it is given—by the self-preservation principle of homeostasis. If the products of evolution exhibit an increase in self-preservation effectiveness despite rising levels of noise, greater complexity can be explained instrumentally: for a given homeostatic minimum, the heterogeneity of the homeostat is proportional to that of the environment. But the overall bioevolutionary characteristics cannot be reduced to this proportionality, because it is only one of many determining factors.
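
The proportionality invoked here recalls Ashby’s law of requisite variety, which makes a similar point in regulatory terms: a homeostat can hold its essential variables within the survival zone only if its repertoire of responses is at least as differentiated as the disturbances the environment can produce. The sketch below is only an illustration of that reading; the exact-cancellation rule and the names are my own assumptions, not the text’s.

    # Illustration of variety matching (assumed toy setup, not from the text):
    # each environmental disturbance must be met by some response that keeps the
    # essential variable in the survival zone; here "safe" means exact cancellation.

    def survives(disturbance: int, response: int) -> bool:
        """A maximally demanding environment: only the matching response is safe."""
        return response == disturbance

    def homeostat_copes(n_disturbances: int, n_responses: int) -> bool:
        """Can a regulator with n_responses states neutralize every disturbance?"""
        return all(
            any(survives(d, r) for r in range(n_responses))
            for d in range(n_disturbances)
        )

    print(homeostat_copes(n_disturbances=10, n_responses=4))   # False: too poor in states
    print(homeostat_copes(n_disturbances=10, n_responses=10))  # True: heterogeneity matches

Read this way, the proportionality is a lower bound on the homeostat’s internal heterogeneity, not an account of everything evolution produces, which is precisely the caveat with which the paragraph ends.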

Culture is an exceptional form of adaptation: first, because it is created by the other, metagenetic channel of homeostatic regulation (the channel that enables metachromosomal, cumulative learning), and second, because, unlike bioevolutionary change, a cultural change is principially reversible. This is suggested by the fact that the destruction of a culture, whatever its cause, does not affect human biology; it merely pushes human beings back to the point of origin of sociogenesis (of course, within limits: a literally total annihilation of a culture is a historically rare phenomenon, and the purely biological characteristic of the human being, the one that is solely determined by genetics and is completely peeled away from the layers of culture, can be—negatively—affected only by certain intentional procedures). Slightly digressing, I note that human efforts have usually aimed at making a culture precisely an irreversible phenomenon, and that the failure to achieve that goal has been considered a culture’s weakness. But this phenomenon cannot be valuated according to typical evolutionary criteria. The rules of cultural development are not bioevolutionary rules, so bioevolution can teach us nothing about cultural norms and, vice versa, we cannot apply cultural criteria to evolution. Consequently, the point at which the bioevolutionary process leaves its traditional, monoselective, exclusively biological stochastics, that is, “the anthropogenetic locus of evolution,” cannot constitute the pinnacle of the valuation scale that the biologist-axiometrist utilizes. The purpose that the properties of this point serve is no longer measurable on the scale of biological values. What is more, that scale itself changes at this point, and biology becomes valuated by culture. And this valuation is not necessarily instrumental. There is no contradiction between saying that “cultural criteria cannot be applied to evolution” and then saying that “biology is valuated by culture.” The first, an “impossibility,” simply means the absence of a place that is external to both culture and biology; the second, a “possibility,” denotes the valuation of biology from inside a culture, which is therefore not completely empirical. Ascribing the greatest value to intelligence, stressing Schopenhauer’s personally realized “principium individuationis,”21 and striving for immortality, no less valued than the human striving for “the ultimate truth” in epistemological determinations and for an “understanding of the world”—all these are bound to be merely accidental with respect to the evolutionary process. This accidental character applies to valuation, not to the facts that take place. Facts can be used to measure the degree to which particular features of evolutionary dynamics determine particular nuclei of human cultural endeavors (this influence is principially stochastic and therefore largely unpredictable). Neither these stochastics nor their consequences can be characterized as “good” or “bad” in a sense that goes beyond homeostasis. Even if all possible kinds of intelligence were equifinal states of many distinct planetary bioevolutions, the valuation of each bioevolution, made by the intelligent beings that are its own products, would be a declaration articulated from a random, arbitrary point of view. Even if the whole universe were bursting at the seams with intelligent beings, each and every valuation of the processes that produced such an outcome would remain instrumentally unjustifiable.

Life uses various tactics to realize homeostatic minima and homeostatic redundancies. We can call progress the increase in various abilities that takes place along the branches of the evolutionary tree. But the scale that would allow us to measure how much instrumental goodness emerged between evolution’s biogenetic beginning and its psychozoic end does not exist, because where there is not one path but immeasurably many, we cannot speak of values without ambiguity.