The Ethics of Technology and the Technology of Ethics

Introduction

1. The acts of a lion killing a lioness, a rabbit doe eating her kittens, or a praying mantis female eating the male after copulation usually are not considered unethical, because we do not put animal behavior in such a category. Yet we may find a difference between the first two examples and the third: what for the praying mantis is a species-specific behavioral stereotype is a departure from a stereotype for the rabbit and the lion. The difference is based on the idea that animals—in view of the goal of species preservation, which is an evolutionary given—cannot behave in a manner that leads to their species’ extinction. In the sense that it is teleologically conditioned, the stereotypical animal behavior is rational—it would stop being rational, for example, only if the female mantis ate the male before copulation.

2. Human ethics contains a similar rational nucleus, but it cannot be considered a stereotypical species behavior simply because no such unified stereotype exists. Ethics appears to be a consequence of the emergence of language, which enables us to compare present “model” situations with those that have taken place or are anticipated. If the model is “appropriate” or “inappropriate” (distinct from the criterion of true or false), it is possible to evaluate it axiologically. And if the model represents an interpersonal situation and the comparison is made to determine its consistency with behavioral directives (stabilized by culture), then it acquires an ethical character.

3. Which situations are subject to ethical evaluation is decided by culture. Personal unemployment, for example, is ethically neutral in some cultures, while in others—especially industrial ones—it is blameworthy. Ruling cultural patterns of behavior may require constant activity, and moreover activity of a specified type: in some cultures creative work sensu stricto is esteemed, while “doing business” is looked down upon.

4. By “ethics of technology” I mean, in this essay, the effects of technological development on the ethical behavior of individuals in society. When speaking of “ethical norms,” I mean those that can be “distilled” and reconstructed from empirical studies of individual behavioral stereotypes in ethical situations, not the norms that people stereotypically declare in their oral statements. A societally declared ethics is not necessarily identical to the one that the society practices. Such divergence between a theoretical formula and an actual stereotype used in practice occurs in all societies, and if they are stratified (by class, occupation, etc.), the divergence, which is to some extent adaptive, can create distinct group, class, or professional ethics. The difference between ideal and actual behavior is certainly an important parameter of any culture, but I am not addressing that topic. Speaking of the effects of technology on ethics, I limit myself to the changes occurring in “ethical behavior” without giving much attention to how they are viewed by education, propaganda, or religion.

5. Readers might get the impression that the effects on which I focus work like this: ethical norms A and B allow for the development of technology X, but after some time it turns out that the technology has pushed norm B out of the system and replaced it with the new norm C. The system (A, C), different from the original (A, B), can be called an ethics transformed by an instrumental effect or, in short, “the ethics of technology X.” But, except in special situations, technological processes do not affect ethical phenomena in this way. A change in ethics after a societal transformation caused by technoevolution is marked by an adaptability that is extemporaneous; ethics is thus a behavioral program that undergoes transformations on a level different from the one on which it actually operates. Here is an analogy from the organic world: a change in ethics corresponds to a transformation that creates a new species; the factor that induced the variability does have a connection with the creation of the species, but adaptation is not a simple result of inheriting acquired traits. Like genotypes in a biogeocenosis, people in a society have at their disposal an enormous excess of diversity in their response to situations, a diversity that can exert a regulatory effect if a need arises. In a genotype, variety exists thanks to the reservoir of recessive genes, which is continually being enriched by mutations; in homo socialis, the variety is due to behavioral plasticity (the “reactivity potential”). In an evolving culture, technology and ethics appear to be dependent random variables, so we need to study changes in both and, at the same time, heuristically accept their stochastic nature. Such study is difficult to carry out, because in a complex system like society causal chains often fork, and we end up not with a chain but with a network. In that case, selecting specific links and connecting them in a single chain will always be somewhat arbitrary. Therefore, instead of seeking technological causes and linking them to ethical consequences, we should look for correlations. To my knowledge, no one has done this in a rigorous and well-documented way. For example, there may be a connection between the ethics gravitating toward nihilism in some youth groups and the “technological explosion” in this century, but this hypothesis cannot be subjected to “falsifiability” tests.

6. In the second part of this essay I deal with the “technology of ethics” understood in two ways: a search for technological tools for modeling ethical phenomena in a research program to study cultural/societal phenomena in a substrate that by itself is neither “societal” nor “human,” and an attempt to harness instrumentalities to serve ethical directives.

I

1. Since technology rearranges the environment in ways that make it conducive to human existence, it is an extension of natural homeostasis: there is no difference in principle between the five senses and the sensors of research instruments, or between muscles and engines. The senses and sensors both receive useful information from the environment; muscles and engines, guided by that information, both enable energy independence from the environment. But technology, once set in motion with the aim of “satisfying needs,” increasingly tends to facilitate all kinds of “satisfaction” that we can imagine. From a strictly instrumental point of view, there is no significant difference between satisfying the hunger for food and the hunger for sex, since both are biological. Technology, which entered the area of interpersonal relationships a long time ago, is now, in the next step, penetrating into more private spheres of our existence, with ambiguous results. We are finding out again that the sequence in which we conquer specific segments of nature, including our own bodies, a sequence that nobody rationally planned, can hide antinomic traps.

Technology offers choices where only fatalism existed in the past. In the not too distant future, we will probably have the option to decide the sex of an unborn child. The equilibrium ratio of sexes in humans has been regulated, as in any “undomesticated” species, by probabilistic chromosomal automatisms. But if parental decisions about the desired sex of the child depart from the ratio determined by those automatisms, for example, due to a cultural preference for one sex over another, the existing equilibrium will be disturbed, and we will need to take steps to restore it. This is one example of a general phenomenon: when parameters that have been kept in a homeostatically beneficial interval by “natural” regulative feedback (i.e., without human interference) are removed from the control of those automatisms by a new technology, “artificial” actions may be required to keep the parameters in that interval. An “artificial” action may be one that limits the very freedom of the individual that the new technology has only just expanded. In this situation, the peremptory simplicity of the original, ethically neutral statement “It is not possible” (i.e., to predetermine a child’s sex) is replaced by the directive “It is not allowed” (albeit technically possible—unless, say, the quota for a choice has been reached).

What, then, about the possibility of selecting other physical and mental characteristics for an unborn child, which biologists currently anticipate (Rostand, among others, has written on this topic)?1 It would be extraordinarily difficult to satisfy the wishes of the parents while at the same time keeping in mind the welfare of society—a society consisting solely of geniuses would probably not function in any kind of equilibrium. But the changes that would have to occur in specifically human values should be considered even more significant. Suppose, for example, it becomes known that the exceptional talent of Mr. X is not the result of a “chromosomal accident,” or, one might say, “winning the jackpot in the inheritance lottery,” but instead is due to the permission, obtained by Mr. X’s parents from the appropriate authority, to add this talent to their child’s genotype. Objectively there is no difference in principle between the natural genius of today and the engineered genius of the future, because in both cases the talent’s cause is external to the individual. A great composer is a great composer whether his genes assembled themselves “on their own” or were assembled by a genetic engineer (with proper administrative permission). Yet it seems that an instrumental intervention that would privilege some people over others creates in the public consciousness a feeling of injustice, because not everyone would obtain what almost everyone lacks, that is, the “talent gene.” The novel of a “synthesized” author might still draw applause, but many readers might feel a rather general aversion for the author’s persona. Once the technology of “genetic composition” is initiated, changes in the societal system of values that are called autonomous must inevitably follow. But let us leave the worries about that to the future.

2. Two-thirds of humanity, two billion people, are chronically underfed, and of those about forty million starve to death each year. At the same time, elsewhere, an excess of food shipped to market makes it necessary to destroy it by the crate. Yet it is not correct to think that the poor are miserable and the rich are happy. In reality neither are happy, although the effects of surfeit and want have little in common. We tend to take want seriously but disregard, or consider humorous, the dangers of surfeit. This is understandable: our species evolved in conditions of a continual struggle to meet elementary needs, like all “undomesticated” forms of life in nature. The situation in which hunger and all desires can be satisfied too easily is a true novelty in our history, and until recently it was held to be a good thing. But we are learning now that surfeit can be detrimental to the values that constitute the motivational skeleton of human behavior. The negative impact of a technological satisfaction of needs is sometimes obvious. For example, a few micrograms of LSD (lysergic acid diethylamide) induce a state of subjective bliss, almost a mystical fulfillment, unlike anything else. Human beings are always looking into the future, and the meaning of life is shaped by expectations, hopes, and desires. LSD, by removing all personal anticipations, amplifies the existential experience of the present so much that everything else seems meaningless, as if someone had reached the peak of a mountain that had never been climbed before. A comparison with the effects of LSD on invertebrates is instructive. A spider under the influence of LSD keeps weaving its web,2 but the web is much more geometrical than normal: the drug cuts off external stimuli but does not interfere with instinctive behavior, which, established once and for all by genetic programming, can now manifest itself in its “purest” form. A person under the influence of LSD, however, loses all ability to act in the real world, because his motivational mechanisms, which are not inborn but created by cultural imprinting, are much easier to dissolve. A state like this is detrimental because it leads to breaking all ties with other people.

Because a society consisting of individuals under the influence of LSD could not function, the drug has become a threat for society, especially in the United States, where millions of young people use it; it was declared a narcotic (even though technically it is not), and its distribution was made illegal. It is used, experimentally, to ease the suffering of the terminally ill, and it works well: people become indifferent to death though aware that it is imminent.3

More or less at the same time, oral contraceptives were introduced in the United States. The widespread use of these pills has not shown any adverse effects on the body, and prima facie it is not clear why anyone would object to something that separates procreation from the pleasure that evolution has attached to it. Considering the current growth in world population, the pill has come just in time. Previous contraceptives were lacking in esthetics and reliability, whereas the pill can be taken like a vitamin tablet, and, what is more, it can be used post coitum, which is a great psychological advantage (the woman may not anticipate the possibility of sex). Men and women now have equal rights in the biological sense as well, as both can avoid the consequences of copulation.

But for both LSD and the pill the transformative effects of technology have negative aspects. The chemically guaranteed barrenness of copulation (not omitting other factors, about which more later) may lead to the weakening of the relationship between the two sexes, just as LSD severs a person from other people. The problem is not so much the value of the “experience of the absolute” itself, or the fact that it was chemically induced, as that this technological intrusion achieves full satisfaction through purely local action, and local actions may have distinctly nonlocal consequences. The use of pesticides to kill certain insects ended up shaking the entire ecological pyramid of species in a given area. Insecticides cause an imbalance in the material system of the ecological hierarchy, and chemicals that quench desires and motivation can cause an imbalance in a society’s axiological system. Making the sexual act “safe” by removing its normal consequences adds to the ease and casualness of sexual relations that has already been taking place in our culture. The historical values regarding sex are due not to inborn mechanisms but to an internalization of particular attitudes that have ethical dimensions, have been approbated by society, and are therefore deemed valuable. That the imponderabilia of eroticism are conditioned culturally, just like the complex and often painful initiation practices in primitive societies, does not mean that they can be dismissed as irrational and dispensed with. In fact, all culturally relativized values are “unnecessary”—but only in the sense that in different cultures, different values serve the same purpose. The obstacles that a society places on the path to an individual’s maturity (group, family, professional, sexual obstacles) are not just “superfluous complications.” By removing them, we are at the same time destroying attitudes that motivate, usually without offering anything in return. Technology is more effective at destroying autonomous values than at creating them. Forcing technological “improvements” can therefore initiate an “axiological implosion,” a collapse of an entire value system. It may lead to a life that is effortless but not worth living.

I am not saying that contraceptive pharmacology will destroy erotic love. There surely are cultures axiologically constituted such that the pill has no “value-killing” effect. But in ours, given the trend mentioned above, the pill is a factor that makes “loveless sexual situations” more probable. The statistical aspect of this phenomenon is essential, because it determines the developmental direction of ethical change. It is true that there have been women who were chaste only out of fear of pregnancy and did not really respect the immanent values of eroticism; for them the abandonment of sexual ethics will now surface at the highest level of externalization—in behavior. But this factor does not seem important to me. Ultimately it is mass behavior that decides the hierarchy of societal values, not the analysis of individual motives and attitudes. (I sidestep the questions of which is more important, what people themselves think that they are doing or what others, e.g., psychoanalysts, think about the reasons for their actions; whether the “spontaneous” self-knowledge of common people or the “professional” introspection that philosophers practice should be the starting point for such analyses; etc.)

3. Knowledge today is gained through long and arduous research. If an “information pill” could supply people with knowledge, such research would become superfluous. The technology of “free learning” does not yet exist but appears to be a possibility. The toil of learning, however, has a role beyond the acquiring of information capital. It trains people to overcome obstacles, cope with stress, and improve their character. An “information pill” could therefore harm a person’s mental development by providing erudition to a mind that is fundamentally unprepared to make full use of it. A new form of education would then be needed: how to benefit from the information just ingested. Or, what admittedly borders on the absurd, a direct intervention in the brain might arrange its processes into the same state that would be achieved through “regular” learning. Yet if it were possible to obtain a universally proficient mind through a sequence of instrumental (pharmacological or electrochemical) procedures performed on the brain, what values would remain in such a world to give life meaning? Creating shortcuts for all possible needs, desires, and wishes should not be the purpose of technology, because where everything is available in an instant, nothing has value. Value originates only from a hierarchy of goals and a gradient of difficulty that must be overcome to attain them.

Meanwhile technology is invading us on many new fronts, and it is not clear how our bodies should defend themselves, as the besieger appears to be the friendliest of allies. The philosopher Pangloss may not have been correct two hundred years ago,4 but we are now approaching the best of all possible worlds at the speed of a cannonball, a perfect place where pharmacies will distribute knowledge without learning, mystical states without faith, and pleasure without scruples. In this modern version of a mercenary economy, convenience replaces value. It is hard to object to the introduction of contraceptives, since desperate situations demand desperate measures, but it is necessary to call a spade a spade: technology cannot replace the axiological backbone of civilization. In the modern world, customs and moral norms are unable to resist the pressure of technology; they can slow it down (as in the case of LSD), and even that only when the instrumental innovation’s effects directly clash with the code of established laws. But technology, instead of launching a direct attack, takes roundabout paths that render societies and their laws practically helpless. And the damage, once done, cannot be reversed. When a technology becomes omnipresent, people grow accustomed to it and would consider its absence almost as an injustice. Changes in ethics occur gradually and without plan. I do not know if anyone has studied the socio-ethical aspects of the liberation of atomic energy, for example, the attempts to draw parallels between the genocidal practices of the Third Reich and the first use of the atomic bomb by equating the creators of the gas chamber with the creators of the bomb—because they all were scientists and engineers, weren’t they? In the area of societal practice there is no prognostication, only randomness; no control but at best a “concerned” passivity; and in the place of knowledge there is an ignorance that is barely even aware of itself.

4. One might argue that these lampooning remarks on how technology degrades ethical norms should be accompanied, for the sake of completeness if nothing else, by an apologia of technology’s positive influences. We know that advances in energy, transportation, production, and distribution of goods all promote global cooperation and are therefore not only morally commendable but also economically beneficial. Unfortunately, the many antagonisms of the modern world can put an end even to something that is materially beneficial for all. I devote more attention to the negative effects of technological development on societal value systems because they are more difficult to recognize. The role of modern technology is particularly questionable in the Third World, where many cultures remain at relatively primitive stages of socioevolution. Traditional norms, incompatible with the demographic changes taking place now and helpless in facing new situations, are vulnerable to quick erosion and collapse, which may easily create a normative vacuum, a disappearance of the old values without the appearance of new ones, since it is not possible to artificially speed up the development of ethical norms. Yes, children learn ethics as they do their mother tongue and later the natural sciences and mathematics, but these are very different kinds of learning: behavioral rules are adopted not by memorizing information but through the continuous observation of social patterns. When a civilization evolves, climbing from one level to the next by its own efforts, the pace of ethical growth will be organically slow and harmonious. The sudden invasion of technology into a primitive culture may cause ethical havoc, because the adaptive mechanisms of customs and morality cannot keep up with the changes. But even a continuously developing civilization can—due to technological acceleration—reach such a technoevolutionary speed that customs inculcated at an early age do not last a lifetime but become obsolete, and the next generation, raised by axiologically disoriented parents, seeks behavioral goals on its own, often with poor results. I do not know whether the pace of technoevolution has already outstripped the pace of the evolution of customs or whether that is only about to happen. But the constant acceleration of instrumental progress makes this divergence, this loss of intracivilizational coherence, a very real possibility.

Nature preserves its equilibrium and continuously renews itself, using its own elements; its highly stationary character is the result of very long processes, spanning billions of years, of coadaptation of geological and biological factors, partly by transforming the former into the latter (this is how the biosphere arose as a homeostatic unity) and partly by adapting the latter to the regular inanimate fluctuations of the planet. Human beings, infinitesimally small in space and time, have always treated nature as an open, inexhaustible system. Yet, although the mass of all living human beings is a tiny fraction of the planetary mass, human technologies have transformed this open system into a closed one and made the stable equilibrium of the biosphere unstable; hence the emergence of new technologies whose only purpose is to mitigate the damage caused by previous technologies that directly serve human biological and social needs. One can imagine that a demand will arise for a next wave of technologies whose purpose is metamaterial—that is, technologies to counteract the phenomenon of “instrumentalisms going out of control,” which puts the front of the civilization’s actualized causative possibilities beyond the reach of any of its individual members, beyond the traditional axiological horizon, and beyond people’s abilities to absorb and adapt, abilities that are truly enormous but still exactly the same as those of the Mousterian and Aurignacian cultures,5 because biologically we are equal to their members. I ignore here the epistemological and cultural-educational aspects of this acceleration: the problem that the swift obsolescence of knowledge, including professional skills, will make constant learning and retraining in many occupations a necessity. This is already happening in the natural sciences: progress has forced biologists to learn mathematics, economists to master information theory, and so on.

5. Empiricism should be subordinate to ethics in the sense that, through discovering links that are imperceptible in everyday life, we begin to get an idea of the ethical weight of acts and decisions that were previously considered ethically neutral (if a physician reveals to a young couple that their offspring will very likely have a genetic defect and they go ahead and produce physically or mentally disabled children, they may be innocent in the eyes of the law, but their ignoring of the physician is morally questionable). It is admittedly strange when the ethical is not what your conscience prompts but what has been sanctioned by the biologist, the system theorist, the expert in decision-making and linear programming, and the cyberneticist working in the field of game theory. Granted, we do not face such problems every day, and traditional morality, especially in our daily contacts with other people, has not yet been completely lost in the forest of instrumental directives and facts. It is still possible to be a good person in the ordinary sense, but unfortunately the sensitivity of our moral compass is under constant attack, since technologically enabled global news makes us witness dreadful things happening in thousands of places while we can do nothing except shake our heads—and everyone knows that that is not enough. Belonging to the species Homo sapiens today can thus be regarded as being responsible for the species’ overall fate while having infinitesimally small personal power to influence it. This disparity is a consequence of the many technologies that have linked each one of us in an excessively unidirectional way to the other three billion members of the human world.

6. To speak in full about the “technology of ethics” would mean to explore the theory of an ideal society (by analogy to an ideal gas), because ethics is a part of a culture, a subset of regulatory parameters, definable by conventional and simplifying abstraction, of the system we call “humanity.” Current knowledge does not provide a sufficiently solid basis for such an endeavor, unless one remains within the limits of what can be empirically verified. So my discussion is no more than an introduction to this “technology”—the (formal) modeling of the phenomenon.

II

1. I see ethics as an unwritten part of the rules of the “game of society.” Some of those rules undoubtedly have an instrumental character, but whether they also have an ethical flavor depends, among other things, on the totality of the culture. Since ethics resides in interpersonal relations, which social situations possess ethical elements and which system of valuation applies to them is clearly defined when viewed from within a given culture; but many situations classified as ethical, together with the classification criteria themselves, prove to be variable (though not without limits) when examined from another culture. Observers from outside a given cultural circle will offer divergent evaluations of interpersonal relations in that circle, which necessarily means that they have a different cultural imprinting. To refrain from judging another culture from the viewpoint of one’s own is possible only if the observed phenomena are taken to have no cultural significance but simply to reflect an equilibrium behavior of elements in a highly complex material system. One might try to approach the objectivity of scientific analysis even without resorting to such an extreme, physicalistic atomization of the “human set,” but there must be limits to this, limits that no one is willing to demarcate because no one really knows where they lie: what in our behavior is “metacultural,” and therefore free of any relativity, could be revealed only by experiments that cannot be conducted—for obvious reasons of an ethical nature. Comparing as many cultures as possible, however—those that have reached the same technological level and share some aspects of their development, such as similar ecological environments or anthropological ancestry, but also those that developed in very different conditions—shows great promise.

2. Anthropological research has shown many times that the biological differences between human races—relative to the cultures that the races create—are practically negligible. So if the compared cultures share parameters in the areas of geography, climate, and technology, the comparison should reveal whether, in the absence of other significant factors, the structures and developmental trajectories of the two communities coincide, as one would expect. But as we know, this coincidence does not occur: in terms of customs, beliefs, and ethical and esthetic norms, primitive cultures (for they are the topic here) differ greatly from one another. Certain basic principles are certainly preserved in all, such as the principle of cooperation, which is, in a sense, both trivial and obvious, since a society that opposes all forms of internal cooperation cannot exist. The parallels that have been observed are limited precisely to those principles whose violation is, for purely biological reasons, impossible. A logical, rather than empirical, analysis suggests that the cooperation principle must have been the seed of cultural development; we might conclude, then, that cultural differences arose because societies took different paths to the same technological level (inventions and discoveries made in a different order, different ways of setting snares and traps, different ways of hunting and building shelters, etc.). But this is not the case. Cultures, even if they were “constructed” around the primary principle of cooperation, manifest rules of behavior that are definitely superfluous to all instrumental activities and therefore cannot be reduced to that single guiding principle or to the specifics of the methods used to manufacture tools, work the land, and so on. For unknown reasons, some cultures are patrilineal and others matrilineal; some practice ethics that Western scientists call “Apollonian,” others “Dionysian.” We can make many comparisons like this, since about three thousand different primitive cultures have been cataloged. Each culture has its own “ideal” of what a human being should be, and the range of these ideals is surprisingly broad.

The question is whether a set of tools and the ways they are used can serve as an impetus for the development of a culture’s activities that are, from the economic point of view, “superfluous.” Such a set would act as a seed for the crystallization of activities that can later become autonomous—a growth above and beyond the satisfying of actual needs that is indeed unnecessary from the rational-engineering standpoint but justified by the mentality of people at a low developmental level (open, say, to animism or magic). Importantly, a mix of the irrational element, with aims that are physically fictitious, and the rational, instrumentally teleological element is found in many primitive cultures, but it does not explain why certain societies practice “Spartan” ethics (even in its most extreme forms), while others—equally developed in terms of intellect and technology—have created ethics that to us seem liberal, approaching the Western ideals of humanism, where the leading directives are to be kind and gentle to all. The very fact that this question can be posed indicates that nothing like an “immutable human nature” exists in the real world and that people are neither “immanently good” nor “immanently bad” but only what conditions make them. But I ask again, where do the differences between societies, often shockingly wide, come from? A series of experimental studies conducted, by the way, outside the boundaries of anthropology—in theoretical biology, in the form of the computer modeling of bioevolutionary phenomena—suggests an answer.

3. Modeling evolutionary processes within the Markovian framework is powerful. It usually uses a relatively simple form of A. A. Markov’s probabilistic (stochastic) process with dependent random variables, called a homogeneous Markov chain (though, as we will see shortly, the genesis of a society and culture cannot be modeled so simply). Processes are called Markovian if the prediction of a future state is determined solely by knowledge of the current state, with no information about previous states being necessary. The same process can be Markovian when described one way and non-Markovian when described another way. If we are dealing with the development of a population, its purely phenotypic description is not Markovian, because it does not include the information about recessive traits; when described at the genetic level, the process is Markovian. A non-Markovian description usually omits some parameters that are essential for the system’s behavior. For example, predicting a person’s behavior on the basis of knowledge of his past is non-Markovian, but a forecast of his behavior can be made on the basis of a detailed study of his brain, with all its neuronal preferences in stimulus transmission. The latter description is Markovian and would not contain the word “memory,” because, as Ashby noted, “memory” is just shorthand for parameters that are hidden from our eyes.
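To make the distinction concrete, here is a minimal sketch of a homogeneous Markov chain in Python. The three states and the transition matrix are invented for illustration and stand for nothing in the essay; the point is only that the next state is drawn from probabilities attached to the current state alone, so the path already traveled contributes nothing to the forecast.

```python
import random

# A minimal homogeneous Markov chain. The states and the transition
# matrix are invented for illustration.
STATES = ["A", "B", "C"]
P = {
    "A": [0.6, 0.3, 0.1],  # probabilities of moving to A, B, C from "A"
    "B": [0.2, 0.5, 0.3],
    "C": [0.1, 0.2, 0.7],
}

def step(state):
    """Draw the next state. Only the current state matters (the Markov
    property); how the chain arrived here is irrelevant."""
    return random.choices(STATES, weights=P[state])[0]

state, path = "A", ["A"]
for _ in range(20):
    state = step(state)
    path.append(state)
print("".join(path))
```

Lumping two of these states into one observable category would in general destroy the Markov property: the coarser description omits a parameter essential for prediction, exactly as the phenotypic description above omits the recessive traits.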

4. I quote from an article by A. A. Lyapunov and O. Kulagina (Kibernetika, no. 16, 1966):

The Markovian schematics of evolution have the following characteristic feature: Increasing the number of certain self-reproducing forms increases the probability of finding in the next generation more individuals having this form. Regardless of the initial state of the population, if selection acts only on the level of individuals and equally on both sexes, then any deviation from the initial state increases the probability of further deviations of the same kind, i.e., there is positive feedback in deviations from the norm in subsequent generations. Hence the conclusion that when the reproduction schematic is such that parent couples with similar genotypes produce offspring more easily than couples with distant genotypes, the expectation is that after a sufficient number of generations, a population “polarization” occurs, i.e., this schematic contains prospects of divergence, with fluctuations being subject to positive feedback. In other words, the population’s initial genetic distribution may prove unstable. Based on this, one can formulate a hypothesis according to which a natural trait leading to biological isolation shows a tendency to become stable. Its stabilization is the greater, the lower the number of different states in the given trait. For example, left- and right-handed amino acids do not polymerize with each other. If at a certain point in time, two living forms existed separately based on the two amino-acid types, they would represent two biogeocenoses that do not interact in metabolic processes. But they would still utilize the same pool of the elements, and fierce competition would ensue between the two types of nature. . . . It should be expected that after some time, one of the forms would achieve victory. Therefore the fact that in living nature only one form, the left-handed amino acids, exists cannot serve as an argument in support of one or the other mechanism of the emergence of life. This circumstance is just one of the generalizations of the principle, formulated by Vernadsky,6 about the impossibility of the retrograde (backward) extrapolation of the evolutionary process.7

The article deals with the numerical modeling of the evolutionary process, and it summarizes the results of such experiments. A population of 100 to 150 specimens evolved for 45 to 90 generations under the effect of genetic drift caused by random fluctuations. The environment was kept constant, so natural selection, understood as an “adaptation sieve,” did not operate. There were three main, distinct outcomes of the experiments: two showed stabilization (either through divergence, i.e., the emergence of several, most often two, species no longer capable of crossbreeding, or through genotypic coalescence into one species), and the third represented what the authors called a “permanent instability,” meaning that particular configurations of this state were unstable, but the set as a whole was stable. The first two cases correspond to blind alleys in evolution: the emergence of forms that are no longer genetically reversible, as a result of which a species is “left at the mercy” of the environment and, lacking the genotypic reserves of adaptive variability, will exist only as long as the environment remains the same. The third represents the preservation of evolutionary plasticity; in other words, the species has at its disposal a variability reserve that is regulatorily necessary, as Ashby wrote in his Introduction to Cybernetics.
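A toy version of such an experiment can be sketched under heavy assumptions: the model below is a bare Wright-Fisher drift process with one gene and two variants, far simpler than the model the article describes; only the population size and generation count are borrowed from the quoted ranges. It shows the qualitative point that, with the environment constant and no selection at all, pure sampling noise still tends to drive the population toward fixation, the analogue of coalescence into a single form.

```python
import random

# A bare Wright-Fisher drift model: one gene, two variants, constant
# environment, no selection. A deliberately crude stand-in for the
# richer model described in the article.
N, GENERATIONS = 120, 90   # within the ranges quoted above
freq = 0.5                 # initial frequency of one variant

for g in range(GENERATIONS):
    # Each of the N offspring draws its variant at random from the
    # current gene pool: pure sampling noise, a Markovian step.
    freq = sum(random.random() < freq for _ in range(N)) / N
    if freq in (0.0, 1.0):
        print(f"fixation (coalescence into one form) at generation {g}")
        break
else:
    print(f"variability preserved for {GENERATIONS} generations: freq = {freq:.2f}")
```

Runs that end in fixation correspond to the stabilized outcomes; runs that hold an intermediate frequency to the last generation correspond to preserved variability.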

5. These results demonstrate the importance of random factors in the evolutionary process. It is a characteristic of Markov chains with a finite number of states that if we can define a subset of states such that a chain will have a high probability of transitioning to it but a low probability of exiting it, then after a sufficient number of steps the chain will almost certainly become a member of this subset. Such subsets are called absorbing. It is probable (see A. A. Lyapunov) that the large Mesozoic saurians found themselves in such an absorbing subset, which is why they went extinct. A species can also survive in the absorbing subset, as long as the environmental fluctuations are random and do not exceed certain limiting values. The process of cultural development should then be recognized as evolution in the Markovian (stochastic) sense and understood as the random walk of a community that can either preserve for a long time (but not forever) the internal variability that enables a continuous increase in complexity (an example familiar to us is the rise of industrial civilization) or encounter absorbing subsets of stationary states, which corresponds to the freezing of some communities at a lower stage of technological development.
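The pull of an absorbing subset can be shown numerically. In the sketch below (again with an invented three-state chain), state 2 is cheap to enter and expensive to leave; iterating the distribution shows the probability mass piling up there regardless of the starting state.

```python
# An invented three-state chain in which state 2 is "sticky": easy to
# enter, very hard to leave. Iterating dist <- dist * P shows the
# probability mass accumulating in state 2.
P = [
    [0.50, 0.30, 0.20],
    [0.30, 0.40, 0.30],
    [0.01, 0.01, 0.98],  # exit from state 2 is very unlikely
]

dist = [1.0, 0.0, 0.0]   # start with certainty in state 0
for _ in range(200):
    dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
print([round(p, 3) for p in dist])  # roughly [0.04, 0.035, 0.925]
```

If the exit probabilities were exactly zero, the subset would be absorbing in the strict sense and the concentration would be total; the “sticky” version corresponds to the weaker formulation above, a low but nonzero probability of escape.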

6. The boundary between a random fluctuation and an evolutionary regularity (a “gradient of progress”) appears to be quite fluid. The reason is the positive feedback between consecutive departures from the initial state in subsequent generations. The same feedback, by the way, exists in inanimate nature. For example, an increase in glacier mass due to a purely random fluctuation (two or three consecutive winters that happen to be colder than average) causes an additional increase that is now nonrandom: the accumulation of ice that could not melt during the summers and return the glacier to its state before the fluctuation induces positive feedback, more ice leading to still more ice, and this gradient, manifested by the glacier’s descent into valleys, persists until the next statistical fluctuation in the opposite direction (a few unusually warm summers), when the glacier retreats. Similarly, a purely random fluctuation in a population that results in a higher number of individuals with a specific trait leads to an increase in the number of such individuals in the next generations. The trait does not have to confer any adaptive advantage; it can be neutral in this respect (i.e., biologically harmless).

7. This mechanism could be termed an accident growing into a regularity, or a random independent variable turning into a (stochastically) dependent one. According to Lyapunov, it explains the diversity of life, which biologists have long intuitively considered excessive with respect to the classical dual engine of evolution—variability, adaptively filtered by selection—in the sense that the observed diversity is greater than would be expected if the differentiating, species-forming factors were limited to the Darwinian dyad alone. I am talking about so-called genetic drift, that is, the differentiation caused by intragenetic processes whose results are not actively regulated (through some limitations, for example) by the environment, because they are neutral vis-à-vis the environment. In other words, the evolving complex system may have certain margins of freedom in which random configurations can be realized and turned into an (orthoevolutionary) regularity,8 and introducing mutation into the picture does not significantly change it as long as the frequency of such “genotypic innovations” is sufficiently low.
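The turning of an accident into a regularity can be exhibited with a Pólya urn, an illustrative stand-in of my own choosing rather than anything from Lyapunov’s paper: every ball drawn is returned together with a second ball of the same color, so whichever color pulls ahead by chance becomes, through positive feedback, ever more likely to be drawn again.

```python
import random

# A Polya urn: each draw returns the ball plus another of the same
# color, so early random fluctuations are amplified by positive
# feedback, like the glacier's ice begetting more ice.
def polya_run(steps=2000):
    white, black = 1, 1
    for _ in range(steps):
        if random.random() < white / (white + black):
            white += 1   # drawing white makes white more probable next time
        else:
            black += 1
    return white / (white + black)

# Each run settles into some stable ratio, but different runs settle
# into different ratios.
print([round(polya_run(), 2) for _ in range(5)])
```

Every run converges, but to a limit of its own: the regularity is genuine while its particular content is an accident, which is the sense in which a random independent variable becomes a (stochastically) dependent one.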

8. Differentiation in primitive cultures, which is “redundant” with respect to the environmental (climatic-geographic) and the instrumental (societally realized technological activities) factors, could have arisen in a similar manner, and this redundancy or superfluousness can be explained with a stochastic model. A culture’s material basis does affect its structure but is not a determining factor; it only forms a space in which variation can be manifested according to the Markovian game of its elements. Biological differentiation due to genetic drift starts with concrete traits already present in a population’s genotypic distribution; likewise, societal differentiation starts with basically stabilized relations that can depart from the actual state, branching, increasing in complexity, and “random walking” in the configuration space of possible states. Of course this space is entirely different from that of bioevolution, but the point here is not to reduce the societal type of transformation to the biological but rather to exhibit the dynamic mechanism that is common to both. It is possible that the crystallization nuclei around which intracommunal relations took on their “ornamental” (in culture, symbolic and signifying) character included, in addition to the relations of cooperation, the relations between the sexes, since procreation and meeting basic needs must have been processes that the evolving group carried over from the biological, precultural realm into the beginning of socioevolution. Differentiation originating from within (i.e., whose driving force does not come from the “game against nature”) is in both bioevolution and socioevolution bounded by the initial distribution of the elements (genetic and precultural) and by the conditions imposed by the environment, which must be met as a conditio sine qua non for survival. Survival is of course not guaranteed: if the conditions are not met during the system’s evolution, the direction of its biological or cultural development can lead to self-destruction.

9. The Markovian scheme presupposes a finite number of possible states, but we do not know whether the tree of bioevolution or the tree of cultures actually “tried” all the forms possible. Numerical simulation cannot provide an answer, because the complexity of phenomena that it can access (given the limitations of our knowledge of biology, of machine memory, and of our programming skills) pales in comparison with that in the real world.

10. So the answer to the question of why in some societies “Spartan” ethics operate while in others “Dionysian” or “Apollonian” dominate, why some groups subordinate individuals to the group structure while others, more liberal, value the individual more than the group as a whole, why cultural models of personality are sometimes marked by kindness and other times by cruelty, and why behavioral patterns sometimes privilege and amplify the expression of emotions and sometimes suppress it as reprehensible is that those particular results—after a very long series of steps in a Markovian process—were chosen from many throws of the dice in the “game of society” by random factors that elevated them to the status of the rules of the game.

11. This leads to the conclusion that the plasticity of “human nature” is in principle directionless, and that in order to form a social group the members must meet conditions that are necessary for the group’s stability but insufficient to explain the existence of multiple ethics. An external “ethical selection” factor cannot be ruled out. One can assume that a society’s passage through a “catastrophe bottleneck,” such as a famine, an epidemic, or another autonomous environmental perturbation, will amplify the principle that man is a wolf to man, granting ruthlessness, guile, and competition the status of rules necessary for survival. But assuming that a causative event is responsible for the selection of a cultural feature leads us to a methodologically interesting dilemma that is specific to Markovian models.

12. Establishing a “pure” Markovian path for bioevolution is nothing other than acting in accord with Ockham’s razor. We cannot rule out the participation of an environmental factor in the creation of a new cultural feature when this factor disappears afterward and no research can uncover it. All we can do is show, by simulating the evolutionary processes, that the species-forming divergence may occur in the absence of such a factor, which is not the same as saying that this is what happened in each particular case; for no feature can we say with certainty whether it was stabilized with the participation of the environment or arose as a result of a “pure” fluctuation that was subsequently amplified by Markovian feedback. Obviously, by increasing the complexity of our model, that is, by introducing environmental factors with various fluctuation frequencies, we can obtain a considerable number of bioevolutionary trajectories. But if we find that the same feature stabilizes in the population, say, 35 percent of the time because of “pure” Markovian selection and 20 percent, 35 percent, and 10 percent of the time because of three external factors that were manifestations of environmental fluctuations that left no trace, which explanation should the researcher choose, and on what basis? Gibbs noted that retrospection in sequential probabilistic processes is treacherous.9 When we are dealing with an ergodic process, which “covers the tracks” of the particular path it took, and a given state can be reached through very different sequences of past states, no amount of modeling can determine the real (diachronic) trajectory of the phenomenon. Research can only establish a set of possible trajectories and accept the indeterminacy of what actually happened. This uncertainty differs from that in quantum mechanics, because the process here was definitely not spread into a “fuzzy” linear combination of many paths; it took just one path, only we cannot know which.
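How an ergodic chain “covers its tracks” can be seen by enumerating paths in the same illustrative three-state chain used earlier. With the start fixed at “A” and the end at “C,” several quite different three-step histories arrive at the same state with probabilities of the same order, so the final state alone says almost nothing about the route taken.

```python
from itertools import product

# The same illustrative three-state chain as before, now written as a
# nested dict. We enumerate every three-step history from "A" to "C".
P = {
    "A": {"A": 0.6, "B": 0.3, "C": 0.1},
    "B": {"A": 0.2, "B": 0.5, "C": 0.3},
    "C": {"A": 0.1, "B": 0.2, "C": 0.7},
}

for mid in product("ABC", repeat=2):
    path = ("A",) + mid + ("C",)
    prob = 1.0
    for s, t in zip(path, path[1:]):
        prob *= P[s][t]                  # multiply transition probabilities
    print("->".join(path), round(prob, 4))
```

Conditional on having landed in “C,” a good half of the nine histories are of comparable probability; a retrodiction can list them but cannot choose among them, which is exactly the indeterminacy described above.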

13. The hypothesis of ethics arising as a result of an accident becoming a stereotype (or a deviation from the initial state turning into a regularity repeated through generations) is prima facie methodologically “better” than that of ethics arising as a result of a “disappearing” cause (e.g., the passage through a “catastrophe bottleneck”), because it offers a mechanism that is more economical, requiring fewer postulated factors (Ockham’s “entities”). Yet the disappearing-cause hypothesis is easier for the humanist to accept, because it argues that something external to human beings is responsible for the creation of a particular ethical system. But of course if we accept it, we arrive at a dilemma: do “more humane” ethics arise in the absence of perturbations (catastrophes), that is, indeed anima humana naturaliter bona est, or do those ethics require the presence of a “positive-ethics selection” factor? These hypotheses cannot be entertained in such simple forms, however: a linear distribution of the “ethics-selection” factor (on a scale between lack and excess, between the “cruelty” and “kindness” of the environment, between misery and bliss) does not capture the multidimensional “spectrum” of the ethics actually observed in primitive cultures. Because an ethics cannot be unequivocally attributed to a causative environmental factor, and we therefore cannot reasonably claim that “humane” ethics arise in “better” living conditions (and vice versa), we are back to the model in which continuous ethical systems arise under the interference of discrete perturbations coming from the environment, which again reveals the Markovian nature of the phenomenon and makes ethicogenesis a probabilistic walk of communal customs through a series of consecutive states until a state that happens to be stationary, that is, “absorbing,” is reached.

14. A Markovian process is in principle devoid of memory and represents a manner of “learning” that is highly uneconomical. Indeed, historical memory in a primitive culture operates without precision or certainty. Ethicogenesis in it was such a slow process that the society could not perceive the manifestations of the very effects that formed its behavior. As we saw, the identification of a stochastic mechanism that stabilizes cultural patterns is questionable, but if the environment participates in the generation of the Markov chain, that participation cannot be reduced to a simple scheme in which good conditions create “good” (humane) ethics and bad conditions “bad” ethics. The location of the stochastic generator is unimportant here, and so is the “ethical minimum” (which we have barely touched in this discussion), which can be reduced to the principle of group cooperation that enables the biological and later societal survival of the primitive community and within which the first instrumental activities appear. The stochastic generator for us is simply a mechanism that randomly selects, from all the possible elements of human behavioral patterns, those that form an integral unity to which the members of the culture assign special significance. This process has both a physical and a semantic-cultural aspect. Studying the physical aspect, that is, revealing the purely structural relationships in the formal models of cultures, is analogous to linguistic research, in which we attempt to replace the understanding of a language with its grammar and syntax, which can be algorithmically formalized.

15. Views of history range from the idea that it is a fundamentally directionless sequence of states, devoid of regularities (trends, gradients), to the idea that it is a developmental flow with distinct teleological regularities. These opposing ideas can be reconciled if we recognize that the flow of history is not a homogeneous process. There are at least three kinds of processes, variably interconnected: Markovian, cumulative, and random. The Markovian processes, with only a “single-step” memory, are the transitions from a biological species to a culture-forming one. The process of human socialization differs from that observed in animals in that information must be transmitted extragenetically: at birth, ants, but not humans, already carry a preprogrammed plan for a societal structure. (In this sense, the species Homo sapiens is regulated by two channels: genetic messaging and cultural messaging.) The evolution of societal systems, which depends on (non-Markovian) technoevolution, is also Markovian. Although the process changes its memory from a “single-step” to a deeper kind after the development of an alphabet and historical chronicles, the regulatory effects of that memory on the probabilities of transitions to the next states are quite small. Until the emergence of the theory of socialism, that memory had not been effectively used for regulatory purposes, so from the physical point of view the process has remained Markovian: a memory not utilized does not exist (is not functional). Therefore—and this is methodologically important—technoevolution exhibits behavior more directed than a Markovian process, because a continuous accumulation of achievements (learning) is taking place in it. This evolution has a “regulatory memory,” although its effects on the Markovian sequence of organic transformations are random in the eye of an observer outside this Markov chain. This is a special case of a general phenomenon: if we have two loosely (e.g., stochastically) coupled systems that do not share their regularities, and if what in the first system is regular affects the second system, its causal effects will be considered random in the second system, because the second system has its own set of regularities, which cannot predict the interventions. For example, two cars drive side by side, and the behavior of the first driver, which results in their collision, is in the eyes of the second driver random, although it was caused by a characteristic regularity present in the first car: its driver has slow reflexes. Therefore, whether a series is considered random or regular can be, to some extent, arbitrary, depending on whether or not the observer is a part of the given sequence.

16. Apart from such mass aspects, history also has a singular one, known from the notorious Great Man theory. When transferred into the sphere of cybernetics, this theory reveals its indetermination, because it is the characteristics of the governed system that decide whether someone “governs” it at all; moreover, “to steer or govern” and “to regulate” are not synonyms. A driver steers a bus (and holds the passengers’ lives in his hands), but the queen bee governs nothing in the beehive, although her presence is essential for the hive’s survival—hence her influence is regulatory but not steering. Further, the extent to which a governor’s personality can affect the dynamic trajectory of a system depends on the system’s structure: some systems will “amplify” the personality; others will suppress or entirely eliminate the individual variability of the ruler’s character traits. A governor’s actions can also be merely representative of the system, maintaining the values of systemic parameters within the beneficial range without the need for any special talent. But a configuration of conditions can arise in which acts of steering will be random with respect to the system, that is, unpredictable from the viewpoint of its mass-statistical regularities and devoid of any regularity that can be deduced from the overall dynamics.

17. The proposed three ways of describing the dynamics of a society could be further multiplied. When we face a sufficiently complex system, with various “subsets” that are nonuniformly coupled with one another, we can select the description that maximizes our knowledge about the system (and its future states). Different descriptions may in fact be complementary to each other. Some descriptions are epistemologically inappropriate: especially those that smuggle in “noninstrumental” valuations, those that compare, and give equal treatment to, phenomena that operate on different levels or are of different kinds, and those that rest on nebulous analogies (e.g., the too-well-known “parallels” between a biological and a societal system). The three kinds of variability that can be detected in the flow of history are not easily integrated: the leading role is sometimes played by statistical processes (of the same kind that statistical mechanics studies in thermodynamics, for example), sometimes by cumulative and teleological processes, and sometimes by singular ones. Therefore a historian’s language is usually a mix of at least three different languages—due to the interlacing of the three aspects, which actually operate on different levels.

18. In the modeling approach that takes a system’s structure as given, this structure corresponds to something like a country’s network of roads understood in technical and technological terms, while ethics, or more generally cultural norms, is the traffic code, the full set of rules of “proper” conduct for drivers; the roads and the code variously interconnect. The traffic code must accommodate the real state of the road network; if the code were “unrealistic” or downright unrealizable, it would cause a divergence between the theory that demands and the practice that is observed. We all know how quickly, because of the increase in car ownership, the rules of the road become obsolete. A dynamically similar change occurs when ethics fails to keep up with technological innovation. The traffic controller knows that even though the drivers are people, the rules that govern the behavior of a large number of cars on the roads, once a threshold of “density” has been reached, reflect less and less the element of individual psychology and more and more something like molecular kinematics. The controller is familiar with the phenomena of pulse jams and propagating waves. In such circumstances, appeals to the drivers have little effect, even if all those individuals understand the problem and are willing to follow the rules. Once a vehicle ceases to be a “molecule of movement” whose trajectory can be interpreted in terms of psychology, all appeals to conscience will be useless; it becomes necessary either to change the road system (by increasing its capacity or by building ramped, collision-free interchanges instead of intersections) or to institute new traffic rules, which will have a peculiar consequence if road capacity is the limiting factor: the new rules cannot avoid discriminating against a fraction of road users.
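The passage from individual psychology to “molecular kinematics” above a density threshold can be imitated with a toy cellular-automaton road, a sketch in the spirit of the Nagel-Schreckenberg model (a later formalization that the essay does not cite; road length, speed limit, and densities are arbitrary). Every driver follows the same sensible rules, yet past a critical density jams form and travel backward as collective waves.

```python
import random

# A toy single-lane circular road. Each driver accelerates toward the
# speed limit, never hits the car ahead, and occasionally hesitates.
ROAD, VMAX, P_SLOW, DENSITY = 100, 5, 0.3, 0.35   # all values arbitrary

# cars maps road cell -> current speed
cars = {pos: random.randint(0, VMAX)
        for pos in random.sample(range(ROAD), int(DENSITY * ROAD))}

for _ in range(50):
    ordered = sorted(cars)
    moved = {}
    for i, pos in enumerate(ordered):
        gap = (ordered[(i + 1) % len(ordered)] - pos - 1) % ROAD
        v = min(cars[pos] + 1, VMAX, gap)         # accelerate, keep distance
        if v > 0 and random.random() < P_SLOW:    # random human hesitation
            v -= 1
        moved[(pos + v) % ROAD] = v
    cars = moved

print("mean speed:", round(sum(cars.values()) / len(cars), 2), "out of", VMAX)
```

Rerunning with DENSITY = 0.1 keeps the mean speed near the limit. The drivers’ rules never change; only the load on the network does, which is the sense in which a traffic code, or an ethics, can be outgrown by the system it regulates.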

19. Attempts to justify the principles of ethical behavior have been based on various authorities: transcendental, logical, utilitarian, even psychobiological. The neopositivists eventually concluded that ethics is nonempirical, because, as Carnap noted in the 1930s, the sentence “Murder is bad” offers no consequences falsifiable by experiment: after a murder, the corpse is evident, but the “evil” is nowhere to be seen.10 It is surprising that Reichenbach, who worked in the area of probabilistic laws, albeit only in physics, also supported that thesis. Had the neopositivist philosophers turned to technology instead of physics, they would have noticed that there are no true or false machines, only good or bad ones, or rather better or worse ones. “Good” in this sense is a machine or any material system that meets certain criteria of purely instrumental valuation. Technology also has binding directives, and they are a consequence of accepting such criteria. It is therefore possible in principle to compare the railroad engineer’s value statement that “Train crashes are bad” with the ethical valuation “Murder is bad,” because they are isomorphic. But both have the same problem: they introduce a criterion that is neither true nor false; “bad” denotes only a state that should be avoided, since trains on rails, like people in society, should move without collisions. Ethical evaluations are supposed to differ from instrumental evaluations by being unjustifiable, but this difference does not hold in reality. No doubt the engineer whose train derails at a switch is confirming, among other things, certain laws of physics, owing to which kinetic energy converts to heat, deforms the wagons and locomotives, and so on, yet he is not exclaiming, “Physics is true!,” but rather, “The switch is bad!,” that is, faulty or poorly constructed. So there are empirical quality tests of material systems that are not identical to tests used in physics. We all agree that the laws of physics are independent of the nonphysical opinions of the people who work with them, but if a locomotive engineer also happens to be a guerrilla fighter, he may hold the view that “A train crash is a good thing.” That may be true, except that he is then trading the instrumental directives of his technology, with their implied value judgments, for directives that are not technological. Similarly, the “social engineer,” who treats a society as a complex machine (in the cybernetic sense), can valuate it according to the corresponding instrumental criteria as “better” or “worse” than another society, either as a whole or just in selected parameters. In his eyes, ethics reduces to a “cooperative minimum,” without which no society could function, for a society in which everyone deceives, kills, or robs everyone else cannot exist. Ethics in societies operates as a probabilistic rule (or rather a system of such rules), manifested—in a purely instrumental approximation—as an average of a very large number of individual processes, like the temperature of a gas. One cannot go from this integral “ethical mechanics” to the ethics of an individual, just as one cannot use statistical mechanics to define the temperature of a single atom.
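
The closing analogy can be put into numbers. In the toy sketch below (mine, with an arbitrary norm-breaking probability), the societal average of a great many elementary acts is reproducible to three decimals, while any single act remains an all-or-nothing event about which that average says almost nothing:

```python
import random
import statistics

def act(p_break=0.05):
    """One elementary behavioral act: 1 = norm observed, 0 = norm broken.
    The probability 0.05 is an arbitrary illustrative assumption."""
    return 0 if random.random() < p_break else 1

# The "ethical temperature" of a society: a mean over very many acts.
runs = [statistics.mean(act() for _ in range(100_000)) for _ in range(5)]
print("societal averages (stable):", [round(r, 3) for r in runs])

# A single individual's acts: all-or-nothing, not a temperature.
print("individual acts:           ", [act() for _ in range(5)])
```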

20. The applicability limits of this approach lie where sets of atoms cease to be homeomorphic with sets of people, because the latter are a special case of systems whose rules depend on their histories. Here, having a history means that the future trajectory is affected (probabilistically) by the past trajectory. Indeed, if the rules of atoms’ behavior depended on their history, there would be no fundamental difference between a set of atoms and a group of people. But since a set of atoms is a system whose elements have been indelibly preprogrammed “once and for all,” it is a limiting case in the distribution of rule variation, which spans from total, agenetic determinism through Markovian systems to the teleological and diachronic. Conversely, human society can be considered a system of particles whose rules are a function of time; we are atoms with memory and the ability to learn—though we have not yet made the best use of that ability.
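
The distinction can be exhibited in a few lines: a memoryless random walk, whose law is fixed once and for all, against a Pólya-urn walk, whose transition probabilities are reshaped by its entire past. The sketch is purely illustrative; none of its parameters comes from the argument above:

```python
import random

def memoryless_walk(steps, p_up=0.5):
    """A Markovian 'atom': the rule (p_up) never changes."""
    x = 0
    for _ in range(steps):
        x += 1 if random.random() < p_up else -1
    return x

def historical_walk(steps):
    """A Polya-urn walker: every choice reinforces itself, so the current
    rule of behavior is a function of the whole past trajectory."""
    up, down = 1, 1
    x = 0
    for _ in range(steps):
        if random.random() < up / (up + down):
            x, up = x + 1, up + 1
        else:
            x, down = x - 1, down + 1
    return x

print("memoryless endpoints:   ", [memoryless_walk(1000) for _ in range(5)])
print("history-bound endpoints:", [historical_walk(1000) for _ in range(5)])
```

The memoryless endpoints scatter narrowly around zero; the history-bound ones drift to wildly different values, because each trajectory has, in effect, been writing its own rules as it went.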

21. Another special aspect of ethics as rules governing human sets is the selection of the “proper code.” Formulated in terms of our modeling, the question is: Is it possible to equate a certain “appropriate ethics” with a class of dynamically “optimal solutions” in a purely instrumental sense, or must we resort to subjective experience and such terms as “conscience,” “honor,” “compassion,” “sympathy,” but also “aggressiveness,” “death instinct,” and “hunger for power,” and so on, to describe ethics both synchronically and diachronically (i.e., in its operation and its development)? The answer may come from the simulation of societal phenomena: it is not necessary to assume a priori that the material from which the societal machine is constructed plays a decisive role, meaning that a society is as good or bad as its members, that the system just amplifies “human nature.” It is even possible that the material is of no importance at all, as in the modeling of the brain, as long as the model meets certain simple requirements (e.g., pseudoneurons having two distinct states). Someday we may be able to simulate “sociogenesis,” first starting with “molecules that are immanently good,” and then with molecules that are “immanently bad.” My guess is that it will make no difference, because sociogenesis is a process that is ergodic with respect to its initial state. In other words, a societal system is independent of what is “good” or “bad” in human beings. Imagine a network of roads on which we release a swarm of drivers instructed to be as aggressive as possible toward the others (not observing the right of way, not yielding, etc.), and then the opposite, to be as courteous as possible to all of their fellow road users. Undoubtedly, in the initial stages of the first experiment there will be many more accidents than in the second, but once a certain degree of saturation (“traffic density”) is reached, road safety will stop being a matter of personal behavior: physical rules will overtake the “ethical settings.” So although different paths are taken to reach an equifinal state, at different costs in terms of accidents, the end result, especially with statistical averaging, will be pretty much the same.
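
The two-swarm thought experiment can be prototyped directly. The sketch below reuses the cellular-automaton scheme from the earlier traffic example and maps the two “ethical settings” onto a hesitation parameter (a crude assumption of mine); it then measures how often a car’s speed is dictated by the gap ahead rather than by its own disposition:

```python
import random

def constrained_fraction(density, aggressive, cells=400, steps=400, vmax=5):
    """Fraction of cars whose speed is set by the gap ahead (the 'physics'
    of the road) rather than by their own temperament."""
    p_slow = 0.1 if aggressive else 0.4   # hypothetical 'ethical setting'
    road = [0 if random.random() < density else None for _ in range(cells)]
    constrained = total = 0
    for t in range(steps):
        n = len(road)
        planned = {}
        for i, v in enumerate(road):
            if v is None:
                continue
            gap = 0
            while gap < vmax and road[(i + gap + 1) % n] is None:
                gap += 1
            wanted = min(v + 1, vmax)        # what the driver would do
            v_new = min(wanted, gap)         # what the road allows
            if v_new > 0 and random.random() < p_slow:
                v_new -= 1
            if t > steps // 2:               # measure after a warm-up period
                total += 1
                if gap < wanted:             # headway, not temperament, decided
                    constrained += 1
            planned[(i + v_new) % n] = v_new
        road = [None] * n
        for pos, v in planned.items():
            road[pos] = v
    return constrained / max(total, 1)

for d in (0.05, 0.60):
    print(f"density {d:.2f}: aggressive {constrained_fraction(d, True):.2f}, "
          f"courteous {constrained_fraction(d, False):.2f}")
```

At low density almost no car is headway-limited, so the flow is whatever temperament makes of it; past saturation the constrained fraction approaches one for both swarms, which is the equifinality claimed above.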

22. The relation between the “ethical” and “physical” dynamic aspects of a societal system can also be modeled in another way: the existing models take individual psychology into account to different degrees. Some minimum of psychology must be taken into account, because a society cannot exist if its existence is not in the interests of its members. But different structures impose different requirements, and most probably all of these requirements are redundant with respect to personal integrity. In order to model these phenomena, at least two alternative (discrete) states of societal elements are needed, Markovian and non-Markovian, that is, without memory and with it (and hence with regulatory autonomy). But a nonbinary approach, treating those two states as the limits of a scale along which states can vary continuously, would be closer to reality. When tyranny is absolute, all elements are “devoid of memory,” because the only things that count are regulations, directives, orders of the day; when a system is “ideal,” individual memory operates in full freedom. The former is linear, the latter nonlinear. We could then distinguish among various types of tyranny: one type may stabilize the structure by physical force, another by informational means—the former stabilizes the whole by brute force, the latter by a high coefficient of internalization of its informational directives; the former is like a military occupation, the latter something like the Jesuit order. A biological organism is—despite what we sometimes hear—a curious mix of the two. But if organisms manage fine in nature, tyrannies tend not to last without the aid of certain procedures that, luckily, remain instrumentally unrealizable, since people cannot be turned into elements that are one hundred percent Markovian (devoid of memory), as the cells of an organism can.
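
A nonbinary version of the model fits in a dozen lines. In the sketch below, a single parameter w (a hypothetical “memory weight” of my own devising) interpolates between a fully Markovian element, which echoes only the current directive, and an element that answers entirely out of its own history; the behavioral variance across elements then measures how much individuality the structure permits:

```python
import random

def response(directive, memory, w):
    """An element's act: a blend of the current order and its own past.
    w = 0 -> purely Markovian element (absolute tyranny: only the
             directive counts, the response is linear and uniform);
    w = 1 -> the element answers entirely out of its own history."""
    return (1 - w) * directive + w * memory

def behavioral_variance(w, agents=10_000):
    directive = 1.0                                   # the same order for all
    pasts = [random.gauss(0.0, 1.0) for _ in range(agents)]  # lived histories
    acts = [response(directive, m, w) for m in pasts]
    mean = sum(acts) / agents
    return sum((a - mean) ** 2 for a in acts) / agents

for w in (0.0, 0.3, 0.7, 1.0):
    print(f"memory weight {w}: variance of behavior {behavioral_variance(w):.3f}")
```

Absolute tyranny is the w = 0 corner: variance zero, perfect uniformity; the “ideal” system is the opposite corner, where the variance, and with it the regulatory autonomy of the elements, is maximal.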

23. The above model is simplified, as it does not take into account the residual memory that people retain under tyranny. Tyranny is, so to speak, organically incompatible with biologically determined human nature, in that it does not allow room for the regulatory function of biography, character, abilities, skills, and so on. Even a model that takes these “local” properties into account would still be incomplete, remaining “acultural”: in such simulations, cultures would be “energetically similar” but “informationally different” states drawn from the set of all possible states that can be realized in the given systemic structure (for example, many traffic codes can exist for the same network of roads and the same type of vehicles). This would be the next step in our modeling, yet still just the second approximation, because it does not include a feature of a culture like ours, which uses the memory of all the cultures that preceded it and the knowledge of all the cultures that have ever been observed, even in lands that we consider exotic. With such a method of successive approximations, we can build dynamic models that approach reality.

24. Yet do not such models omit the specific values into which ethical phenomena can be distilled? If we are studying—say—the struggle for power among the ruling elite, can we neglect personal attitudes, motivations, and intentions? Certain people may derive satisfaction from occupying a privileged position, but the society modeler can safely dismiss that satisfaction, along with the so-called immanent evil of human nature. The modeler will be like an oncologist who calls a tumor malignant without thereby ascribing to it any malicious intention or wondering whether the tumor derives satisfaction from attacking healthy tissues and destroying their local autonomy. The doctor’s task is only to combat such deviations from the organism’s homeostasis.

25. The simulation experiments discussed above have not yet been conducted, so we can only assume that as the “rigidity” of organizational relations among people increases, the effect of preprogramming in an individual’s behavior weakens. We do not come into this world with ethics, only with the ability to respond emotionally. The newborn responds to a smile with a smile, and that forms the nucleus of the so-called higher emotions, which are plastic in childhood, a time spent, in practically all cultures, within the family. The family is where the first “ethics lessons” are given, in parallel with the acquisition of language, and those lessons are later extrapolated to larger circles of people, the process becoming more and more determined by the culture (family relations being relatively least affected by influences of the culture as a whole).

Yet just as a child’s memory is not adequate for the demands placed on the individual by the system structure, the form of communal memory that we call culture may be inadequate vis-à-vis the overall systemic changes in the human ensemble, changes that can be initiated by, among other things, technology. As the rate of evolution increases, the ensemble behaves as if it had lost its memory: it becomes Markovian, and its future states depend only on the present one. The “childhood” preprogramming has an anti-Markovian effect, resisting the memory loss, but the extent of that resistance depends on the culture that formed the family (we are dealing here with a hierarchical series of feedback loops: parents teach the child what they learned about “ethics” from their parents). In this individual aspect (but only in it), personal characteristics should not be omitted. In this way arises the tendency to manifest ethical behavior, which I would call “weak local interactions.” By “local” I mean not physical distance but situations in which the acting individual represents himself rather than a larger group (a class, an army, an institution, or a government). But no kind of “resistance” can free the individual from the “strong interactions,” which are determined by his membership in the larger groups. That is why the noble dream that a war could be prevented if all mobilized men simply refused to obey their government is utopian. It has never happened; the relations that govern the system cannot be “corrected” by appeals to the heart, as Marxists have known for a long time.

26. By the “technology of ethics,” I have in mind specific simulations of ethical phenomena by technological means. But just as we cannot simulate emotions separated from their neuronal—or pseudoneuronal—substrate to obtain “sadness” or “nostalgia” in a test tube, so we cannot simulate what is ethical separated from society. A simulation, then, can aim either to model particular processes (say, the “Markovian” formation of ethics in a primitive society) or to produce results that are not epistemological but instrumental. In either case, the question is whether the simulation allows for a homeostatically rational selection from among “a variety of ethics.” An extreme version of this approach considers ethics to be like a traffic code: a set of rules that cannot be deduced by pure logic from a description of the material state, yet that originate from certain instrumentalisms in the form of a multidimensionally optimal solution. Like the traffic code, ethics is supposed to minimize the number of collisions, and, what’s more, it should do this “naturally,” that is, in a way that benefits all while at the same time not inconveniencing individuals to the point where they begin breaking the rules.

27. But can a rationally thinking humanist “technologize” his love for the common good so much? Calculations show that a peacefully united humanity is not just “a good thing in itself” (the ineffectuality of this statement is, I hope, obvious) but also the most efficient and dynamically stable system, with the highest resistance to disturbances—possibly even at a cosmic scale. Hence an ethics supported by an instrumental, economic, and informational calculus is precisely the one that the humanist would choose. But could someone not argue that an ethics of divisions, segregation, violence, and exploitation would work equally well in a purely operational and instrumental sense? It is reality, not I, that says that on a purely instrumental basis we cannot put an equals sign between the two ethics. The instrumental arguments in support of humanistic activities are ineffectual only in stationary cultures governed by either general kindness or general cruelty, because it may happen that the stability of the overall equilibrium is the same in both. And purely humanistic arguments are disallowed in my operational approach, not because of the fashionable obsession with cybernetics but because they are culturally relative. If what is good in behavior coincided with what guarantees—metaculturally—efficiency and stability, we would have an ethical compass that would serve even in the most drastically changed future.

In a technologically oriented civilization, antiegalitarianism cannot have the same weight as egalitarianism. First, and trivially, the power that human slaves can supply is nothing compared with the power available in nature. Next, states based on division (“we” vs. “they,” “higher” vs. “lower”) are always unstable; even if we don’t consider the inevitable social antagonisms and the reliance on the use of force, structures stabilized by those means survive no longer than the interval between one industrial revolution and the next. For example, the new era of information, initiated when transmission satellites were put into stationary orbits, makes the total information blackout at any place on the planet practically impossible (in a technical sense). Thus as information-transmission technology progresses, it becomes more and more difficult to keep people uninformed. Economically, a civilization that masters the harvesting of energy from its maternal star or nuclear fusion but at the same time maintains the privilege of private ownership of that technology is—in purely instrumental terms—acting irrationally: it will need to combat an increasing number of difficulties that are unnecessary because they will disappear as soon as the principle of private ownership is abandoned.

True, calculations will not yield this result if they are limited only to short time intervals. Technoevolution cannot substitute for summary justice that punishes the bad and rewards the good—even though it appears to do precisely that in the long run. A homogeneous tyranny might freeze progress by employing brutal but at the same time technically refined means and acting globally (though its existence could be endangered by the exhaustion of the resources behind the technology employed to “freeze” the society). But the reality is that technoevolution is accelerating, which increases the weight of purely instrumental calculations, turning civilization more and more clearly into an energy-information machine whose global equilibrium is increasingly dependent on local equilibria. Therefore, from the global perspective (favored by technological integration), ethical behavior is at the same time rational, in accord with the developmental gradients, because any other behavior will sooner or later destroy the underlying societal order. Such destruction, we might add, could claim the entire species, a global finale to local shortsightedness or plain stupidity.

28. In part I of this essay I discussed the harm to social values caused by some technologies. The question arises: Can technology also act as an ally or amplifier of ethics, hence as an optimizing regulator of interpersonal relations? It can, within limits, because technological means can have a moderating influence on the interactions between people. Purely physical examples include appropriately oriented production, construction, and other technologies that can alleviate the modern problem of overcrowding, which is evident in shopping, traveling, and housing. But technological solutions do not work without conflicts. In prosperous countries, the problem of pedestrian overcrowding has merely transitioned into the problem of automotive overcrowding: more roads need to be built, but this is not possible even in the most prosperous countries. What does appear possible is isolating each person from the others by technology-based “distancing,” an encapsulation that allows one to maintain individual dignity (not always easy to do in a crowd) and prevents friction with others amid the inconveniences of everyday life. You could argue that technology, by keeping a person’s morality from being tested too often and simply making it not worth a person’s while to be a brute or a swine, is not an ethics amplifier but more a shock absorber in human contacts, prophylactically removing the possibility of a conflict escalation. (Obviously such devices would lose their moderating effect if they were for a privileged few instead of everyone.) If used properly, this technology is a perfect moderator, ethically neutral because it only prevents evil acts. An owner of a home and a car may not be a better person, in the ethical sense, than a homeless pedestrian—the former just does not have as many situational opportunities to act in a morally questionable way (though he can of course find them, if he wants to). But surely even a small prophylactic function of technology would not be a bad thing.

29. I cannot imagine how technology could help people internalize morality. Yet in the form of social engineering, it can stabilize a society’s equilibrium in such a way that the behavior of its members becomes irreproachable—externally. And the simple fact of belonging to that society might contribute to the internalization of morality.

The critical points in a societal structure where some people may be harmed by others are those at which one individual becomes the victim of another’s freedom. It is not difficult to find such points. But, again, technology can reduce that possibility. (What may be a common dream of modern times is a totally automatic administration, which would turn an alienating bureaucracy into an efficient machine.) A solution could be not the personal capsule mentioned above but a kind of filtering distribution system operating throughout the society, faultlessly directing the right people to the right positions, objectively determining the criteria for professions as well as the conditions of work and salary, acting as a universal regulator free of favoritism or malice. In this way we can obtain a structure in which an individual would be protected not only by his personal “technological environment” (home, car, office) but also by the societal system of choices guiding his life’s path. With technology acting as a barrier that selectively disallows certain (ethically reprehensible) interactions, an ideal society (from the engineering viewpoint) could be constructed in which we do not have to “do good” to our neighbor because he does not need it—except in rare circumstances (a natural catastrophe, an industrial accident)—and “evil” is not perpetrated on anyone either, because it simply is not worth the trouble when it does not provide any benefit (aside from the pleasure that people sometimes feel when they harm someone). I confess that I am not an apologist for this model, although it has merits that contemporary societies generally lack. The reason is that it is built on the (not perfectly hidden) premise of having no faith in people, which is unfortunate, although perhaps rational.

30. The main problem of this model is its stationary character. Only in a culture that is stationary is it unimportant whether behavior results from internal or external pressures, that is, whether it comes from the heart or from instrumental necessity. (The distinction may be important in general, but we deal only with practice here, and in practice, morality that results from a drill or a custom is usually indistinguishable from morality that results from the love of virtue.) A nonstationary culture must ceaselessly adjust its fragile equilibrium, given the continual emergence, growth, and transformation of its institutions (in production, education, distribution, etc.), and it is hard to imagine that a technology of the “ethical neutralization” of interpersonal relations could keep pace with all those changes. Ethical values, once internalized, cannot be changed. But the premise of no faith in people requires foresight to keep new inventions or technologies from being abused. However, the unequal development of science and technology, as well as their long-term unpredictability, makes the full success of any such safeguarding impossible.

31. I have focused here on the search for technological ways to protect humankind from itself, that is, against actions that may be fatal for individuals, communities, societies, or the entire species. I note that the less numerous the group in question, the more difficult it is to establish instrumentally the rationality of ethical behavior—and this at all levels (from a family to a country). The most difficult task, perhaps an impossible one, is to prove the irrationality of the misdeeds of individuals—we know how many rogues go unpunished. The increasing success of forensic technology in identifying perpetrators is not a good argument that society is growing more virtuous; it can equally be read as an argument that criminals need to improve their skills.

32. Technology as an aid for ethics can accomplish much in the field of social engineering, if only by introducing “dampers of evil” into existing structures or by making gradual innovations. The ideal would be a structure with three features: the society’s impermeability to the negative actions of individuals or groups; transmissibility, with amplification, of the positive actions; and, between these two, the maximum number of degrees of personal freedom. This “smart” structure need not be made of “smart” components—by “smart” I mean features that a living organism possesses: the ability to repair itself, an ultrastable equilibrium, and an energy efficiency, none of which depends solely on the “smartness” concentrated in the nervous system. Bioevolution would certainly not be possible if adaptive success depended on whether or not each animal “figured out” that to survive it must breathe, or must use this protein and not another to counteract a bacterial toxin. As we know, it is possible to have the brain of a chicken and still prosper, but as of now no governmental organism can be run by managers with the intelligence of a chicken and flourish politically and economically. The obvious drawback of societal structures is that they act rationally only when they have rational managers. The relevance of this random factor, which today cannot be avoided, could be decreased by an appropriate reorganization. Bioevolution formed only “good,” that is, “rational” structures because it had enough time at its disposal, time in which even a Markovian process without cumulative memory could find dynamically optimal solutions. In contrast, the period of trial and error in the construction of societal systems, lasting barely a few millennia, has not yielded such success.
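
The remark about time can be tested on a toy “design landscape.” In the sketch below (the landscape function is an arbitrary stand-in of mine), a memoryless trial-and-error search, which forgets every previous attempt, still closes in on the optimum when granted evolutionary amounts of time, while a few “millennia” of trials leave it short:

```python
import random
from math import sin

def fitness(x):
    """An arbitrary rugged 'design landscape' on [0, 1] (illustration only)."""
    return sin(5 * x) + 0.5 * sin(17 * x) - (x - 0.6) ** 2

def blind_search(trials):
    """Markovian trial and error: each attempt forgets all the others;
    only the best design found so far is kept."""
    best = float("-inf")
    for _ in range(trials):
        best = max(best, fitness(random.random()))
    return best

for trials in (10, 1_000, 1_000_000):
    print(f"{trials:>9} blind trials -> best design found {blind_search(trials):.4f}")
```

The point is only qualitative: with enough attempts even a process with no cumulative memory lands arbitrarily close to the optimum; with few, the outcome remains a matter of luck.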

33. Moreover, those trials did not include any theoretical planning, which, as we know, can tremendously accelerate progress. Nevertheless, the question arises whether such a perfect “sieve” that permits only the “good” to pass and blocks the “bad”—provided it can be created out of human atoms—is worth realizing, and whether it would even be possible to realize socially, regardless of any technical issues. There are three kinds of difficulties on the path to the modeling of societal phenomena, which must precede any realization. The first is formal and technical: the selection of a language (or languages) of description, of the essential parameters, and of quality criteria for the obtained results. A particular problem here is how to make the “optimal structure” invariant with respect to all the unpredictable transformations that future technological revolutions will cause. What is easiest to simulate—a rigid development of the orthoevolutionary type—is as trivial as it is useless, whereas the more interesting systems, combining high levels of complexity with individual freedom, are nonlinear. Thus what is most predictable, due to its clear regularity, is not worth realizing, and what is worth realizing is hard to represent statistically. And there are many other similar problems. For example, large system aggregates may pass through a series of developmental critical points at which the effect of a random parameter dominates the internal feedback regularities (i.e., the system suddenly becomes oversensitive to local fluctuations); in that case a great many alternative paths open up as potential radiations. Randomness can certainly be simulated, but only in an overall, not a particular, way, and when the number of possible solutions exceeds a certain limit, the problem will become impossible to solve, even though solutions may theoretically exist. On the other hand, the orthoevolutionary tendency of a technologically oriented culture will facilitate the simulation. Future experts will have to consider all these issues.
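
Of these difficulties, the oversensitivity at critical points is the easiest to make tangible. The sketch below integrates a pitchfork-type system, an illustrative stand-in rather than a societal model: at the critical point the deterministic pull vanishes, so a microscopic random fluctuation decides which of the two branches, the potential “radiations,” the trajectory takes:

```python
import random
from math import sqrt

def trajectory(r=1.0, noise=1e-3, dt=0.01, steps=50_000):
    """Euler integration of dx/dt = r*x - x**3 plus a tiny noise term.
    Near x = 0 the deterministic force vanishes, so noise selects the
    branch; away from 0 the internal dynamics dominate the noise again."""
    x = 0.0
    for _ in range(steps):
        x += (r * x - x ** 3) * dt + noise * random.gauss(0.0, 1.0)
    return x

finals = [trajectory() for _ in range(8)]
print("final states:", [f"{v:+.2f}" for v in finals])
print("deterministic branches lie at ±sqrt(r) =", f"±{sqrt(1.0):.2f}")
```

Which sign a given run ends with is, from the standpoint of the system’s own macrodynamics, pure chance; averaged over many runs the two branches occur equally often, which is exactly the sense in which randomness can be simulated “in an overall, not particular, way.”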

The second kind of difficulty is the widespread attitude that the whole endeavor is impossible. This is slowly changing, but a climate of support is lacking, and so is the understanding that meaningful research in this area will require large teams that must include anthropologists, sociologists, mathematicians, and others. It is high time we recognized the fundamental priority of this field. It needs an appropriate channeling of efforts, an inflow of investment, and an orienting of the best minds, at once brilliant and properly trained, in the right direction. A related difficulty is political opposition to such a project. Imagine that a team of researchers presents a solution to the Vietnam problem that contradicts United States military doctrine. Any dialogue between the team and the government, not to mention a fair consideration of the proposed solution, would be out of the question, since the strategic doctrine of the United States implies that the possibility of “losing face” is unacceptable, whereby a superpower’s fear of being embarrassed outweighs the fate of the species. It is equally clear what would happen with a mathematical discovery on the topic of, say, the operation of selection filters in power-elite circles.

The third kind of difficulty concerns the in-principle uncertainty of the simulation results. In an “optimal” culture, people are supposed to be “perfectly satisfied.” But in modern civilization, the control of societal parameters can be such that to an external observer a situation will seem to be the complete opposite of what it really is. Manifestations of mass enthusiasm, anger, or chaos can be carefully arranged and imposed. Consider the “mess” of the road signs in southern England during the Second World War; it was artificially created with the aim of confusing the Germans in case they invaded. There is nothing easier than to create a state in which everyone claims to be completely satisfied. One problem is that if the pretense is maintained long enough, it may, for all its monstrousness, become a sui generis truth. Some people who lived for a long time in the concentration camps forgot their previous life to such an extent that their response to being liberated was shock, passivity, frustration, even despair, because they feared this new, free life more than the awful but familiar conditions to which they had adapted. Metaphorically, to make the slaves free, it is not always enough to break their chains. We therefore cannot rule out a priori that if the theoretically optimal model were realized in life, it might be like the bed of Procrustes, but people’s extraordinary adaptive plasticity would prevent us from finding that out: stretched on the bed, people would still insist—with sincerity—that their life is perfectly fine and that, if there were any discomfort, the fault lay in their own bodies or in their nearest neighbors.

34. Why do I keep repeating ad nauseam that the time is ripe to undertake socioevolutionary modeling, in which computers will play a leading role and enable the simulation, in accelerated time, of societal processes? Because without this effort, which may be rewarded with some success in the next century, the technological acceleration will probably make our planet’s equilibrium even more fragile than it is now, and then it will be too late to do anything. All the variants of “physicalization” of our topic that I offered above are hopelessly primitive, but there can be no discovery without an agreement on methodology and without plans for work teams of appropriately educated and trained experts, for which I plead. Had it not been for the great mobilization of minds and means in response to war, the task of liberating atomic energy might have remained unsolved to this day. The same if not greater accumulation of effort is needed here. No matter how utopian it sounds, I repeat what was said before: “There are still no sociologists who would, following the physicists, demand billions for machines to ‘simulate societal processes,’ not to speak of ethologists, who are today just Pascalian reeds in the world’s gales.11 Yet we ought to have faith that someday the situation will radically change.”

Summary

1. I expressed my conviction that the effects of metaethical factors, such as technology, on the formation and functioning of ethical systems, as well as the effects in the opposite direction—of ethics on what is nonethical—can be studied empirically, in a rigorous manner, and that the results of such investigation, supported by modeling these phenomena in a nonsocietal and nonhuman substrate (such as computers), may provide important directives for instrumental behavior that asymptotically approaches the creation of “the ideal societal structure.”

2. In particular, I consider ethics a result of the averaging and embedding in the societal realm of an enormous number of elementary (discrete) acts of personal behavior, which create, on the one hand, moral norms and, on the other hand—by idealizing these norms—axiomatic generalizations expressed as an obligatory-axiological formula. One of the essential parameters of culture may be the degree to which the real behavior in it diverges from that formula; this can be observed only in the form of a statistical distribution.

3. The individual-psychological (experiential) aspect of ethical behavior has been omitted here as a matter of principle, since replacing that form of description with the “physicalized” one greatly simplifies the future simulation of these phenomena. But the omission does not mean that this aspect is insignificant; my approach has been similar to that of medicine, which, in presenting a disease, dedicates little space to the suffering (experienced introspectively) that the disease causes, even though the purpose is to remove precisely that suffering.

4. According to the presented hypothesis, what is “ethical” constitutes a part of the regulatory characteristic of the group behavior that has the highest probability of being realized in equivalent situations, a part which—just like the whole of group-behavioral programming—is the resultant of at least three factors that participate in the stabilization of a given behavior: random circumstances (such as climate fluctuation), Markovian processes (which stabilize the results of random deviations from the initial state through positive feedback), and cumulative developments (e.g., technoevolution). These three create an intracultural model of “human nature” and stabilize the related system of norms and ethical values, which for the members of the given culture is not simply a collection of probabilistic preferences but carries symbolic significance.

5. Ethics is thus cocreated also by metaethical factors that are essential for a group’s survival, are either factually or in the group members’ opinion necessary for maintaining a group’s continuity, and can be physically realized (a custom of flying or a norm that prescribes flying can never arise due to the lack of physiological mechanisms and instrumental means).

6. I have compared the development of a precultural group with the evolution of a species population, using a Markovian model, and pointed out the similarity in the radiations of variability in biology and cultural anthropology: in both cases, the variability is superfluous with respect to selection factors. Another source of analogy is the existence of stable (“absorbing”) states in a Markovian process; the “freezing” of biological and cultural forms at certain stages of development can find justification here. I also introduced the idea that a genotypic “permanent instability,” acting as a reservoir of regulatory (and potentially adaptive) variability and thus enabling the continuous evolution of biotypes, may be like the “permanent instability” of a technologically oriented culture, which maintains its relative equilibrium only thanks to the accelerating technoevolution that takes place in it.

7. I presented, in a quite primitive way, a possible “physical” model of society in which ethical phenomena can be divided into “weak local interactions” and “strong nonlocal interactions,” the latter arising in and among large ensembles; in principle, the strong dominate the weak. This picture does not mean that there is no conflict between the norms “molded” by the strong interactions and the weak, ethical norms of individuals, or that the latter can never gain the upper hand in personal behavioral regulation. As a matter of principle, I was interested not in what the members of societal organizations or the representatives of institutions feel when they execute orders but in their objective behavior, and only in its statistical average at that.

8. In an example I showed the effect of a narrowly defined technological interference in human “biological nature” on the functioning of (sexual) ethics in a situation characteristic of a highly developed civilization. The interference indirectly destroyed values that had traditionally been considered virtuous. The conclusion was that ethical, moral, and generally cultural predictions are needed before the introduction of any technology that can change the natural parameters of the functioning of the human body.

9. I also presented a technological means whose widespread utilization could help individuals, by acting as a (prophylactic) suppressor or filter, avoid unethical behavior. This technology is “ethically neutral,” as its operation is limited to “removing the opportunity” for actions that can harm others.

10. It appears that a stationary culture, that is, one that does not significantly change over several generations, can be modeled as a hierarchical whole in which “strong interactions” mold the “weak local interactions” unidirectionally. Feedback effects of the “weak” on the “strong” are negligible, which means that under the influence of accepted ethical norms the main societal gradients are not subject to correction from negative feedback, and that therefore the system structure (which generates these gradients) is insensitive to local ethical interactions. Even though both strong and weak interactions are random dependent variables, the former rule the latter. At the same time, the stability in all cultures of their elementary units, the families, can make parts of the “weak interaction” programs, such as those inherited from earlier cultures, resistant to the influence of the strong interactions. Because of their multilevel nature, these relations are similar to the bioevolutionary schematic of the emergence of a species, where the genotypic and phenotypic factors are correlated and a (Markovian) circulation of information is taking place at different levels (at the genotypic “microlevel” and at the “macrolevel” of phenotypes, i.e., mature individuals). A stationary culture is then ultrastable, as is a perfectly adapted species. Ethical “inventions,” “improvements,” or “easements” carried out in groups with few members or by individuals usually are not adopted by the society as a whole. Behavioral traits that arise in individuals and are not assimilated culturally are similar to the bioevolutionary schematic in which an acquired trait is not inherited.

11. In a technologically oriented culture, the exponential acceleration in the variability of living conditions often has consequences in the moral and ethical sphere. When the acceleration of these changes exceeds a certain limit, the intergenerational transmission of norms (both instrumental and noninstrumental) can break down, as the norms of the parents become obsolete and cannot accommodate the situations of the children. The result may be a kind of societal drift of values in the stream of perturbations caused by the technological acceleration. (I omitted the phenomena of so-called mass culture and its moral derivatives due to spatial and thematic constraints: they would require too much room to discuss and are, besides, the topic of numerous works in the field.)

12. I was deliberately one-sided in presenting the intracultural functioning of individuals subject to “strong interactions,” as if the “material” of which the society is “constructed” were not significant. I suggested that “personal” parameters, that is, what we normally call character, resistance to stress, intelligence, drive, extra- or introversion, and emotional sensitivity may not find good correlates in the model of a society as a complex system (in the cybernetic sense). I spoke of the irrelevance of the “material” in the extracerebral simulations of the brain processes. This point of view resulted from the assumption that the maximum economy of means (and Ockhamian “entities”) is needed to make the simulations feasible. Taking this view does not mean that I consider personal parameters “irrelevant”; on the contrary, I firmly believe that a society ought to exist for individuals, not vice versa. The “reduction” of human individuals to points in a kind of “configurational space” should thus be understood only as an extremely simplified method of description.

13. Finally, I listed the principal obstacles on the proposed modeling path: difficulties that are partly technical, partly epistemological, and partly methodological. The last arise from the antinomy, or equivocality (“indeterminacy” or “uncertainty”), of experiments in which the elements—people in the experimental societal system—are subjected to empirical tests that are supposed to determine whether their life in that system is “good” or “bad.”

14. I conclude that it is practically impossible to model large historical processes, such as the evolution of terrestrial civilization. The Markovian nature of the phenomena, which forbids backward extrapolation in any distinctly ergodic process, thwarts the endeavor so thoroughly that other reasons for this impossibility need not be given. But this impossibility, which applies equally to the path of terrestrial bioevolution, does not preclude the modeling of parts of the process. A technologically oriented civilization, because of its markedly teleological character, should be more suitable for this modeling than cultures that are instrumentally primitive. This at least gives reason for some optimism as we face the future.