CYBERNETICS/FEEDBACK

Cybernetics is a term that does not ring bells anymore for most people. It refers to a formerly in-vogue science that the MIT mathematician Norbert Wiener originated and about which he wrote a surprising best seller in 1948. According to recent sources, cybernetics is notable for marking a “moment” that slipped by and subsequently gave way to the “information age” as a way of defining the present. Still, when examined as a science that unified natural and social methods of study at mid-twentieth century—one that put feedback at the center of its understanding of reality—cybernetics was a paradox, a field that died unmourned but lives again around the globe, arguably, in almost everyone’s increasingly feedback-driven twenty-first-century lives. In insisting on the centrality of feedback, cybernetics made critical contributions to the modern concepts of information and of an information society.

Although Wiener did not coin the term cybernetics until after World War II, its origins lie two decades earlier. In the late 1920s, a Cambridge anthropology graduate student with outstanding biology credentials—he was the youngest son of a famous British naturalist—sailed to a fieldwork site along the Sepik River in Papua New Guinea to work with the Iatmul people. The student was named Gregory Bateson, and he was on his way to a legendary career that would play out at the intersection of anthropology, biology, and psychiatry. When he reached Iatmul territory, however, Bateson knew neither the word cybernetics (for of course it did not exist) nor feedback. Still, he was struck by a dynamic that seemed both familiar and strange. The Iatmul were absorbed in their own thoughts and responsive actions, which he called “logical tangles.” Their actions intensified in reaction to others’ actions, and the result was a kind of recursively intensifying loop of responses, as he described in his book Naven. While Iatmul men, as the result of these loops, became ever more aggressive, Iatmul women became ever milder and more retiring. Iatmul society emerged out of this intersecting behavior as a kind of arms race. Bateson called this toxic dynamic “schismogenesis” (“complementary schismogenesis” referred to male-female oppositional dynamics whereas in-gender competition was called “symmetrical schismogenesis”). Ratcheting between male and female factions, in Bateson’s view, would lead to unending, extremely disruptive polarization in the village.

Yet there was an escape hatch: each year, in a ceremony called “Naven,” men and women reversed their roles for a short period, the women leaping to become aggressive by dressing and acting like men, the men melting into subordination and even submitting to ritualized rape and humorous humiliation. This transformed Iatmul schismogenesis into a self-regulating system, and Naven’s effect, like a thermostat being adjusted, was to bring down the temperature in the Iatmul world. The principle behind negative and positive feedback was already old: the physicist James Clerk Maxwell had analyzed it in a publication in the 1860s on steam engines, showing how “steady states” could be achieved through regulation by a “governor” that adjusted a valve to admit less steam when the engine ran too fast; further, closed-loop and open-loop control mechanisms could be found in technologies dating to ancient Greece, China, and Korea, such as water clocks. And although in the 1920s Bateson was not yet familiar with “feedback” concepts, he would soon apply them to areas far beyond steam engines—to human society itself. The insights from this fieldwork would deepen at other sites: Bateson continued to work in New Guinea with his new wife, the anthropologist Margaret Mead, and later, also with Mead, in Bali.
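
The contrast Bateson would later help formalize—runaway escalation versus thermostat-like regulation—can be made concrete in a toy simulation. The sketch below is purely illustrative (it is drawn from neither Bateson nor Maxwell, and every parameter value is invented): one loop corrects a fraction of its deviation from a set point at each step and settles into a steady state, while the other reinforces its own deviation and escalates without limit.

```python
# Toy illustration (not from the sources): negative vs. positive feedback.
# A thermostat-style controller damps deviations from a set point,
# while a schismogenesis-style loop amplifies them.

def negative_feedback(temp, setpoint=20.0, gain=0.5, steps=10):
    """Each step, correct a fraction of the error -> the value converges."""
    history = [temp]
    for _ in range(steps):
        temp += gain * (setpoint - temp)   # correction opposes the deviation
        history.append(round(temp, 2))
    return history

def positive_feedback(rivalry=1.0, gain=0.3, steps=10):
    """Each step, each side escalates in proportion to the last move."""
    history = [rivalry]
    for _ in range(steps):
        rivalry += gain * rivalry          # response reinforces the deviation
        history.append(round(rivalry, 2))
    return history

print(negative_feedback(30.0))  # settles toward the set point (steady state)
print(positive_feedback())      # grows without limit (runaway escalation)
```

The first loop stands in, crudely, for the governor or the Naven ceremony; the second, for schismogenesis left to run unchecked.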

The term schismogenesis took off—although not particularly in anthropological studies, nor in understandings of Melanesian society. Rather, it found its audience because Bateson, during World War II, became one of the founders of cybernetics. By the time he and Mead amicably divorced, the two were leaders of the new cybernetics movement. Its moment had arrived.

Following Bateson to Papua New Guinea is one way to sketch the origins of cybernetics, in part because the field’s official beginnings are somewhat murky. In certain accounts it is said to lack a single, clear moment of genesis: “It is difficult to say precisely where and how cybernetics began,” argues Geoffrey Bowker. Others, however, stress the high-demand settings of World War II and the collaborations the war fostered across disciplines, as experts in different fields pursued common practical aims—how could the behavior of an aircraft pilot confronting enemy planes in midair be charted, his evasions of enemy gunfire graphed and anticipated? How could airmen’s fatigue be predicted and avoided? Engineers, mathematicians, sociologists, and psychologists collaborated to solve such problems, and success made them eager to collaborate further. Most historians agree that cybernetics emerged out of the new technological possibilities occasioned by the war and out of a new sense that academics could engage in practical problem solving. Cybernetics did not depend on a definition of information as essentially *digital (contra Claude Shannon’s view); rather, Wiener envisioned a grand interdisciplinary science encompassing human and nonhuman elements in a penetrating communication network.

Cybernetics from the outset was characterized by its tendency to combine many fields. It has been described as a sort of long, multisited conversation that took place over many years and brought together people who otherwise almost never talked to each other. It never became centralized: it acquired no laboratories of its own in one spot and no institutions of its own, relying instead on threads of ongoing discussion and networks of research relations. Yet despite this lack of centralization and institutional solidity, cybernetics can be considered a significant contributor to the birth of the modern information age.

The legendary Macy Conferences, held from 1946 to 1953, were key arenas for the emergence of cybernetics, but there was an important precursor to these “official” meetings: a kind of pre-Macy Macy meeting took place in 1942, in New York. The topic was one on the surface unlikely to lead to a grand synthesis of human social and biological systems: cerebral inhibition and the workings of hypnosis. An interdisciplinary group gathered, including the Mexican-born, Harvard-trained physiologist Arturo Rosenblueth, the neurophysiologist and poet Warren McCulloch, the philanthropic foundation officer Frank Fremont-Smith, and the intellectual entrepreneur Lawrence K. Frank, as well as the husband-and-wife team of Mead and Bateson. At the New York meeting, Rosenblueth presented a preview of the basic ideas for a paper he was to publish the next year, 1943, with the mathematician Norbert Wiener and the computer engineer Julian Bigelow. Its seemingly unremarkable title, “Behavior, Purpose and Teleology,” belied its boldness: it was a bridge spanning existing behaviorist research and the new, as-yet-unnamed idiom that would soon be called (by Wiener himself) cybernetics. The published paper begins by redefining “behavior” in terms of input and output rather than stimulus and response. Quite radically, Rosenblueth and his collaborators were rescuing teleology and purpose, concepts “rather discredited at present,” and showing them “to be important.” In this, the three felt they had sketched the lines of a new approach to the engineering of living beings in concert with machines: they argued that a uniform behavioristic analysis would apply to any machine or living organism, no matter its degree of complexity. The paper was really a manifesto arguing that one could now talk about teleological behavior (that is, behavior directed toward an ultimate purpose) in any system while remaining true to the demands of adequate explanation. This little presentation was, in effect, the “seed” of all future cybernetics meetings.

On exposure to this paper heralding—in essence—the possibility of a dynamic “control system” applied to human societies, several attendees of the 1942 conference felt something momentous had occurred. Margaret Mead, for one, was so excited (she reported later) that she broke a tooth and didn’t notice until after the meeting was over. Bateson, too, saw immense possibilities. What came across was this message: Owing to negative feedback mechanisms, also known as teleological mechanisms or servomechanisms (essentially akin to the “Naven” ceremony Bateson had described more than a decade earlier), one didn’t have to throw out either the baby (adequate scientific models) or the bathwater (the formerly metaphysical realm of teleology and purpose). Machines and mice were no longer mere models for human function, as they had been in earlier behaviorist experiments; machines, animals, and humans were animated by the same mechanisms.

In the view of the Yale philosopher F.S.C. Northrop, commenting on the paper by Rosenblueth and his colleagues, it had shown that negative feedback (inverse feedback) mechanisms could be built to carry out social-engineering purposes. Such mechanisms could work for humans or robots, or some combination of the two. Cybernetics would mean the possibility of revolutionizing all “traditional theories” of human activity and upending philosophy itself. Some saw this as the dawn of an age of ideological engineering by means of these mechanisms. Citing the work of several early cybernetics researchers (including the neural-network research of Warren McCulloch and Walter Pitts), Northrop declared that social theories could now be programmed into humans via the “firing of motor neurons.” Social and institutional *facts would be determined—as Northrop put it—“literally.” Needless to say, such an ability to engineer programmed beliefs within human nervous systems bore implications for the coming struggle with communism—for if normative belief itself and the strength of ideas in guiding behavior could be engineered, this was big news in the age of the Manichaean struggle between Soviet and Western worldviews.

The 1942 event inspired the creation of a series of later meetings under the title “Circular Causal and Feedback Mechanisms in Biological and Social Systems”—changed, but not until 1947, to the single term cybernetics. (These are now referred to as the Macy Conferences.) It was because of Bateson’s formative presence and his anthropological concerns, in particular, that the conference planners added a fateful “and Social” to the title. Soon there were enthusiastically collaborative meetings under the banner of “Teleological Mechanisms”—including a conference in 1946 expressly for social scientists titled “Teleological Mechanisms in Society” and another in 1948 hosted by Lawrence K. Frank of the Caroline Zachary Institute of Human Development. To these events flocked many of the most creative social scientists of the day (including Mead and Bateson, Clyde Kluckhohn and Talcott Parsons, and, from Columbia, Paul Lazarsfeld and Robert K. Merton), as well as some of the most powerful mathematical and scientific theorists (Norbert Wiener, Warren McCulloch, Claude Shannon, and John von Neumann; the British figures Alan Turing, Grey Walter, and W. Ross Ashby were also closely involved).

In 1948, the cybernetics movement achieved its most audible public voice with Wiener’s surprise best seller Cybernetics, which announced the invented word, or “the art of the helmsman.” There, and in its popular sequel, The Human Use of Human Beings (1950), Wiener argued that in the face of an onslaught of chaotic forces—the implacable second law of thermodynamics, dictating that, in essence, things fall apart and entropy rules—men and machines shared the capacity to fight back, to create local zones of order against entropy by “contribut[ing] to a local and temporary building up of information.” As a result of this call to arms, many social science and behavioral science endeavors arose in North America to attempt to build such shored-up information zones, which took the form of new social- and political-planning projects. Three examples: the Nobel Prize–winning organizational expert Herbert Simon applied feedback principles to create an “ultrastable” homeostatic model for management systems; the physiologist Hans Selye reenvisioned the dynamics of stress as a continual adaptation to the expectations of others; and the Harvard sociologist Robert Freed Bales designed a “special room” for research in which human-to-human interactions could be tracked and adjusted through feedback mechanisms. In this view, each human being was essentially the sum of his or her information exchanges within a system. In a variety of fields, what was called the “human factor” arose as a potential object of engineering.

In the two decades after World War II, cybernetics spread around the globe and grew its own independent “cultures,” notably among left-leaning and communist regimes (whereas it had tended toward conservatism or middle-of-the-road liberalism in US contexts). Soviet cybernetics, which set itself against the “bourgeois” cybernetics of the West, thrived from the late 1950s through the 1970s; French cybernetics influenced the development of computer-education programs for children; and Chilean cybernetics, under Salvador Allende, embraced the task of building a central-command-style control room—Project Cybersyn—that would aggregate all existing information and feed it back into local programs. Most “applied” cybernetic programs eventually fell out of fashion because they were ineffective, despite ambitious aims to revolutionize human society. Second-wave cybernetics added a partially Buddhist-inspired twist, characterizing “self-organizing systems” as forms of interdependently co-arising phenomena (in contrast to the command-and-control approach of the 1950s).

In recent years, a New York Times review firmly dismissed cybernetics as an obscure dead end—a “science [that] simply failed in the court of ideas.” Most people know cybernetics, if at all, through the notion of cyberspace, a term the writer William Gibson thought up in 1982; Gibson recalls finding the cyber prefix “weird”—yet “I thought it sounded like it meant something while still being essentially hollow.” His neologism now widely stands for the digital world as a whole. Another thread connects early cybernetics with Silicon Valley pioneers, as when, in 1976, Whole Earth Catalog founder Stewart Brand conducted a wide-ranging conversation with Gregory Bateson and Margaret Mead about the roots of cybernetic research. Bateson recalled gradually realizing, through the Macy meetings, that “the whole of logic would have to be reconstructed for recursiveness.”

A more direct link with cybernetics today can be made by looking more closely at the ubiquitous role of feedback in technological transactions, especially those facilitated by social media encounters. Every aspect of communications, it is probably fair to say, is being remade by recursive feedback to an extent that might not have surprised Bateson. “Pernicious feedback loops” appear in the growing use of predictive-policing algorithms—those addressing vagrancy, “broken windows” crimes, sentencing, and “heat lists” generated to identify juveniles likely to become criminals. Where certain neighborhoods are targeted, more crime is found and more data is generated, which can read as further proof that the targeting was justified, which produces more targeting—a kind of schismogenesis. Political strategists use feedback data from online interactions to choreograph twenty-five thousand (or many more) iterations of a candidate’s ad message, each shaped by feedback. As *“big data” becomes increasingly entwined with real-time, pervasive data gathering on ever-more-intimate areas of human experience, the use of feedback to amplify the power of machine learning and AI systems grows. The effects of this neocybernetic practice, however, are as yet unknown.
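
The runaway dynamic of predictive policing described above can also be made concrete with a toy simulation. The sketch below uses invented numbers, hypothetical districts, and a deliberately crude rule—send patrols wherever recorded crime is highest—and is not a model of any deployed system; it simply shows how an initial disparity in the data, rather than in underlying behavior, can amplify itself.

```python
# Toy sketch of a "pernicious feedback loop" (invented numbers, hypothetical
# districts): patrols follow recorded crime, but crime is recorded only where
# patrols already are, so a small initial bias in the record compounds.
import random

random.seed(0)
true_rate = {"A": 0.5, "B": 0.5}   # identical underlying chances of observable crime
recorded = {"A": 2, "B": 1}        # historical record starts slightly biased toward A

for day in range(1000):
    # greedy rule: patrol the district with the most recorded crime so far
    target = max(recorded, key=recorded.get)
    # crime is recorded there with the true (equal) probability
    if random.random() < true_rate[target]:
        recorded[target] += 1

print(recorded)  # district A accumulates nearly all the data; B barely registers
```

Under these assumptions the loop never revisits the unpatrolled district, so the record itself, fed back as evidence, does the work the paragraph above calls schismogenesis.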

Rebecca Lemov

See also computers; data; databases; digitization; files; networks; platforms

FURTHER READING

  • Gregory Bateson, Naven: A Survey of the Problems Suggested by a Composite Picture of the Culture of a New Guinea Tribe Drawn from Three Points of View, 1958; Geoffrey Bowker, “How to Be Universal: Some Cybernetic Strategies, 1943–1970,” Social Studies of Science 23, no. 1 (1993): 107–27; F.S.C. Northrop, “The Neurological and Behavioristic Psychological Basis of the Ordering of Society by Means of Ideas,” Science 107 (1948): 411–17; Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow, “Behavior, Purpose and Teleology,” Philosophy of Science 10 (1943): 18–24; Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society, 1954 (1950).