Late in the afternoon of January 23, 1992, during a campaign stop at the American Brush Company in Claremont, New Hampshire, the ABC political reporter Jim Wooten asked then-candidate Bill Clinton about allegations being made by an ex-cabaret singer named Gennifer Flowers. While rumors of Clinton’s womanizing had been rampant among the press corps, Wooten’s question was the first time the young Democratic front-runner had been asked about a specific woman. “She claims she had a long-standing affair with you,” Wooten said with cameras running. “And she says she tape-recorded the telephone conversations with you in which you told her to deny you had ever had an affair.”
Wooten said later that Clinton took the question as though he’d been practicing his answer for months. “Well, first of all, I read the story. It isn’t true. She has obviously taken money to change the story, a story she felt so strongly about that she hired a lawyer to protect her good name not very long ago. She did call me. I never initiated any calls to her. . . .” The candidate’s denials went on for another five minutes, and then the exchange was over. Clinton had responded to the question, but was it news? Across the country, a furious debate on journalistic ethics erupted: Did unproven allegations about the candidate’s sex life constitute legitimate news? And did it matter that the candidate himself had chosen to deny the allegations on camera? A cabaret singer making claims about the governor’s adulterous past was clearly tabloid material—but what happened when the governor himself addressed the story?
After two long hours of soul-searching, all three major television networks—along with CNN and PBS’s MacNeil/Lehrer show—chose not to mention Wooten’s question on their national news broadcasts, or to show any of the footage from the exchange. The story had been emphatically silenced by some of the most influential figures in all of mass media. The decision to ignore Gennifer Flowers had been unanimous—even at the network that had originally posed the question. Made ten or twenty years before, a decision of that magnitude would have stopped the story in its tracks (assuming the Washington Post and the New York Times followed suit the next morning). For the story to be revived, it would need new oxygen—some new development that caused it to be reevaluated. Without new news, the Flowers story was dead.
And yet the following day, all three networks opened with Gennifer Flowers as their lead item. Nothing had happened to the story itself: none of the protagonists had revealed any additional information; even Clinton’s opponents were surprisingly mute about the controversy. The powers that be in New York and Washington had decided the day before that there was no story—and yet here were Peter Jennings and Tom Brokaw leading their broadcasts with the tale of a former Arkansas beauty queen and her scandalous allegations.
How did such a reversal come to pass? It’s tempting to resort to the usual hand-wringing about the media’s declining standards, but in this case, the most powerful figures in televised media had at first stuck to the high road. If they had truly suffered from declining standards, the network execs would have put Jim Wooten on the first night. Something pushed them off the high road, and that something was not reducible to a national moral decline or a prurient network executive. Gennifer Flowers rode into the popular consciousness via the system of televised news, a system that had come to be wired in a specific way.
What we saw in the winter of 1992 was not unlike watching Nixon sweat his way through the famous televised debate of 1960. As countless critics have observed since, we caught a first glimpse in that exchange of how the new medium would change the substance of politics: television would increase our focus on the interpersonal skills of our politicians and diminish our focus on the issues. With the Flowers affair, though, the medium hadn’t changed; the underlying system had. In the late eighties, changes in the flow of information—and particularly the raw footage so essential to televised news—had pushed the previously top-down system toward a more bottom-up, distributed model. We didn’t notice until Jim Wooten first posed that question in New Hampshire, but the world of televised news had taken a significant first step toward emergence. In the hierarchical system of old, the network heads could willfully suppress a story if they thought it was best for the American people not to know, but that privilege died with Gennifer Flowers, and not because of lowered standards or sweeps week. It was a casualty of feedback.
* * *
It is commonplace by now to talk about the media’s disposition toward feeding frenzies, where the coverage of a story naturally begets more coverage, leading to a kind of hall-of-mirrors environment where small incidents or allegations get amplified into Major Events. You can normally spot one of these feedback loops as it nears its denouement, since it almost invariably triggers a surge of self-loathing that washes through the entire commentariat. These self-critical waters seem to rise on something like an annual cycle: think of the debate about the paparazzi and Princess Di’s death, or the permanent midnight of “Why Do We Care So Much About O.J.?” But the feedback loops of the 1990s weren’t an inevitability; they came out of specific changes in the underlying system of mass media, changes that brought about the first stirrings of emergence—and foreshadowed the genuinely bottom-up systems that have since flourished on the Web. That feedback was central to the process should come as no surprise: all decentralized systems rely extensively on feedback, for both growth and self-regulation.
Consider the neural networks of the human brain. On a cellular level, the brain is a massive network of nerve cells connected by the microscopic passageways of axons and dendrites. A flash of brain activity—thinking of a word, wrestling with a concept, parsing the syntax of the sentence you’re reading now—triggers an array of neuronal circuits like traffic routes plotted on the map of the mind. Each new mental activity triggers a new array, and an unimaginably large number of possible neuronal circuits go unrealized over the course of a human life (one reason why the persistent loss of brain cells throughout our adult years isn’t such a big deal). But beneath all that apparent diversity, certain circuits repeat themselves again and again. One of the most tantalizing hypotheses in neuroscience today is that the cellular basis of learning lies in the repetition of those circuits. As neurologist Richard Restak explains, “Each thought and behavior is embedded within the circuitry of the neurons, and . . . neuronal activity accompanying or initiating an experience persists in the form of reverberating neuronal circuits, which become more strongly defined with repetition. Thus habit and other forms of memory may consist of the establishment of permanent and semipermanent neuronal circuits.” A given circuit may initially be associated with the idea of sandwiches, or the shape of an isosceles triangle—and with enough repetition of that specific circuit, it marks out a fixed space in the brain and thereafter becomes part of our mental vocabulary.
Why do these feedback loops and reverberating circuits happen? They come into being because the neural networks of the brain are densely interconnected: each individual neuron contains links—in the form of axons and synapses—to as many as a thousand other neurons. When a given neuron fires, it relays that charge to all those other cells, which, if certain conditions are met, then in turn relay the charge to their connections, and so on. If each neuron extended a link to one or two fellow neurons, the chance of a reverberating loop would be greatly reduced. But because neurons reach out in so many directions simultaneously, it’s far more likely that a given neuron firing will wind its way back to the original source, thus starting the process all over again. The likelihood of a feedback loop correlates directly to the general interconnectedness of the system.
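That correlation is easy to test in miniature. Here is a toy sketch (in Python, with an invented wiring rule and arbitrary parameters; it is a thought experiment, not real neuroscience) that wires up a random network, injects a single firing, and estimates how often the charge finds its way back to its source as the number of outgoing links per node grows:

```python
import random

def loop_probability(n=300, out_degree=2, max_hops=6, trials=50):
    """Estimate how often a charge injected at one node winds its way
    back to that node in a randomly wired directed network."""
    returns = 0
    for _ in range(trials):
        # Wire each node to `out_degree` randomly chosen targets.
        links = [random.sample(range(n), out_degree) for _ in range(n)]
        source, fired = 0, set()
        active = {source}
        for _ in range(max_hops):
            # Every active node relays the charge to all of its targets;
            # each node fires at most once (a crude stand-in for fatigue).
            active = {t for node in active for t in links[node]
                      if t not in fired}
            fired |= active
            if source in active:  # the signal has looped back home
                returns += 1
                break
    return returns / trials

for k in (1, 2, 5, 20):
    print(f"{k:2} links per node -> loop probability ~{loop_probability(out_degree=k):.2f}")
```

With a single link per node the charge almost always dissipates; raise the out-degree and a reverberating loop becomes nearly certain.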
By any measure, the contemporary mediasphere is a densely interconnected system, even if you don’t count the linkages of the online world. Connected not just in the sense of so many homes wired for cable and so many rooftops crowned by satellite dishes, but also in the more subtle sense of information being plugged into itself in ever more baroque ways. Since Daniel Boorstin first analyzed the television age in his still-invaluable 1962 work, The Image, the world of media journalism has changed in several significant ways, with most of the changes promoting an increase in relays between media outlets. There are far more agents in the system (twenty-four-hour news networks, headline pagers, newsweeklies, Web sites), and far more repackagings and repurposings of source materials, along with an alarming new willingness to relay uncritically other outlets’ reporting. Mediated media-critique, unknown in Boorstin’s less solipsistic times, and formerly quarantined to early-nineties creations such as CNN’s Reliable Sources and the occasional Jeff Greenfield segment on Nightline, is now regularly the lead story on Larry King and Hardball. The overall system, in other words, has shifted dramatically in the direction of distributed networks, away from the traditional top-down hierarchies. And the more the media contemplates its own image, the more likely it is that the system will start looping back on itself, like a Stratocaster leaning against the amp it’s plugged into.
The upshot of all this is that—in the national news cycle at least—there are no longer any major stories in which the media does not eventually play an essential role, and in many cases the media’s knack for self-reflection creates the story itself. You don’t need much of an initial impulse to start the whole circuit reverberating. The Gennifer Flowers story is the best example of this process at work. As Tom Rosenstiel reported in a brilliant New Republic piece several years ago, the Flowers controversy blossomed because of a shift in the relationship between the national news networks and their local affiliates, a shift that made the entire system significantly more interconnected. Until the late eighties, local news (the six- and eleven-o’clock varieties) relied on the national network for thirty minutes of national news footage, edited according to the august standards of the veterans in New York. Local affiliates could either ignore the national stories or run footage that had been supplied to them, but if the network decided the story wasn’t newsworthy, the affiliates couldn’t cover it.
All this changed when CNN entered the picture in the mideighties. Since the new network lacked a pool of affiliates to provide breaking news coverage when local events became national stories, Ted Turner embarked on a strategy of wooing local stations with full access to the CNN news feed. Instead of a tightly edited thirty-minute reel, the affiliates would be able to pick and choose from almost anything that CNN cameras had captured, including stories that the executive producers in Atlanta had decided to ignore. The Flowers episode plugged into this newly rewired system, and the results were startling. Local news affiliates nationwide also had access to footage of Clinton’s comment, and many of them chose to jump on the story, even as the network honchos in New York and Washington decided to ignore it. “When NBC News political editor Bill Wheatley got home and turned on the eleven P.M. local news that night, he winced: the station NBC owned in New York ran the story the network had chosen not to air the same evening,” Rosenstiel writes. “By the next afternoon, even Jim Lehrer of the cautious MacNeil/Lehrer NewsHour on PBS told the troops they had to air the Flowers story against their better judgment. ‘It’s out of my hands,’ he said.”
The change was almost invisible to Americans watching at home, but its consequences were profound. The mechanism for determining what constituted a legitimate story had been reengineered, shifting from a top-down system with little propensity for feedback, to a kind of journalistic neural net where hundreds of affiliates participated directly in the creation of the story. And what made the circuit particularly vulnerable to reverberation was that the networks themselves mimicked the behavior of the local stations, turning what might have been a passing anomaly into a full-throttle frenzy. That was the moment at which the system began to display emergent behavior. The system began calling the shots, instead of the journalists themselves. Lehrer had it right when he said the Gennifer Flowers affair was “out of my hands.” The story was being driven by feedback.
* * *
The Flowers affair is a great example of why emergent systems aren’t intrinsically good. Tornadoes and hurricanes are feedback-heavy systems too, but that doesn’t mean you want to build one in your backyard. Depending on their component parts, and the way they’re put together, emergent systems can work toward many different types of goals: some of them admirable, some more destructive. The feedback loops of urban life created the great bulk of the world’s most dazzling and revered neighborhoods—but they also have a hand in the self-perpetuating cycles of inner-city misery. Slums can also be emergent phenomena. That’s not an excuse to resign ourselves to their existence or to write them off as part of the “natural” order of things. It’s a reason to figure out a better system. The Flowers affair was an example of early-stage emergence—a system of local agents driving macrobehavior without any central authority calling the shots. But it was not necessarily adaptive.
Most of the time, making an emergent system more adaptive entails tinkering with different kinds of feedback. In the Flowers affair, we saw an example of what systems theorists call positive feedback—the sort of self-fueling cycles that cause a note strummed on a guitar to expand into a howling symphony of noise. But most automated control systems rely extensively on “negative feedback” devices. The classic example is the thermostat, which uses negative feedback to solve the problem of controlling the temperature of the air in a room. There are actually two ways to regulate temperature. The first would be to design an apparatus capable of blowing air at various different temperatures; the occupant of the room would simply select the desired conditions and the machine would start blowing air cooled or heated to the desired temperature. The problem with that system is twofold: it requires a heating/cooling apparatus capable of blowing air at precise temperatures, and it is utterly indifferent to the room’s existing condition. Dial up seventy-two degrees on the thermostat, and the machine will start pumping seventy-two-degree air into the room—even if the room’s ambient temperature is already in the low seventies.
The negative feedback approach, on the other hand, provides a simpler solution, and one that is far more sensitive to a changing environment. (Not surprisingly, it’s the technique used by most home thermostats.) Instead of pumping precisely calibrated air into the room, the system works with three states: hot air, cool air, and no air. It takes a reading of the room’s temperature, measures that reading against the desired setting, and then adjusts its state accordingly. If the room is colder than the desired setting, the hot air goes on. If it is warmer, the cool air flows out. The system continuously measures the ambient temperature and continuously adjusts its output, until the desired setting has been reached—at which point it switches into the “no air” state, where it remains until the ambient temperature changes for some reason. The system uses negative feedback to home in on the proper conditions—and for that reason it can handle random changes in the environment.
Negative feedback, then, is a way of reaching an equilibrium point despite unpredictable—and changing—external conditions. The “negativity” keeps the system in check, just as “positive feedback” propels other systems onward. A thermostat with no feedback simply pumps seventy-two-degree air into a room, regardless of the room’s temperature. An imaginary thermostat driven by positive feedback might evaluate the change in room temperature and follow that lead: if the thermostat noted that the room had grown warmer, it would start pumping hotter air, causing the room to grow even warmer, causing the device to pump hotter air. Next thing you know, the water in the goldfish bowl is boiling. Negative feedback, on the other hand, lets the system find the right balance, even in a changing environment. A cold front comes in, a window is opened, someone lights a fire—any of these things can happen, and yet the temperature remains constant. Instead of amplifying its own signal, the system regulates itself.
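The contrast between the two regimes fits in a few lines of code. In the sketch below, the gains, the heat lost through the open window, and the half-degree dead zone are all invented numbers; the point is only the shape of the two trajectories:

```python
def simulate(mode, setpoint=72.0, hours=12):
    """Toy thermostat: compare the room to the desired setting, then
    either push against the error (negative feedback) or amplify it
    (positive feedback)."""
    temp = 74.0  # the room starts slightly warm
    for hour in range(1, hours + 1):
        if hour == 6:
            temp -= 6.0  # someone opens a window
        error = temp - setpoint
        if mode == "negative":
            # Three states: cool air, hot air, or no air near the setpoint.
            if error > 0.5:
                temp -= 2.0  # cool air on
            elif error < -0.5:
                temp += 2.0  # hot air on
        else:
            # Follow the drift instead of fighting it.
            temp += error * 0.5
        print(f"{mode:8} hour {hour:2d}: {temp:6.1f} F")

simulate("negative")  # absorbs the open window, settles back near 72
simulate("positive")  # each drift feeds the next; the goldfish bowl boils
```

The negative-feedback loop absorbs the open window and settles back toward seventy-two; the positive-feedback loop treats every drift as an instruction to drift further.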
* * *
We’ve been wrestling with information as a medium for negative feedback ever since Norbert Wiener published Cybernetics in 1948, and Wiener himself had been thinking about the relationship between control and feedback since his war-related research of the early forties. After the Japanese bombed Pearl Harbor and the United States joined the war in earnest, Wiener was asked by the army to figure out a way to train mechanical guns to fire automatically at their targets. The question Wiener found himself answering was this: Given enough information about the target’s location and movement, could you translate that data into something a machine could use to shoot an attacking aircraft out of the sky?
The problem was uniquely suited for the adaptability of negative feedback: the targets were a mixture of noise and information, somewhat predictable but also subject to sudden changes. But as it happened, to solve the problem Wiener also needed something that didn’t really exist yet: a digital computer capable of crunching the flow of data in real time. With that need in mind, Wiener helped lay the groundwork for some of the first modern computers ever created. When the story is told of Wiener’s war years, the roots of the modern PC are usually emphasized, for legitimate reasons. But the new understanding of negative feedback that emerged from that wartime effort had equally far-reaching consequences, extending far beyond the vacuum tubes and punch cards of early computing.
For negative feedback is not solely a software issue, or a device for your home furnace. It is a way of indirectly pushing a fluid, changeable system toward a goal. It is, in other words, a way of transforming a complex system into a complex adaptive system. Negative feedback comes in many shapes and sizes. You can build it into ballistic missiles or circuit boards, neurons or blood vessels. It is, in today’s terms, “platform agnostic.” At its most schematic, negative feedback entails comparing the current state of a system to the desired state, and pushing the system in a direction that minimizes the difference between the two states. As Wiener puts it near the outset of Cybernetics: “When we desire a motion to follow a given pattern, the difference between this pattern and the actually performed motion is used as a new input to cause the part regulated to move in such a way as to bring its motion closer to that given by the pattern.” Wiener gave that knack for self-regulation a name borrowed from the physiologist Walter Cannon: homeostasis.
Your body is a massively complex homeostatic system, using an intricate network of feedback mechanisms to keep itself stable in the midst of dynamically changing situations. Many of those feedback mechanisms are maintained by the brain, which coordinates external stimuli received by sensory organs and responds by triggering appropriate bodily actions. Our sleep cycles, for instance, depend heavily on negative feedback. The body’s circadian rhythms—accumulated after millions of years of life on a planet with a twenty-four-hour day—flow out of the central nervous system, triggering regular changes in urine formation, body temperature, cardiac output, oxygen consumption, cell division, and the secretions of endocrine glands. But for some reason, our body clocks are set a little slow: the human circadian cycle is twenty-five hours, and so we rely on the external world to reset our clock every day, both by detecting patterns of light and darkness, and by detecting the more subtle change in the earth’s magnetic field, which shifts as the planet rotates. Without that negative feedback pulling our circadian rhythms back into sync, we’d find ourselves sleeping through the day for two weeks out of every month. In other words, without that feedback mechanism, it would be as though the entire human race were permanently trapped in sophomore year of college.
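The arithmetic of that drift is simple enough to simulate. In the toy model below, the one-hour daily slip follows from the twenty-five-hour cycle described above, while the two-hour daily correction from morning light is an invented figure:

```python
def simulate_sleep(days=48, light_reset=True):
    """A twenty-five-hour body clock slips one hour later each real day
    unless an external cue (morning light) pulls it back into sync."""
    offset = 0.0  # hours the body clock runs behind the 24-hour day
    for day in range(1, days + 1):
        offset = (offset + 1.0) % 24         # the 25-hour cycle drifts
        if light_reset:
            offset = max(0.0, offset - 2.0)  # light corrects ~2 hours a day
        bedtime = (23 + offset) % 24
        if day % 8 == 0:
            print(f"day {day:2d}: sleepy around {bedtime:4.1f}:00")

simulate_sleep(light_reset=True)   # bedtime stays pinned near 23:00
simulate_sleep(light_reset=False)  # bedtime walks all the way around the clock
```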
Understanding the body and the mind as a feedback-regulated homeostatic system has naturally encouraged some people to experiment with new forms of artificial feedback. Since the seventies, biofeedback devices have reported changes in adrenaline levels and muscle tension in real time to individuals wired up to special machines. The idea is to allow patients to manage their anxiety or stress level by letting them explore different mental states and instantly see the physiological effects. With a little bit of practice, biofeedback patients can easily “drive” their adrenaline levels up or down just by imagining stressful events, or reaching a meditative state. Our bodies, of course, are constantly adjusting adrenaline levels anyway—the difference with biofeedback is that the conscious mind enters into that feedback process, giving patients more direct control over the levels of the hormone in their system. That can be a means of better managing your body’s internal state, but it can also be a process of self-discovery. The one time I tried conventional biofeedback, my adrenaline levels hovered serenely at the middle of the range for the first five minutes of the session; the doctor actually complimented me on having such a normal and well-regulated adrenal system. And then, in the course of our conversation, I made a joke—and instantly my adrenaline levels shot off the charts. At the end of my visit, the therapist handed me a printout of the thirty-minute session, with my changing adrenaline levels plotted as a line graph. It was, for all intents and purposes, a computer graph of my attempts at humor over the preceding half hour: a flat line interrupted by six or seven dramatic spikes, each corresponding to a witticism that I had tossed out to the therapist.
I walked away from the session without having improved myself in any noticeable way, and certainly I hadn’t achieved more control over my adrenaline levels. But I’d learned something nonetheless: that without consciously realizing it, I’d already established a simple feedback circuit for myself years before, when my body had learned that it could give itself a targeted adrenaline rush by making a passing joke in conversation. I thought of all those office meetings or ostensibly serious conversations with friends where I had found myself compulsively making jokes, despite the inappropriate context; I thought of how deeply ingrained that impulse is in my day-to-day personality—and suddenly it seemed closer to a drug addiction than a personality trait, my brain scrambling to put together a cheap laugh to secure another adrenaline fix. In a real sense, our personalities are partially the sum of all these invisible feedback mechanisms; but to begin to understand those mechanisms, you need additional levels of feedback—in this case, a simple line graph plotted by an ordinary PC.
If analyzing indirect data such as adrenaline levels can reveal so much about the mind’s ulterior motives, imagine the possibilities of analyzing the brain’s activity directly. That’s the idea behind the technology of neurofeedback: rather than measure the results of the brain’s actions, neurofeedback measures brain waves themselves and translates them into computer-generated images and sounds. Certain brain-wave patterns appear in moments of intense concentration; others in states of meditative calm; others in states of distraction, or fear. Neurofeedback—like so many of the systems we’ve seen—is simply a pattern amplification and recognition device: a series of EEG sensors applied to your skull registers changes in the patterns of your brain waves and transforms them into a medium that you can perceive directly, often in the form of audio tones or colors on a computer screen. As your brain drifts from one state to another, the tone or the image changes, giving you real-time feedback about your brain’s EEG activity. With some practice, neurofeedback practitioners can more readily drive their brains toward specific states—because the neurofeedback technology supplies the brain with new data about its own patterns of behavior. Once you’ve reached a meditative state using neurofeedback, devotees claim, the traditional modes of meditation seem like parallel parking without a rearview mirror—with enough practice, you can pull it off, but you’re missing a lot of crucial information.
* * *
Were he alive today, I suspect Wiener would be surprised to find that biofeedback and neurofeedback technologies are not yet mainstream therapeutic practices. But Wiener also recognized that homeostasis was not exclusively the province of individual human minds and bodies. If systems of neurons could form elaborate feedback mechanisms, why couldn’t larger human collectivities? “In connection with the effective amount of communal information,” Wiener wrote, “one of the most surprising facts about the body politic is its extreme lack of efficient homeostatic processes.” He would have diagnosed the pathology of Gennifer Flowers in a heartbeat. The Flowers episode was an instance of pure positive feedback, unchecked by its opposing force. Each agent’s behavior encouraged more like-minded behavior from other agents. There was nothing homeostatic about the process, only the “ever-widening gyre” of positive feedback.
But if positive feedback causes such a ruckus in the media world, how can the brain rely so heavily on the reverberating circuits of neurons? One answer is a familiar term from today’s media: fatigue. Every neuron in the brain suffers from a kind of regulated impotence: after it fires, the cell must sit through a few milliseconds of inaction, the “absolute refractory period,” during which it is immune to outside stimulation. Along with many other ingenious inhibiting schemes that the brain relies on, fatigue is a way of shorting out the reverberating circuit, keeping the brain’s feeding frenzy in check.
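A toy network shows how much work that enforced rest does. In the sketch below (full connectivity, a hundred cells, and a made-up rest period; nothing is physiologically calibrated), removing the refractory period turns a single impulse into a permanent, total frenzy:

```python
import random

def reverberate(n=100, refractory=0, ticks=8):
    """Fully connected toy network: a cell fires when anyone fired on
    the previous tick, unless it is still waiting out its rest period."""
    rest = [random.randint(0, refractory) for _ in range(n)]  # desynchronized start
    someone_fired = True  # a single impulse enters the network
    for tick in range(ticks):
        count = 0
        for i in range(n):
            if rest[i] == 0 and someone_fired:
                rest[i] = refractory  # firing costs a rest period
                count += 1
            elif rest[i] > 0:
                rest[i] -= 1
        someone_fired = count > 0
        print(f"refractory={refractory} tick {tick}: {count:3d} cells firing")

reverberate(refractory=0)  # every cell fires on every tick: a runaway echo
reverberate(refractory=3)  # only a rotating fraction can fire at once
```

Without fatigue, every cell fires on every tick; with it, only a rotating fraction of the network can fire at once, and the echo stays bounded.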
It is this short circuit that is lacking in the modern media’s vast interconnectedness. Stories generate more stories, which generate stories about the coverage of the stories, which generate coverage about the meta-coverage. (Here the brain science seems wonderfully poetic: What better diagnosis for the 24/7 vertigo of media feedback than “lack of fatigue”?) A brain that can’t stop reverberating is one way of describing what happens during an epileptic fit; the media version is something like Steven Brill’s epic critique of the Lewinsky coverage in the first issue of Brill’s Content: a high-profile media critic launching a new magazine with a high-profile indictment of the media’s obsession with its own reporting. If the problem stemmed from errors of judgment made by individual reporters, then a media critique might make sense. But since the problem lies in the media’s own tendency toward self-amplification, covering the coverage only makes the problem worse. It’s like firing a pistol in the air to stop a fusillade. Once again, the Flowers affair illustrates the principle: the story wasn’t “real news”—according to the network wise men—until other outlets started covering it. The newsworthiness of a given story shouldn’t be judged by the play the story is getting on the other channels. Otherwise the gravitational pull of positive feedback becomes too strong, and the loop starts driving the process, more than the reporters or the event itself.
It’s not overstating things to say that the story that emerged from this loop was a milestone in American history. It’s entirely possible that the Flowers controversy would have subsided had Clinton’s answer to Jim Wooten been ignored; the Clintons would never have gone on 60 Minutes, and a whole series of tropes that appeared around the couple (Clinton’s philandering, Hillary’s anti–Tammy Wynette feminism) might never have found their way into the public mind. Without Gennifer Flowers in Clinton’s past, would the Monica Lewinsky affair have played out the same way? Probably not. And if that’s the case, then we must ask: What really brought this chain of events about? On the one hand, the answer is simple: individual life choices made by individual people—Clinton’s decision to have an affair, and to break it off, Flowers’s decision to go public, Clinton’s decision to answer the question—result in a chain of events that eventually stirs up an international news story. But there is another sine qua non here, which is the decision made several years before, somewhere in an office complex in Atlanta, to share the entire CNN news feed with local affiliates. That decision was not quite a “pseudo event,” in Boorstin’s famous phrase. It was a “system event”: a change in the way information flowed through the overall news system. But it was a material change nonetheless.
If you think that Clinton’s remarks on Gennifer Flowers should never have been a story, then who are the culprits? Whom do we blame in such a setting? The traditional critiques don’t apply here: there’s no oak-paneled, cigar-smoke-filled back room where the puppeteers pull their invisible strings; it’s not that the television medium is particularly “hot” or “cold”; there was a profit motive behind CNN’s decision to share more footage, but we certainly can’t write off the Flowers episode as just another tribute to the greed of the network execs. Once again, we return to the fundamental laws of emergence: the behavior of individual agents is less important than the overall system. In earlier times, the channels that connected politicians, journalists, and ordinary citizens were one-way and hierarchical; they lacked the connections to generate true feedback; and too few agents were interacting to create any higher-level order. But the cable explosion of the eighties changed all that. For the first time, the system started to reverberate on its own. The sound was quiet during those initial years and may not have crossed into an audible range until Jim Wooten asked that question. And yet anyone who caught the nightly news on January 24, 1992, picked up its signal loud and clear.
Still, the top-heavy structure of mass media may keep those loops relatively muted for the foreseeable future, at least where the tube is concerned. Feedback, after all, is usually not a television thing. You need the Web to hear it wail.
* * *
In December of 1962, a full year after the appearance of The Death and Life of Great American Cities, Lewis Mumford published a scathing critique of Jane Jacobs’s manifesto in his legendary New Yorker column, “The Sky Line.” In her prescriptions for a sidewalk-centric urban renewal, “Mother Jacobs”—as Mumford derisively called her—offered a “homemade poultice for the cure of cancer.” The New Yorker critic had been an early advocate of Jacobs’s work, encouraging her to translate her thoughts into a book while she was a junior editor at Architectural Forum in the midfifties. But the book she eventually wrote attacked Mumford’s much-beloved Ebenezer Howard and his “garden cities,” and so Mumford struck back at his onetime protégé with full fury.
At over ten thousand words, Mumford’s critique was extensive and wide-ranging, but the central message came down to the potential of metropolitan centers to self-regulate. Jacobs had argued that large cities can achieve a kind of homeostasis through the interactions generated by lively sidewalks; urban planning that attempted to keep people off the streets was effectively destroying the lifeblood of the urban system. Without the open, feedback-heavy connections of street culture, cities quickly became dangerous and anarchic places. Building a city without sidewalks, Jacobs argued, was like building a brain without axons or dendrites. A city without connections was no city at all, at least in the traditional sense of organic city life. Better to build cities that encouraged the feedback loops of sidewalk traffic by shortening the length of blocks and supporting mixed-use zoning.
Mumford was no fan of the housing projects of the postwar era, but he had lost faith in the self-regulatory powers of massive urban systems. Cities with populations in the millions simply put too much stress on the natural homeostatic tendencies of human collectives. In The City in History, published around the same period, Mumford had looked back at the Greek city-states, and their penchant for founding new units once the original community reached a certain size—the urban equivalent of reproducing by spores. His attachment to Ebenezer Howard also stemmed from the same lack of confidence in metropolitan self-regulation: the Garden City movement—not entirely unlike the New Urbanist movement of today—was an attempt to provide the energy and dynamism of city life in smaller doses. The Italian hill towns of the Renaissance had achieved an ideal mix of density and diversity while keeping their overall population within reasonable bounds (reined in largely by the walls that surrounded them). These were street-centric spaces with a vibrant public culture, but they remained knowable communities too: small enough to foster a real sense of civic belonging. That kind of organic balance, Mumford argued, was impossible in a city of 5 million people, where the noise and congestion—the sensory overload of it all—drained the “vitality” out of the city streets. “Jacobs forgets that in organisms there is no tissue growth quite as ‘vital’ or ‘dynamic’ as cancer growths. . . . The author has forgotten the most essential characteristic of all organic growth—to maintain diversity and balance, the organism must not exceed the norm of its species. Any ecological association eventually reaches the ‘climax stage,’ beyond which growth without deterioration is not possible.”
Like many debates from the annals of urban studies, the Mumford/Jacobs exchange over the “climax stage” of city life mirrors recent developments in the digital realm, as Web-based communities struggle to manage the problems of runaway growth. The first generation of online hangouts—dial-up electronic bulletin boards like ECHO or the Well—were the equivalent of those Italian hill towns: lively, innovative, contentious places, but also places that remained within a certain practical size. In their heyday before the Web’s takeoff, both services hovered around five thousand members, and within that population, community leaders and other public characters naturally emerged: the jokers and the enablers, the fact checkers and the polemicists. These characters—many of them concealed behind playful pseudonyms—served as the equivalent of Jacobs’s shopkeepers and bartenders, the regular “eyes on the street” that give the neighborhood its grounding and its familiarity.
These online communities also divided themselves into smaller units organized around specific topics. Like the trade-specific clusters of Savile Row and the Por Santa Maria, these divisions made the overall space more intelligible, and their peculiarities endowed each community with a distinctive flavor. (For the first few years of its existence, the Grateful Dead discussion area on the Well was larger than all the other areas combined.) Because each topic area attracted a smaller subset of the overall population, visiting each one felt like returning to an old block in a familiar part of town, and running into the same cast of characters that you had found there the last time you visited.
ECHO and the Well had a certain homeostatic balance in those early years—powerfully captured in Howard Rheingold’s book The Virtual Community—and part of that balance came from the community’s own powers of self-organization. But neither was a pure example of bottom-up behavior: the topic areas, for instance, were central-planning affairs, created by fiat and not by footprints; both communities benefited from the strong top-down leadership of their founders. That their overall populations never approached a “climax stage” reflected the slow modem-adoption curve of the general public, and the limited marketing budgets at both operations. More important, the elements of each community that did self-regulate had little to do with the underlying software. Anyone who spent any time on those services in the early nineties will tell you that community leaders and other recognizable figures emerged, but that status existed only in the perceptions of the users themselves. The software itself was agnostic when it came to status, but because the software brought hominid minds together—minds that are naturally inclined to establish hierarchies in social relationships—leaders and pariahs began to appear. The software did recognize official moderators for each discussion area, but those too were appointments handed down from above; you applied to the village chieftain for the role that you desired, and if you’d been a productive member of the society, your wish might be granted. There were plenty of unofficial leaders, to be sure—but where the code was concerned, the only official moderators came straight from the top.
This mix of hierarchy and heterarchy was well suited to ECHO’s and the Well’s stage of growth. At five thousand members, the community was still small enough to be managed partially from above, and small enough that groups and recognizable characters naturally emerged. At that scale, you didn’t need to solve the problem of self-regulation with software tools: all you needed was software that connected people’s thoughts—via the asynchronous posts of a threaded discussion board—and the community could find its own balance. If something went wrong, you could always look to the official leaders for guidance. But even in those heady early days of the virtual community, the collective systems of ECHO and the Well fell short of achieving real homeostasis, for reasons that would become endemic to the next generation of communities then forming on the Web itself.
A threaded discussion board turns out to be an ideal ecosystem for that peculiar species known as the crank—the ideologue obsessed with a certain issue or interpretive model, who has no qualms about interjecting his or her worldview into any discussion, and apparently no day job or family life to keep him or her from posting voluminous commentary at the slightest provocation. We all know people like this, the ones grinding their ax from the back of the seminar room or the coffee shop: the conspiracy theorist, the rabid libertarian, the evangelist—the ones who insist on bringing all conversations back to their particular issue, objecting to any conversation that doesn’t play by their rules. In real life, we’ve developed a series of social conventions that keep the crank from dominating our conversations. The most pathological cases simply don’t get invited out to dinner very often. But for the borderline case, a subtle but powerful mechanism is at work in any face-to-face group conversation: if an individual is holding a conversation hostage with an irrelevant obsession, groups can naturally establish a consensus—using words, body language, facial expressions, even a show of hands—making it clear that the group feels its time is being wasted. The face-to-face world is populated by countless impromptu polls that take the group’s collective pulse. Most of them happen so quickly that we don’t even know that we’re participating in them, and that transparency is one reason why they’re as powerful as they are. In the face-to-face world, we are all social thermostats: reading the group temperature and adjusting our behavior accordingly.
Some of those self-regulatory social skills translate into cyberspace—particularly in a threaded discussion forum or an e-mail exchange, where participants have the time and space to express their ideas in long form, rather than in the spontaneous eruptions of real-time chat. But there is a crucial difference in an environment like ECHO or the Well—or in the discussion areas we built at FEED. In a public discussion thread, not all the participants are visible. A given conversation may have five or six active contributors and several dozen “lurkers” who read through the posts but don’t chime in with their own words. This creates a fundamental imbalance in the system of threaded discussion and gives the crank an opportunity to dominate the space in a way that would be much more difficult off-line. In a threaded discussion, you’re speaking both to the other active participants and to the lurkers, and however much you might offend or bore your direct interlocutors, you can always appeal to that silent majority out there—an audience that is both present and absent at the same time. The crank can cling to the possibility that everyone else tuning in is enthralled by his prose, while the active participants can’t turn to the room and say, “Show of hands: Is this guy a lunatic or what?”
The crank exploits a crucial disparity in the flow of information: while we conventionally think of threaded discussions as two-way systems, for the lurkers that flow follows a one-way path. They hear us talking, but we hear nothing of them: no laughs, no hisses, no restless stirring, no snores, no rolling eyeballs. When you factor in the lurkers, a threaded discussion turns out to be less interactive than a traditional face-to-face lecture, and significantly less so than a conversation around a dinner table, where even the most reticent participants contribute with gestures and facial expressions. Group conversations in the real world have an uncanny aptitude for reaching a certain kind of homeostasis: the conversation moves toward a zone that pleases as much of the group as possible and drowns out voices that offend. A group conversation is a kind of circuit board, with primary inputs coming from the official speakers, and secondary inputs coming from the responses of the audience and other speakers. The primary inputs adjust their signal based on the secondary inputs of group feedback. Human beings—for reasons that we will explore in the final section—are exceptionally talented at assessing the mental states of other people, both through the direct exchanges of spoken language and the more oblique feedback mechanisms of gesture and intonation. That two-way exchange gives our face-to-face group conversations precisely the flexibility and responsiveness that Wiener found lacking in mass communications.
I suspect Wiener would immediately have understood the virtual community’s problem with cranks and lurkers. Where the Flowers affair was a case of runaway positive feedback, the tyranny of the crank results from a scarcity of feedback: a system where the information flows are unidirectional, where the audience is present and at the same time invisible. These liabilities run parallel to the problems of one-way linking that we saw in the previous chapter. Hypertext links and virtual communities were supposed to be the advance guard of the interactive revolution, but in a real sense they only got halfway to the promised land. (Needless to say, the ants were there millions of years ago.) And if the cranks and obsessive-compulsives flourish in a small-scale online community of several thousand members, imagine the anarchy and noise generated by a million community members. Surely there is a “climax stage” on that scale where the online growth turns cancerous, where the knowable community becomes a nightmare of overdevelopment. If feedback couldn’t help regulate the digital villages of early online communication, what hope can it possibly have on the vast grid of the World Wide Web?
* * *
The sleepy college town of Holland, Michigan, might seem like the last place you’d expect to generate a solution for the problem of digital sprawl, but the Web has never played by the rules of traditional geography. Until recent years, Holland had been best known for its annual tulip festival. But it is increasingly recognized as the birthplace of Slashdot.org—the closest thing to a genuinely self-organizing community that the Web has yet produced.
Begun as a modest bulletin board by a lifetime Hollander named Rob Malda, Slashdot came into the world as the ultimate in knowable communities: just Malda and his friends, discussing programming news, Star Wars rumors, video games, and other geek-chic marginalia. “In the beginning, Slashdot was small,” Malda writes. “We got dozens of posts each day, and it was good. The signal was high, the noise was low.” Before long, though, Slashdot caught the rising tsunami of Linux and the Open Source movement and found itself awash in thousands of daily visitors. In its early days, Slashdot had felt like the hill towns of ECHO and the Well, with strong leadership coming from Malda himself, who went by the handle Commander Taco. But the volume of posts became too much for any single person to filter out the useless information. “Trolling and spamming became more common,” Malda says now, “and there wasn’t enough time for me to personally keep them in check and still handle my other responsibilities.”
Malda’s first inclination was to create a Slashdot elite: twenty-five handpicked spam warriors who would sift through the material generated by the community, eliminating irrelevant or obnoxious posts. While the idea of an elite belonged to a more hierarchical tradition, Malda endowed his lieutenants with a crucial resource: they could rate other contributions, on a scale of -1 to 5. You could browse through Slashdot.org with a “quality filter” on, effectively telling the software, “Show me only items that have a rating higher than 3.” This gave his lieutenants a positive function as well as a negative one. They could emphasize the good stuff and reward users who were productive members of the community.
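In code, the reader’s side of that arrangement is almost trivial, which is part of its elegance. The sketch below invents a post structure for illustration (Slashdot’s real schema is certainly different); only the mechanic of the threshold matters:

```python
# An invented post structure, for illustration only; Slashdot's real
# schema is certainly different. Only the threshold mechanic matters.
posts = [
    {"author": "crank99",  "score": -1, "text": "WAKE UP SHEEPLE..."},
    {"author": "lurker42", "score": 1,  "text": "me too"},
    {"author": "helpful",  "score": 4,  "text": "Here's a working patch..."},
    {"author": "insight",  "score": 5,  "text": "The license question turns on..."},
]

def browse(posts, threshold=3):
    """Show only contributions the moderators rated above the threshold."""
    return [p for p in posts if p["score"] > threshold]

for post in browse(posts, threshold=3):
    print(post["author"], "->", post["text"])
# Only 'helpful' and 'insight' survive the level-3 filter.
```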
Soon, though, Slashdot grew too large for even the elites to manage, and Malda went back to the drawing board. It was the kind of thing that could only have happened on the Web. A twenty-two-year-old college senior, living with a couple of buddies in a low-rent house—affectionately dubbed Geek House One—in a nondescript Michigan town, creates an intimate online space for his friends to discuss their shared obsessions, and within a year fifty thousand people each day are angling for a piece of the action. Without anything resembling a genuine business infrastructure, much less a real office, Malda needed far more than his twenty-five lieutenants to keep the Slashdot community from descending into complete anarchy. But without the resources to hire a hundred full-time moderators, Slashdot appeared to be stuck at the same impasse that Mumford had described thirty years before: stay small and preserve the quality of the original community; keep growing and sacrifice everything that had made the community interesting in the first place. Slashdot had reached its “climax stage.”
What did the Commander do? Instead of expanding his pool of special authorized lieutenants, he made everyone a potential lieutenant. He handed over the quality-control job to the community itself. His goals were relatively simple, as outlined in the Frequently Asked Questions document on the site:
1. Promote quality, discourage crap.
2. Make Slashdot as readable as possible for as many people as possible.
3. Do not require a huge amount of time from any single moderator.
4. Do not allow a single moderator a “reign of terror.”
Together, these objectives define the parameters of Slashdot’s ideal state. The question for Malda was how to build a homeostatic system that would naturally push the site toward that state without any single individual being in control. The solution that he arrived at should be immediately recognizable by now: a mix of negative and positive feedback, structured randomness, neighbor interactions, and decentralized control. From a certain angle, Slashdot today resembles an ant colony. From another, it looks like a virtual democracy. Malda himself likens it to jury duty.
Here’s how it works: If you’ve spent more than a few sessions as a registered Slashdot user, the system may on occasion alert you that you have been given moderator status (not unlike a jury summons arriving in your mailbox). As in the legal analogy, moderators only serve for a finite stretch of time, and during that stretch they have the power to rate contributions made by other users, on a scale of -1 to 5. But that power diminishes with use: each moderator is endowed only with a finite number of points that he or she can distribute by rating user contributions. Dole out all your ratings, and your tenure as a moderator comes to an end.
Those ratings coalesce into something that Malda called karma: if your contributions as a user are highly rated by the moderators, you earn karma in the system, giving you special privileges. Your subsequent posts begin life at a higher rating than usual, and you are more likely to be chosen as a moderator in future sessions. This last privilege exemplifies meta-feedback at work, the ratings snake devouring its own tail: moderators rate posts, and those ratings are used to select future moderators. Malda’s system not only encouraged quality in the submissions to the site; it also set up an environment where community leaders could naturally rise to the surface. That elevation was specifically encoded in the software. Accumulating karma on Slashdot was not just a metaphor for winning the implicit trust of the Slashdot community; it was a quantifiable number. Karma had found a home in the database.
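Here is one way those interlocking rules might be sketched. The point values, the karma threshold, and the coin flip standing in for a moderator’s judgment are all invented (Slashdot’s actual code surely draws the lines differently), but the loop itself is the one Malda describes: ratings spend scarce moderator points, accrue as karma, and feed back into the selection of future moderators.

```python
import random

class Community:
    """Toy Slashdot: moderators spend a finite budget of points rating
    posts; well-rated authors accrue karma, which boosts their future
    posts and their odds of being picked to moderate next time."""

    def __init__(self, users):
        self.karma = {u: 0 for u in users}
        self.posts = []

    def submit(self, author, text):
        bonus = 1 if self.karma[author] >= 5 else 0  # good karma starts higher
        self.posts.append({"author": author, "text": text, "score": 1 + bonus})

    def pick_moderators(self, k=2):
        # Meta-feedback: past ratings (karma) weight the jury summons.
        users = list(self.karma)
        weights = [1 + max(self.karma[u], 0) for u in users]
        return random.choices(users, weights=weights, k=k)

    def moderate(self, moderator, points=5):
        for post in self.posts:
            if points == 0:  # ratings are scarce: budget spent, tenure over
                break
            if post["author"] == moderator:
                continue  # no rating your own posts
            delta = random.choice([-1, 1])  # a coin flip stands in for judgment
            post["score"] = max(-1, min(5, post["score"] + delta))
            self.karma[post["author"]] += delta
            points -= 1

site = Community(["alice", "bob", "crank99"])
site.submit("alice", "benchmarks for the new kernel patch")
site.submit("crank99", "FIRST POST!!!")
for mod in site.pick_moderators():
    site.moderate(mod)
print(site.posts)
print(site.karma)
```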
Malda’s point system brings to mind the hit points of Dungeons & Dragons and other classics of the role-playing genre. (That the Slashdot crowd was already heavily versed in the role-playing idiom no doubt contributed a great deal to the rating system’s quick adoption.) But Malda had done something more ambitious than simply porting gaming conventions to the community space. He had created a kind of currency, a pricing system for online civics. By ensuring that the points would translate into special privileges, he gave them value. By making one’s moderation powers expendable, he created the crucial property of scarcity. With only one or the other, the currency is valueless; combine the two, and you have a standard for pricing community participation that actually works.
The connection between pricing and feedback is itself more than a metaphor. As a character in Jane Jacobs’s recent Socratic dialogue, The Nature of Economies, observes: “Adam Smith, back in 1775, identified prices of goods and rates of wages as feedback information, although of course he didn’t call it that because the word feedback was not in the vocabulary at the time. But he understood the idea. . . . In his sober way, Smith was clearly excited about the marvelous form of order he’d discovered, as well he should have been. He was far ahead of naturalists in grasping the principle of negative feedback controls.”
Malda himself claims that neither The Wealth of Nations nor The Dungeon Master’s Guide was heavy in his thoughts in Geek House One. “There wasn’t really anything specific that inspired me,” Malda says now. “It was mostly trial and error. The real influence was my desire to please users with very different expectations for Slashdot. Some wanted it to be Usenet: anything goes and unruly. Others were busy people who only wanted to read three to four comments a day.” You can see the intelligence and flexibility of the system firsthand: visit the Slashdot site and choose to view all the posts for a given conversation. If the conversation is more than a few hours old, you’ll probably find several hundred entries, with at least half of them the work of cranks and spammers. Such is the fate of any Web site lucky enough to attract thousands of posts an hour.
Set your quality threshold to four or five, however, and something miraculous occurs. The overall volume drops precipitously—sometimes by an order of magnitude—but the dozen or two posts that remain will be as stimulating as anything you’ve read on a traditional content site where the writers and the editors are actually paid to put their words and arguments together. It’s a miracle not so much because the quality is lurking there somewhere in the endless flood of posting. Rather, it’s a miracle because the community has collectively done such an exceptional job at bringing that quality to light. In the digital world, at least, there is life after the climax stage.
* * *
Slashdot is only the beginning. In the past two years, user ratings have become the kudzu of the Web, draping themselves across pages everywhere you look. Amazon had long included user ratings for all the items in its inventory, but in 1999 it began to let users rate the reviews of other users. An ingenious site called Epinions cultivates product reviews from its audience and grants “trust” points to contributors who earn the community’s respect. The online auction system of eBay utilizes two distinct feedback mechanisms layered on top of each other: the price feedback of the auction bids coupled to the user ratings that evaluate buyers and sellers. One system tracks the value of stuff; the other tracks the value of people.
Indeed, the adoption rate for these feedback devices is accelerating so rapidly that I suspect in a matter of years a Web page without a dynamic rating system attached will trigger the same response that a Web page without hyperlinks triggers today: yes, it’s technically possible to create a page without these features, but what’s the point? The Slashdot system might seem a little complex, a little esoteric for consumers who didn’t grow up playing D&D, but think of the millions of people who learned how to use a computer for the first time in the past few years, just to get e-mail or to surf the Web. Compared to that learning curve, figuring out the rules of Slashdot is a walk in the park.
And rules they are. You can’t think of a system like the one Malda built at Slashdot as a purely representational entity, the way you think about a book or a movie. It is partly representational, of course: you read messages via the Slashdot platform, and so the components of the textual medium that Marshall McLuhan so brilliantly documented in The Gutenberg Galaxy are on display at Slashdot as well. Because you are reading words, your reception of the information behind those words differs from what it would have been had that information been conveyed via television. The medium is still the message on Slashdot—it’s just that there’s another level to the experience, a level that our critical vocabularies are only now finding words for.
In a Slashdot-style system, there is a medium, a message, and an audience. So far, no different from television. The difference is that those elements exist alongside a set of rules that govern the way the messages flow through the system. “Interactivity” doesn’t do justice to the significance of this shift. A button that lets you e-mail a response to a published author; a tool that lets you build your own home page; even a collection of interlinked pages that let you follow your own path through them—these are all examples of interactivity, but they’re in a different category from the self-organizing systems of eBay or Slashdot. Links and home-page-building tools are cool, no question. But they are closer to a newspaper letters-to-the-editor page than Slashdot’s collective intelligence.
First-generation interactivity may have given the consumer a voice, but systems like Slashdot force us to accept a more radical proposition: to understand how these new media experiences work, you have to analyze the message, the medium, and the rules. Think of those thousand-post geek-Dionysian frenzies transformed into an informative, concise briefing via the Slashdot quality filters. What’s interesting here is not just the medium, but rather the rules that govern what gets selected and what doesn’t. It’s an algorithmic problem, then, and not a representational one. It is the difference between playing a game of Monopoly and hanging a Monopoly board on your wall. There are representational forces unleashed by a game of Monopoly (you have to be able to make out the color coding of the various properties and to count your money), but what makes the game interesting—indeed, what makes it a game at all—lies in the instruction set that you follow while playing. Slashdot’s rules are what make the medium interesting—so interesting, in fact, that you can’t help thinking they need their own category, beyond message and medium.
Generically, you can describe those rules as a mix of positive and negative feedback pushing the system toward a particular state based on the activities of the participants. But the mix is different every time. The edge cities of Paul Krugman’s model use feedback to create polycentric clusters, while other metropolitan systems collapse into a single, dense urban core. The networks of CNN-era television have engendered runaway positive feedback loops such as the Gennifer Flowers story, while a system like Slashdot achieves homeostatic balance, at least when viewed at level 5. Different feedback systems produce different results—even when those systems share the same underlying medium. In the future, every Web site may well be connected to a rating mechanism, but that doesn’t mean all Web sites will behave the same way. There may be homeostasis at Slashdot’s level 5, but you can always choose to read the unfiltered, anarchic version at level -1.
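A few lines of code make the point about thresholds. The scores below are invented, but the mechanism is the one just described: the same comment pool, read through two different rules, yields two different systems.

```python
# Same medium, different rules: one comment pool read at two thresholds.
# The scores are hypothetical; the point is that the filter, not the
# content, determines what kind of system you experience.

comments = [("useful howto", 5), ("good point", 3), ("me too!", 1),
            ("FIRST POST", -1), ("offtopic rant", 0)]

def view(pool, threshold):
    return [text for text, score in pool if score >= threshold]

print(view(comments, 5))    # the homeostatic briefing: ['useful howto']
print(view(comments, -1))   # the unfiltered anarchy: all five comments
```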
Is there a danger in moving to a world where all our media responds directly to user feedback? Some critics, such as The Control Revolution’s Andrew Shapiro, worry about the tyranny of excessive user personalization, as in the old Nicholas Negroponte vision of the Daily Me, the newspaper perfectly custom-tailored to your interests—so custom-tailored, in fact, that you lose the serendipity and surprise that we’ve come to expect from reading the newspaper. There’s no stumbling across a different point of view, or happening upon an interesting new field you knew nothing about—the Daily Me simply feeds back what you’ve instructed the software to find, and nothing more. It’s a mind-narrowing experience, not a mind-expanding one. That level of personalization may well be around the corner, and we’ll take a closer look at its implications in the conclusion. But for now, it’s worth pointing out that the Slashdot system is indifferent to your personal interests—other than your interest in a general level of quality. The “ideal state” that the Slashdot system homes in on is not defined by an individual’s perspective; it is defined by the overall group’s perspective. The collective decides what’s quality and what’s crap, to use Rob Malda’s language. You can tweak the quality-to-crap ratio based on your individual predilections, but the ratings themselves emerge through the actions of the community at large. It’s more groupthink than Daily Me.
Perhaps, then, the danger lies in too much groupthink. Malda designed his system to evaluate submissions based on the average Slashdot reader—although the karma points tend to select moderators who have a higher-than-average reputation within the community. It’s entirely possible that Malda’s rules have created a tyranny of the majority at Slashdot, at least when viewed at level 5. Posts that resonate with the “average” Slashdotter are more likely to rise to the top, while posts that express a minority viewpoint may be demoted in the system. (Technically, the moderation guidelines suggest that users should rate posts based purely on quality, not on whether they agree with the posts, but the line is invariably a slippery one.) From this angle, then, Slashdot bears a surprising resemblance to the old top-down universe of pre-cable network television. Both systems have a heavy center that pulls content toward the interests of the “average user”—like a planet pulling satellites into its orbit. In the days before cable fragmentation, the big three networks were competing for the entire television-owning audience, which encouraged them to serve up programming designed for the average viewer rather than for a particular niche. (McLuhan observed how this phenomenon was pushing the political parties toward the center as well.) The network decision to pursue the center rather than the peripheries was made at the executive level, of course—unlike at Slashdot, where the centrism comes from below. But if you’re worried about suppressing diversity, it doesn’t really matter whether it comes from above or below. The results are the same, either way. Majority viewpoints get amplified, while minority viewpoints get silenced.
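The centripetal tendency is easy to see once the selection rule is spelled out. This sketch uses invented rating histories; the rule it encodes, promote whoever earns the highest average rating, is the heavy center in algorithmic form.

```python
# A sketch of the centripetal selection rule: moderators drawn from the
# members whose posts earn the highest average ratings. Data invented.
from statistics import mean

history = {
    "alice": [5, 5, 4, 5],    # reliably pleases the average reader
    "bob":   [3, 3, 3, 3],
    "carol": [5, -1, 5, -1],  # polarizing: loved and loathed
}

def pick_moderators(ratings, n=1):
    return sorted(ratings, key=lambda m: mean(ratings[m]), reverse=True)[:n]

print(pick_moderators(history))  # ['alice']: the heavy center selects itself
```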
This critique showcases why we need a third term beyond medium and message. While it’s true that Slashdot’s filtering software creates a heavy center, that tendency is not inherent to the Web medium, or even the subset of online communities. You could just as easily build a system that would promote both quality and diversity, simply by tweaking the algorithm that selects moderators. Change a single variable in the mix, and a dramatically different system emerges. Instead of picking moderators based on the average rating of their posts, the new system picks moderators whose contributions have triggered the greatest range of responses. In this system, a member who was consistently rated highly by the community would be unlikely to be chosen as a moderator, while a member who inspired strong responses either way—both positive and negative—would be first in line to moderate. The system would reward controversial voices rather than popular ones. You’d still have moderators deleting useless spam and flamebait, and so the quality filters would remain in place. But the fringe voices in the community would have a stronger presence at level 5, because the feedback system would be rewarding perspectives that deviate from the mainstream, that don’t aim to please everyone all the time. The cranks would still be marginalized, assuming their polemics annoyed almost everyone who came across them. But the thoughtful minorities—the ones who attract both admirers and detractors—would have a place at the table.
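Here is that one-variable tweak, sketched with the same invented rating histories as above, plus a crank for contrast. Candidates are ranked by the spread of the reactions they provoke rather than by their average, with a minimum-average floor (a hypothetical parameter) standing in for the quality filters that keep the cranks marginalized.

```python
# The one-variable tweak: rank would-be moderators by the range of
# responses their posts trigger, not by their average rating.
from statistics import mean, pstdev

history = {
    "alice": [5, 5, 4, 5],
    "bob":   [3, 3, 3, 3],
    "carol": [5, -1, 5, -1],   # attracts both admirers and detractors
    "dave":  [-1, -1, -1, 0],  # a crank: annoys nearly everyone
}

def pick_diverse_moderators(ratings, n=1, floor=2.0):
    # Exclude members whose average falls below a floor, so the cranks
    # stay marginalized; then reward the widest spread of response.
    eligible = [m for m in ratings if mean(ratings[m]) >= floor]
    return sorted(eligible, key=lambda m: pstdev(ratings[m]), reverse=True)[:n]

print(pick_diverse_moderators(history))  # ['carol'], not 'alice' or 'dave'
```

One swapped sort key, and the thoughtful minority rises while the crank stays at the bottom: a dramatically different community from a nearly identical instruction set.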
There’s no reason why centrist Slashdot and diverse Slashdot can’t coexist. If you can adjust the quality filters on the fly, you could just as easily adjust the diversity filters. You could design the system to track the ratings of both popular and controversial moderators; users would then be able to view Slashdot through the lens of the “average” user on one day, and through the lens of a more diverse audience the next. The medium and the message remain the same; only the rules change from one system to the other. Adjust the feedback loops, and a new type of community appears on the screen. One setting gives you Gennifer Flowers and cyclone-style feeding frenzies, another gives you the shapeless datasmog of Usenet. One setting gives you an orderly, centrist community strong on shared values, another gives you a multiculturalist’s fantasy. As Wiener recognized a half century ago, feedback systems come in all shapes and sizes. When we come across a system that doesn’t work well, there’s no point in denouncing the use of feedback itself. Better to figure out the specific rules of the system at hand and start thinking of ways to wire it so that the feedback routines promote the values we want promoted. It’s the old sixties slogan transposed into the digital age: if you don’t like the way things work today, change the system.
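And here is the coexistence the paragraph describes, in sketch form: the community's ratings stay fixed, while each reader chooses which lens, centrist or diverse, orders the page today. The function name and the data are invented.

```python
# Two lenses over one fixed set of community ratings: the reader picks
# the rule, and a different front page appears on the screen.
from statistics import mean, pstdev

def rank(ratings, lens="centrist"):
    key = (lambda m: mean(ratings[m])) if lens == "centrist" \
          else (lambda m: pstdev(ratings[m]))
    return sorted(ratings, key=key, reverse=True)

history = {"alice": [5, 5, 4, 5], "bob": [3, 3, 3, 3], "carol": [5, -1, 5, -1]}
print(rank(history, "centrist"))  # ['alice', 'bob', 'carol']
print(rank(history, "diverse"))   # ['carol', 'alice', 'bob']
```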