“What was the Stasi’s most successful active measure?” I asked. I was sitting in the small, cramped office of Georg Herbstritt, a historian in Germany’s vast agency in charge of overseeing the old Stasi files. Herbstritt’s employer has an appropriately unwieldy German name that stretches over three lines, and is therefore known as the BStU.1 East Germany’s infamous and humongous Ministry of State Security, the MfS, created an unimaginable amount of paper over its forty years of existence. The BStU’s archives hold 111 kilometers of written material in fourteen different locations, serviced by a staff of more than 1,400 people. By 1988, the Stasi had more than 90,000 full-time employees,2 with an additional 175,000 “unofficial collaborators.”3
The MfS was perhaps the most formidable spying machine the world has ever seen. The agency collected samples of its enemies’ body odor from chairs and sofas on which unsuspecting victims had been sitting. An internal organizational chart even put at least one analyst in charge of human “excrements.” Some of the disinformation work of the HVA, the MfS’s foreign intelligence arm, was so well crafted that it put the KGB’s far bigger First Chief Directorate to shame.
Since Department X, the HVA’s disinformation unit, represented the acme of Cold War active measures, I was keen to hear what Herbstritt considered its crowning operation. “Well,” he said, without having to think very long, “the operation of April 1972.” Herbstritt was referring to the moment when the X engineered the outcome of West Germany’s first parliamentary vote of no confidence. In 1991, when some of the former X officers started to speak out publicly, they, too, pointed to the feat of April 1972 as their showpiece.
As I sat in the BStU office, not far from the TV tower at Alexanderplatz, I was reminded of my student days. I had moved to East Berlin in the mid-1990s, to study at Humboldt University. Whenever I entered the university’s main building through its majestic entrance on Unter den Linden, there gleamed Karl Marx’s inscription, in brass letters set on a solid red marble wall: “Philosophers have only interpreted the world. The point, however, is to change it.”
Ten days before my BStU meeting, I rented a car and drove out to Kyritz, a sleepy town in the beautiful, lake-dotted countryside of Brandenburg, to meet with Horst Kopp. In April 1972, Kopp had managed to trick a conservative member of Parliament into defying his own party’s whip in that historic vote, luring the MP into the false belief that he was helping the Americans rather than aiding the enemy.
I knew I was a suspicious West German to Kopp (he would immediately place my accent). Worse, he knew that I was coming in from London, that old den of spy intrigue. I needed to break the ice. He offered me coffee in his modest living room. I told him that I had studied at Humboldt, and that I used to live in Prenzlauer Berg in East Berlin, a neighborhood now known as the Brooklyn of Berlin. He wanted to know what street. Immanuelkirchstraße, I told him, and said I remembered carrying up heavy tin buckets full of coal briquettes to make a fire in the morning, and that we showered in a tiny plastic box in the kitchen, warming our cold hands over the gas stove. His eyes lit up.
“Ah, I had a KW on that street,” Kopp said. Thankfully, I had learned some Stasi jargon by then: KW was short for konspirative Wohnung, or “conspiratorial apartment.” The HVA used these secret apartments to conspire with collaborators—perhaps politicians or authors or journalists visiting from West Berlin—and to work on “constructions,” as the X called the forgeries used in active measures.4
It felt strange. I liked the old man—he was charming and quick and strangely honest. For more than two hours he told me details of his work in the HVA, including personal anecdotes that were surely unpleasant for him; described in detail his colleagues and the spies he ran; and readily admitted when he did not know an answer or could not remember something specific. I reminded myself that he had been one of the most effective handlers in one of the most effective spy agencies, and that being witty and likable had been a key part of his job.
Kopp’s soft Eastern accent, the description of my old neighborhood in Berlin, and talk of “constructions” brought back my student-age fascination with social constructions—with epistemology, with the history of science, with postmodern philosophy and constructivism. An idea flashed across my mind as I drove back through serene Brandenburg. Was it possible that my own apartment had been a “KW” just a few years before I moved in? Was Kopp perhaps designing “operative constructions” in the same building, just a few years before I sat there by the kitchen window reading about philosophical constructions? Was he perhaps changing the world while I only interpreted it?
I started looking at disinformation in a new light. The more I did, the more active measures spooked me.
The postwar decades had exposed a cultural tension within truth itself—or rather, between two common understandings of truth that stand in permanent opposition to each other. One is positivist and analytical: something is true when it is accurate and objective, when it lines up with observation, when it is supported by facts, data, or experiments. It orients itself in the present, not in a distant, mythical past or an unknowable future. Truth, in this classic sense, is inherently apolitical. Truthful observations and facts form the foundation of agreement, not conflict. Analytic truth bridges divides and brings opposing views together. Professionals such as scientists, investigative journalists, forensic investigators, and intelligence analysts rely on a set of shared norms designed to value cold, sober evidence over hot, emotional rhetoric. Changing one’s position in response to new data is a virtue, not a weakness.
But there has always been another truth, one that corresponds to belief, not facts. Something is true when it is right: when it is backed by gospel, rooted in scripture, anchored in ideology, when it lines up with values. This truth is grounded in some distant past or future. Truth, in this sense, is relative to a specific community with shared values, and thus inherently political. This truth is preached from a pulpit, not tested in a lab. Its style of delivery is hot, passionate, and emotional, not cold, detached, and sober. Changing one’s position is a weakness. This kind of truth tends to confirm and lock in long-held views, and to divide along tribal and communal lines.
These two forms of truth are, of course, exaggerations, ideals, clichés. The distinction is coarse and simplistic—nevertheless, it helps explain the logic of disinformation. The goal of disinformation is to engineer division by putting emotion over analysis, division over unity, conflict over consensus, the particular over the universal. For, after all, a democracy’s approach to truth is not simply an epistemic question, but an existential one. Putting objectivity before ideology contributed to opening societies, and to keeping them open. Putting ideology before objectivity, by contrast, contributed to closing societies, and to keeping them closed. It is therefore no coincidence that objectivity was under near-constant assault in the ideologically torn twentieth century.
Ideological certainty and a feeling of epistemic superiority helped reinterpret the factual in unexpected ways. Already by the late 1950s, intelligence forgeries served a larger ideological truth—for example, that the United States and its aggressive NATO alliance, armed to the teeth with nuclear missiles, were the imperialist, capitalist oppressors. Forgeries did not necessarily distort this truth, but articulated it more clearly. “No reporter of any democratic press could have depicted the true backstory of the Eisenhower Doctrine in a more unvarnished way than the oil magnate himself,” wrote Neues Deutschland, East Germany’s state outlet, in its introduction to the 1957 Rockefeller forgery.5 The publishers of Neues Deutschland saw the United States as a capitalist, interest-driven superpower. Another example: in the summer of 1969, the two British peace journals Peace News and Sanity dismissed the question of whether a leaked American war plan was forged, because it was “near enough to the truth.” Forgeries were like a novel that spelled out a political utopia with gleaming clarity, or a modernist painting that perfectly articulated an aesthetic form: an artificial vehicle custom-designed to communicate a larger truth.
As I thought about Kopp, I wondered: What was the difference between his operational constructions and my philosophical ones? Was I falling for some active measure myself as I read postmodern philosophy by the window in my very own KW?
The 1960s were a critical moment in this assault on the factual, and not only for intelligence operations. It was a decade of reckoning with the harsh legacy of World War II, of decolonization, the Holocaust, the wars in Algeria and Vietnam, and with the looming destruction of humanity in a global nuclear cataclysm that seemed only hours away at any moment. The 1960s therefore witnessed a major political, cultural, artistic, and intellectual upheaval, at the heart of which was nothing less than the nature of facts themselves. Several different strains of twentieth-century philosophy and art took issue with what they considered to be a naïve “correspondence theory” of truth: facts weren’t immutable, according to the intellectual avant-garde; they were rooted in culture, language, systems of signs, collective perceptions, and discourse, not in some fixed structure of an independent reality. This avant-garde shunned “positivism,” “structuralism,” and “realism,” and instead examined—or “deconstructed”—how facts were created, socially constructed, scientifically built, and put to use. This new approach felt empowering, and it was. By the 1970s, postmodern thought had become more widespread on campuses, although largely confined to the humanities, to art, film, literature, and perhaps architecture. Most academic critical theorists were, however, only studying and deconstructing the “practices” of knowledge production to shape intellectual discourse, to interpret the world. Meanwhile, in the shadows, intelligence agencies were actually producing knowledge, constructing new artifacts, and shaping discourse to serve their tactical or strategic purposes—changing the world as they went.
In 1962, the KGB upgraded Department D to Service A, and ordered intelligence agencies across the Eastern bloc to follow its lead. “A” soon came to stand for active measures. One purpose of this name change, and of this new term of art, was to overcome a counterproductive focus on facts—and indeed on non-facts. What made an active measure active was not whether a construction resonated with reality, but whether it resonated with emotions, with collectively held views in the targeted community, and whether it managed to exacerbate existing tensions—or, in the jargon of Cold War operators, whether it succeeded in strengthening existing contradictions.
Shortly after defecting from Czechoslovak state security, Ladislav Bittman testified on disinformation before the U.S. Senate Committee on the Judiciary. Bittman explained why disinformation worked again and again: “Politicians or journalists wanted to believe in that disinformation message,” he told the Senate. “They confirmed their opinion.”6 Just five months earlier, Michel Foucault had delivered his landmark inaugural lecture, “The Order of Discourse,” at the Collège de France. The iconic French philosopher and social critic considered “the opposition between true and false” a long-established, power-wielding system of exclusion, one that he now revealed for what it was: historical, arbitrary, modifiable, and violent.7 I had been reading Foucault in Prenzlauer Berg in the mid-1990s, and after my conversation with Kopp in Brandenburg, I recalled some of what I’d read. Foucault was breaking down the barrier between analytical truth and ideological truth; so were Agayants and Wagenbreth.
Could this eerie convergence of Eastern spycraft and Western thought really be just a coincidence?
It took a special kind of person to work in disinformation, on both sides of the Iron Curtain. Spotting weakness in adversarial societies, seeing cracks and fissures and political tensions, recognizing exploitable historical traumas, and then writing a forged pamphlet or letter or book—all of this required officers with unusual minds. Intelligence agencies that prized secrecy, military precision, and hierarchy had to find and cultivate individuals with an opposite skill set: free and unconventional thinkers, bookworms, writers, perceptive publicists with an ability to comprehend foreign cultures. Disinformation specialists even needed a certain playful quality of mind, and had to enjoy exploring and exploiting contradictions. The best disinformation operators, Kopp told me, were internal rebels. One of the HVA’s best men would sometimes “not do any work for two days, or just read or something,” but then, all of a sudden, deliver a brilliant forged manuscript.8 Active measures attracted and required precisely those creative minds who were in touch with the intellectual zeitgeist. As if to illustrate the point, Bittman, after his defection from East to West, became a modernist painter.
The St. Petersburg trolls were a far cry from the professionals of Service A and the X, but even they appeared to sense this convergence. One member of the American Department called the IRA’s work “postmodernism in the making,” adding that it reminded him of “Dadaism, and surrealism.”9
So what can postmodernism tell us about the history of operational constructions?
First, that disinformation works, and in unexpected ways. The fine line between fact and forgery may be clear in the moment an operator or an intelligence agency commits the act of falsification—for example, in the moment when a fake paragraph is inserted into an otherwise genuine document, or when an unwitting influence agent is lured into casting a parliamentary vote under false pretenses, or when a bogus online account invites unwitting users to join a street demonstration or shares extremist posts. But fronts, forgeries, and fakes don’t stop there. Active measures will shape what others think, decide, and do—and thus change reality itself. When victims read and react to forged secret documents, their reaction is real. When the ballots of an influenced parliamentary vote are counted, the result is real. When social media users gather in the streets following a bogus event invitation, the demonstration is real. When readers start using racial epithets offline, their views are real. These measures are active in the sense that operations actively and immediately change views, decisions, and facts on the ground, in the now.
Second, disinformation works against itself, and again in unexpected ways. Intelligence agencies and other disinformation actors were, again and again, affected by their own constructions. It’s not that analysts simply believed their own lies; it’s that operators, driven by professional education as well as bureaucratic logic, tended to overstate rather than understate the value of their own disinformation work. Analysts would write after-action reviews and project memos that justified their efforts in terms that were clearer and more convincing than what had happened on the ground, where cause and effect remained entangled by design—exacerbating existing fissures and cracks, tapping into existing grievances, or enhancing existing activism—all of which meant that engineered effects were very difficult to isolate from organic developments. Yet specialized intelligence units had, and will continue to have, metrics and data at the ready to support their past projects and future budget authorization requests—balloons launched, protesters counted, forgeries printed, packages mailed, letters received, press stories clipped, or downloads and shares and likes and page views logged. Some disinformers of old had long understood this problem: “I don’t think it’s possible to measure exactly, realistically, the impact of an active measure,” Bittman told me in March 2017, adding that there was always a degree of guessing. “You have no reliable measurement device,” he said.10 Active measures, in short, were nearly impossible to measure by design.
Disinformation about disinformation worsened over time. A one-off disinformation event is unlikely to achieve a given goal. By the early 1960s, some operations had begun to spread out into entire campaigns that could go on for many years, even decades. As more years and decades passed, many subtle lines that once may have demarcated fact from forgery faded until they eventually disappeared entirely. Thus, forged and engineered effects mixed with, and solidified into, actual, observable effects—like a liquid cement mix setting and turning into a firm concrete foundation. With the passing of time, reverse-engineering the delicate construction process became harder and harder.
Then came the internet, with the hacking and dumping of large volumes of data and with social media influence campaigns. Higher numbers and refined, real-time online metrics did not make the measurement devices more reliable, but less so. Higher numbers merely translated into higher perceived confidence in assessments, creating an even more seductive illusion of metrics. “Measuring the actual impact of trolling and online influence campaigns is probably impossible,” said Kate Starbird, one of the world’s leading researchers of online disinformation campaigns, who examined the influence of digital disinformation on the Black Lives Matter movement. “But the difficulty of measuring impact doesn’t mean that there isn’t meaningful impact,” she added.11 Online engagement figures can be staggering, and bureaucratic politics can make them more staggering still. One New York Times headline in late 2017 stated, “Russian Influence Reached 126 Million Through Facebook Alone.”12 In reality, the preelection reach of the Internet Research Agency was far smaller, for two reasons: only about 37 percent of the impressions Facebook counted occurred before November 9, 2016 (the rest came after), and “impressions” are not engagements, merely what a user may have scrolled past, perhaps absentmindedly. Facebook was then under intense political pressure, and its analysts and executives decided to be as liberal as they could with the data, providing Congress with the upper limit of an estimate, for fear of being accused of lowballing the problem afterward. Many old-school journalists covering what they thought were scandalous social media figures, in turn, were either unable or unwilling to assess the data on their merits, or in the context of a history that had largely been forgotten. Online metrics, in short, created a powerful illusion, an appealing mirage—an opportunity for more, and more convincing, disinformation about disinformation. For willfully exaggerating the effects of disinformation is itself a form of disinformation.
All this is bad news for future historians. Seminars, in-person discussions, and correspondence were always fleeting and rarely archived. Yet the reach of such direct human interactions was limited throughout the twentieth century, and many if not most magazines and published newsletters were archived somewhere. Not so in the early twenty-first century, when secure electronic communications and social media conversations are both more perishable and wider in reach. Even inside large government bureaucracies, more and more memory is lost as screens replace paper, and as files get removed or destroyed. The digital age has upended the way we preserve records, and our collective memory has already begun to corrode more quickly and more thoroughly as a result. It will therefore be even more difficult to study and reconstruct the impact of active measures in the future. The internet, contrary to a popular misconception, forgets every day, especially on ephemeral social media platforms. Suspending accounts for coordinated inauthentic behavior, for example, means hiding the main records of that behavior, and potentially assisting adversaries in hiding their tracks. Accurately gauging impact becomes harder; understating and overstating impact become easier. Active measures will thus blur the line between fact and fiction not only in the present, but also in the past, in retrospect.
Third, active measures crack open divisions by closing distinctions. It is very hard to distinguish—for an activist, for the target of an active measures campaign, even for a large organization running its own active measures—between a cunning influence agent on the one hand and a genuine activist on the other. In theory, a person is either a genuine activist or a controlled agitator, but this worldview holds only in the abstract. In practice, one individual can be both genuine and an exploited asset, a witting and an unwitting collaborator at the same time. Was Philip Agee, reportedly at one point a witting KGB collaborator, unwitting when he received a forged leak camouflaged as coming from a legitimate U.S. government whistle-blower? This postmodern problem gets even more convoluted when applied not to an individual but to a group of people. A 50,000-person demonstration may be a genuine expression of political dissatisfaction, as with the demonstrations against NATO ballistic missiles in Germany. Yet a large demonstration can also be exploited, organized, and even funded by an adversarial power with, say, an interest in stopping the deployment of NATO ballistic missiles in Europe—all without undermining the legitimate character of the protest. Other examples are activist platforms and leak projects such as the Fifth Estate, CyberGuerrilla, or WikiLeaks, which can serve foreign operators and genuine activists at the same time, even in the same instance. Active measures are therefore difficult to contain conceptually, with no obvious beginning or end. The problem may not be the quality of the data or the design of the research; the problem may be the quality of an operation and the very design of the “construction” in the first place.
This seeming contradiction is no contradiction, but a core feature of active measures over the past century. Active measures are purpose-built temptations: designed to exaggerate, to invite targets to give in to prejudice and preformed notions—and to erode an open society’s capacity for fact-based, sober debate, thus wearing down the norms and institutions that resolve internal conflict peacefully. This strange postmodern intelligence practice is, confusingly, underdetermined by observable evidence. Saying where an operation ended, and whether it failed or succeeded, requires more than facts; it requires a judgment call, which in practice means a political decision, often a collective one. Therefore, if a targeted community believes that a disinformation campaign was a major success, then it has made it a major success.
Disinformation, finally, is itself disintegrating. Bureaucratically, this degeneration began with the breakup of the old Soviet security establishment and the dissolution of the once-so-formidable spy agencies of the Eastern bloc. The term “active measures” faded, even in Russia, in the early 1990s, as the KGB’s First Chief Directorate became the SVR. The sweeping official history of Russian foreign intelligence acknowledges that over the past century the designations for the same operational activity—disinformation—came and went, from “operational games” to “active measures” to the blander, more recent “support measures.”13
Then came the rise of the internet, which upended the old art and science of disinformation in unexpected ways. Cutthroat media competition and distrust of “opinion factories,” as the Eastern bloc had recognized by mid-century, still worked to the advantage of disinformation operators in the mid-2010s. But the amount of craftsmanship required of disinformation specialists was lower in the twenty-first century than in the twentieth. Digital storage made it possible to breach targets remotely and extract vast amounts of compromising material. The internet facilitated acquiring and publishing unprecedented volumes of raw files, at a distance and anonymously. Automation helped create and amplify fake personas and content, destroy data, and disrupt. Speed meant that operational adaptation and adjustment could take place not over years, months, or weeks—but in days, hours, even minutes. Activist culture meant that existing leak platforms outperformed purpose-created ones. And the darker, more depraved corners of the internet offered teeming petri dishes of vicious, divisive ideas, and guaranteed a permanent supply of fresh conspiracy theories. All this took place while many reporters, worn down by breakneck news cycles, became more receptive to covering leaked, compromising material of questionable provenance, and as publishers recycled unoriginal, repetitive content. The end effect was that a large portion of the disinformation value chain was outsourced to the victim society itself—to journalists, to activists, to freelance conspiracy theorists, and, to a lesser degree, to researchers.
The temptingly obvious conclusion from these trends is that the art and craft of disinformation has become easier—yet such a finding would be misleading. Active measures have become more active and less measured, to such a degree that they are themselves disintegrating—and this disintegration creates a new set of challenges. For the offender, campaigns have become harder to control, harder to contain, harder to steer, harder to manage, and harder to assess. For victims, disinformation campaigns have likewise become more difficult to manage, more difficult to assess in impact, and more difficult to counter. At the beginning of the third decade of the twenty-first century, open and closed societies alike, many thrown into self-doubt and outright identity crises by the rise of the internet and its side effects, are both overstating and, more rarely, understating the threat and the potential of disinformation campaigns—and thus helping expand and escalate that very threat and potential. This constructivist vortex is propelled by an unprecedented confluence of incentives that leads many participants—politicians, journalists, technologists, intelligence analysts, adversarial operators, and most researchers—to highlight the potential of disinformation over its limitations.
Perhaps the most vivid illustration of this trend is the fantastic story of the Shadow Brokers—the devastating NSA leak and the subsequent reuse of U.S. government hacking tools in the Russian NotPetya computer worm, in the words of the White House, the “most destructive and costly” computer network attack in history. That iconic campaign was also a disinformation project. The theft, the gradual and meticulously timed release of files, the weaponization of experts and journalists, and the subsequent destructive redeployment of computer code were designed, carefully planned, and executed with skill and discipline as an active measure—yet it has remained unclear for years who was responsible for the different components of the campaign. Whoever initiated the leak, an insider or a foreign intelligence agency, the Shadow Brokers campaign was an artful masterpiece that illustrated, in its cruel uncertainty, the twisted logic of active measures—irreversibly blurring the line between victim and perpetrator, between observation and participation, between reality and representation.
Just a few weeks before I met him, Horst Kopp had presented his memoir at the Spy Museum in Berlin. I asked him whether any surviving members of the X had come to his book talk. “Well, you know, the Mutz called me one day ahead of the press conference,” Kopp told me. Some Germans have the habit of referring to familiar colleagues by their last name plus the definite article; “the Mutz” was Kopp’s former boss, Wagenbreth’s longtime deputy and the last head of the X. When the phone rang, Kopp did not even recognize the voice of his former boss; they had not spoken since 1985. “Wolfgang here,” the caller said. He wanted to know what Kopp was going to reveal about their deception work. Kopp gave him an overview.
“He told me,” said Kopp, “that they were going to send two people to my book talk.”