The United States and Russia are fighting an undeclared virtual war. It is not a hot war, as between Japan and the United States in World War II. And it is not a cold war, driven by two ideologically incompatible world powers, as between the United States and Soviet Union from the late 1940s through the 1980s. Rather, it is something in between, a shadow war, wherein the combatants attempt to achieve goals that, for much of human history, would have required the direct use of military force or other physical action, but today can be accomplished through less kinetic means. The United States, with its vast “soft power” capabilities and unparalleled technological prowess, has pioneered the use of these new approaches. Moscow, however, has been a diligent student, and it has turned these techniques back on America in Russianized, asymmetric form, attempting to exploit US weaknesses while maximizing the few areas in which Russia has advantage. Unlike the Cold War at its peak, this is a war being waged largely without rules, and it is driven by a different logic. Moreover, in contrast to the shared appreciation of risks and dangers that characterized the US-Soviet relationship after the Cuban missile crisis, neither side recognizes how easily this shadow war could spiral out of control. Understanding the various fronts on which this war is being waged is a critical step toward coping with its dangers.
In September 2017, analysts at the internet security firm Symantec got a firsthand glimpse into the US-Russian shadow war. According to Wired magazine, the analysts were investigating malware infections in the computer networks of American electricity generation facilities when they discovered that hackers had taken screenshots of the system’s human-machine interface, the control panels governing the regional power grid. They were stunned. Dealing with attempted network intrusions was a daily occurrence for cybersecurity analysts, something so routine that it was like clicking through their email in-boxes. But never before had they seen network intruders reach the point where they could send remote commands to circuit breakers, valves, and other power company controls in the United States, allowing them to cut the flow of electricity to homes and businesses. Few things could send a modern society more quickly into panic and chaos than an extended loss of electrical power. The screenshots suggested that the only thing preventing the hackers from flipping a switch that would send the region into darkness was their decision to hold back—for now.1
The analysts could not be sure that the hackers were Russian, but the circumstantial evidence looked damning. Russian hackers had created blackouts in Ukraine in 2015 and 2016 during Moscow’s undeclared war there, timed to take place in winter, when they would cause maximum damage to the nation’s psyche. Each of the Ukrainian blackouts had lasted only for a few hours and had affected only hundreds of thousands, not millions, of people. They were only one part, however, of a massive sustained cyberassault on “practically every sector of Ukraine: media, finance, transportation, military, politics, energy.”2 The United States was much more dependent on digital networks than was Ukraine. The flow of planes, trains, and automobiles increasingly relied on automated transportation networks and satellite-based Global Positioning System (GPS) technology. Public water systems and chemical plants and Wall Street trading and basic business inventory controls were linked to the internet. Even such common household appliances as refrigerators and microwave ovens “talked” over the internet using smart technology embedded inside them. All this technology had made the operations of the modern world faster and more efficient than anything achieved in the past, but it had also made these systems more vulnerable to penetration and disruption by hackers. As a small preview of what cybersabotage could do if directed at the United States, the attacks in Ukraine had sent an ominous message.
There were also more recent reports that the Russians had penetrated the business systems, though not the control panels, of US nuclear power generation facilities—an intrusion dubbed Palmetto Fusion by US government investigators.3 Clearly, the business systems were not the Russians’ ultimate goal; they were attempting to gain control not just of conventional electrical power generation but of nuclear power plants as well. The scale and sophistication of the intrusions, coupled with the targets themselves, made clear that these were not mere novices or “patriotic hackers,” harassing perceived enemies of the Russian state without direction from the government. Nor were they simply knocking on any cyberdoor they encountered, trying to gain entry to all the systems they could willy-nilly. These were professionals, working against specific targets for specific reasons. One other aspect of the intrusion stood out: the Russians were among the world’s best at concealing their cyberoperations when they wanted to, but whoever it was that had penetrated the power grid was not trying very hard to avoid detection.4 It seemed the hackers were sending a not-so-subtle message that they were able to sabotage the power grid whenever they chose.
If that message was meant to prompt Americans to think twice before intruding into Russia’s critical infrastructure, however, it backfired. In fact, reports about Russian cyberintrusions of varying types and severity produced increasing calls in the United States for retaliation in the form of offensive cyberoperations. Convinced that America’s cyberadversaries had not paid a sufficient price for their hacking, the Trump administration announced changes in its 2018 National Cyber Strategy that improved the ability of US Cyber Command to undertake offensive operations.5 Many US experts began to argue that cyberdeterrence based on imposing greater costs on hackers while trying to deny them the benefits of intrusions was not enough. “The United States should be pursuing a more active cyberpolicy, one aimed not at deterring enemies but at disrupting their capabilities. In cyberwarfare, Washington should recognize that the best defense is a good offense,” argued a former Obama administration cyberpolicy official.6 Unlike in the Cold War, the specter of mutually assured destruction did not seem to be inducing mutual restraint in the new era of cyberconflict.
What might be called cybersabotage—the use of computers to destroy, disrupt, or disable a machine or a system—represents a high-technology twist on an old tactic. The term derives from the French word for a peasant’s wooden shoe, a sabot, and evokes the act of throwing a shoe in the gears of a machine to interfere in its functions. States have employed sabotage of various sorts throughout history. The Office of Strategic Services (OSS), the American progenitor of the CIA, conducted numerous secret sabotage operations behind enemy lines during World War II. Its classified Simple Sabotage Field Manual, declassified only decades after the war, highlighted the value of “slashing tires, draining fuel tanks, starting fires, starting arguments, acting stupidly, short-circuiting electric systems, [and] abrading machine parts” for disrupting and demoralizing the enemy.7 The Soviet Union’s intelligence services were enthusiastic saboteurs, both during and after World War II. In Special Tasks: The Memoirs of an Unwanted Witness, Pavel Sudoplatov describes running networks of illegal agents charged with sabotaging American and NATO installations in the event that the Cold War turned hot.8
But this new twist on an old practice has potentially devastating implications. Sabotage has traditionally been a tactical measure. It could help one’s fortunes in a military skirmish or provide an important advantage in a battle, but it could not win wars or force national-level leaders to negotiate a settlement. In a networked world, by contrast, cybersabotage is potentially strategic. The Stuxnet worm, which damaged or destroyed systems that Iran was using to enrich uranium to weapons-grade specifications, demonstrated that twenty-first-century sabotage could have profound effects on a nation’s most significant strategic military capabilities. In a special report on the cyberthreat, the Defense Science Board, a group not generally given to hyperbole, compared the implications of cybersabotage to nuclear weapons:
The benefits to an attacker using cyber exploits are potentially spectacular. Should the United States find itself in a full-scale conflict with a peer adversary, attacks would be expected to include denial of service, data corruption, supply chain corruption, traitorous insiders, kinetic and related non-kinetic attacks at all altitudes from underwater to space. US guns, missiles, and bombs may not fire, or may be directed against our own troops. Resupply, including food, water, ammunition, and fuel may not arrive when or where needed. Military Commanders may rapidly lose trust in the information and ability to control US systems and forces. Once lost, that trust is very difficult to regain.
Based upon the societal dependence on these systems, and the interdependence of the various services and capabilities, the Task Force believes that the integrated impact of a cyber attack has the potential of existential consequence. While the manifestations of a nuclear and cyber attack are very different, in the end, the existential impact to the United States is the same.9
The board was equally pessimistic about prospects for defending against such cybersabotage: “Today, much of DoD’s money and effort are spent trying to defend against just the inherent vulnerabilities which exist in all complex systems. Defense-only is a failed strategy.”
The reason for this pessimism is that cybertechnology has tilted the age-old competition between offensive and defensive measures starkly in favor of the offense.10 It is far easier to penetrate a network than it is to prevent a would-be intruder from gaining access. Software code inevitably includes mistakes that clever hackers can exploit. No matter how much training they receive, some percentage of system users will click on links that they should not, use passwords that are maddeningly simple to determine, or fail to update software in a timely manner, offering easy ways for attackers to steal credentials and enter networks. Intrusion-detection systems can be circumvented or fooled. Antivirus software is designed to spot hacking code that has already been used in past intrusions; it cannot detect new exploits that take advantage of what hackers call zero-day vulnerabilities—coding flaws previously undiscovered and unpatched.11 In a very real sense, antivirus software defends against yesterday’s attacks, not tomorrow’s. Defenders can complicate the tasks of cyberattackers, and they are getting better at discovering and identifying intruders, but rarely can they stop them prior to an intrusion.
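The weakness of signature-based defense described above can be made concrete in a few lines of Python. This is a hypothetical toy sketch, not any real antivirus engine: it flags content whose hash matches a database of previously observed malware, and shows how even a trivially altered payload, let alone a genuine zero-day exploit, slips past unrecognized.

```python
import hashlib

# Hypothetical signature database: hashes of malware observed in PAST attacks.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def signature_scan(file_bytes: bytes) -> bool:
    """Return True if the content matches a known-bad signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# Yesterday's attack is caught...
assert signature_scan(b"malicious payload v1")

# ...but changing even one byte produces a hash the database has never
# seen, so the scan waves the new variant through.
assert not signature_scan(b"malicious payload v2")
```

Real engines layer heuristics and behavioral analysis on top of signatures, but the underlying asymmetry holds: the defender’s database can describe only attacks that have already been observed.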
Awareness of these realities fuels a pervasive sense of vulnerability among those charged with securing computers and networks and among national security professionals more broadly.12
This vulnerability, together with the advantages enjoyed by intruders, has created a vicious circle of aggression and counteraggression in the cyberarena, where the real or imagined compromise of systems on one side prompts the other side to redouble efforts to compromise its rival’s systems. Just as American experts have questioned the effectiveness of cyberdefense in the face of sophisticated intrusions, the Russians have equal reason to believe cyberoffense is their most effective form of defense. Devoting countless hours to scouring millions of lines of code to uncover malware is often a fruitless approach when dealing with sophisticated attackers. The Russian cybersecurity firm Kaspersky has alleged, for example, that highly sophisticated American malware was operating undetected in Russian networks for more than a decade before being discovered.13 Even when searches come up empty, defenders cannot be certain that a malware bomb is not lying undiscovered somewhere in their vast web of infrastructure. Nor can they be certain that what they do discover is not a false flag operation—malware planted by a third country in disguise, hoping to benefit by stoking Russian-American tensions. These vexing problems argue for a different response to the problem of cybersabotage: to penetrate an opponent’s networks even more deeply to learn what exactly it might be doing—and to plant a few more cyberweapons of one’s own to deter the adversary from any detonation. Cybertechnology has, in many ways, created a new form of existential threat for the world that is reminiscent of the impact of nuclear technology on the Cold War, but it operates according to a different logic. Largely invisible, these weapons can quickly become ineffective, even if never detonated, unless they are implanted and constantly updated. And their very invisibility encourages states to assume and plan for the worst.
The perverse dynamics of cybersabotage, where acts meant to deter can wind up incentivizing aggression, and where inherent uncertainties in the attribution of intrusions serve as perpetual temptations to take risks that few would undertake in the bricks-and-mortar world, are only one part of the undeclared virtual war going on between Russia and the United States. Another is in the world of espionage.
This world was undergoing a little-noticed revolution in the wee hours of a Sunday in 1998. It began when a technician at a materials company happened to notice that someone was connecting from the company’s network to Wright-Patterson Air Force Base in Ohio at three o’clock in the morning. This was unusual, to say the least. He reached out to the owner of the account to ask about the connection, but the employee denied that he was even online at that time, let alone surfing around on Wright-Patterson sites. The technician’s curiosity quickly turned into suspicion. He alerted several Computer Emergency Response Teams, including that of the US Air Force, which determined that the technician had stumbled upon an ongoing cyberintrusion.
Delving into the case, Air Force investigators learned that the connection from the materials company was only one of many suspicious connections. The same user had also connected to Wright-Patterson from the University of South Carolina, Bryn Mawr, Duke, Auburn, and other universities, and he was pilfering sensitive, albeit unclassified, files on such things as cockpit design and microchip schematics.14 Moreover, he was not just targeting Wright-Patterson; he used computers in various university research labs to gain continuing access to a wide range of military sites and networks in search of specific information. He was also technically advanced, rewriting network logs after leaving sites so that no one would discover evidence that he had ever been there. Struck by the unprecedented scale and sophistication of the hacks, the FBI opened an official investigation and brought in officials from law enforcement, the military, and other parts of government to form a forty-person working group on the intrusion set. And they gave it a code name: Moonlight Maze.
The first task in the investigation was to determine the attacker’s origins and intentions. The intrusions all seemed to take place during the same nine-hour span, one that coincided with business hours in Moscow, and they did not occur on Russian Orthodox holidays. But was Moscow really the attack’s point of origin, or was it simply one of many other transit points in the pathway, like the universities the attacker was using to mask his location? The working group decided to pursue answers by using a “honey pot,” a fake set of digital files on a topic that would interest the attacker. Once he ventured into those files, investigators would be able to follow his moves in real time and track him down. To aid their efforts, the investigators also implanted within the honey pot files a digital beacon, a few lines of code that automatically attached to the attacker and sent back a signal to investigators as he hopscotched from point to point along the internet back to his home origin. The approach worked; the attacker took the bait, and the beacon traced him back to an IP address at the Russian Academy of Sciences in Moscow.15 Further research indicated that the attacker’s programming code, prior to encryption, had been written in Cyrillic characters. All the indicators pointed in the same direction: American military networks had been victimized by a massive, ongoing, state-directed cyberespionage operation—what cybersecurity professionals later came to call an advanced persistent threat—originating in Russia.
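The honey-pot-and-beacon technique the investigators used can be sketched conceptually. The following is a simplified, hypothetical illustration (the class, filenames, and addresses are invented for this example, not drawn from the Moonlight Maze case files): decoy documents are planted, and any read of one silently records a timestamped trace of who touched the bait.

```python
import datetime

class HoneyPot:
    """A decoy file store whose 'beacon' fires silently on any access."""

    def __init__(self):
        self._decoys = {}   # filename -> fake contents
        self._alerts = []   # (timestamp, filename, accessor) tuples

    def plant(self, filename, fake_contents):
        """Plant a decoy file enticing enough for the intruder to open."""
        self._decoys[filename] = fake_contents

    def read(self, filename, accessor):
        """Simulate the intruder opening a decoy; record the access."""
        now = datetime.datetime.now(datetime.timezone.utc)
        self._alerts.append((now, filename, accessor))
        return self._decoys[filename]

    @property
    def alerts(self):
        return list(self._alerts)

pot = HoneyPot()
pot.plant("cockpit_designs_CLASSIFIED.doc", "plausible-looking decoy data")

# The attacker takes the bait; investigators get a timestamped trace
# pointing back toward the intruder's address.
pot.read("cockpit_designs_CLASSIFIED.doc", accessor="203.0.113.17")
print(len(pot.alerts))  # prints 1
```

The real beacon was far subtler—code riding along with the attacker’s own traffic—but the design principle is the same: turn the intruder’s curiosity into the means of his exposure.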
As investigators learned more and more about the intrusion set, they were stunned by its scale. It had been going on for years before being discovered, since at least 1996, a year when the United States was using its economic might and electoral expertise to help Boris Yeltsin win a second term as president of Russia. And it had produced a haul of information comparable in size to the holdings of a municipal library: some 5.5 gigabytes of data, the equivalent of nearly three million pages, on such topics as helmet design, hydrodynamics, oceanography, satellites, aerodynamics, and various surveillance technologies. It was the kind of thing that once would have required a large network of agents, clandestine photographs, dead drops, and secret communications, with information delivered through painfully slow and risky channels, using individuals whom Russia had convinced to betray their country. By contrast, Russian cyberoperators could exploit the ever-occurring imperfections in software coding and boundless gullibility of system users to gain access to virtually any network they wanted and grab mind-boggling amounts of information. Large support organizations were not required. If done well, in fact, no one in the target organization would even know their information had been compromised. It represented a quantum leap in espionage capabilities.
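The “nearly three million pages” figure is easy to sanity-check. Assuming roughly 2,000 characters of plain text per page at one byte each—a common rule of thumb, and an assumption made here purely for illustration—the conversion runs:

```python
stolen_bytes = 5.5e9      # 5.5 gigabytes of exfiltrated data
bytes_per_page = 2000     # ~2,000 characters of plain text per page (assumption)

pages = stolen_bytes / bytes_per_page
print(f"{pages:,.0f} pages")  # prints 2,750,000 pages
```

At 2.75 million pages, the haul is indeed on the order of a small library’s holdings.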
Some twenty years after the revelations of Moonlight Maze, cyberespionage has become commonplace in the intelligence world and widely known outside it, featured in numerous Hollywood films and covered frequently in media reports. But beyond remarking on the new capabilities afforded by gee-whiz technology, few observers have grappled with the broader implications of this phenomenon for international stability. Cyberespionage is not merely facilitating the collection of information by both government intelligence organizations and non-state actors; it is, by its nature, blurring the lines between spies, warriors, diplomats, criminals, and private citizens—and between acts of espionage and acts of warfare—in ways that have major implications for international stability.
First, cybertechnology is not only changing the way states collect sensitive information; it has also changed the targets of that collection. In the Cold War, the United States and Soviet Union each focused collection efforts on the other’s national security apparatus, a restricted world of senior political leaders, militaries, intelligence organizations, defense industry, weapons systems, and high-technology laboratories. Private citizens could get occasional glimpses into this espionage through media reports about the spying going on behind closed doors, and they could marvel at the glamorized Hollywood versions of the spy game at the cinema, but seldom would the worlds of espionage and civilian society intersect. In cyberespionage, these worlds not only overlap to a much greater degree, but there is increasingly little distinction between national security targets and civilian targets. Internet “packets” conveying a public Facebook post about a family vacation speed through privately owned telecommunications networks side by side with encrypted national security information meant for select, cleared audiences. Critical infrastructure, such as power utilities or water treatment plants, is often privately owned, but could cause national damage if its systems crashed. Many advances in artificial intelligence, machine learning, and other aspects of information technology—indeed, in most technological areas—depend not on government-funded laboratories or large private companies but on local start-ups and private or university-based accelerators, making these entities attractive targets for intelligence collection. The laptops of private citizens can be remotely commandeered and assembled into botnets for use in penetrating national security networks. 
And collecting information on individual citizens—their habits and histories, likes and dislikes, friends and associates—can help hackers gather valuable insights into the private lives of targeted officials and create convincing “spear-phishing” operations that aid other efforts to penetrate national security networks. Unlike in the Cold War, there are no “stand-alone entities anymore—everything is part of a network,” according to one cybersecurity expert.16 To keep pace with these advancements, and to gain access to such sensitive information, cyberspies inevitably target and tread on what was once considered civilian territory. The shadow war is an unavoidably public-private venture, and managing public involvement in it is a formidable challenge.
One reason increased public involvement in cyberespionage is challenging is that the public typically sees only a small slice of the game of spy versus spy. Western media, for example, frequently report on Chinese and Russian cyberactivities directed against the United States, and it is not uncommon for American private cybersecurity professionals to encounter a Chinese or Russian intrusion directly. But rarely does one encounter American media reports about US intrusions into Russian or Chinese systems. The Russians certainly believe the United States is actively engaging in advanced cyberespionage; the Russian cybersecurity company Kaspersky Lab issued a report in 2015 on highly sophisticated operations by what it called the Equation Group that it all but explicitly said was run by the National Security Agency.17 Former director of national intelligence Michael McConnell’s claim in 2013 that “at least 75 percent” of President Obama’s daily intelligence brief came from cyberspying is one of the few public comments on American cyberespionage.18 For the public, it is a bit like having front-row seats to a boxing match in which viewers can see the torso of one of the boxers, but not his arms or gloves. The audience can see him getting hit by his opponent but cannot see whether he is punching. And because it cannot, its natural response is to want him to hit back. This public sentiment adds to incentives for offensive cyberoperations.
Cybertechnology is also changing the risk-reward balance in intelligence operations. In the old days, the risks of any given espionage operation—to the personnel involved and to the broader interests of the government—were often significant and had to be balanced against the potential rewards of success when considering whether to authorize it. In cyberespionage, however, the risks often seem small compared to the potential benefits, which incentivizes greater aggression and a higher pace of operations compared to the past. As cybersecurity pioneer Cliff Stoll pointed out some three decades ago, “Espionage over networks can be cost efficient, offer immediate results, and target specific locations … insulated from the risks of internationally embarrassing incidents.”19
The incentives fueling offensive cyberespionage are being reinforced by new surveillance technologies in the bricks-and-mortar world that are making traditional cloak-and-dagger espionage more difficult than ever. Not long ago, intelligence organizations could create viable cover stories for covert case officers, send them overseas under aliases, and have them meet, recruit, and communicate with foreign agents in secret. In a world where nearly everyone has social media or other forms of online histories, however, cover stories become problematic.20 The increased use of biometric surveillance systems in airports and train stations around the world makes traveling under aliases a greater challenge. Many world capitals are blanketed by video surveillance systems that track movements on every square foot of cityscape. Facial recognition software makes disappearing into obscurity difficult. Carrying a cell phone provides counterintelligence organizations with a handy geolocational tracker; not carrying one is anomalous behavior that draws their unwelcome attention. As a result, the traditional recruitment of what intelligence agencies call human assets has grown more difficult. This has further incentivized aggressive cyberespionage, and it has fueled feelings of vulnerability; with fewer human agents to explain the motivations behind various cyberpenetrations, governments tend to assume worst-case intentions on the part of their adversaries.21
These worst-case assumptions become particularly important because, for those on the receiving end of such cyberintrusions, it can be difficult to distinguish between operations meant to grab sensitive information and those intended to prepare for cybersabotage. Once inside a computer, a hacker can explore the entire network connected to it and download enormous quantities of data. He can also, however, alter, corrupt, or destroy that data, or he can distort, disable, or destroy the operations of systems controlled by the network.22 To cybersecurity operators, an intrusion penetrating the system in order to steal sensitive information or learn about an adversary’s plans looks just like an intrusion mapping the network’s passageways and weak points in order to “prepare the battlefield” for a damaging attack.23 A hacker’s intentions are often not apparent until sometime well after the operation has begun, and this has a profound impact on perceptions. “From a psychological perspective, the difference between penetration and manipulation may not matter much,” according to cyberexpert Martin Libicki of the US Naval Academy.24 In other words, in the cyberage, the business of espionage is increasingly indistinguishable from the business of warfare.
The third front of the shadow war is another new twist on an old activity. In the West, it has gone by a variety of names throughout history, including propaganda, strategic communications, information operations, and PSYOP, or psychological operations. In Russian, it is also known in various forms as dezinformatsiya, maskirovka, kompromat, and aktivniye meropriyatiya—active measures. It can be produced openly or covertly, and it can aim to inform, to inspire, or—in its darker varieties—to deceive or to subvert. But regardless of its name or form, its target is the same: the human mind.
The Kremlin School for Bloggers was an early salvo in a US-Russian information war that only one side recognized it was fighting. It was born in fear, launched by the Russian government in 2009 not long after events in the former Soviet republic of Moldova that were billed as one of the first “Twitter revolutions.”25 Parliamentary elections there had produced a clear victory by the Communist Party, and in its aftermath, two obscure youth groups had posted a notice online that called for people to gather the next day on the Moldovan capital’s central square at an event they called “I Am Not a Communist.” One of the gathering’s organizers described it on her blog as little more than “six people, 10 minutes of brainstorming and decision-making, several hours of disseminating information through networks, Facebook, blogs, SMSs and emails.”26 Surprisingly, more than fifteen thousand protestors showed up, and within another day, peaceful demonstrations had turned into mass riots, arson, and vandalism.27 The protests caused significant material damage and resulted in several deaths, but they eventually petered out following a harsh government crackdown, and they did little to produce meaningful political change in Moldova.
Moscow, however, was concerned. For those playing defense, internet campaigns were more difficult to contain than old-fashioned propaganda vehicles or traditional subversion efforts, which needed leaders, organizations, and money. Social media could deliver information directly to specifically targeted audiences without having to go through such gatekeepers as newspaper editors or television or radio producers, and they allowed users to plan and publicize actions quickly in response to news and events, creating so-called flash mobs. There were few formal organizations, recognized leaders, or significant money flows that government authorities could monitor and counter. The Moldova protests had materialized practically overnight, with almost no planning or organization, and their scale and ferocity had surprised even those who had proposed them.
It was not hard to envision similar social media–fueled instability erupting in Russia. Kremlin officials saw Twitter revolutions as a genuine threat. Gleb Pavlovsky, a Kremlin consultant and one of Russia’s leading information warriors at the time, explained that “Moscow views world affairs as a system of special operations, and very sincerely believes that it itself is an object of Western special operations.”28 Information technology made this Western targeting doubly dangerous. According to Russian journalist Andrei Soldatov, “You can now mobilize people to get them to the streets without traditional means, and even without a youth movement. You can use technology; you can use social media.… The thing that was important for the Kremlin [was] that they saw these things as a part of a bigger plot, all arranged by the West, and namely by the US State Department. That was why they really believed that they’re under real attack, and the threat is real.”29 Moscow’s concerns were later captured officially in its National Security Strategy, which bemoaned the “intensifying confrontation in the global information arena caused by some countries’ aspiration to utilize informational and communication technologies to achieve their geopolitical objectives, including by manipulating public awareness and falsifying history.”30
By contrast, the developments in Moldova and in the subsequent Green Revolution in Iran set Washington abuzz with internet optimism. Social media, it seemed, were the ultimate democratizers. They allowed everyone across the world to have access to information unfiltered by government censors, and they helped like-minded strangers to connect and organize despite being separated, in many cases, by vast physical distances.31 Brimming with hope for a more liberal global future, the United States government, Facebook, Google, and several other prominent American-based businesses and organizations had launched the Alliance for Youth Movements to fund and organize social media movements in Latin America, Africa, the Middle East, Europe, and Asia that would “change the world,” holding annual summits of international youth leaders and prominent technologists in New York and Mexico City.32 The new digital tools created “a greater chance for civil society organizations’ coming to fruition regardless of how challenging the [political] environment,” as the State Department’s point man for cyberdiplomacy, Jared Cohen, proclaimed in 2009.
To further the cause of free expression and hasten liberalization within Russia, the US government had also helped Russia’s Glasnost Defense Foundation, an NGO launched in the Gorbachev period, to create what it called the School for Bloggers, aimed at enabling independent Russian voices to gain internet platforms to deliver their views on key social and political issues. In this effort, Americans did not see themselves as fighting an information war or seeking regime change in Russia; they were simply promoting cherished Enlightenment principles, freedom of expression and freedom of assembly, through the vehicle of new technology. The invisible hand of liberal progress would then do its work. US diplomats and businessmen needed to do little more than facilitate the spread of internet access, and good things would follow. As one American columnist put it, “As new media spreads its Web worldwide, authoritarians … will have a difficult time maintaining absolute control in the face of the technology’s chaotic democracy.”33
Russian officials scrambled to counter this seeming momentum. It was no accident that Moscow named its new venture the Kremlin School for Bloggers—it was a direct response to the Glasnost Defense Foundation’s own similarly named media initiative.34 The school comprised some eighty people working with two or three bloggers each across Russia to mount information campaigns online.35 Whereas the Glasnost Defense Foundation focused on bloggers critical of the Russian government, the Kremlin sought to train and promote such personalities as Maria Sergeyeva, an attractive twentysomething blonde who combined posts extolling Catherine the Great with occasional photos from hip parties around town.36 Sergeyeva, in turn, promoted training sessions for loyalist bloggers under the acronym KGB—Kursy Gosudarstvennykh Bloggerov, or Courses for State Bloggers—which, among other things, taught pro-Kremlin cyberwarriors how to hack into opposition blogs and find the addresses and telephone numbers of those behind them.37 The Kremlin also worked to co-opt Rustem Adagamov, known by the pseudonym Drugoi (Different), a graphic designer and photographer and author of the most popular blog on the social media site LiveJournal.38 Moscow’s aim was not to block or censor information critical of the government, however, but rather to defeat it—to co-opt youth groups from the inside and make it seem cool to be patriotic, and to show Russian audiences that the ideas of the government’s opponents were mistaken, ineffectual, or tainted by foreign sponsorship.
This approach marked an important shift from Soviet-era media control, which had relied on banning non-regime publications and censoring expression. As Russian internet guru and State Duma deputy Konstantin Rykov observed, blocking information was in many ways impractical in the cyberage, but more importantly, internet censorship would alienate the very audience—Russian youth—that the Kremlin most wanted to attract and persuade.39 Under the new approach, those who challenged the Kremlin’s policy line could express their views online, but they often found themselves subjected to an onslaught of orchestrated counterarguments and online harassment from an army of paid internet trolls. The Kremlin aimed to create the appearance of a marketplace of ideas in Russia’s digital domain, while ensuring that the market was tilted heavily in favor of the state’s preferred outcomes. Unlike in the Cold War, this information war was not fought over access to news and opinions, pitting an open society against a closed one.40 Rather, it was a battle for hearts and minds, and it would be won, in Moscow’s view, not by building walls against information and ideas but by enlisting the help of influencers who could argue and convince, as well as harass and intimidate, in the trenches of a blogosphere that transcended geographic borders.
Like cybersabotage and cyberespionage, cyberinfluence activities have added a troublesome new dimension to old-fashioned propaganda and subversion that has created new fears and uncertainties in international relations. The fears flow in part from the daunting speed and scale of modern cyberinfluence campaigns, which can put messaging content directly in front of hundreds of millions of eyes with unprecedented rapidity, and in part from our poor understanding of the degree to which online content actually shapes perceptions and motivates behaviors.
The potential impact of cyberinfluence operations seems vast. More than two and a half billion people worldwide were using social media in 2017, including more than 80 percent of the US population, and their online activity has created an enormous repository of data about who they are, what they like, what they think, and what they do.41 Platforms such as Facebook and Twitter are designed to allow advertisers to track user engagement with ads, page likes, search results, and news feeds. This gives advertisers and political campaigns the ability to microsegment audiences according to their interests, views, demographic attributes, and behaviors, to deliver content customized to resonate with each target segment, and to employ machine-learning algorithms to improve the effectiveness of that content over time. But by default, it also offers the same possibilities to foreign intelligence services keen to understand and affect grassroots dynamics in the societies of their adversaries. Cyberinfluence campaigns can zero in, for example, on mothers in their thirties living in Pennsylvania who are interested in preschool education, and then push content and advertising tailored to their concerns directly into their social media feeds. In turn, social media “listening” software allows content providers to optimize the timing of these placements and gauge the impact of their content on audience response.
When used to sell products and fuel economic growth, or to support causes that help society, these innovations can potentially be of benefit, but customized influence campaigns can be built to deceive as well as to inform. Twitter bots can mimic the appearance of actual human users to disseminate tweets to audiences and distort impressions about the impact of news and events. New algorithms assisted by artificial intelligence make it easy to create “deep fakes,” bogus video or audio segments that depict public figures doing and saying things that they never did or said but that are so realistic that it is practically impossible for audiences to detect the deception. The speed with which these deep fakes can be delivered vastly outpaces the time it takes to detect them and disseminate warnings and corrections, which means their impact could be disproportionately great in the context of a fast-moving election campaign. This technology has put old-fashioned disinformation—the deliberate publication of false information to deceive audiences—on steroids.
There is no doubt that new information technology makes it easier than ever before to engage in grassroots political influence activities, whether constructive or destructive. But engagement and effectiveness are not the same thing. Are these new cyberinfluence tools effective in shaping grassroots political behavior? Election campaign professionals generally believe that altering the views of voters is exceedingly difficult, and modern election campaigns rarely attempt it, even with an array of new digital tools and data with which to work. Rather, they use digital data to identify people who are already inclined to agree with a candidate’s positions on key issues, and they employ cybertools to rally them to turn out at the polls.42 Do they help to bolster turnout? And can cyberinfluence campaigns also motivate people to act in other ways on their preexisting beliefs, such as to protest or even to engage in violence? The truth is that no one yet knows. Vanishingly few studies have examined the link between online content, audience perceptions, and political behaviors. But the seeming potential held by new cybertools to mobilize, inflame, or deceive audiences, coupled with their ability to target customized messages at specific groups or individuals, is causing great worry in both Russia and the United States.
Layered on top of these fears is the difficulty of discerning the intent of those behind cyberinfluence efforts. Sometimes, influence efforts can aim at little more than reinforcing a state’s diplomatic messaging. The Voice of America has long broadcast news and opinions into countries dominated by state-controlled media, hoping to provide these audiences with alternative perspectives on events. More recently, Moscow launched Russia Today, later renamed RT, to broadcast Russian perspectives into the United States, Europe, and other parts of the world. Other influence activities are intended not to persuade but to subvert—to rend the political fabric of an adversary nation in a way that undermines its governing authority.
But the broader goal behind subversive activity is not always clear. In his book Cyber War Will Not Take Place, Johns Hopkins University professor Thomas Rid explains that the goal of subversion may either be to overthrow an established economic or governmental order or to force those in power to do things they would rather not do. “The first objective is revolutionary and existential; the second objective is evolutionary and pragmatic.”43 Discerning the difference, however, is maddeningly difficult. Through American eyes, US support for independent Russian blogging was intended to push Moscow to accept principles—free expression and assembly—that the Russian government did not fully embrace. Through Russian officials’ eyes, those same US efforts represented an existential threat. When hackers helped to publish embarrassing emails from within Hillary Clinton’s presidential campaign in 2016, that information clearly subverted Clinton’s reputation to some degree. But was it ultimately meant to torpedo Clinton’s election prospects and “destroy American democracy” as claimed by many, or did it aim to force what most Russians regarded as an inevitable Clinton presidency to do things it would rather not do, such as back off evangelical democratization efforts in and around Russia and recognize the potential societal instability that an ungoverned internet might bring? Such ambiguities in assessing intentions are inherent to cyberinfluence activities and can magnify fears and destabilize state-to-state relations.
For the sake of clarity, this chapter has presented cybersabotage, cyberespionage, and cyberinfluence as separate and distinct activities. In reality, they are interwoven and mutually reinforcing. Together, they breed a pervasive sense of vulnerability and encourage aggressive responses rooted in fear. As the United States has long argued, it is quite difficult to control the flow of information on the World Wide Web. Yet for Russia, that information can pose threats to societal stability by fueling sabotage and subversion. This combination incentivizes offensive cyberinfluence operations. When one cannot block news and opinions, the best alternative is to defeat them. Similarly, software flaws, human imperfections, and the insecure nature of the internet mean that cyberdefenders have enormous difficulty stopping network intrusions. When one cannot play effective defense, there is a strong temptation to go on offense to penetrate the adversary’s networks to discover what the other side is doing. Once inside a network, a cyberwarrior can not only gather information but also sabotage an adversary’s systems. To deter the other side from detonating such cyberbombs, states are incentivized to take cyberhostages of their own, threatening to damage the other side’s systems in response. This shadow war in the cyberdomain is the essence of a vicious circle, creating escalating spirals of aggression and suspicion. It would be dangerous enough if limited strictly to the cybersphere. But in a networked, globalized world, in which digital networks and national economies and media systems and nuclear command and control systems are all linked together in some way, limiting spillover from the cybersphere is inherently problematic. Unlike in Las Vegas, what happens in the cyberworld does not stay in the cyberworld. It sooner or later spills into other domains, including economic and kinetic military operations.