8

BEYOND CYBER WAR

Before turning to the conclusions, one subject needs to be moved out of the way. That subject has permeated this analysis—and it is a subject that pervades, if not distorts, many other publications on cyber security: analogies.

Analogies, similes, and metaphors have enabled and shaped the discussion of computers for many decades. As engineers developed new technologies, they needed new words to describe what they and their technology were doing. Storage and packets and firewalls are all spatial analogies that refer to something humans could intuitively relate to. Hard disks were repositories for lots of data, old and new, so referring to them as storage devices must have seemed intuitive. A small piece of information, bundled between a header and a footer with an address on it, could naturally be called a packet. Calling software that prevented unauthorized access a firewall just made sense. Pointing out that cyberspace itself is a spatial metaphor may be obvious. Less obvious is the fact that it was not an engineer who coined the term, but William Gibson, a novelist. Gibson first used the word in his 1982 science fiction story Burning Chrome.1 He later popularized it in the 1984 novel Neuromancer. It is worth quoting the paragraph that introduced the word to the novel’s readers. The segment describes the fictional thoughts of Henry Dorsett Case, a low-level drug dealer in the dystopian underworld of Chiba City, Japan:

A year here and he [Case] still dreamed of cyberspace, hope fading nightly. All the speed he took, all the turns he’d taken and the corners he’d cut in Night City, and still he’d see the matrix in his sleep, bright lattices of logic unfolding across the colorless void … The Sprawl was a long strange way home over the Pacific now, and he was no console man, no cyberspace cowboy. Just another hustler, trying to make it through.2

Cyberspace, for Gibson, was meant to evoke a digital virtual world of computer networks that users would be able to “jack” into through consoles. Cyberspace, Gibson later explained, was an “effective buzzword … evocative and essentially meaningless.” The writer imagined the word as a suggestive term, one that had “no real semantic meaning.”3 But probably not even the resourceful Gibson could have imagined a more spectacular rise for his evocative yet meaningless expression. To this day, this creative lack of semantic clarity remains a potent source of the term’s agility; it helped the analogy jump from the pages of a science fiction novel and “jack” cyberspace into the political reality of international relations and national security.

Yet analogies should be used with caution and skill, especially in the conceptual minefield that is cyber security. Analogies can be triply instructive. Firstly, a metaphor can make it easier to understand a problem—analogies are didactic devices (saying that cyber security is a conceptual minefield, as the opening sentence of this paragraph just did, makes one thing obvious: be careful, something can go wrong if you don’t pay attention to detail). Secondly, the comparisons that metaphors force upon us can highlight areas of importance and connections that might otherwise have been missed—analogies are inspirational and creative devices (if cyber security is a conceptual minefield, then perhaps we can come up with a better way to find the “mines”?). Finally, and most importantly, at some point a metaphor will begin to fail, and at this point of conceptual failure we may learn the most important things about the subject at hand: how it differs from the familiar, how it is unique—analogies are also testing devices.4 This triple approach to evaluating the utility of metaphors is simple in concept but difficult in practice. Perhaps especially in the context of cyber security—a field which encompasses the technological knowledge of various subdisciplines of computer science as well as social science, political science, legal studies, and even history—each step on this three-rung ladder of abstraction by analogy requires progressively more specialized expertise. Taking the first step is easy. Even laypersons may use analogies as didactic devices. Going to the second step is not too difficult. Using analogies as creative devices requires some working knowledge of a field but no special training. But using analogies as testing devices requires expertise and skill. Recognizing a complex analogy’s point of failure, and taking advantage of the additional insights afforded by the conceptual limitations of a given metaphor, takes expert knowledge, perhaps even knowledge from across unrelated disciplines and subdisciplines. In practice, therefore, analogies often begin to fail without their users noticing the defect. The short-sighted and flawed use of metaphors is especially prevalent in the cyber security debate, particularly, it seems, among experts in military affairs both in and out of uniform. Talking about cyber war or cyber weapons, for instance, is didactically useful: the audience instantly has an idea of what cyber security could be about. It also inspires creativity, perhaps evoking thoughts of “flying” or “maneuvering” in cyberspace, not unlike Henry Dorsett Case jacking in. But too often analogies are used without understanding or communicating their point of failure (if cyber security is a conceptual minefield, then stepping on one of the dangerous devices causes harm that cannot instantly be recognized). The line between using such comparisons as testing devices and as self-deception devices, in other words, can be a subtle one.

A perfect illustration of this problem is the much-vaunted war in the ostensible fifth domain. “Warfare has entered the fifth domain: cyberspace,” The Economist intoned in July 2010.5 Indeed, referring to cyber conflict as warfare in the fifth domain has become a standard expression in the debate. This author was taken aback in a closed-door meeting in the Department of War Studies at King’s College London in early 2012 when a senior lawyer for the International Committee of the Red Cross referred to cyber war and wondered whether the ICRC needed to work toward adapting the law of armed conflict to that new “fifth domain.” Five points will help clear the view. First, the expression “war in the fifth domain” has its origin in a US Air Force lobbying gimmick. The Air Force had already been in charge of air and space, so cyberspace came naturally. In December 2005 the US Air Force expanded its mission accordingly. That alone is not a strong argument against the term’s utility, but it should be clear where the expression comes from, and what the original intention was: claiming a larger piece of a defense budget that would start to shrink at some point in the future. Second, code-triggered violence will ultimately express itself in the other domains. Violence in cyberspace is always indirect, as chapter two discussed at length. By definition, violence that actually harms a human being cannot express itself in a fifth domain. Third, if warfare in the fifth domain referred only to damaging, stealing, or deleting information stored in computer networks, rather than to affecting something that is not part of that domain in the first place, then the very notion of war would be diluted into a metaphor, as in the “war” on obesity. Fourth, cyberspace is not a separate domain of military activity. Instead the use of computer networks permeates all the other domains of military conflict: land, sea, air, and space. To an extent, that has always been the case for the other domains as well. But in the case of IT security an institutional division of labor is far more difficult to implement, especially in a military context: the air force doesn’t have tanks, the army has no frigates, but everybody has computer-run command-and-control networks. Finally, cyberspace is not even space. Cyberspace is a now-common metaphor to describe the widening reaches of the Internet. “Firewall” and “surfing” the web are other well-established and widely accepted spatial metaphors. Saying the air force “flies” in cyberspace is like the army training troops to “scale” firewalls or the navy developing new “torpedoes” to hit some of those surfing the web. In fact the very idea of “flying, fighting, and winning … in cyberspace,” enshrined in the US Air Force’s mission statement, is so ill-fitting that some serious observers can only find it faintly ridiculous—an organization that wields some of the world’s most terrifying and precise weapons should know better. The debate on national security and defense would be well served if the discussion of war were cut back to the time-tested four domains. After all there is no cyber attack, not even the over-cited Stuxnet, that unequivocally represents an act of war on its own. No cyber offense has ever caused the loss of human life. No cyber offense has ever injured a person. No cyber attack has ever seriously damaged a building.

Once the distraction of the “fifth domain of warfare” is moved out of the way, five fundamental and largely novel conclusions become visible.

The first and main conclusion, and this book’s core argument, is that the rise of cyber offenses represents an attack on violence itself. Almost all cyber attacks on record are non-violent. Those cyber attacks that actually do have the potential to inflict physical violence on machines or humans can do so only indirectly, as chapter two argued in detail: so far, violence administered through cyberspace is less physical, less emotional, less symbolic, and less instrumental than more conventional uses of political violence. This applies in all three main areas where political cyber attacks appear: sabotage operations may be violent or, in the majority of cases on record, non-violent. The higher the technical development and the dependency of a society, the higher the potential for both violent and non-violent cyber sabotage. This has double significance: it is easier to distinguish between violence and non-violence, and it is more likely that saboteurs choose non-violence over violence. Espionage operations seem to be making less use of personnel trained in the management of violence than in the pre-Internet age. To an extent, experts in the use of code are replacing experts in the use of force, though only in relative terms and at a price. Finally, subversion has changed. The early phases of subversively undermining an established authority require less violence than before, but turning a budding subversive movement into a revolutionary success has become more difficult. Technology seems to have lowered the entry costs while raising the costs of success. Yet cyber attacks of all strands, even in their predominantly non-violent ways, may achieve a goal that previously required some form of political violence: to undermine the collective social trust in specific institutions, systems, organizations, or individuals. And cyber attacks, whether executed by a state or by non-state groups, may undermine social trust, paradoxically, in a more direct way than political violence: through a non-violent shortcut.

The second conclusion concerns the balance between defense and offense in the context of cyber attacks. Most conventional weapons may be used defensively and offensively. But the information age, the argument goes, has “offence-dominant attributes.”6 A 2011 Pentagon report on cyberspace still stressed “the advantage currently enjoyed by the offense in cyberwarfare.”7 Cyber attack, proponents of the offense-dominance school argue, increases the attacker’s opportunities and the amount of damage to be done while decreasing the risks (sending special code is easier than sending Special Forces).8 The attribution problem unquestionably plays into the hands of the offense, not the defense. Hence expect more sabotage and more saboteurs.

But adherents of the offense-dominance ideology should reconsider their arguments, for three reasons. The first is that when it comes to cyber weapons, the offense faces a set of costs and difficulties that the defense is spared. One implicit assumption of the offense-dominance school is that cyberspace favors the weak. But cyber attack may also favor strong and rich countries in unexpected ways: the Idaho National Laboratory, for instance, has many of the mainstream industrial control systems installed on its test range, in order to test them, find vulnerabilities, and build stronger defensive systems. But the same testing environment is also a tremendous offensive asset. By the time engineers have understood how to fix something, they also know how to break it. In addition, some installations are highly expensive to simulate and to install in a testing environment, for instance the control systems used in complex and highly bespoke refineries. At the same time only very few such bespoke systems are in use. This means that only a country that itself possesses a given capability may be in a position to attack that capability elsewhere—this could limit the number of potential attackers.9 Cyber sabotage with serious destructive potential is therefore possibly even more labor intensive than the brick-and-mortar kind, even if the required resources are dwarfed by the price of complex conventional weapon systems.10 Vulnerabilities have to be identified before they can be exploited; complex industrial systems need to be understood first; and a sophisticated attack vehicle may be so fine-tuned to one specific target configuration that generic use becomes impracticable—consider a highly sophisticated rocket that can be fired at one single building and at nothing else, and only once the attacker knows precisely what’s inside that building. The target set of a cyber weapon is therefore more limited than commonly assumed—the reverse is true for robust defenses.

Another reason is that, when it comes to cyber weapons, the offense has a shorter half-life than the defense.11 Clandestine attacks may have an unexpectedly long life-cycle, as Stuxnet and especially Flame illustrated. But weaponized code that is designed to maximize damage, not stealth, is likely to be more visible. If an act of cyber war were carried out to cause significant damage to property and people, then that attack would be highly visible by definition. As a result it is highly likely that the malicious code would be found and analyzed, probably even publicly, by anti-virus companies and the vendors of the attacked software. The exploits that enabled the attack would then most likely be patched and appropriate protections put in place. Yet political crises may stretch out for many weeks, months, or even years. Updated defenses would make it very difficult for the aggressor to repeat an attack. But any threat relies on the offender’s credibility to attack, or to repeat a successful attack. If a potent cyber weapon is launched successfully once, it is questionable whether an attack, or even a salvo, could be repeated in order to achieve a political goal. This problem of repetition reduces the coercive utility of destructive cyber attacks.

The final factor favoring the defense is the market. One concern is that sophisticated malicious actors could resort to asymmetric methods, such as employing the services of criminal groups, rousing patriotic hackers, and potentially redeploying generic elements of known attack tools. Worse, more complex malware is likely to be structured in a modular fashion. Modular design could open up new business models for malware developers. In the car industry, for instance,12 modularity translates into the possibility of a more sophisticated division of labor: competitors can work simultaneously on different parts of a more complex system. Modules could be sold on underground markets. But even if this analysis is correct, emerging vulnerability markets pose a limited risk: the highly specific target information and programming design needed for potent weapons is unlikely to be traded generically. To go back to the imperfect analogy of chapter four: paintball pistols will continue to be commercially available, not the intelligence-devouring, preprogrammed warheads of virtual one-shot smart missiles. At the same time the market on the defensive side is bullish: the competition between various computer security companies has heated up, red-teaming is steadily improving, active defense is emerging, and, very slowly but notably, consumers are becoming more security aware.

Once the arguments are added up, it appears that cyberspace does not favor the offense, but actually has advantages in store for the defense. What follows may be a new trend: the level of sophistication required to find an opportunity and to stage a successful cyber sabotage operation is rising. The better the protective and defensive setup of complex systems, the more sophistication, the more resources, the more skills, the more specificity in design, and the more organization are required from the attacker. Only very few sophisticated strategic actors may be able to pull off large-scale, sustained computer sabotage operations. A thorough conceptual analysis and a detailed examination of the empirical record corroborate one central hypothesis: developing and deploying potentially destructive cyber weapons against hardened targets will require significant resources, hard-to-get and highly specific target intelligence, and time to design, test, prepare, launch, execute, maintain, and assess an attack. Successfully attacking the most highly secured targets would probably require the resources or the support of a state actor; terrorists are unlikely culprits of an equally unlikely cyber-9/11.

The third conclusion is about the ethics of cyber attacks. If cyber attacks reduce the amount of violence inherent in conflict, rather than increase it, then this analysis opens a fresh viewpoint on some important ethical questions. Some observers have suggested creating an agreement like the Geneva Conventions to limit the use of weaponized code.13 Such demands often reach back to comparisons with the Cold War and nuclear weapons. For example: Brent Scowcroft, a cold warrior who served presidents Gerald Ford and George H.W. Bush, addressed a group of experts and students at Georgetown University in March 2011. The Cold War and cyber security are “eerily similar,” said Scowcroft, arguing that the US–Soviet arms control treaties should serve as a blueprint for tackling cyber security challenges. “We came to realize nuclear weapons could destroy the world and cyber can destroy our society if it’s not controlled,” Scowcroft told his Georgetown audience.14 Many views along similar lines could be added, and several academic articles have attempted to extract useful Cold War parallels.15 Naturally, avoiding the “destruction of our society” at all costs appears to be the ethically correct choice. Many observers fall back into well-established patterns of thought: striving for an international treaty to stop the impending “cyber arms race,” or trying to apply jus ad bellum to acts of war that have not taken place.16 But lazy and loose comparisons cannot replace sober and serious analysis—nuclear analogies are almost always flawed, unhelpful, and technically misguided.17

Once cyber attacks are broken down into their three strands, sounder ethical considerations become possible. Subversion is the most critical activity. It is probably impossible to find much common ground on this question between the United States and the European Union on the one hand and the authoritarian political regimes in Russia and China on the other—and it would be ethically unacceptable to make compromises. Russia and China have famously suggested finding such a compromise in the form of an “International Code of Conduct for Information Security,” laid out in a letter to the United Nations Secretary-General on 12 September 2011. One of the core tenets of this suggested code was “respect for sovereignty” online, in order to “combat” criminal and terrorist activities that use the Internet. The goal: curbing the dissemination of information that incites extremism, secessionism, and “disturbance.” Unnerved by the web-driven Arab revolutions, China and Russia wanted to set the international stage for better counter-subversion at home, as became evident during the World Conference on International Telecommunications (WCIT) in Dubai, United Arab Emirates, in early December 2012.

Yet for liberal democracies, the most normatively crucial question is a very different one. The question is not how to curb subversion, but how to maintain the right forms of subversion that enable democratic as well as entrepreneurial self-renewal: how should a free and open liberal democracy draw and renegotiate the line between regenerative subversion, which is enshrined in the constitutions of truly free countries, and illegal subversive activities? This fine line has evolved over hundreds of years in liberal democracies, along with the tacit understanding that at rare but important junctures illegal activity can be legitimate activity. This line will have to be carefully reconsidered under the pressure of new technologies that may be used to extend control as well as to protest and escape control. The real risk for liberal democracies is not that these technologies empower individuals more than the state; the long-term risk is that technology empowers the state more than individuals, thus threatening to upset a carefully calibrated balance of power between citizens and the governments they elect to serve them.

The ethics of sabotage look very different. Weaponized code, or cyber attacks more generally, may achieve goals that previously would have required the use of conventional force. This analysis has also argued that the most sophisticated cyber attacks are highly targeted, and that cyber weapons are unlikely to cause collateral damage in the same way as conventional weapons. Therefore, from an ethical point of view, the use of computer attack in many situations is clearly preferable to the use of conventional weapons: a cyber attack may be less violent, less traumatizing, and more limited. Sabotage through weaponized code, in short, is likely to be more ethical than an airstrike, a missile attack, or a Special Forces raid. Something comparable applies to the ethics of cyber espionage. Again the use of computer attack as a tool of statecraft has to be compared with its alternatives, not with taking no action at all. Juxtaposing the alternatives is useful: intelligence may be gained by infiltrating computer systems and intercepting digital signals—or intelligence may be acquired by infiltrating human spies into hostile territory at personal risk, possibly armed, or by interrogating suspects under harsh conditions. Depending on the case and its context, computer espionage may be the ethically safer choice. The major problems are not ethical, but operational. This leads to the next conclusion.

The fourth conclusion is the most sobering one—it concerns the starkly limiting subtext of stand-alone cyber attacks.18 A state-sponsored cyber attack on another state sends a message in the subtext. The best-known and most successful example is Stuxnet. An assessment of this technically impressive operation has to be put into the larger strategic context: it was designed to slow and delay Iran’s nuclear enrichment program, and to undermine the Iranian government’s trust in its ability to develop a nuclear weapon. Yet, firstly, it long remained unclear and controversial how successful the designers of Stuxnet were in this respect—it was probably clearer to the Iranians themselves. What is clear to outsiders, though, is that Stuxnet did not succeed in stopping Iran or denting the regime’s determination to develop a nuclear weapons capability. Several countries, secondly, pursued a number of policies vis-à-vis Iran in order to prevent the Islamic Republic from acquiring nuclear weapons. The instruments range from diplomacy, negotiations, and sanctions of various sorts to covert operations, the assassination of nuclear scientists (and others), military threats, and ultimately to air strikes against key Iranian installations. Cyber attacks are only one instrument among many. Focusing on cyber attacks of questionable efficiency, and possibly with an AC/DC soundtrack, therefore runs the risk of sending a counterproductive message to the Iranians in the subtext: we’re alert and technically sophisticated, but we’re not really serious about attacking you if you cross a red line. A stand-alone cyber attack, especially one executed clandestinely and in a fashion that makes it impossible or nearly impossible to identify the attacker, can be carried out from a safe distance. Such an attack does not put the lives of service personnel at risk—therefore the political stakes are by definition lower than in a conventional operation. It is useful to remember the Cold War logic of the trip-wire here. Throughout the Cold War, substantial numbers of American ground forces were stationed in West Germany and elsewhere to make a credible statement to the Soviet Union in the subtext: we’re alert and technically sophisticated, and we’re really serious about attacking you if you cross a red line.19 The White House credibly demonstrated its seriousness by putting the lives of American citizens on the line. Cyber attacks could yield very valuable intelligence, no doubt. But from a political vantage point their coercive utility is far more questionable. Based on the empirical record and on technical analysis, the political instrumentality of cyber attacks has been starkly limited and is likely to remain so. Paradoxically this effect is enhanced by the wide gulf between hype and reality, as those on the receiving end of cyber sabotage hear a lot of noise but feel (comparatively) little pain. Put simply, cyber sabotage attacks are likely to be less efficient than commonly assumed.

The final conclusion is about the reaction: countermeasures to cyber attacks. Senator John McCain commented on the failed Cybersecurity Act of 2012 in the summer of that year. The prominent national security leader had voted against the proposed bill: “As I have said time and time again, the threat we face in the cyber domain is among the most significant and challenging threats of twenty-first-century warfare.” Early in 2013 the Pentagon announced that it would boost the staff of its Cyber Command from 900 to 4,900 people, mostly focused on the offense. The use of such martial symbolism points to a larger problem: the militarization of cyber security.20 William Lynn, the Pentagon’s number two until October 2011, responded to critics by pointing out that the Department of Defense would not “militarize” cyberspace. “Indeed,” Lynn wrote, “establishing robust cyberdefenses no more militarizes cyberspace than having a navy militarizes the ocean.”21 There is one major problem with such statements.

The US government, as well as many other governments, has so far failed to establish robust cyberdefenses. Robust defenses against sabotage mean hardening computer systems, especially the systems that move things around, from chemicals to trains—but the actual security standards of the systems that run the world’s industrial and critical infrastructure continued to be staggeringly low in 2013. Robust defenses against espionage mean avoiding large-scale exfiltration of sensitive data from companies and public agencies—but Western intelligence agencies are only beginning to understand counter-espionage and the right use of human informants in a digitized threat environment. Robust defenses against subversion, finally, mean maintaining social stability by strengthening the Internet’s openness and citizens’ participation in the political process—the best insurance against degenerative subversion is allowing and even protecting regenerative subversion. If militarizing cyberspace means establishing robust cyber defenses, then cyberspace has not been “militarized.” What has been militarized is the debate about cyberspace. One result is that the quality of this debate dropped, or, more accurately, never rose to the level that information societies in the twenty-first century deserve. This book has argued that this remarkable double standard is not merely a coincidence. What appears as harmless hypocrisy masks a knotty causal relationship: loose talk of cyber war overhypes the offenses and blunts the defenses.

In the 1950s and 1960s, when Giraudoux’s play The Trojan War Will Not Take Place was translated into English, the world faced another problem that many thought was inevitable: nuclear exchange. Herman Kahn, Bill Kaufmann, and Albert Wohlstetter were told that nuclear war could not be discussed publicly, as Richard Clarke pointed out in his alarmist book Cyber War. Clarke rightly concluded that, as with nuclear security, there should be more public discussion of cyber security, because so much of the work has been stamped secret. This criticism is justified and powerful: too often countries as well as companies do not share enough data on vulnerabilities and capabilities. Of course there are limits to transparency when national security and corporate revenue are at stake. But democratic countries deserve a public debate on cyber security that is far better informed than the status quo. Open systems, whether a computer’s operating system or a society’s political system, are more stable and run more securely. The stakes are too high to just muddle through.

The Pearl Harbor comparison and the Hiroshima analogy point to another limitation: unlike the nuclear theorists of the 1950s, cyber war theorists of the 2010s have never experienced the actual use of a deadly cyber weapon, let alone a devastating one like “Little Boy.” There was, and there is, no Hiroshima of cyber war. Based on a careful evaluation of the empirical record, on technical detail and trends, and on the conceptual analysis presented here, a future cyber-Hiroshima must be considered highly unlikely. It is about time for the debate to leave the realm of myth and fairytale—to a degree, serious experts have already moved on, and the political debate in several countries is beginning to follow their lead. Cassandra, it is true, was right in the end and had the last word in the famous Greek myth. But then, did the Trojan War really take place?