2

VIOLENCE

On 6 September 2007 the Israeli Air Force bombed the construction site of a nuclear reactor at Dayr ez-Zor in eastern Syria. To prepare the air raid, a secret Israeli agency neutralized a single Syrian radar site at Tall al-Abuad, close to the Turkish border. To do so, the Israelis probably used computer sabotage. This intrusion achieved something that would previously have required the physical destruction of radar installations, damaging property, potentially hurting or killing some of the system’s operators, and possibly innocent civilians: a missile strike or an infiltration of Special Forces teams to blow up the site would have been the conventional alternative. So the outcome of the cyber attack was in some ways equivalent to that of a physical attack: a disabled air defense system. But was the cyber attack violent?

Any serious discussion of cyber war necessarily rests on a foundation: our understanding of the nature of violence, and by extension of violence in cyberspace. And as with cyber war and cyber weapons, understanding violence in cyberspace means understanding violence itself first. Only then can the key questions be tackled: what is violence in the context of cyber attacks? Does the notion of violence change its meaning when it is applied to cyberspace? The answers depend on where the line is drawn between a violent and a non-violent act, on what we count as violence and what we do not. This understanding of violence also forms the foundation for the political, economic, military, and especially ethical assessment of all cyber attacks, be they violent or otherwise.

This chapter puts forward a simple argument with a twist: most cyber attacks are not violent and cannot sensibly be understood as a form of violent action. And those cyber attacks that do carry a potential for force, realized or not, are bound to be violent only indirectly. Violence administered through cyberspace is less direct in at least four ways: it is less physical, less emotional, less symbolic, and, as a result, less instrumental than more conventional uses of political violence. Yet cyber attacks, be they non-violent or in very rare cases violent, can achieve the same goal that political violence is designed to achieve: to undermine trust, specifically the collective social trust placed in particular institutions, systems, or organizations. And cyber attacks may undermine social trust, paradoxically, in a more direct way than political violence, by taking a non-violent shortcut. Moreover, they can do so while remaining entirely invisible.

The argument is outlined in four short steps. The chapter starts by considering the various media through which violence can be expressed. Secondly, the crucial role of the human body in committing as well as receiving acts of violence will be discussed. The chapter then briefly clarifies the concept of violence, in juxtaposition to power, authority and, most importantly, force, highlighting the symbolic nature of instruments of force. Finally, the argument discusses trust, and with it the most important limitation as well as the most important potential of cyber attacks.

Violence is conventionally administered in one of three ways—through force, through energy, or through agents. A new fourth medium is code, which is bound to be more indirect—if it is to be included as a separate medium at all. The first two categories are borrowed from physics, the third from chemistry and biology. The first instance—force—is the most obvious. In physics, force is described as an influence that changes the motion of a body, or produces motion or deformation in a stationary object. The magnitude of force can be calculated by multiplying the mass of the body by its acceleration, and almost all weapons combine physical mass with acceleration, be it a fist, a stone, a pike, a bullet, a grenade, even a missile. The second medium—energy—is perhaps somewhat less obvious at first glance, but is almost as old as the use of mechanical force to coerce, hurt, or kill other human beings. Fire, heat, and explosions are used as powerful and highly destructive media of violence. Sun Tzu, the ancient Chinese author of The Art of War, had a chapter on “attack by fire.”1 Less common uses of energy in war include electricity, for instance in Tasers, and directed energy such as lasers. Agents are the third medium of violence. Some weapons rely neither on physical force nor on energy, but on agents to do the work of harming the target. The most obvious examples are biological weapons and chemical weapons—after all, such agents do not have to be fired in an artillery shell or missile that physically thrusts them into a target perimeter to deliver the deadly payload. The agent itself does the harm. Weaponized agents impair the human organism and lead to injury or death: anthrax endospores cause respiratory infection and, in a high number of cases, ultimately respiratory collapse; mustard gas, a chemical agent, causes blisters and burns and is strongly carcinogenic.
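
For readers who want the physics behind this distinction spelled out, the two quantities can be written side by side. These are standard textbook formulas of Newtonian mechanics, not something drawn from the sources cited in this chapter; they merely make explicit why force and energy are distinct media.

```latex
% Newton's second law: the magnitude of a force is mass times acceleration.
F = m\,a
% Kinetic energy: the destructive work a mass in motion can deliver.
E_{\mathrm{kin}} = \tfrac{1}{2}\,m\,v^{2}
```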

Any discussion of violence in the context of cyber attacks needs to start by recognizing some basic philosophical insights. Code differs in two notable ways from almost all instruments that may be used for violent effect. The first basic limitation is that code-caused violence is indirect: it has to “weaponize” the target system in order to turn it into a weapon. Code doesn’t have its own force or energy. Instead, any cyber attack with the goal of physical destruction, be it material destruction or harming human life, has to utilize the force or energy that is embedded in the targeted system or created by it. Code, quite simply, doesn’t come with its own explosive charge. Code-caused destruction is therefore parasitic on the target. Even the most sophisticated cyber attack can only physically harm a human being by unleashing the violent potential that is embedded in that targeted system. This could be a traffic control system, causing trains or planes to crash; a power plant that may explode or emit radiation; a dam that may break and cause a devastating flash flood; a pipeline that may blow up; hospital life support systems that may collapse in emergency situations; or even a pacemaker implanted in a heart patient that could be disrupted by exploiting vulnerabilities in its software. Yet so far, no such scenario has ever happened in reality. Lethal cyber attacks, while certainly possible, remain the stuff of fiction novels and action films. Not a single human being has ever been killed or hurt as a result of a code-triggered cyber attack.

Computer code can only directly affect computer-controlled machines, not humans. At first glance the way a biological virus harms a system may be compared to the way a computer virus—or other malware—harms a computer system. Jürgen Kraus, a German student, explored this metaphoric comparison in detail as early as 1980. In his MA dissertation, “Selbstreproduktion bei Programmen” (“Self-reproduction in Programs”), Kraus argued that self-reproducing programs would be inconsequential if they did not reside inside the memory of a computer: “Only inside the computer, and only if the program is running, is a self-reproducing program in a position for reproduction and mutation.”2 Kraus then pointed to an important difference: a biological virus could start its own reproductive process, but a computer virus would rely on activation through the operating system. The most crucial difference was so obvious to Kraus that he didn’t have to mention it: biological viruses can only affect biological systems; computer viruses can only affect machines that rely on code. Put simply, a biological virus cannot directly harm a building or vehicle, and a computer virus cannot directly harm a human being or animal.

Finally, one special hypothetical case of a parasitic cyber attack should be mentioned. Many modern weapon systems, from artillery guns to naval drones, are controlled by software, by computer code. An increasing number of such systems will be equipped with varying degrees of autonomous decision-making capabilities in the future. The International Committee of the Red Cross has recognized this trend and has already started considering possible adaptations to the law of armed conflict. Yet caution is warranted. Code that is a built-in component of a weapon system should not be seen as part of a cyber attack—otherwise the concept would lose its meaning: every complex weapon system that uses computers in one way or the other would then count as a form of cyber attack. That would not make sense. But there is one exception: an automated complex weapon system becoming the target of a breach. If weaponized code can only unlock physical destruction by modifying a targeted system, then the perfect target system is one that gives the attacker maximum flexibility and maximum potential for damage: in theory, an armed remotely controlled aircraft, such as a Predator or Reaper drone, is a far more attractive target for a cyber attack than a power plant or an air-traffic control system. In such a scenario, the aggressor would not merely weaponize a clunky system that was never designed to be a weapon—the attackers could actually “weaponize” a weapon. In practice, however, such an episode has never happened and indeed is difficult to imagine. The only incident on record that comes close to such an attack occurred in late June 2012: researchers from the University of Texas at Austin’s Radionavigation Laboratory hijacked a small surveillance drone during a test-flight in a stadium in Austin, Texas. The academics “spoofed” the drone’s GPS system by feeding the machine’s navigation device a counterfeit signal that was more powerful than the one received from the legitimate GPS satellites. This meant that the scholars could override the drone’s commands and thus control its flight path. The device they used allegedly cost only $1,000 to build, Fox News reported.3 A far better-known incident is a questionable example: in the first days of December 2011, a CIA-operated American Lockheed Martin RQ-170 Sentinel drone fell into Iranian hands near the city of Kashmar, in the country’s north-east. An anonymous Iranian engineer who worked on analyzing the captured drone claimed that electronic warfare specialists had spoofed the drone’s GPS navigation.4 After a ten-week review of the incident the CIA reportedly found that a faulty data stream had caused operators to lose contact with the drone, rather than Iranian interference or jamming.5 While spoofing is technically possible, it is very unlikely that such an attack could succeed against a more complex armed military drone in the field: the controls are likely to be encrypted, altitude can be a problem, and deceiving a GPS receiver is not the same as infiltrating the control system that can unleash a military drone’s deadly missiles against specified targets.

The human body, the previous pages argued, is not directly vulnerable to cyber attack, only indirectly. But the human body, in several ways, is the foundation of violence. It enables both the act of attacking and of being attacked. Understanding this foundational role of the human body is necessary to see the emotional limitations of code-borne violence, as well as the symbolic limitations of cyber attack.

Taking the human body as the starting point for political theory has a long tradition, especially among political philosophers concerned with the phenomenon of violence and how to overcome it. In most such considerations, the human body’s most notable feature is its vulnerability. Thomas Hobbes and his reflections on the vulnerability of unprotected human existence probably come to mind first. For Hobbes, the driving force behind all political organization is the universal weakness of humans and their resulting dependence on protection. The opening paragraph of Chapter 13 of Leviathan deserves to be read in full:

Nature hath made men so equall, in the faculties of body, and mind; as that though there bee found one man sometimes manifestly stronger in body, or of quicker mind then another; yet when all is reckoned together, the difference between man, and man, is not so considerable, as that one man can thereupon claim to himselfe any benefit, to which another may not pretend, as well as he. For as to the strength of body, the weakest has strength enough to kill the strongest, either by secret machination, or by confederacy with others, that are in the same danger with himselfe.6

This equalizing vulnerability forms the conceptual basis for Hobbes’s famous state of the war of all against all, and the normative basis for the social contract to overcome the resulting “natural” state of anarchy. This state of war, and the absence of political order, prevents all meaningful social development and civilization. The consequence, in Hobbes’s most famous words, would be “continuall feare, and danger of violent death,” which would inevitably make the life of man “solitary, poore, nasty, brutish, and short.”7 Self-help, therefore, needed to be rejected and violence taken away from man, monopolized,8 and given to the collective, the “commonwealth.” It is important to note that this basic insight forms the continued foundation of most contract theory, legal theory, and indeed the modern notion of the state.9

Wolfgang Sofsky, a more recent German political theorist, is also noteworthy in this context. He wrote a highly inspiring work about violence, entitled Traktat über die Gewalt (Treatise on Violence).10 The book is especially useful in the context of the present inquiry because, like cyber attacks, it ignores the state’s frontiers, and does not limit itself to internal or external violence. For Sofsky, whether domestically or internationally, the body is the center of human existence. It is because of man’s bodily existence that all humans are verletzungsmächtig, able to hurt, and verletzungsoffen, able to be hurt.11 The body, in other words, is the first instrument of violence, and it is the first target of violence. The two sides—the active and the passive; aggression and protection; offense and defense—will be considered in turn.

The body is the first and foremost target of violence. Even if more advanced weapons are designed to destroy buildings, bunkers, or barricades, their ultimate aim always remains the human body. Appreciating this foundation is crucial. The experience of physical violence at the hands of a criminal or an enemy is a life-changing event for the survivor that transcends the moment of aggression. It stays with the victim. Sofsky puts this in drastic but appropriate language. Somebody else’s death may leave a painful hole in the social fabric of a family, village, or country. But the situation is different for the survivor of a violent attack. Depending on the attack’s intensity, the victim has made personal acquaintance with a life-changing event, with possibly permanent injuries, physical as well as psychological ones. Sofsky:

Pain is the material portent of death, and fear [of pain] is ultimately only an offspring of the fear of death. Pain forces man to feel what ceases to be felt in death, the tenuousness of the body, the destruction of the mind, the negation of existence. The survivor knows this in his flesh. He feels that dying has begun.12

Violence, like serious illness, confronts individuals with the fragility of their existence, with the proximity of death. Anyone who has spent time in a war zone, even if they were spared a personal encounter with violence, understands this existential dimension. The strong bonds of friendship forged in war zones are an expression of this existential and highly emotional experience. Individual violence can literally break down an individual, cause irreversible trauma, and end his or her existence—political violence, likewise, can break down a political community, cause deep collective trauma, and even upend its existence entirely. Therefore, Sofsky concluded, “no language has more power to persuade than the language of force.”13

From this follows the second major limitation: violence administered through cyberspace is not only indirect and mediated; it is also likely to have less emotional impact. Due to its intermediary and roundabout nature, a cyber attack is unlikely to release the same amount of terror and fear as a coordinated campaign of terrorism or conventional military operations would produce. A coordinated cyber attack that produces a level of pain that could sensibly be compared to that which a well-executed air campaign can inflict on a targeted country is plainly unimaginable at present.14 And here a comparison with airpower may be instructive: the use of airpower has historically been overestimated again and again, from the Second World War to Vietnam to the Persian Gulf War to the Kosovo campaign to Israel’s 2006 war against Hezbollah. In each instance the proponents of punishment administered from the air overestimated the psychological impact of aerial bombardments and underestimated the adversary’s resilience—and it should be noted that being bombed from the air is a most terrifying experience.15 Against this background of consistent hopefulness and overconfidence, it is perhaps not surprising that it was air forces, not land forces, that first warmed to “cyber” and started to maintain that their pilots would now “fly, fight and win”16 in cyberspace, the much-vaunted “fifth domain of warfare,”17 next to land, sea, air, and space. The use of cyber weapons that could inflict damage and pain comparable to the pummeling of Dresden, London, Belgrade, or Beirut at the receiving end of devastating airpower is, at present, too unrealistic even for a bad science fiction plot.

When the first casualty is caused by a cyber attack—and a “when” seems to be a more appropriate conjunction than an “if” here—there is no question that the public outcry will be massive, and depending on the context, the collective fear could be significant. But after the initial dust has settled, the panic is likely to subside and a more sober assessment will be possible. The likely finding will be that, to paraphrase Sofsky, in cyberspace the language of force just isn’t as convincing. Another reason for the limitations of instruments of violence in cyberspace, or cyber weapons, is their symbolic limitation. To appreciate the symbolism of weapons, the key is again the body.

The body is also the first instrument of violence, the first weapon. Here the “first” doesn’t indicate a priority, but the beginning of an anthropological and ultimately a technical development that is still in full force. Three things are constantly increasing: the power of weapons, the skills needed to use them, and the distance to the target. Needless to say, it is the deadly ingenuity of the human mind that devised most weapons and optimized the skills needed to use them and to build them. Yet it was the bare body that was man’s first weapon, and all instruments of violence are extensions of the human body. Examples of simple weapons, such as a club, a knife, or a sword, can help illustrate this unity between user and instrument. Martial artists trained in the delicate techniques of fencing or kendo, a Japanese form of sword-fighting, will intuitively understand that the purpose of intensive training, mental as well as physical, is to make the weapon’s use as natural as the use of one’s arms or legs, and vastly more efficient. Hence expressions such as “going into the weapon,” “becoming one” with it, or “feeling” it. Such expressions are not just familiar to fencers, archers, or snipers. The unity between the fighter and his or her weapon is not limited to the relatively simple, traditional instruments of harm. It also applies to more complex weapon systems. The F-15 pilot and the artillery gunner likewise accumulate large numbers of hours in flight or on the range in order to routinize the use of these instruments of harm. The goal is that the pilot does not have to think about the basics any more, even in more complex maneuvers under high stress, and instead is able to focus on other aspects of an operation. The plane, ultimately, becomes an extension of the pilot’s body as well. And the more complex a weapon, the more its use becomes the prerogative of specialists in the use of force who are trained to operate it.

Technology affects the relationship between violence and the human body in an uneven way: technology drastically altered the instruments of violence—but technology did not alter the foundation, the ultimate vulnerability of the human body. Technology can physically remove the perpetrator from the inflicted violence, but technology cannot physically remove the victim from the inflicted violence. If the bare human body is the instrument of harm, it is a fist or foot that will hit the victim’s body. If the instrument of harm is a blade, the knife will cut the victim’s skin. An arrow will hit its victim’s shoulder from a short distance. A sniper’s bullet will tear tissue from hundreds of yards. A shell’s detonation will break bones from miles away. A long-range missile can kill human beings across thousands of miles, continents away. Man can delegate the delivery of violence to an artifact, to weapon systems, but the reception of violence remains an intimately personal, bodily experience. The complexity, the precision, and the destructive power of weapon systems, as well as the degree of specialization of those operating them, have been growing continuously.

This ever-increasing power of offensive weapons highlights the symbolic role of instruments of violence—and the third major limitation: the difficulty of using cyber weapons for symbolic purposes. The well-trained body of a boxer or wrestler is a symbol of his or her strength, even outside the ring, outside a fight. The sword is traditionally used as a symbol of glory, martial prowess, and social status. Showing weapons, consequently, becomes a crucial part of their use and justification. In Yemen, for instance, the jambeeya, a traditional dagger worn like a giant belt-buckle, is still a man’s most visible symbol of social standing. In New York City and elsewhere, a police officer with a holstered gun commands more respect than an unarmed sheriff. In Beijing, military parades are perhaps the most obvious spectacle literally designed to show the state’s imposing power. There is even an entire range of military operations largely executed for the purpose of display. Examples can be found on strategic and tactical levels: deploying major assets, such as a carrier strike group, at strategically significant points, or merely a patrol, be it marching, driving, sailing, or flying.18 Most explicitly, airpower may be deployed in so-called “show of force” operations, such as a deliberately low flyover by a military aircraft, designed to intimidate the enemy tactically. As the power of weapons increases along with the required skills of their users and the distance they can bridge, the need for symbolism increases as well: displaying weapon systems and threatening their use, in many situations, becomes more cost-efficient than using them. Politically and ethically the symbolic use of weapons is also strongly preferable. Nuclear weapons are the most extreme expression of this trend. But cyber assets are different. Showing the power of cyber weapons is vastly more difficult than showing the power of conventional weapons, especially if the purpose is to threaten the use of force rather than actually to use it. Exploit code cannot be paraded on the battlefield in a commanding gesture, let alone displayed in large city squares on imposing military vehicles. In fact, publishing dangerous exploit code ready to be unleashed (say, for the sake of the argument, on the DoD’s website) would immediately lead to patched defenses and thus invalidate the cyber weapon before its use. Using cyber weapons, for instance to fire a warning shot in the general direction of a potential target in a show-of-force operation, comes with a separate set of problems. The next chapter will explore some of these limitations in more detail. The argument here is that displaying force in cyberspace is fraught with novel and unanticipated difficulties.

So far, three limitations of violence in cyberspace have been introduced: code-induced violence is physically, emotionally, and symbolically limited. These limitations were straightforward and highlighting them did not require significant conceptual groundwork. This is different for both the most significant limitation of cyber attacks19 and their most significant potential. To bring both into relief, a more solid conceptual grounding in the political thought on violence is required. (Note to readers: this book caters to a diverse audience, to those interested in conflict first and in computers only second, and to those interested primarily in new technologies and their impact on our lives. For readers from either group, political theory may not be the first reading choice. Yet especially those readers with a practical bent are encouraged to engage with the following pages. These conceptual considerations are not introduced here as a scholarly gimmick. Indeed theory shouldn’t be left to scholars; theory needs to become personal knowledge, conceptual tools used to comprehend conflict, to prevail in it, or to prevent it. Not having such conceptual tools is like an architect lacking a drawing board, a carpenter without a ruler, or, indeed, a soldier without a rifle.)

Political violence is always instrumental violence, violence administered (or threatened) in the service of a political purpose, and that purpose is always to affect relationships of trust. Violence can either be used to establish and to maintain trust, or it can be used to corrode and to undermine trust. Terrorism, for instance, is a means to undermine a society’s trust in its government. Violence can also be used to maintain or re-establish trust, for instance by identifying and arresting criminals (or terrorists), those who broke the social contract by using force against other citizens and their property. The political mechanics of violence thus have two starkly contrasting sides, a constructive and a destructive side, one designed to maintain trust, the other designed to undermine trust. The two are mutually exclusive, and will be considered in turn. Only then can the utility of cyber attacks for each purpose be considered.

A brief word on trust is necessary here. Trust is an essential resource in any society.20 Perhaps because of this towering significance, trust and trustworthiness are highly contested concepts in political theory. Because trust is so important yet so abstract and contested, concisely defining the use of the term is crucial for the purposes of the present argument. Political thought distinguishes between two rather different kinds of trust: trust should first be understood as an interpersonal relationship between two individuals.21 Examples are the kind of trust relationships that exist between me and my brother, between you and your plumber, or between a customer and a taxi driver. Such relationships of trust are always directed towards an action: my brother may trust me to take care of his daughter while he’s away; you trust your plumber to fix the bathroom boiler while you are at work; and a traveler trusts the cabbie to go the right way and not rob them. The second kind of trust refers to a collective attribute rather than to an individual’s psychological state: the trust that individuals collectively place in institutions.22 Such an institution can be a bank, a solicitor, an airline, the police, the army, the government, or more abstract entities like the banking system or the legal order more generally. Again, such relationships of institutional trust are related to an activity or service: customers trust their bank not to steal money from them, they trust airlines to maintain and operate planes properly, and citizens trust the police to protect them when necessary. At second glance, interpersonal trust and institutional trust are connected. You are likely to trust your plumber not because he’s such a trustworthy person per se, but because you know that law enforcement in England or France or Germany is set up in a way that, if he betrays your trust, he will face legal punishment. Something similar applies to the taxi ride. Consequently, an astute traveler looking out for a taxi ride across Cairo just after the 2011 revolution, with law and order having partly collapsed, is likely to place far less trust in the orderly behavior of a random cab driver. This means that trustworthy and stable legal institutions are what enable interpersonal trust between individuals who may represent those institutions or be bound by them.23

Focusing on trust significantly broadens the horizon of the analysis of political violence. At the same time it offers a more fine-grained perspective on the purpose of an attack, and this increased resolution is especially useful when looking at various kinds of cyber attacks. The goal of an attack—executed by code or not, violent or not—may be more limited than bringing down the government or challenging its legitimacy in a wholesale fashion. Not all political violence is revolutionary, not all activists are insurgents, and not all political attacks are violent. The goal of an attack may be as distinct as a specific policy of a specific government, a particular business practice of a particular corporation, or the reputation of an individual.

Violence, in the hands of the established order, is designed to maintain social trust. To appreciate the depth of this insight, consider three well-established maxims of political theory. First, violence is an implicit element of even the most modern legal order. Any established political order comes with a certain degree of violence built-in—consolidated states, after all, are states because they successfully maintain a monopoly over the legitimate use of force. Any legal order, to use the language of jurisprudence, is ultimately a coercive order.24 One of the most inspiring writers on this subject is Alexander Passerin d’Entrèves, a twentieth-century political philosopher from the multi-lingual Aosta Valley in the Italian Alps. In his book, The Notion of the State, he discusses the state and its use of force at length. “The State ‘exists’ in as far as a force exists which bears its name,” he wrote, referring to the tacit nature of the potential threat of force that is the subtext and foundation of any rule of law. “The relations of the State with individuals as well as those between States are relations of force.”25 One of Hobbes’s most famous quotes captures this tacit presence of force in the authority of the law: “Covenants, without the sword, are but words.”26

This tacit violence, secondly, becomes power. And it is trust that turns violence into power. To be more precise: social trust, ultimately, relies on the functioning of the rule of law, and the rule of law in turn relies on the state effectively maintaining and defending a monopoly of force, internally as well as externally. This thought may appear complex at first glance, but it forms the foundation of the modern state. In fact this question is at the root of a vast body of political theory, a literature dedicated, in Passerin d’Entrèves’s words, to the long and mysterious ascent that leads from force to authority, to asking what transforms “force into law, fear into respect, coercion into consent—necessity into liberty.” It is obviously beyond the capacity of the present analysis to go into great detail here. But a short recourse to a few of the most influential political thinkers will help make the case that trust is a critically important concept. Again Hobbes:

The Office of the sovereign, be it a monarch or an assembly, consisteth in the end, for which he was trusted with the sovereign power, namely the procuration of the safety of the people.27

Collective trust in the institutions of the state is one of the features that turn violence into power. John Locke, a philosopher of almost equal standing to Hobbes in the history of Western political thought, captured this dynamic eloquently. For Locke, trust is an even more central concept:

[P]olitical power is that power which every man having in the state of Nature has given up into the hands of the society, and therein to the governors whom the society hath set over itself, with this express or tacit trust, that it shall be employed for their good and the preservation of their property.28

Force, when it is used by the sovereign in order to enforce the law, ceases to be mere violence. By representing the legal order, force becomes institutionalized, “qualified” in Passerin d’Entrèves’s phrase: “force, by the very fact of being qualified, ceases to be force” and is transformed into power.29

Thirdly, political violence—whether in its raw form or in its tacit, qualified form as power—is always instrumental. “Violence can be sought only in the realm of means, not of ends,” wrote Walter Benjamin, an influential German-Jewish philosopher and literary critic. His essay, Critique of Violence, published in the author’s native German as Kritik der Gewalt in 1921, is a classic in the study of violence.30 Benjamin also pointed to a fundamentally constitutional character of violence, its “lawmaking” character. “People give up all their violence for the sake of the state,” Benjamin wrote, in agreement with realist political theorists and positivist legal thought.31 He then distinguished between a “law-preserving function” of violence and a “lawmaking function” of political violence.32

While there is some consensus on a theory of war, and certainly on a theory of law, there is little consensus on a theory of violence—although both war and law employ force and ultimately violence. Perhaps not surprisingly, a significant amount of political thought on violence—like on war and law—comes from philosophers and political thinkers who were German, or at least German-speaking. The German word for violence is Gewalt. The prevalence of German authors is not surprising for two reasons. One is that the country’s history was exceptionally violent, especially during the nineteenth and the first half of the twentieth century, when most landmark texts were written. Their authors include Karl Marx, Max Weber, Walter Benjamin, Hannah Arendt, and Carl Schmitt, all of whom wrote classics in sociology and political science. The list also includes Carl von Clausewitz, who wrote one of the founding texts of the theory of war, On War, and Hans Kelsen, whose œuvre includes one of the founding texts of jurisprudence, The Pure Theory of Law. Gewalt, a word that does not translate directly into English, plays a key role in all of these texts. It is indeed a forceful concept. This leads to the other reason for the prevalence of German authors in this field: the German language “qualifies” violence from the start—or more precisely, it never disqualified Gewalt; it never distinguished between violence, force, and power. Gewalt can be used as in Staatsgewalt, the power of the state, or as in Gewaltverbrechen, violent crime. These basic concepts of classical political theory bring into relief the most important limitation as well as the most important potential of cyber attacks.

The utility of cyber attacks—be they violent or non-violent—to establish and maintain trust is crucially limited. First, violent cyber attacks, or the threat of violent cyber attacks, are unlikely to ever be an implicit part of a legal order. Neither espionage nor sabotage nor subversion has the potential to maintain, let alone establish, a coercive order. Domestic surveillance may be used as a supplemental and highly controversial tool for that purpose, but not as the actual instrument of coercion. From this follows, secondly, that the notion of “cyberpower,” properly defined, is so shaky and slippery as to be useless. The notion of Cybergewalt, in other words, doesn’t make a lot of sense at present. Hence, thirdly, code-borne violence is hugely limited in its instrumentality: it has little to no trust-maintaining potential, and may only contribute to undermining trust. This limiting insight at the same time points to the most significant potential of cyber attacks: cyber attacks can achieve similar or better effects in a non-violent way.

Political violence is also a means of eroding trust. The perpetrators of political violence, especially in its most extreme form, terrorism, almost always clamor for publicity and for media attention, even if no group claims credit for an attack. The rationale of maximum public visibility is to get one crucial statement across to the maximum number of recipients: see, your government can’t protect you, you can’t trust the state to keep you safe. This logic of undermining a government’s trustworthiness applies in two scenarios that otherwise have little in common. One is indiscriminate terrorist violence where random civilians are the victims, including vulnerable groups, such as the elderly, women, and children. All political violence, but especially the most brutal indiscriminate kind, also sends a message that is likely to run counter to the militants’ interests: we don’t discriminate, we’re brutal, we don’t respect the life of innocents. But in the heat of violent internal conflict, the utility of progressively undermining the population’s trust in the government outweighs these reputational costs. The other scenario is regular, state-on-state war. When one country’s army goes to war against another country’s armed forces, one of the key objectives is to undermine the link between the enemy’s population and its government. That link is a form of institutional trust. Clausewitz famously described the population as one of the three elements of a “fascinating trinity” (the other elements being the government and the army). The population, the Prussian philosopher of war wrote, is the source of passion and the energy that is required to sustain a nation’s war effort. Therefore, if the majority of the population loses faith in its government’s and army’s ability to prevail, then public opinion and morale will collapse. This dynamic does not just apply to democracies, but to all political systems, albeit in different ways. Modern armed forces, not unlike militant groups, are therefore seeking to maximize the public visibility of their superior firepower.33 In both regular and irregular conflict, for generals and terrorists, one of the main goals is to convince opinion leaders in a country’s wider population that their present regime will be unable to resurrect and maintain its monopoly of force—either externally or internally.

Cyber attacks can undermine an institution’s trustworthiness in a non-violent way. Here the context matters greatly. Of course the use of political violence is not the only way to undermine trust in a government, an institution, a policy, a company, or somebody’s competence. It is not even the most common method, nor is it the most efficient or most precise one. Violence is merely the most extreme form of political attack. In liberal democracies, such as the United States, most forms of non-violent political activism and protest are not just legal but are also considered legitimate, especially in hindsight; the civil rights movement is an obvious example. But even extreme forms of activism and political speech are protected by the American Constitution’s First Amendment.34 In less liberal political communities, certain forms of non-violent political activism will be illegal. The more illiberal a system, the more non-violent dissent will be outlawed (the chapter on subversion below will explore this problem in more detail).

Cyber attacks, both non-violent and violent ones, have a significant utility in undermining social trust in established institutions, be they governments, companies, or broader social norms. Cyber attacks are more precise than conventional political violence: they do not necessarily undermine the state’s monopoly of force in a wholesale fashion. Instead they can be tailored to specific companies or public sector organizations and used to undermine their authority selectively. The logic of eroding trust by means of cyber attack is best illustrated with examples. Four examples will help extract several insights.

The first and most drastic is the DigiNotar case, a hacking attack on a computer security company. DigiNotar used to be a leading certificate authority based in the Netherlands, initially founded by a private lawyer in cooperation with the official body of Dutch civil law notaries. Certificate authorities issue digital certificates, and DigiNotar was founded to do so for the Dutch government as well as commercial customers. In cryptographic terms, the certificate authority is a so-called trusted third party, often abbreviated as TTP, and effectively acts as a provider of trust between two other parties. It does so by certifying the ownership of a public key by the named “subject” (the owner) of the certificate. A browser usually displays the presence of a certified website by a small green lock or other green symbol on the left side of the browser’s address bar. All mainstream browsers were configured to trust DigiNotar’s certificates automatically. Significantly, some of the Dutch government’s most widely used electronic services relied on certificates issued by the compromised firm, for example the country’s central agency for car registration, Rijksdienst voor het Wegverkeer, or DigiD, an identity management platform used by the Dutch Tax and Customs Administration to verify the identity of citizens online by using the country’s national identification number, the Burgerservicenummer. Given the nature of DigiNotar’s services as a certificate authority—trust was its main product—it was by definition highly vulnerable to cyber attack.
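
To see why a compromised certificate authority is so consequential, it helps to look at what a client actually checks. The short sketch below is a minimal illustration in Python, not a depiction of DigiNotar’s own infrastructure; the hostname is a placeholder chosen purely for the example. It shows that a standard TLS client accepts any certificate that chains to any root authority in its local trust store.

```python
import socket
import ssl

# A minimal sketch of the trust model, not of DigiNotar's own infrastructure:
# a standard TLS client accepts any certificate that chains to any certificate
# authority in its local trust store.
context = ssl.create_default_context()  # loads the system's trusted root CAs

host = "www.example.com"  # placeholder hostname, used here purely for illustration
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # The handshake succeeds as long as *some* trusted authority vouches for
        # the certificate; the client cannot tell whether that authority had any
        # business issuing a certificate for this particular domain.
        print(tls.getpeercert()["issuer"])
```

Because every root authority in the store is trusted equally, a single compromised authority can vouch for any domain on the Internet, which is precisely what made the DigiNotar breach so consequential.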

That attack happened on 10 July 2011.35 The self-styled “Comodo Hacker” gained access to the firm’s computer systems and, over the next ten days, issued more than 530 fraudulent certificates, including certificates pretending to be from Google, Skype, and Mozilla, but also from major Western intelligence agencies, such as the CIA, the Mossad, and the British MI6.36 Nine days later, on 19 July, DigiNotar’s staff detected an intrusion into its certificate infrastructure, but the firm did not publicly disclose this information at the time. The company revoked all the fraudulent certificates it detected, about 230 in total, but that meant that it failed to revoke more than half of the fraudulent certificates.37 At least one certificate was then used for so-called man-in-the-middle attacks against users of Gmail and other encrypted Google services. By intercepting the traffic between Google and the user, unknown attackers were able to steal passwords and everything else that these unsuspecting users typed or stored in their account. The Dutch security firm did not usually issue certificates for the Californian search giant. But the fact that Google had no business relationship with the issuer of the certificate did not make the attack any less effective—all mainstream browsers, from Firefox to Chrome to Internet Explorer, accepted DigiNotar’s faked certificate as credible anyway. Nearly six weeks later, on Saturday, 27 August, one user in Iran noticed something was wrong. Alibo, as he chose to call himself, was running the latest version of Google’s Chrome browser and noticed an unusual warning when he checked his emails on Gmail. A few months earlier, in May 2011, Google had added the “public key pinning” feature to its browser. This meant that Google essentially “hard-coded” fingerprints for its own web services into Chrome. Chrome then simply ignored contrary information from certificate authorities, and displayed a warning to users.38 Alibo saw this warning and posted a question on a Google forum that Saturday, “Is this MITM to Gmail’s SSL?” he asked in broken geek jargon, referring to a man-in-the-middle attack and a common encryption protocol, the secure sockets layer.39
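
Pinning can be illustrated with a similarly reduced sketch. For simplicity the example below pins the fingerprint of the whole certificate rather than of the public key, as Chrome did, and the pinned value and hostname are placeholders; the principle, comparing what the server presents against a hard-coded expectation instead of deferring to the certificate authorities, is the same check that triggered the warning in Alibo’s browser.

```python
import hashlib
import ssl

# Placeholder pin: a real client would ship the known-good fingerprint of the
# service it wants to protect, the way Chrome shipped pins for Google's domains.
PINNED_SHA256 = "0" * 64

def certificate_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the certificate presented by host and return its SHA-256 fingerprint."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

if certificate_fingerprint("www.example.com") != PINNED_SHA256:
    # A mismatch means the certificate, however valid it looks to the CA system,
    # is not the one the client expected -- the cue that something is wrong.
    print("WARNING: presented certificate does not match the pinned fingerprint")
```

Pinning trades flexibility for assurance: the client no longer defers entirely to the certificate authorities, which is why Chrome, rather than the CA system itself, was the first to notice that something in Iran was amiss.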

The reaction came quickly. After all, this was the first time that the malicious use of a fake certificate on the Internet had come to light, as was pointed out by the Electronic Frontier Foundation, a group that protects civil liberties online.40 By Monday, Internet Explorer, Firefox, and Chrome had been updated, and patched versions rejected all of DigiNotar’s certificates and displayed a warning to the user. “Browser makers Google, Mozilla and Microsoft subsequently announced that they would permanently block all digital certificates issued by DigiNotar, suggesting a complete loss of trust in the integrity of its service,” Wired magazine reported. The affair, and the Dutch certificate authority’s handling of the breach, fatally undermined the trust that its users and customers had placed in the firm. The Dutch Ministry of the Interior even went so far as to announce that the government of the Netherlands was no longer able to guarantee that it was safe to use its own websites. On 20 September 2011, DigiNotar’s owner, VASCO, announced that the company was bankrupt.

The hack was highly significant. It enabled a deluge of second-order attacks. When a certificate is used, the browser sends a request to a responding server at the certificate issuing company by using a special Internet protocol, the Online Certificate Status Protocol, or OCSP, to obtain a certificate’s revocation status. The protocol reveals to the responder that a particular network host is using a particular certificate at a particular time. The first fake *.google.com certificate status request came on 27 July, seventeen days after the certificate had been issued. A week later, on 4 August, the number of requests to DigiNotar’s OCSP responders “massively” surged, the official investigation found. The affected users, Google reported, were mostly in Iran.41 The suspicion was that Iranian dissidents, many of whom trusted Google for secure communications, had been targeted in the attack.42 Around 300,000 unique IPs requesting access to Google were identified, with more than 99 per cent coming from Iran. Those that did not come from Iran came mainly from Iranian users hiding behind proxy servers abroad. This means that somebody in Iran was trying to spy on more than a quarter of a million users in a very short period of time.43

The question of who that somebody was remains unanswered. The initial attack on DigiNotar may have been the work of a single hacker. The infamous Comodo Hacker, named after an earlier attack on a company called Comodo, described himself as a 21-year-old student of software engineering in Tehran—and a pro-establishment patriot.44 In an interview with The New York Times shortly after the notorious attack, he claimed to revere Ayatollah Ali Khamenei and despise Iran’s dissident Green Movement. Comodo Hacker chose that particular certificate authority from a larger list of such companies because it was Dutch, he told the Times.45 The allegedly angry student blamed the Dutch government for the murder of more than 8,000 Muslims in Srebrenica in 1995 during the Bosnian War:

When Dutch government, exchanged 8000 Muslim for 30 Dutch soldiers and Animal Serbian soldiers killed 8000 Muslims in same day, Dutch government have to pay for it, nothing is changed, just 16 years has been passed. Dutch government’s 13 million dollars which paid for DigiNotar will have to go directly into trash, it’s what I can do from KMs [kilometers] away! It’s enough for Dutch government for now, to understand that 1 Muslim soldier worth 10000 Dutch government.46

This justification is somewhat dubious. It is more likely that DigiNotar simply offered “low-hanging fruit,” a term malware experts often use when referring to easy-to-pick and obvious targets. An audit immediately after the hack found that the compromised firm lacked even basic protection: it had weak passwords and lacked anti-virus protection and even up-to-date security patches. What the industry calls “bad hygiene” enabled the consequential breach. As late as 30 August, F-Secure, a cutting-edge Finnish security firm, discovered several defacements to less often visited parts of DigiNotar’s website, some of which were more than two years old and related to older hacks committed by other groups.47 It is therefore likely that DigiNotar had a reputation for being an easy target among hackers. The question of whether or not the hacker acted on his own initiative is of secondary importance. The online vigilante openly admitted that he shared his information with the Iranian government. “My country should have control over Google, Skype, Yahoo, etc.,” the alleged Comodo Hacker told The New York Times by email. “I’m breaking all encryption algorithms and giving power to my country to control all of them.” It is highly likely that the spying surge after 4 August was the result of Iranian government agencies or their representatives, possibly ISPs, spying on unsuspecting citizens using Gmail, an email service popular among techy users for its high security standards.

The DigiNotar case offers a triple example of how cyber attacks can undermine trust: first, the trust in the Dutch certificate authority DigiNotar was fatally destroyed, leading straight to bankruptcy for the company. Secondly, because the Dutch government relied on the company’s credibility, its own trustworthiness received a temporary hit, which the Ministry of the Interior tried to limit by being transparent about the crisis. Thirdly, and perhaps most gravely, the fraudulent certificates undermined the trust that users, many of them in Iran, had placed in the confidentiality of encrypted services such as Gmail: communications they had believed to be private turned out to be exposed to interception.

A second example also involves a joint public-private target set, the infamous cyber attacks against Estonia’s government and some private sector companies in April and May 2007. The perpetrators, likely Russians with a political motivation, almost certainly did not anticipate the massive response and high public visibility that their DDoS attacks received. Estonia’s political leadership was taken aback by the attack and scrambled for an appropriate response, both practical and conceptual. “The attacks were aimed at the essential electronic infrastructure of the Republic of Estonia,” said Jaak Aaviksoo, then Estonia’s new minister of defense:

All major commercial banks, telcos, media outlets, and name servers—the phone books of the Internet—felt the impact, and this affected the majority of the Estonian population. This was the first time that a botnet threatened the national security of an entire nation.48

One of the questions on Aaviksoo’s mind at the time was whether he should try to invoke Article 5 of the North Atlantic Treaty, which guarantees a collective response to an armed attack against any NATO country. Ultimately that was not an option as most NATO states did not see a cyber attack as an “armed attack,” not even in the heat of the three-week crisis. “Not a single Nato defence minister would define a cyber-attack as a clear military action at present,” Aaviksoo conceded, adding: “However, this matter needs to be resolved in the near future.”49 One Estonian defense official described the time leading up to the launch of the attacks as a “gathering of botnets like a gathering of armies.”50 Other senior ministers shared his concern. Estonia’s foreign minister at the time of the attack was Urmas Paet. From the start, he pointed the finger at the Kremlin: “The European Union is under attack, because Russia is attacking Estonia,” he wrote in a statement on 1 May 2007, and added: “The attacks are virtual, psychological, and real.”51 Andrus Ansip, the prime minister, was already quoted above: “What’s the difference between a blockade of harbors or airports of sovereign states and the blockade of government institutions and newspaper web sites?”52 Ene Ergma, the speaker of the Estonian parliament with a PhD from Russia’s Institute of Space Research, preferred yet another analogy. She compared the attack to the explosion of a nuclear weapon and the resulting invisible radiation. “When I look at a nuclear explosion and the explosion that happened in our country in May,” Ergma told Wired magazine, referring to the cyber attack, “I see the same thing. Like nuclear radiation, cyber warfare doesn’t make you bleed, but it can destroy everything.”53 The panic was not confined to the small Baltic country. In the United States, hawkish commentators were alarmed at what they saw as a genuine, new, and highly dangerous threat. Ralph Peters, a retired Army intelligence officer and prolific commentator, published a red-hot op-ed in Wired two months after the Estonia attack. He accused the Department of Defense of underestimating a novel and possibly devastating new threat:

[T]he Pentagon doesn’t seem to fully grasp the dangerous potential of this new domain of warfare. If you follow defense-budget dollars, funding still goes overwhelmingly to cold war era legacy systems meant to defeat Soviet tank armies, not Russian e-brigades.54

The United States, Peters held, could face a devastating surprise attack, one that would make Pearl Harbor look like “strictly a pup-tent affair,” an expression he borrowed from Frank Zappa’s song “Cheepnis.”

In hindsight, these comparisons and concerns may appear overblown and out of sync with reality. But they should not be dismissed as hyperbole too easily. Aaviksoo’s and Ansip’s and Peters’s concerns were genuine and honest—they are expressions of a successful erosion of trust in previous security arrangements. It is important to point out—especially against the background of all these martial analogies—that both the DigiNotar hack and the Estonian DDoS were non-violent. Yet they effectively undermined public trust in a company and in a country’s ability to cope with a new problem. In the one case the erosion of trust was terminal (DigiNotar filed for bankruptcy); in the other case it was temporary: a few years after the attack Estonia had better defenses, better staff, and excellent skills and expertise on how to handle a national cyber security incident.

Therefore, thirdly, examining the only possibly violent cyber attack to have taken place in the wild—Stuxnet—is instructive.55 Even this one cyber attack that created a certain amount of physical destruction, albeit directed against technical equipment, had a strong psychological element. It was intended to undermine trust, the trust of scientists in their systems and in themselves, and the trust of a regime in its ability to succeed in its quest for nuclear weapons. For more than two years after Stuxnet started successfully damaging the Iranian centrifuges, the Iranian operators did not know what was happening. The operation started long before Barack Obama was sworn in as president in January 2009, possibly as early as November 2005. Independent security companies would discover the malicious code only in June 2010. The original intention was to cause physical damage to as many of the Iranian centrifuges as possible. But the American-Israeli attackers probably knew that the physical effect could be exploited to unleash a much more damaging psychological effect: “The intent was that the failures should make them feel they were stupid, which is what happened,” an American participant in the attacks told The New York Times.56 The rationale was that once a few machines failed, the Iranian engineers would shut down larger groups of machines, so-called “stands” that connected 164 centrifuges in a batch, because they distrusted their own technology and would suspect sabotage in all of them. In the International Atomic Energy Agency, a powerful UN watchdog organization based in Vienna, rumors circulated that the Iranians had lost so much trust in their own systems and instruments that the management in Natanz, a large nuclear site, had taken the extraordinary step of assigning engineers to sit in the plant and radio back what they saw to confirm the readings of the instruments.57 Such confusion would be useful to the attackers: “They overreacted,” one of the attackers revealed, “And that delayed them even more.”58 The Iranians working on the nuclear enrichment program began to assign blame internally, pointing fingers at each other, even firing people. Stuxnet, it turned out, was not a stand-alone attack against the self-confidence of Iranian engineers. It is important to note that the Stuxnet operation was probably designed to remain entirely clandestine. The best trust-eroding effect would have been achieved if Iran’s engineers and leaders had not realized that their work was being sabotaged at all. The most effective cyber attacks may be those that remain entirely secret.

A most curious follow-on cyber assault occurred on 25 July 2012, and provides an insightful fourth example of undermining trust. A rather unusual type of attack struck two of Iran’s uranium enrichment plants: some computers shut down, while others played “Thunderstruck,” an aggressive and energetic song by the Australian rock band AC/DC. An Iranian scientist based at the Atomic Energy Organization of Iran, AEOI for short, had taken the initiative and reached out to Mikko Hypponen, the prominent and highly respected chief research officer of the Finland-based anti-virus company F-Secure. Hypponen confirmed that the emails came from within the AEOI:

I am writing you to inform you that our nuclear program has once again been compromised and attacked by a new worm with exploits which have shut down our automation network at Natanz and another facility Fordo near Qom.59

F-Secure couldn’t confirm any details mentioned in the email. The anonymous Iranian scientist apparently quoted from an internal email that the organization’s “cyber experts” had sent to the teams of scientists. That email mentioned Metasploit, a common tool for finding vulnerabilities and developing exploit code, and claimed that the mysterious attackers had access to the AEOI’s virtual private network (VPN). The attack, the scientist volunteered, shut down “the automation network and Siemens hardware.” He then revealed the most curious element, which hundreds of media articles seized on after Bloomberg first reported the news of the email published on F-Secure’s blog:

There was also some music playing randomly on several of the workstations during the middle of the night with the volume maxed out. I believe it was playing “Thunderstruck” by AC/DC.60

Some caution is in order. The only report of the episode comes from an Iranian scientist who volunteered this information to an anti-virus company. It is also unclear if the attack should be seen as the latest incident in a series of US-designed cyber attacks that may have started with Stuxnet. At first glance, literally blasting an attack in the face of the Iranian engineers stands in stark contrast to clandestine operations like Stuxnet, a covert sabotage tool, or Flame, a sophisticated piece of spying software—but then, maybe the attackers didn’t expect news of the AC/DC blast to leak out, embarrassing the Iranians publicly. Either way, it was not a surprise when Fereydoun Abbasi, the head of the AEOI, disputed the attack a few days later: “Who seriously believes such a story? It is baseless and there has never been such a thing,” Abbasi said in a statement to the Iranian ISNA news agency.61 The story may well be a hoax. But it should not be dismissed out of hand. AC/DC, after all, seem to be a favorite soundtrack for American warriors in battle; during the war in neighboring Iraq in 2004, for example, the Marines blasted into Fallujah to the loud riffs of AC/DC’s “Hells Bells.”62 It would be plausible to assume that the operation was part of a larger psychological campaign of attrition, designed to undermine the Iranian engineers’ trust in their systems, their skills, and their entire project, in a blow-by-blow fashion. News consumers in Europe or the United States may not seriously believe such a story—but Abbasi’s engineers, if it happened, would certainly wonder what else the mysterious attackers were able to do, after yet another entirely unpredicted attack hit their systems.

Violence administered through weaponized code, in sum, is limited in several ways: it is less physical, because it is always indirect. It is less emotional, because it is less personal and intimate. The symbolic uses of force through cyberspace are limited. And, as a result, code-triggered violence is less instrumental than more conventional uses of force. Yet, despite these limits, the psychological effects of cyber attacks, their utility in undermining trust, can still be highly effective.

This chapter opened by asking whether the Israeli cyber attack on the Syrian air defense system in September 2007 was violent or not. Against the background of this analysis, the answer is clear: it was not violent. Only the combined airstrike on the soon-to-be-finished nuclear reactor was violent. But the cyber attack on its own achieved two effects that previously would have required a military strike: first, it neutralized the threat of the Syrian air defense batteries. This was a significant achievement that enabled a stealthier and possibly faster and more successful air incursion. But the second effect is possibly even more significant: the cyber attack helped undermine the Syrian regime’s trust in its own capabilities and the belief that it could defend its most critical installations against future Israeli attacks. Bashar al-Assad’s government subsequently decided not to restart Syria’s nuclear program, so this second, less tangible result may have had the more sustainable effect.