
WHAT IS CYBER WAR?

Carl von Clausewitz still offers the most concise and the most fundamental concept of war. Sun Tzu, a much older strategic thinker, often made an appearance in the debate on information warfare in the 1990s. But the ancient Chinese general and philosopher is better known for punchy aphorisms than systematic thought—long sections of his book, The Art of War, read like a choppy Twitter-feed from 500 BC. Sun’s modern Prussian nemesis offers a far more coherent and finely tuned toolset for rigorous analysis. Clausewitz’s concepts and ideas, although limited in many ways, continue to form the core vocabulary of professionals and experts in the use of force. Clausewitz identifies three main criteria. Any aggressive or defensive action that aspires to be a stand-alone act of war, or that may be interpreted as such, has to meet all three. Past cyber attacks do not.

The first element is war’s violent character. “War is an act of force to compel the enemy to do our will,” wrote Clausewitz on the first page of On War.1 All war, pretty simply, is violent. If an act is not potentially violent, it’s not an act of war and it’s not an armed attack—in this context the use of the word will acquire a metaphorical dimension, as in the “war” on obesity or the “war” on cancer. A real act of war or an armed attack is always potentially or actually lethal, at least for some participants on at least one side. Unless physical violence is stressed, war is a hodgepodge notion, to paraphrase Jack Gibbs.2 The same applies to the idea of a weapon. In Clausewitz’s thinking, violence is the pivotal point of all war. Both enemies—he usually considered two sides—would attempt to escalate violence to the extreme, unless tamed by friction, imponderables, and politics.3

The second element highlighted by Clausewitz is war’s instrumental character. An act of war is always instrumental, and to be instrumental there has to be a means and an end: physical violence or the threat of force is the means; forcing the enemy to accept the offender’s will is the end. Such a definition is “theoretically necessary,” Clausewitz argued.4 To achieve the end of war, one opponent has to be rendered defenseless. Or, to be more precise, the opponent has to be brought into a position, against their will, where any change of that position brought about by the continued use of arms would only bring more disadvantages, at least in that opponent’s view. Complete defenselessness is only the most extreme of those positions. Both opponents in a war use violence in this instrumental way, shaping each other’s behavior, giving each other the law of action, in the words of the Prussian philosopher of war.5 The instrumental use of means takes place on tactical, operational, strategic, and political levels. The higher the order of the desired goal, the more difficult it is to achieve. As Clausewitz put it, in the slightly stilted language of his time: “The purpose is a political intention, the means is war; never can the means be understood without the purpose.”6

This leads to the third and most central feature of war—its political nature. An act of war is always political. The objective of battle, to “throw” the enemy and to make him defenseless, may temporarily blind commanders and even strategists to the larger purpose of war. War is never an isolated act, nor is it ever only one decision. In the real world, war’s larger purpose is always a political purpose. It transcends the use of force. This insight is captured in Clausewitz’s best-known phrase, “War is a mere continuation of politics by other means.”7 To be political, a political entity or a representative of a political entity, whatever its constitutional form, has to have an intention, a will. That intention has to be articulated. And one side’s will has to be transmitted to the adversary at some point during the confrontation (it does not have to be publicly communicated). A violent act and its larger political intention must also be attributed to one side at some point during the confrontation. History does not know of acts of war without eventual attribution.8

One modification is significant before applying these criteria to cyber offenses. The pivotal element of any warlike action remains the “act of force.” An act of force is usually rather compact and dense, even when its components are analyzed in detail. In most armed confrontations, be they conventional or unconventional, the use of force is more or less straightforward: it may be an F-16 striking targets from the air, artillery barrages, a drone-strike, improvised explosive devices placed by the side of a road, even a suicide bomber in a public square. In all these cases, a combatant’s or an insurgent’s triggering action—such as pushing a button or pulling a trigger—will immediately and directly result in casualties, even if a timer or a remote control device is used, as with a drone or a cruise missile, and even if a programmed weapon system is able to semi-autonomously decide which target to engage or not.9 An act of cyber war would be an entirely different game.

In an act of cyber war, the actual use of force is likely to be a far more complex and mediated sequence of causes and consequences that ultimately result in violence and casualties.10 One often-invoked scenario is a Chinese cyber attack on the US homeland in the event of a political crisis in, say, the Taiwan Straits. The Chinese could blanket a major city with blackouts by activating so-called logic bombs that had been pre-installed in America’s electricity grid. Financial information could be lost on a massive scale. Trains could be derailed and crash. Air traffic systems and their backups could collapse, leaving hundreds of planes aloft without communication. Industrial control systems of highly sensitive plants, such as nuclear power stations, could be damaged, potentially leading to loss of cooling, meltdown, and contamination11—people could suffer serious injuries or even be killed. Military units could be rendered defenseless. In such a scenario, the causal chain that links somebody pushing a button to somebody else being hurt is mediated, delayed, and permeated by chance and friction. Yet such mediated destruction caused by a cyber offense could, without doubt, be an act of war, even if the means were not violent, only the consequences.12 Moreover, in highly networked societies, non-violent cyber attacks could cause economic consequences without violent effects that could exceed the harm of an otherwise smaller physical attack.13 For one thing, such scenarios have caused widespread confusion. “Rarely has something been so important and so talked about with less clarity and less apparent understanding than this phenomenon,” commented Michael Hayden, formerly director of the Central Intelligence Agency (CIA) as well as the National Security Agency (NSA).14 For another, to date all such scenarios have a major shortfall: they remain fiction, not to say science fiction.

If the use of force in war is violent, instrumental, and political, then there is no cyber offense that meets all three criteria. But more than that, there are very few cyber attacks in history that meet only one of these criteria. It is useful to consider the most-quoted offenses case by case, and criterion by criterion.

The most violent “cyber” attack to date is likely to have been a Siberian pipeline explosion—if it actually happened. In 1982, a covert American operation allegedly used rigged software to cause a massive explosion in Russia’s Urengoy-Surgut-Chelyabinsk pipeline, which connected the Urengoy gas fields in Siberia across Kazakhstan to European markets. The gigantic pipeline project required sophisticated control systems for which the Soviet operators had to purchase computers on open markets. The Russian pipeline authorities tried to acquire the necessary Supervisory Control and Data Acquisition software, known as SCADA, from the United States but were turned down. The Russians then attempted to get the software from a Canadian firm. The CIA is said to have succeeded in inserting malicious code into the control system that ended up being installed in Siberia. The code that controlled pumps, turbines, and valves was programmed to operate normally for a time and then “to reset pump speeds and valve settings to produce pressures far beyond those acceptable to pipeline joints and welds,” recounted Thomas Reed, an official in the National Security Council at the time.15 In June 1982, the rigged valves probably resulted in a “monumental” explosion and fire that could be seen from space. The US Air Force reportedly rated the explosion at 3 kilotons, equivalent to a small nuclear device.16

But there are three problems with this story. The first pertains to the Russian sources. When Reed’s book came out in 2004, Vasily Pchelintsev, a former KGB head of the Tyumen region where the alleged explosion was supposed to have taken place, denied the story. He surmised that Reed might have been referring to an explosion that happened not in June but on a warm April day that year, 50 kilometers from the city of Tobolsk, caused by shifting pipes in the thawing ground of the tundra. No one was hurt in that explosion.17 There are no media reports from 1982 that would confirm Reed’s alleged explosion, although regular accidents and pipeline explosions in the USSR were reported in the early 1980s. Later Russian sources also fail to mention the incident. In 1990, when the Soviet Union still existed, Lieutenant General Nikolai Brusnitsin published a noteworthy and highly detailed small book, translated as Openness and Espionage. Brusnitsin was the deputy chairman of the USSR’s State Technical Commission at the time. His book has a short chapter on “computer espionage,” where he discusses several devices that Soviet intelligence had discovered over previous years. He recounts three different types of discoveries: finding “blippers” inserted into packages to monitor where imported equipment would be installed; finding “additional electronic ‘units’ which have nothing to do with the machine itself,” designed to pick up and relay data; and finding “gimmicks which render a computer totally inoperative” by destroying “both the computer software and the memory.”18 Brusnitsin even provided examples. The most drastic example was a “virus,” the general wrote, implanted in a computer that was sold by a West German firm to a Soviet shoe factory. It is not unreasonable to assume that if the pipeline blitz had happened, Brusnitsin would have known about it and most likely written about it, if not naming the example then at least naming the possibility of hardware sabotage. He did not do that.

A second problem concerns the technology that was available at the time. It is uncertain if a “logic bomb” in 1982 could have been hidden easily. Retrospectively analyzing secretly modified software in an industrial control system three decades after the fact is difficult to impossible. But a few generalizations are possible: at the time technology was far simpler. A system controlling pipelines in the early 1980s would probably have been a fairly simple “state machine,” and it would probably have used an 8-bit micro-controller. Back in 1982, it was most likely still possible to test every possible output that might be produced by all possible inputs. (This is not feasible with later microprocessors.) Any hidden outputs could be discovered by such a test—an input of “X” results in dangerous output “Y.”19 Testing the software for flaws, in other words, would have been rather easy. Even with the technology available at the time, a regression test would have needed less than a day to complete, estimated Richard Chirgwin, a long-standing technology reporter.20 In short, in 1982 it was far more difficult to “hide” malicious software.
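The reasoning above can be illustrated with a minimal sketch (all names, thresholds, and logic here are invented for illustration; real 1982 pipeline firmware would look nothing like this). The point is only about scale: an 8-bit input has just 256 possible values, so a test can enumerate every input and expose any hidden dangerous output—an input "X" that yields dangerous output "Y".

```python
# Hypothetical illustration of exhaustive input testing on a simple
# 8-bit controller. Names and the safety threshold are invented.

MAX_SAFE_PRESSURE = 200  # hypothetical safety limit for a commanded pressure

def honest_logic(sensor_byte: int) -> int:
    """Toy stand-in for benign control logic: clamp output to the safe limit."""
    return min(sensor_byte, MAX_SAFE_PRESSURE)

def rigged_logic(sensor_byte: int) -> int:
    """Toy stand-in for sabotaged logic: one hidden input triggers a
    dangerous output."""
    return 255 if sensor_byte == 0x42 else min(sensor_byte, MAX_SAFE_PRESSURE)

def exhaustive_check(logic, max_safe):
    """Try all 256 possible 8-bit inputs; return any that produce a
    dangerous output."""
    return [b for b in range(256) if logic(b) > max_safe]

print(exhaustive_check(honest_logic, MAX_SAFE_PRESSURE))  # []
print(exhaustive_check(rigged_logic, MAX_SAFE_PRESSURE))  # [66]
```

With a 32-bit input space, by contrast, the same brute-force enumeration would require over four billion cases per input word, which is why the text notes that such testing ceased to be feasible with later microprocessors.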

Thirdly, even after the CIA declassified the so-called Farewell Dossier, which described the effort to provide the Soviet Union with defective technology, the agency did not confirm that such an explosion took place. If it happened, it is unclear if the explosion resulted in casualties. The available evidence on the event is so thin and questionable that it cannot be counted as a proven case of a successful logic bomb.

Another oft-quoted example of cyber war is an online onrush on Estonia that began in late April 2007. At that time Estonia was one of the world’s most connected nations; two-thirds of all Estonians used the Internet and 95 per cent of banking transactions were done electronically.21 The small and well-wired Baltic country was vulnerable to cyber attacks. The story behind the much-cited incident started about two weeks before 9 May, a highly emotional day in Russia when the victory against Nazi Germany is remembered. With indelicate timing, authorities in Tallinn decided to move the two-meter Bronze Soldier, a Russian Second World War memorial of the Unknown Soldier, from the center of the capital to its outskirts. The Russian-speaking population, as well as neighboring Russia, was aghast. On 26 and 27 April, Tallinn saw violent street riots, with 1,300 arrests, 100 injuries, and one fatality.

The street riots were accompanied by online commotions. The cyber attacks started in the late hours of Friday, 27 April. Initially the offenders used rather inept, low-tech methods, such as ping floods or denial of service (DoS) attacks—basic requests for information from a server, as when an Internet user visits a website by loading the site’s content. Then the assault became slightly more sophisticated. Starting on 30 April, simple botnets were used to increase the volume of distributed denial of service (DDoS) attacks, and the timing of these collective activities became increasingly coordinated. Other types of nuisances included email and comment spam as well as the defacement of the Estonian Reform Party’s website. Estonia experienced what was then the worst-ever DDoS. The attacks came from an extremely large number of hijacked computers, up to 85,000, and they went on for an unusually long time, for three weeks, until 19 May. The attacks reached a peak on 9 May, when Moscow celebrates Victory Day. Fifty-eight Estonian websites were brought down at once. The online services of Estonia’s largest bank, Hansapank, were unavailable for ninety minutes on 9 May and for two hours a day later.22 The effect of these coordinated online protests on business, government, and society was noticeable, but ultimately remained minor. The only long-term consequence of the incident was that the Estonian government succeeded in getting NATO to establish a permanent agency in Tallinn, the Cooperative Cyber Defence Centre of Excellence.

A few things are notable about the story. It remained unclear who was behind the attacks. Estonia’s defense minister as well as the country’s top diplomat pointed their fingers at the Kremlin, but they were unable to muster evidence, retracting earlier statements that Estonia had been able to trace the IP addresses of some of the computers involved in the attack back to the Russian government. Neither experts from the Atlantic Alliance nor from the European Commission were able to identify Russian fingerprints in the operations. Russian officials described the accusations of their involvement as “unfounded.”23

Keeping Estonia’s then-novel experience in perspective is important. Mihkel Tammet, an official in charge of ICT for the Estonian Ministry of Defense, described the time leading up to the launch of the attacks as a “gathering of botnets like a gathering of armies.”24 Andrus Ansip, then Estonia’s prime minister, asked, “What’s the difference between a blockade of harbors or airports of sovereign states and the blockade of government institutions and newspaper web sites?”25 It was of course a rhetorical question. Yet the answer is simple: unlike a naval blockade, the mere “blockade” of websites is not violent, not even potentially; unlike a naval blockade, the DDoS assault was not instrumentally tied to a tactical objective, but rather to an act of undirected protest; and unlike ships blocking the way, the pings remained anonymous, without political backing. Ansip could have asked what the difference was between a large popular demonstration blocking access to buildings and the blocking of websites. The comparison would have been more adequate, but still flawed for an additional reason: many more actual people have to show up for a good old-fashioned demonstration.

A year later a third major event occurred that would enter the Cassandras’ tale of cyber war. The context was the ground war between the Russian Federation and Georgia in August of 2008. The short armed confrontation was triggered by a territorial dispute over South Ossetia. On 7 August, the Georgian army reacted to provocations by attacking South Ossetia’s separatist forces. One day later, Russia responded militarily. Yet a computer attack on Georgian websites had started slowly on 29 July, days before the military confrontation; the main cyber offense began, along with the armed conflict, on 8 August. This may have been the first time that an independent cyber attack took place in sync with a conventional military operation.26

The cyber attacks on Georgia comprised three different types. Some of the country’s prominent websites were defaced, for instance that of Georgia’s National Bank and the Ministry of Foreign Affairs. The most notorious defacement was a collage of portraits juxtaposing Adolf Hitler and Mikheil Saakashvili, the Georgian president. The second type of offense was denial-of-service attacks against websites in the Georgian public and private sectors, including government websites and that of the Georgian parliament, but also news media, Georgia’s largest commercial bank, and other minor websites. The online onslaughts, on average, lasted around two hours and fifteen minutes, the longest up to six hours.27 A third method was an effort to distribute malicious software to swell the ranks of the attackers and increase the volume of attacks. Various Russian-language forums helped distribute scripts that enabled the public to take action, even posting the attack script in an archived version, war.rar, which prioritized Georgian government websites. In a similar vein, the email accounts of Georgian politicians were spammed.

The effects of the episode were again rather minor. Despite the warlike rhetoric of the international press, the Georgian government, and anonymous hackers, the attacks were not violent. And Georgia, a small country with a population of 4.6 million, was far less vulnerable to attacks than Estonia; web access was relatively low and few vital services like energy, transportation, or banking were tied to the Internet. The entire affair had little effect beyond making a number of Georgian government websites temporarily inaccessible. The attack was also only minimally instrumental. The National Bank of Georgia ordered all branches to stop offering electronic services for ten days. The main damage caused by the attack was in limiting the government’s ability to communicate internationally, thus preventing the small country’s voice from being heard at a critical moment. If the attackers intended this effect, its utility was limited: the foreign ministry took a rare step, with Google’s permission, and set up a blog on Blogger, the company’s blogging platform. This helped keep one more channel to journalists open. Most importantly, the offense was not genuinely political in nature. As in the Estonian case, the Georgian government blamed the Kremlin. But Russia again denied official sponsorship of the attacks. NATO’s Tallinn-based cyber security center later published a report on the Georgia attacks. Although the onrush appeared coordinated and instructed, and although the media were pointing fingers at Russia, “there is no conclusive proof of who is behind the DDoS attacks,” NATO concluded, “as was the case with Estonia.”28

The cyber scuffles that accompanied the street protests in Estonia and the short military ground campaign in Georgia were precedents. Perhaps the novelty of these types of offenses was the main reason for their high public profile and the warlike rhetoric that surrounded them. The same observation might be true for another type of “cyber war,” high-profile spying operations, an early example of which is Moonlight Maze. This lurid name was given to a highly classified cyber espionage incident that was discovered in 1999. The US Air Force discovered an intrusion into its network by chance, and the Federal Bureau of Investigation (FBI) was alerted. The federal investigators called the NSA. An investigation uncovered a pattern of intrusion into computers at the National Aeronautics and Space Administration (NASA), at the Department of Energy, and at universities as well as research laboratories, which had started in March 1998. Maps of military installations were copied, as were hardware designs and other sensitive information. The incursions went on for almost two years. The Department of Defense (DoD) was able to trace the attack to what was then called a mainframe computer in Russia. But again: no violence, unclear goals, no political attribution.

Yet the empirical trend is obvious: over the past dozen years, cyber attacks have been steadily on the rise. The frequency of major security breaches against governmental and corporate targets has grown. The volume of attacks is increasing, as is the number of actors participating in such episodes, ranging from criminals to activists to the NSA. The range of aggressive behavior online is widening. At the same time the sophistication of some attacks has reached new heights, and in this respect Stuxnet has indeed been a game-changing event. Yet despite these trends the “war” in “cyber war” ultimately has more in common with the “war” on obesity than with the Second World War—it has more metaphorical than descriptive value. It is high time to go back to classic terminology and understand cyber offenses for what they really are.

Aggression, whether it involves computers or not, can be criminal or political in nature. It is useful to group offenses along a spectrum, stretching from ordinary crime all the way up to conventional war. A few distinctive features then become visible: crime is mostly apolitical, war is always political; criminals conceal their identity, uniformed soldiers display their identity openly. Political violence (or “political crime” in criminology and the theory of law) occupies the muddled middle of this spectrum, being neither ordinary crime nor ordinary war. For reasons of simplicity, this middle stretch of the spectrum will be divided into three segments here: subversion, espionage, and sabotage. All three activities may involve states as well as private actors. Cyber offenses tend to be skewed towards the criminal end of the spectrum. So far there is no known act of cyber “war,” when war is properly defined. This of course does not mean that there are no political cyber offenses. But all known political cyber offenses, criminal or not, are neither common crime nor common war. Their purpose is subverting, spying, or sabotaging.

In all three cases, Clausewitz’s three criteria are jumbled. These activities need not be violent to be effective. They need not be instrumental to work, as subversion may often be an expression of collective passion and espionage may be an outcome of opportunity rather than strategy. And finally: aggressors engaging in subversion, espionage, or sabotage do act politically; but in sharp contrast to warfare, they are likely to have a permanent or at least a temporary interest in avoiding attribution. This is one of the main reasons why political crime, more than acts of war, has thrived online, where non-attribution is far easier to achieve than watertight attribution. It goes without saying that subversion, espionage, and sabotage—digitally facilitated or not—may accompany military operations. Both sides may engage in these activities, and have indeed done so since time immemorial. But the advent of digital networks had an uneven effect. Understanding this effect requires surveying the foundation: the notion of violence.