The world economy depends on energy. The most essential source of energy is oil, and the world’s biggest source of oil is Saudi Arabia. The bulk of the country’s oil production is in the hands of its national oil company, Saudi Aramco. On 15 August 2012, this pivotal corporate behemoth with a workforce of about 54,000 people became the target of a cyber attack that knocked out 30,000 of its workstations, about three-quarters of the total, turning the Microsoft Windows machines into bricks that could not even be booted up.1
The attack could have been disastrous. The company has the largest proven crude reserves and produces more oil than anybody else, pumping 9.1 million barrels a day during 2011, 15 per cent more than the year before. It manages more than 100 oil and gas fields, including Ghawar Field, the world’s largest oil field. The firm’s reach is global. Its operations include exploring, drilling, producing, refining, distributing, and marketing oil, gas, and petrochemical products. Saudi Aramco is headquartered in Dhahran, a small city in Saudi Arabia’s Eastern Province; the Kingdom of Bahrain is a 20-mile drive to the east. This geographical location contains a clue to the alleged motivation of the cyber attack that hit on 15 August, just when many employees were about to go on leave for Eid ul-Fitr, the Muslim holiday that marks the end of Ramadan. A targeted computer virus managed to penetrate the company’s business network and, once inside, rapidly spread through shared network connections. The malware’s main function was deleting data and making infected Windows computers unbootable by overwriting the machines’ Master Boot Record. Saudi Aramco instantly put out word that its core operations (oil exploration, production, and refining) were not affected by the attack.
Wider operations could easily have been affected. The computer systems that control plant operations and oil production on the company’s oil and gas fields may have been isolated from the Internet, as Saudi Aramco claimed they were. In any case, the virus used for the attack was incapable of harming the software that commonly runs on industrial control systems. Such SCADA (supervisory control and data acquisition) systems, which control valves and pumps in remote oil installations, are not well defended and present rather easy targets for skilled attackers, so a more sophisticated attack could well have affected oil production. But even without directly affecting field operations, one must assume that almost all other business operations took a hard hit for two chaotic weeks, including general administration, human resources, customer support, and marketing. The hours after the attack were “critical” and a “humongous challenge,” in the words of one company insider.2 Some of the company’s websites remained offline for more than a week. Emails bounced back. Engineers feared a follow-on attack. In the end Saudi Aramco managed to put its network back online only on Saturday, 25 August, ten days after the initial attack.
The Aramco attack raises a number of intriguing conceptual questions. The attack was not violent, and it did not have a direct potential to be violent, as the more detailed analysis below will show. Yet the attackers managed to damage Saudi Aramco’s good reputation and significantly disrupted its day-to-day business operations. Was the attack an act of sabotage? What is sabotage in general terms, and what is its purpose? Does sabotage have to be violent or potentially violent? And what is the potential of sabotage in future cyber attacks?
This chapter argues that malicious software and cyber attacks are ideal instruments of sabotage. Cyber attacks which are designed to sabotage a system may be violent or, in the vast majority of cases, non-violent. The higher the technical development and the dependency of a society on technology (including public administration, the security sector, and industry), the higher the potential for both violent and non-violent sabotage, especially cyber-enabled sabotage. This has a seemingly contradictory effect: the higher the number of activists or adversaries that choose computer sabotage over physical sabotage, the easier it will be to distinguish between violence and non-violence, and the more likely it is that saboteurs choose non-violence over violence.
The argument unfolds in four stages. The first section will define the nature of sabotage and highlight what remains unaffected by the rise of cyber attacks. The second section will illustrate the historically deep-rooted tension between disablement and destruction, and introduce what is affected by the rise of cyber attacks. The chapter then discusses the details of a few high-profile examples of computer sabotage. It closes with some considerations on the new vulnerabilities of industrial control systems that are likely to affect the future of sabotage.
Sabotage is the deliberate attempt to weaken or disable an economic or military system. All sabotage is predominantly technical in nature, but it may of course use social enablers. The means used in sabotage do not always lead to physical destruction and overt violence. Sabotage may be designed merely to disable machines or production processes temporarily, and explicitly to avoid damaging anything in a violent way. If violence is used, things are the prime targets, not humans, even if the ultimate objective may be to change the cost–benefit calculus of decision-makers. Sabotage tends to be tactical in nature and will only rarely have operational or even strategic effects. Sabotage on its own may not qualify as an armed attack: saboteurs may deliberately avoid open violence and they may avoid political attribution, but they always aim to be instrumental. Both avoiding excessive violence and avoiding identification may serve the ultimate goal of sabotage, impairing a technical system. Sabotage is therefore an indirect form of attack. The ultimate target of all political violence is the mind of human decision-makers, as a previous chapter has argued. Political violence against humans is designed to affect decision-makers, for instance by grabbing as much public visibility as possible. Sabotage, in contrast to the use of guns and explosives (or cyber weapons), is not ultimately focused on the human body as a vehicle to the human mind. Instead, sabotage first and foremost attempts to impair a technical or commercial system and to achieve a particular effect by damaging that system.
The core ideas of sabotage have barely changed in the past century, despite the advent of sabotage by cyber attack. Looking back seventy years will illustrate this continuity. In 1944, the United States Office of Strategic Services, the CIA’s precursor organization, issued the Simple Sabotage Field Manual, and stamped it secret. The document was declassified in 1963. It set out in great detail how to slow down the Axis powers. The manual was mainly written as a guide to help recruit insiders working for the Axis powers who did not support the Nazis and Fascists and wanted to sabotage the war effort from within. The manual was hands-on, and recommended how to use salt, nails, candles, and pebbles as weapons, or how to slow down an organization by making meetings and bureaucratic procedures as inefficient and faulty as possible. But the manual also contained a short paragraph on the idea itself, which may help clarify the notion of sabotage held by the US intelligence community:
Sabotage varies from highly technical coup de main acts that require detailed planning and the use of specially trained operatives, to innumerable simple acts which the ordinary individual citizen-saboteur can perform.… Simple sabotage does not require specially prepared tools or equipment; it is executed by an ordinary citizen who may or may not act individually and without the necessity for active connection with an organized group; and it is carried out in such a way as to involve a minimum danger of injury, detection, and reprisal.3
All four of the main features contained in the manual and this key paragraph still hold true in the context of twenty-first-century cyber attacks. Online sabotage, firstly, still ranges from highly technical, planned, and skill-intensive operations at one end of the spectrum to manifold simple acts that citizen-saboteurs can perform at the other. Computer sabotage, secondly, may be executed by organized groups and even agencies representing states, or such attacks may be designed and executed by single individuals. Software attacks with the goal of sabotaging a system, thirdly, are still mostly carried out in ways that involve minimum danger of detection, attribution, and reprisal. And finally, uniquely skilled insiders are still potentially the most devastating enablers of sabotage, whether acting on their own or as agents of outside adversaries. These four dimensions of continuity raise the question of how sabotage has changed in the digital age.
A brief look at the concept’s origins greatly helps to understand some of today’s novel features. The word sabotage has a controversial history. Its origins date back to the heyday of industrialization in the nineteenth century, when workers rebelled against dire conditions in mechanized factories. Émile Pouget, a French anarchist active at the turn of the twentieth century, promoted sabotage in pamphlets and other publications. A sabot is a simple shoe, hollowed out from a single block of soft wood, traditionally worn by Breton peasants, and today one of the main tourist souvenirs in the Netherlands. The symbol of the wooden shoe goes back to the urban myth of French workmen throwing their wooden shoes into finely tuned moving machinery parts to clog them up. That metaphorical use of “sabotage,” Pouget wrote in 1910, had already been around in street slang for decades. “Comme à coups de sabots,” as if hit with wooden shoes, stood for working intentionally clumsily, slowly, without thought and skill, thus slowing down or halting the process of production.4 The expression soon became more widespread and its metaphorical origins were forgotten, especially in cultures that didn’t know the sabot. An equivalent in American English is “monkeywrenching,” which refers to the comparable practice of throwing a heavy adjustable wrench into the gears of industrial machinery to damage it and keep strike-breakers from continuing work. Elizabeth Gurley Flynn, a leading organizer for the Industrial Workers of the World, a large union also known as the Wobblies, defined sabotage as “the withdrawal of efficiency”:
Sabotage means either to slacken up and interfere with the quantity, or to botch in your skill and interfere with the quality of capitalist production or to give poor service … And these three forms of sabotage—to affect the quality, the quantity and the service are aimed at affecting the profit of the employer. Sabotage is a means of striking at the employer’s profit for the purpose of forcing him into granting certain conditions, even as workingmen strike for the same purpose of coercing him. It is simply another form of coercion.5
Some labor activists and syndicalists explicitly understood sabotage as a way to inflict physical damage on the oppressive machinery that made their work miserable, or that threatened the unskilled worker’s livelihood altogether by making manual labor obsolete. Pouget quotes from an article published in 1900, a few weeks ahead of an important workers’ congress in Paris. In it, the Bulletin de la bourse du travail de Montpellier gives recommendations for sabotage:
If you are a mechanic, it’s very easy for you with two pence worth of ordinary powder, or even just sand, to stop your machine, to bring about a loss of time and a costly repair for your employer. If you are a joiner or a cabinet maker, what is more easy than to spoil a piece of furniture without the employer knowing it and making him lose customers?6
The pamphlet achieved some modest fame for its clever advice on how workers could cause accidents and damage without attribution: shop-assistants may drop fabrics onto dirty ground; garment workers may ignore faults in textiles; engineers may deliberately neglect oiling the moving parts of the machines they were supposed to maintain. Sabotage was historically understood as a coercive tactic directed against property, not against people. Yet even when it was directed merely against machines, the question of whether restive workers should try to destroy machinery or merely disable it for a limited period of time was contentious: the use of violence, even if only directed at machines, was a matter of some dispute among syndicalists at the time. Delaying production was one thing; destroying property was something else, something that could have dire consequences, legally as well as politically. In America, political opponents had accused the Industrial Workers of the World of relying mainly on crude violence to achieve their goals. Some labor organizers therefore considered it necessary to distinguish between violence on the one hand and sabotage on the other. Arturo Giovannitti, a prominent Italian-American union leader and poet, argued for precisely this distinction in the foreword to the 1913 English translation of Pouget’s book Sabotage. Sabotage, Giovannitti wrote, was:
Any skilful operation on the machinery of production intended not to destroy or render it defective, but only to disable it temporarily and to put it out of running condition in order to make impossible the work of scabs and thus to secure the complete and real stoppage of work during a strike.7
Sabotage is this and nothing but this, he added, using the language of political activism rather than the language of scholarship: “It has nothing to do with violence, neither to life nor to property.”8
Such subtle differences made sense in theory. In practice it was often difficult to distinguish between permanent destruction and temporary disablement, for several reasons, two of which serve to highlight the novelties of sabotage by cyber attack. The first is the difference between hardware and software. If temporarily interrupting a process required damaging hardware, then the line between violent destruction and non-violent disablement was hard to draw. This is illustrated by an example from the early twentieth century, when telecommunication installations became a target of sabotage. Sabotage had to target hardware, quite simply, because software did not yet exist. During French postal and railway strikes in 1909 and 1910, for instance, saboteurs cut signal wires and tore down telegraph posts. Cutting a telegraph wire may have been intended as temporary disablement, yet it also effectively destroyed property. Distinguishing between violence and non-violence was also difficult for a second reason: the dynamics of group confrontations. Again the worker confrontations around the time of the First World War are an instructive example: many union activists knew that situations where striking workers squared off with the capitalist forces of the state could turn violent. Vincent Saint John, a miner, a Wobbly, and one of America’s most influential labor leaders, made this point explicit: “I don’t mean to say that we advocate violence; but we won’t tell our members to allow themselves to be shot down and beaten up like cattle. Violence as a general rule is forced upon us.”9 Such concern was not unjustified. Strikes and worker demonstrations could easily intensify into violent riots. A graphic example was the Grabow Riot of 7 July 1912, a violent confrontation between unionized Louisiana sawmill workers and the Galloway Lumber Company, which left four men dead and around fifty wounded. Pre-Internet-age sabotage, in short, easily escalated into violence against machines and, in groups, against people.
Both of these difficulties largely disappear in an age of computer attack. Distinguishing violent from non-violent attacks becomes easier. Violence is more easily contained and avoided: by default, software attacks maliciously affect software and business processes, while damaging hardware and mechanical industrial processes through software attack has become far more difficult. The remit of non-violent cyber attack, as a consequence, has widened: a well-crafted cyber attack that destroys or damages data, but does not interfere with physical industrial processes, remains non-violent. The Shamoon attack against Saudi Aramco of August 2012 is an ideal example. Neither hardware nor humans were physically harmed. Yet, by wiping the hard disks of 30,000 computers, the attack created vastly more delay and monetary damage for Saudi Aramco than would a minor act of physical sabotage against machinery in one of Aramco’s plants, which may have been easier to fix and to conceal. The oil giant reportedly hired six specialized computer security firms to help with the forensic investigation and the post-attack cleanup. Liam O Murchu was involved in Symantec’s research into the attack. “We don’t normally see threats that are so destructive,” O Murchu told Reuters. “It’s probably been 10 years since we saw something so destructive.”10 Non-violent cyber attacks, in short, may be more efficient, more damaging, and more instrumental than violent attacks, whether executed through cyberspace or not.
Online attacks have also made it easier, or indeed possible in the first place, to isolate sabotage from volatile group dynamics. Online sabotage, if it relies on group participation at all, is highly unlikely to escalate into real bloodshed and street violence; activists and perpetrators of code-borne sabotage, after all, may not even be physically present on a street or anywhere else. Both of these dynamics are novel. And both will be illustrated by a more detailed examination of recent cases of serious sabotage administered through cyber attack.
A more granular and technical examination of the Shamoon attack is instructive. The initially mysterious outage in the then cyber-attack-ridden Middle East occurred in the otherwise calm month of August 2012. The attack became known as “Shamoon” because the anti-virus researchers who analyzed the software chose that name. The name, curiously, was taken from a folder name in one of the malware’s strings:
C:\Shamoon\ArabianGulf\wiper\release\wiper.pdb
Shamoon simply means “Simon” in Arabic. One initial and, as it turned out, wrong suspicion was that it could be related to the Sami Shamoon College of Engineering, Israel’s largest and best-reputed engineering school. The malware came in the form of a small 900-kilobyte file that included encrypted elements. It had three functional components: a dropper, to install additional modules; a wiper, responsible for deleting files; and a reporter, to relay details back to the software’s handlers. After the small file was introduced into a target network, most likely as an email attachment, it spread via shared network connections to other machines on the same network. The software’s payload was designed to destroy data. It overwrote the segment of a hard drive responsible for booting the system, as well as partition tables and most files, with junk data that included a fragment of an image allegedly showing a burning American flag.11 As a result of the software’s destructive capabilities, the US government’s computer emergency response team pointed out that “an organization infected with the malware could experience operational impacts including loss of intellectual property and disruption of critical systems.” The agency also pointed out that the software’s destructive potential remained limited: “no evidence exists that Shamoon specifically targets industrial control systems components or U.S. government agencies.”12 There is equally no evidence that the software succeeded in disrupting critical systems, despite its initial success in destroying data.
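The wiper.pdb string quoted above doubled as a simple indicator of compromise. The following sketch, a toy illustration rather than a real forensic tool, shows how a responder might sweep a directory tree for that tell-tale marker; actual investigations relied on proper signature tooling and file hashes:

    import os
    import sys

    # Toy indicator-of-compromise sweep: flag any file containing the
    # PDB path embedded in Shamoon's binary. Purely illustrative.
    MARKER = rb"C:\Shamoon\ArabianGulf\wiper\release\wiper.pdb"

    def sweep(root):
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as handle:
                        if MARKER in handle.read():
                            print("suspicious:", path)
                except OSError:
                    pass  # unreadable file, skip it

    if __name__ == "__main__":
        sweep(sys.argv[1] if len(sys.argv) > 1 else ".")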
A previously unknown entity, the “Cutting Sword of Justice,” claimed credit for the attack against Saudi Aramco by pasting a poorly crafted message on Pastebin, a platform used by hackers to dump raided data in simple text form. First the attackers made clear what their intention was, aligning themselves with anti-oppression rebels in the countries affected by the Arab Spring:
We, behalf of an anti-oppression hacker group that have been fed up of crimes and atrocities taking place in various countries around the world, especially in the neighboring countries such as Syria, Bahrain, Yemen, Lebanon, Egypt and …, and also of dual approach of the world community to these nations, want to hit the main supporters of these disasters by this action.
One of the main supporters of this disasters [sic] is Al-Saud corrupt regime that sponsors such oppressive measures by using Muslims oil resources. Al-Saud is a partner in committing these crimes. It’s [sic] hands are infected with the blood of innocent children and people.
With their motivation and their target set out, the Cutting Sword of Justice announced some initial action:
In the first step, an action was performed against Aramco company, as the largest financial source for Al-Saud regime. In this step, we penetrated a system of Aramco company by using the hacked systems in several countries and then sended [sic] a malicious virus to destroy thirty thousand computers networked in this company. The destruction operations began on Wednesday, Aug 15, 2012 at 11:08 AM (Local time in Saudi Arabia) and will be completed within a few hours.13
This anonymous claim was most likely genuine. Symantec only learned about the new malware after this message had been posted. The security firm confirmed that the timing of 11:08 a.m. was hard-wired into Shamoon, as announced on Pastebin. Two days later the hackers followed up with a separate post, publishing thousands of IP addresses which, they claimed, belonged to the infected computers.14 Saudi Aramco did not respond to those claims, but Symantec assumed that the addresses did indeed belong to the Saudi oil producer.15 Aramco later confirmed that the number of infected computers was 30,000, as claimed by the Cutting Sword of Justice. The attacks remained targeted, with probably fewer than fifty separate infections worldwide, most of them inconsequential. RasGas of Qatar, one of the world’s largest exporters of natural gas, was the second victim and was more seriously affected.
Shamoon was focused on the energy sector; it was designed to destroy data, and it even contained a reference to “wiper” in the above-quoted string. So was Shamoon in fact identical with Wiper, the mysterious data-destroying malware, discussed below, that had hit Iran’s oil industry a few months earlier? After all, like Saudi Arabia, the Iranian regime was highly unpopular with anti-government activists and rebels across the Arab world. The answer, however, is most likely “no.” Kaspersky Lab, which did the most detailed research into Wiper, pointed out that the deletion routine is different and that the hacker group’s attack against Saudi Aramco used different filenames for its drivers. Perhaps most notably, the politico-hackers made some programming errors, including a crude one. They wanted Shamoon to start trashing Saudi files on 15 August 2012, 08:08 UTC, but their date-checking routine contained a logical flaw: it compared year, month, and day in a way that judged any date before 15 August of any later year as falling before the trigger date. February 2013, for instance, would qualify as being before 15 August 2012. To the researchers at Kaspersky Lab, this mistake was additional proof that Shamoon was a copy-cat attack by far less sophisticated hacktivists, and not a follow-on attack by the highly professional developers who had written Wiper or even Stuxnet; “experienced programmers would hardly be expected to mess up a date comparison routine,” wrote Dmitry Tarakanov, one of Kaspersky’s analysts.16 It remains unclear precisely how Shamoon’s authors managed to breach Saudi Aramco’s networks.
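Kaspersky did not publish the routine itself; the following sketch is a hypothetical reconstruction, in Python for readability, of the kind of flawed field-by-field comparison described above, not Shamoon’s actual code:

    # Hypothetical reconstruction of a broken trigger-date check of the
    # kind Kaspersky described; the real routine was native code and
    # its exact form is not reproduced here.
    def trigger_reached(year, month, day):
        # Intended: fire on or after 15 August 2012.
        # Flaw: year, month, and day are tested independently, so
        # 20 February 2013 fails the month test and is judged
        # "before" 15 August 2012.
        if year < 2012 or month < 8 or day < 15:
            return False
        return True

    print(trigger_reached(2012, 8, 15))  # True, as intended
    print(trigger_reached(2013, 2, 20))  # False, although the date is later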
Another case of Middle Eastern cyber sabotage is equally instructive. At the end of April 2012, a new and rather mysterious piece of malware, which came to be known as Wiper, hit the Iranian oil industry. Alireza Nikzad, a spokesman for the Iranian oil ministry, confirmed that an attack on data systems had taken place. Probably as a precaution, Iran took its main Persian Gulf oil terminals off the Internet. But Nikzad stressed that the attackers had failed to damage or destroy data:
This cyber attack has not damaged the main data of the oil ministry and the National Iranian Oil Company (NIOC) since the general servers are separate from the main servers, even their cables are not linked to each other and are not linked to internet service … We have a backup from all our main or secondary data, and there is no problem in this regard.17
The malware posed an unusual problem to its victims and those trying to understand the new threat: the software was designed to delete not only files on the attacked computer networks, but also itself and all traces of the attack. Consequently, no samples of the malware were available for analysis. For months, leading anti-virus companies were unable to find Wiper, prompting some outside observers to question the veracity of news reports about yet another Iran-bashing cyber attack. But Kaspersky Lab, the Russian software security firm that seems to be especially popular in countries that harbor suspicions against private American security companies, established some facts. The Russian firm was able to obtain “dozens” of hard drive images from computer systems that had been attacked by Wiper, presumably from some of its Iranian customers, either directly or with help from Russian state intelligence. A hard drive image is an exact copy of an entire hard drive at a given moment. Kaspersky Lab was able to confirm that the attack indeed took place in the last ten days of April 2012.18 The designers of the attack had two priorities. The first was destroying data as efficiently as possible: deleting the full contents of a large storage device, such as a hard drive several hundred gigabytes in size, can take up to several hours, depending on the drive’s write speed and other technical characteristics. So the attackers crafted a wiping algorithm that prioritized speed.
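A back-of-the-envelope calculation, with assumed throughput figures, shows the constraint the attackers faced:

    # Back-of-the-envelope wipe-time estimate with assumed figures:
    # overwriting a whole drive is bounded by its sequential write speed.
    def wipe_hours(drive_gb, write_mb_per_s):
        return drive_gb * 1024 / write_mb_per_s / 3600

    # A 500 GB disk of that era at roughly 60 MB/s of sustained writes:
    print(round(wipe_hours(500, 60), 1))  # about 2.4 hours per machine

    # Overwriting only boot records, partition tables, and selected
    # files touches a few megabytes and finishes in seconds, which is
    # why a selective, prioritized wiping routine is so much faster.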
The attackers’ second priority was stealth. Wiper’s creators, Kaspersky Lab pointed out, “were extremely careful to destroy absolutely every single piece of data which could be used to trace the incidents.” Yet in some cases traces did remain. On some of the attacked systems it was possible to recover a copy of the Windows registry hives, parts of a computer’s underlying master database that are saved in separate files on the hard disk. On some of the systems analyzed by Kaspersky Lab, Wiper had deleted all .PNF files in a specific Windows folder where important system files are stored (the INF folder), and it had deleted those files with a higher priority than other files on the system. The researchers suspected that Wiper kept its own main body in that folder as an encrypted .PNF file. Other sophisticated cyber attacks, notably Stuxnet and Duqu, also stored most of their primary files as .PNF files, something that is uncommon for malware. The rationale, the Russian experts reasoned, was that Wiper would wipe itself and all its malware components first, and only then proceed to delete other targeted files, including the system files that would ultimately crash the system. Had Wiper started deleting system files at random, the chances would have been significant that it would “forget” to delete itself on some machines before the operating system crashed, thus leaving forensic traces of the malware behind on the hard drive. But this particular malware was so expertly written that no data survived in any instance where it was activated. Although Kaspersky Lab has seen traces of infections, the malware itself remains unknown, as does the software’s targeting logic. Who the attacker was, and how many victims were hit, remains a matter of speculation.
Both Shamoon and Wiper had one critical limitation. They targeted large energy companies, companies that move vast quantities of oil and gas, yet production and logistics remained unaffected by the attacks. The business network was hit, but not the industrial control network that made sure the crucial combustible fossil fuel was still pumped out of the ground and into pipelines and tankers. In December 2012, Saudi Aramco’s forensic investigation into Shamoon brought to light the fact that the attackers had tried for a full month to disrupt the industrial control systems that manage the company’s oil production: “The main target in this attack was to stop the flow of oil and gas to local and international markets,” said Abdullah al-Saadan, the firm’s vice-president for corporate planning.19 The attack on the SCADA network was unsuccessful. More serious sabotage would have to overcome this limitation.
Successful attacks on industrial control systems that cause physical damage are very rare, but real. Sabotage, which dates back to industrial confrontations in the late nineteenth century, is again going industrial in the digitized twenty-first century: today’s most formidable targets are industrial control systems, notably those used for supervisory control and data acquisition, or SCADA. Such systems are used in power plants, the electrical grid, refineries, pipelines, water and wastewater plants, trains, underground transportation, traffic lights, heat and lighting in office buildings and hospitals, elevators, and many other physical processes. An alternative abbreviation that is sometimes used is DCS, which stands for Distributed Control Systems. DCS tend to be used to control processes in smaller geographical areas, such as factory floors, whereas SCADA systems can span entire regions, for instance in the case of pipelines or transportation grids. Both are subsets of Industrial Control Systems, or ICS. Attacking an industrial control system is the most probable way for a computer attack to create physical damage and indirectly injure or kill people.
Although “Supervisory control and data acquisition” sounds complicated, the earliest control networks were actually made up of a simple monitoring device, say a meter with a switch, and remote sensors and actuators: if the temperature in a factory tank a mile away dropped below 100 degrees, for instance, the operator would remotely switch on the boiler. As industrial production became more complex, so did the computerized control networks. Yet most industrial control systems have basic design features in common, be they oil refineries, electrical power stations, steel plants, chemical factories, or water utilities. To understand the potential of sabotage against SCADA systems, some of these basics are important.
The first component, at least from the point of view of the operator, is the so-called human–machine interface, often abbreviated as HMI. (The community of engineers specializing in SCADA systems commonly uses such shorthand, and their analyses are hard to penetrate without knowing at least some of the jargon.) Plant operators or maintenance personnel mostly control the system through that interface. In modern systems the HMI is in effect a large screen showing a mimic of a plant or large apparatus, with small images perhaps showing pipes and joints and tanks, equipped with bright lights and small meters that allow the operator to get readings on critical values, such as pressures and speeds. If a certain parameter requires action, a light may start blinking red or a meter may indicate a potential problem. A second component is the supervisory computer system. This is the system’s brain. The supervisory computer gathers data and responds by sending commands back to field devices to control the process. Field devices may be on the plant floor, or actually and quite literally in the field, such as a pipeline network that spans large distances outdoors. In simpler systems, so-called Programmable Logic Controllers (PLCs) can replace a more complex supervisory computer system. Control systems and PLCs are so-called “master devices.” The third set of components are Remote Terminal (or Telemetry) Units, known as RTUs in the industry jargon. These are “slave devices” which carry out orders from their masters. They sit close to the motors or valves that need to be controlled, and they act in both directions: they transmit sensor data to the control system and relay orders from the supervisory system to remote motors and valves.
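The division of labor between these components can be illustrated with a toy supervisory loop, sketched here in Python with invented tag names and the boiler example from above; a real system would poll RTUs or PLCs over a fieldbus protocol and run on dedicated hardware:

    import random
    import time

    def read_tank_temperature():
        # Stands in for an RTU ("slave") transmitting a sensor reading.
        return 95.0 + random.uniform(-10.0, 10.0)  # degrees

    def set_boiler(on):
        # Stands in for a command relayed through the RTU to an actuator.
        print("boiler", "ON" if on else "OFF")

    SETPOINT = 100.0  # the temperature threshold from the example above

    # The supervisory ("master") loop: read telemetry, issue commands.
    # A real HMI would mimic this state on screen and raise an alarm
    # if values drifted out of bounds.
    while True:
        temperature = read_tank_temperature()
        set_boiler(temperature < SETPOINT)
        time.sleep(1.0)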
All SCADA systems require a communication infrastructure. This infrastructure can be complex and costly, especially for systems that are geographically spread out over a wider area. Distances can be significant in the case of pipeline networks, water grids, and large chemical plants. Some industrial plants have to withstand extreme temperatures generated in the production process, electro-magnetic radiation, or rugged environmental conditions. The requirements for the communication hardware are thus not only demanding but also diverse, because the industrial applications themselves are so diverse. The companies that produce components for SCADA networks have therefore developed approximately 200 proprietary protocols, often individually. For a long time, this diverse and somewhat chaotic situation made it difficult for attackers to penetrate and understand a particular control network. A SCADA network is often connected to a company’s business network through special gateways. Data-links between the business network and automated production processes enable increased efficiency, better supply management, and other benefits. Such gateways provide an interface between IP-based networks, such as a company’s business network or the open Internet, and the simpler, fieldbus protocol-based SCADA network that controls field devices.20 Sometimes, especially in large networks like electricity grids, there may be unexpected links to the outside world, such as phone connections or open channels for radio communication. SCADA networks are often old legacy systems, and as a result of complexity and staff turnover no single person may understand the full reaches of the network. A large but ultimately unknown number of SCADA systems are connected to the Internet, possibly without their operators’ awareness.21
Three trends are making SCADA systems potentially more vulnerable. The first trend is standardization in communication protocols. The pressure to increase efficiency at minimum cost also weighs on the operators of control systems. In a dynamic that is comparable to open-source software, open protocol standards create significant gains in efficiency. But they also make the systems more vulnerable overall. “The open standards make it very easy for attackers to gain in-depth knowledge about the working of these SCADA networks,” one research paper pointed out in 2006.22 One possibility, for instance, is that attackers, once they have penetrated the network, could observe and “sniff” its communications. Once malicious actors have learned more about the data and control commands, they could use their new knowledge to tamper with the operation. In some ways this is precisely what Stuxnet did.
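Modbus/TCP, one of the most widely used open protocols in the field, illustrates the point: the specification is public and the frames carry no authentication, so an attacker who can read the standard can read the traffic. The sketch below assembles a standard “read holding registers” request using only Python’s standard library; the register addresses are invented:

    import struct

    # Assemble a Modbus/TCP "read holding registers" request. The
    # framing is publicly documented: an MBAP header (transaction id,
    # protocol id, length, unit id) followed by the protocol data unit.
    # Nothing in the frame authenticates the sender.
    transaction_id = 1
    protocol_id = 0        # always zero for Modbus
    unit_id = 1            # the slave device being addressed
    function = 0x03        # function 3: read holding registers
    start_register = 0     # invented address
    register_count = 2     # invented count

    pdu = struct.pack(">BHH", function, start_register, register_count)
    mbap = struct.pack(">HHHB", transaction_id, protocol_id,
                       len(pdu) + 1, unit_id)
    frame = mbap + pdu
    print(frame.hex())  # 000100000006010300000002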
The second trend that is making industrial control systems more vulnerable to outside attack is an increase in connectivity. Technically, SCADA communication used to be organized as point-to-multipoint “serial” communication over various channels, for example phone lines or private radio systems. But increasingly the communication within a SCADA system relies on the Internet Protocol. In upgraded systems, terminal servers are set up to convert serial asynchronous data (i.e. bit- or byte-oriented data) into IP or “frame relay” packets for transmission (a minimal sketch of such a serial-to-IP bridge follows below). This change brings many benefits. Maintenance of devices becomes easier when those devices are easy to connect to, both from a company’s business network, which connects air-conditioned office space in headquarters with noisy factory floors, and from the wider Internet, which is often the bridge to contractors and complex supply chains. This trend is amplified by the push for smart grids, which can save money by automatically moving peak production to times of cheap energy prices. Telvent is a leading industrial automation company, valued at $2bn, and a subsidiary of the giant Schneider Electric, a French international firm that employs 130,000 people worldwide. One of Telvent’s main smart grid products, OASyS, is specifically designed to bridge the gap between an energy firm’s enterprise network and its activities in the field that are run by older legacy systems. In the fall of 2012, Telvent reported that attackers had installed malicious software and stolen project files related to OASyS.23 The intruders were likely to have been members of the possibly Shanghai-based “Comment Group,” a large-scale cyber espionage operation allegedly linked to the Third Department of the People’s Liberation Army of China.24
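The serial-to-IP conversion mentioned above is conceptually simple, which is part of why it has spread so quickly. The sketch below is a minimal, one-way toy version of a terminal server, with an invented port name and addresses; production devices are dedicated appliances that handle both directions and many channels at once:

    import socket

    import serial  # the pyserial package, assumed installed

    # Toy terminal server: relay bytes arriving on a serial line out to
    # a single TCP client. Port name and addresses are invented.
    ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 5020))
    srv.listen(1)
    conn, addr = srv.accept()
    print("client connected:", addr)

    while True:
        chunk = ser.read(256)    # serial asynchronous data in
        if chunk:
            conn.sendall(chunk)  # out as IP packets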
A third trend is more visibility. Search technology has made many things far easier to find, including previously hard-to-get manuals from PLC manufacturers, which may occasionally even contain hard-coded login credentials. But the biggest change in visibility is probably due to one private programmer. In 2009, the then 26-year-old John Matherly started operating a search engine for all sorts of devices connected to the Internet. Shodanhq.com boasts of listing: “Webcams. Routers. Power Plants. iPhones. Wind Turbines. Refrigerators. VoIP Phones.”25 The platform has been dubbed the Google for hackers. Its search functionality offers to find computers based on software, geography, operating system, IP address, and other variables. Shodan’s crawlers scan the Internet for the ports usually associated with mainstream protocols such as HTTP, FTP, SSH, and Telnet. On 28 October 2010, the US Department of Homeland Security warned that the resources required to identify control systems openly connected to the Internet had been greatly reduced. The ICS-CERT alert pointed out that there was an “increased risk” posed by Shodan’s growing database.26 In June of the following year, Éireann Leverett, a computer science student at Cambridge University, finished his MPhil dissertation. Using Shodan, Leverett had found and mapped 10,358 Internet-facing industrial control systems, although it remained unclear how many of them were in working condition. Remarkably, only 17 per cent of the systems Leverett found required login authentication. When Leverett presented his findings at the S4 conference, a special event for control system specialists, many were surprised and even shocked. Some vendors started using Shodan to notify customers whose systems they found online. One attendee who worked for Schweitzer, a PLC manufacturer, admitted the ignorance of some operators: “At least one customer told us ‘We didn’t even know it was attached’,” he told Wired magazine.27
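Surveys like Leverett’s are worryingly easy to reproduce. The sketch below uses Shodan’s official Python client, with a placeholder API key, to count hosts answering on port 502, the standard Modbus/TCP port; the query is illustrative, not the one Leverett used:

    import shodan  # official Shodan client library, assumed installed

    API_KEY = "YOUR_API_KEY"  # placeholder; requires a Shodan account
    api = shodan.Shodan(API_KEY)

    # Port 502 is the standard Modbus/TCP port; many Internet-facing
    # industrial devices answer on it. The query string is illustrative.
    results = api.search("port:502")
    print("matching hosts:", results["total"])
    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("org", "unknown org"))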
Yet the picture would not be complete without acknowledging the counter-trends. SCADA systems are not only becoming more vulnerable, but are also subject to several trends that make them less vulnerable. The question of security becomes a question of balance. And it may only be possible to answer that question in a case-by-case analysis. For the following three reasons industrial control systems may be getting safer, not more vulnerable. These reasons may also help explain why the world, by the end of 2012, had still not witnessed a destructive cyber attack that actually injured or killed human beings.
The first reason is closely related to more visibility: improved oversight and red-teaming. A red team, an expression common in national defense as well as in computer security, refers to a group of mock adversaries tasked with testing and exposing an organization’s or a plan’s flaws and weaknesses. On 12 December 2011, for instance, a 29-year-old independent European researcher, Rubén Santamarta, blogged28 about his discovery of a flaw in the firmware of one of Schneider’s programmable logic controllers, more precisely the so-called NOE 771 module. The module contained at least fourteen hard-coded passwords, some of which were apparently published in support manuals. Before Santamarta published the vulnerabilities, he had informed the US government’s computer emergency response team in charge of control systems. ICS-CERT promptly reacted: on the same day, Homeland Security published an alert pointing to the new vulnerability and coordinated with Schneider Electric “to develop mitigations.”29 Santamarta’s hack effectively put considerable pressure on Schneider to fix the problem. A better-known white-hat PLC hacker is Dillon Beresford. White-hat is jargon for ethical hackers whose goal is to improve security, as opposed to black-hats who seek to exploit security flaws. Beresford, a researcher with little previous experience in control systems, uncovered critical vulnerabilities in the Siemens Simatic S7 programmable logic controllers, a widely used product.30 The security analyst at NSS Labs gained recognition among control system experts when Siemens and the US Department of Homeland Security requested that he cancel his presentation on newly discovered Siemens S7 vulnerabilities scheduled at a hacker conference in Dallas in May 2011, TakeDownCon.31 A third example is Justin Clarke’s exposure of vulnerabilities in RuggedCom’s products, which was also highlighted by an ICS-CERT alert.32 A number of harmless but high-profile SCADA breaches in recent years have likewise contributed to a sense of alarm in the control system community. The pressure on the vendors kept mounting.
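Finding hard-coded credentials of the kind Santamarta reported often begins with nothing more sophisticated than pulling printable strings out of a firmware image, the same idea as the Unix strings utility. The sketch below is illustrative only, with an invented file name:

    import re
    import sys

    # Extract printable ASCII runs from a binary firmware image; the
    # same idea as the Unix `strings` utility. Hard-coded passwords,
    # paths, and URLs often surface this way. The file name is invented.
    def printable_strings(path, min_len=6):
        data = open(path, "rb").read()
        pattern = rb"[\x20-\x7e]{%d,}" % min_len
        return [m.decode("ascii") for m in re.findall(pattern, data)]

    path = sys.argv[1] if len(sys.argv) > 1 else "firmware.bin"
    for s in printable_strings(path):
        if any(hint in s.lower() for hint in ("pass", "pwd", "login", "telnet")):
            print(s)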
The second reason, partly a result of the first, is slowly improving vendor security. There are dozens of companies that produce Programmable Logic Controllers, but the worldwide market is dominated by only a few companies, with Siemens covering more than 30 per cent, Rockwell Automation just over 20 per cent, Mitsubishi at about 14 per cent, and Schneider Electric just under 10 per cent. Specific applications tend to be in the hands of specific vendors. Refineries, for instance, use mostly Honeywell, Emerson, and Yokogawa products.33 Some of these manufacturers have a highly problematic track record in fixing flaws. Various critics had been pointing out for years, for instance, that Siemens had failed to remove critical vulnerabilities in its Simatic Step 7 and Simatic PCS 7 software. A bug in the software enabled attackers to inject a malicious dynamic-link library, a .dll file, into an unprotected Step 7 project folder. Stuxnet exploited this type of flaw to destroy the centrifuges in Natanz. Among the most prominent critics were Ralph Langner, a German engineer and early Stuxnet detective,34 as well as Dale Peterson’s team at Digital Bond, a leading consultancy on industrial control systems.35 Digital Bond runs an active blog and one of the industry’s most highly reputed conferences, the yearly S4 conference in Miami Beach. Peterson is feared by many in the control system industry for publishing far too many unpleasant details on systems that are insecure by design. But, he says, “we publish probably only about 10 per cent of what we find.”36 After years of pressure, Siemens and other PLC vendors seem to have started responding and improving the security of their products, albeit still far too slowly.
The final reason is continued, and possibly increasing, obscurity. Despite open protocols, more connectivity, and better documentation through a new search engine focused on hidden Internet-facing devices, the obscurity of systems remains a huge hurdle for successful outside attacks (but not for inside attackers). Merely gaining access to a system and even sniffing it out may not be enough to prepare a sophisticated attack, especially in highly complex industrial production processes. “You don’t have the human machine interface so you don’t really know what the PLC is plugged into,” explained Reid Wightman, a well-known expert on industrial control systems. “I really don’t know if the [device] is a release valve, an input valve, or a lightbulb.” Superb intelligence is needed for success, and possibly even test-flying the attack agent in an experimental setup that resembles the original target as closely as possible. Something like this is hard to design, as Stuxnet demonstrated. As production systems become more complex, and often more bespoke, their obscurity may increase rather than decrease.
The most effective saboteur has always been the insider, a feature that remains as true in the twenty-first century as it was in the 1910s. If anything, computer sabotage has empowered the outsider vis-à-vis the inside threat, although the most violent acts of computer sabotage remain inside jobs. The reason is simple. The best-placed person to damage a machine is the engineer who built it or maintains it, the manager who designed and runs a production process, or the IT administrator who adapted or installed a software solution. It therefore comes as no surprise that sabotage manuals tend to be written largely for insiders, an insight that applies to French anarchists as well as to American spies: in 1900, the bulletin of Montpellier’s bourse du travail was meant to be applied on the job, by the very factory workers best placed to monkeywrench the appliances of their despised capitalist bosses. In 1944, the OSS’s Simple Sabotage Field Manual likewise hoped to assist insiders, from telegraph operators to railway engineers, in devising methods that would cause dithering, delay, distress, and destruction. In the context of cyber attacks, the insider threat is especially pertinent for complex SCADA systems. Engineers and administrators who work for a power plant or utility company know the systems best. They have the high degree of process knowledge that is required to mount an effective attack against bespoke legacy systems. And if there is a risk of somebody secretly installing “logic bombs” that could be timed or activated from afar, it is the insider who poses the greatest risk.
The saboteurs’ long-standing emphasis on inside knowledge has a problematic flipside, both for those trying to sabotage and for those trying to defend against sabotage: the most effective acts of industrial incapacitation require supreme access, supreme skill, and supreme intelligence regarding the target. As systems become more complex and arcane, the knowledge required to fiddle with them also becomes more complex and arcane. The result is a tenuous security advantage for the defender. Security engineers in computer science even have a technical term for this double-edged sword: “security-by-obscurity.” “There is no security-by-obscurity” is a popular pejorative phrase among computer scientists. It goes back to Auguste Kerckhoffs, a nineteenth-century Parisian cryptographer and linguist. A core assumption broadly held in cryptography, known as Kerckhoffs’s Principle, holds that a cryptosystem must be secure even if that system’s design, except the encryption key, is public knowledge. The principle is widely considered incompatible with the idea of securing a system by obscuring knowledge about how to attack it.37 It is therefore no surprise that the notion of security-by-obscurity is looked down upon by leading cryptographers. Yet the relationship between security and obscurity is more complicated outside the narrow remit of cryptography.38 For engineers in charge of industrial control systems, security-by-obscurity is a fact of life, even if they dislike the idea in theory.39
Yet the debate about the pros and cons of security-by-obscurity misses one central point: the insider. Claude Shannon, an American pioneer in information theory, famously reformulated Kerckhoffs’s Principle as “The enemy knows the system.” For Shannon this statement was a theoretical assumption that may be true in exceptional cases, rather than being a factual statement. The situation is different for the saboteur who is already on his target’s payroll. The insider actually knows the system. And the insider may be that enemy. The most successful sabotage operations by computer attack, including Stuxnet and Shamoon, allegedly relied on some form of inside support. Precisely what that inside support looked like remains unclear in both cases. Yet in other cases it is better documented. Three examples will illustrate this.
In early 2000, Time magazine reported, two years after the fact, that the Russian energy giant Gazprom had suffered a serious breach as a result of an insider. A disgruntled employee, Russian officials allegedly told Time, had helped “a group of hackers” penetrate the company’s business network and “seize” Gazprom’s computers for several hours. The intruders could allegedly control even the SCADA systems that monitor and regulate the gas flow through the company’s vast network of pipelines. Executives in the politically well-connected firm were reportedly furious when the information was made public. Fearing embarrassment, Gazprom denied reports of the incident in the Russian press. “Heads rolled in the Interior Ministry after the newspaper report came out,” Time quoted another senior official. “We were very close to a major natural disaster.”40 A small disaster of this kind did happen as a result of a successful breach shortly thereafter.
The second example occurred in March and April 2000 in the Shire of Maroochy, on Queensland’s Sunshine Coast in Australia. The Maroochy incident is one of the most damaging breaches of a SCADA system ever to have taken place. After forty-six wireless intrusions into a large wastewater plant over a period of three months, a lone attacker succeeded in spilling more than a million liters of raw sewage into local parks, rivers, and even the grounds of a Hyatt Regency hotel. The author of the attack was 49-year-old Vitek Boden. His motive was revenge; the Maroochy Shire Council had rejected his job application.41 Boden had worked for the contractor that installed the Maroochy plant’s SCADA system. The plant’s system covered a wide geographical area, and radio signals were used to communicate with remote field devices that start pumps or close valves. Boden had the software to control the management system on his laptop and the knowledge to operate the radio transmitting equipment. This allowed him to take control of 150 sewage pumping stations. The attack resulted in hundreds of thousands of liters of raw sewage being pumped into public waterways. The Maroochy Shire Council’s clean-up work took one week and cost $13,000, plus an additional $176,000 to update the plant’s security. “Vitek Boden’s actions were premeditated and systematic, causing significant harm to an area enjoyed by young families and other members of the public,” said Janelle Bryant, then investigations manager at the Queensland Environmental Protection Agency. “Marine life died, the creek water turned black and the stench was unbearable for residents.”42 Boden was eventually jailed for two years.43
Another, lesser-known ICS insider attack happened in early 2009 in a Texas hospital, the W.B. Carrell Memorial Clinic in Dallas. The incident did not cause any harm, but resulted in a severe criminal conviction. A night guard at the hospital, Jesse William McGraw, had managed to hack Carrell’s Heating, Ventilation and Air Conditioning (HVAC) system as well as a nurse’s computer that contained confidential patient information. McGraw then posted online screenshots of the compromised HVAC system and even brazenly published a YouTube video that showed him installing malware on the hospital’s computers that made the machines slaves for a botnet that the twenty-five-year-old operated. McGraw used the moniker “GhostExodus” and proclaimed himself the leader of the hacking group “Electronik Tribulation Army,” which he envisioned as a rival of Anonymous. In the early hours of 13 February 2009, the night guard-turned-hacker physically accessed the control system facility for the clinic’s ventilation system without authorization, inserted a removable storage device, and ran a program that allowed him to emulate a CD/DVD drive. McGraw could have caused significant harm: “The HVAC system intrusion presented a health and safety risk to patients who could be adversely affected by the cooling if it were turned off during Texas summer weather conditions,” the FBI’s Dallas office argued, although summer was still a few months off.44 But hospital staff had reportedly experienced problems with the air conditioning and ventilation system, wondering why the alarm did not go off as programmed. McGraw’s screenshots revealed that the alarm notification in the hospital’s surgery center had indeed been set to “inactive.”45 In March 2011, two years after his offense, McGraw was sentenced to 110 months in federal prison.46
A further, most curious insider incident occurred on 17 November 2011. Joe Weiss, a security consultant working in the control systems industry, published a blog post, “Water System Hack—The System Is Broken.” Weiss alleged that an intruder from Russia had hacked into an American water utility, stolen customer usernames and passwords, and caused physical damage by switching the system on and off until a water pump burned out. Minor glitches had been observed for two to three months, Weiss wrote, and were then identified as a malicious cyber attack.47 Weiss’s information seems to have been based on a leaked report by the Illinois Statewide Terrorism and Intelligence Center, which in turn was based on raw and unconfirmed data.48 The Washington Post covered the story and identified the alleged attack as the first foreign SCADA attack against a target in the United States, the Curran-Gardner Townships Public Water District in Springfield, Illinois. “This is a big deal,” Weiss was quoted in the paper. “It was tracked to Russia. It has been in the system for at least two to three months. It has caused damage.”49 The article did not ask why anybody in Russia would attack a single random water plant in the Midwestern United States. The FBI and the Department of Homeland Security started investigating the incident in Springfield and quickly cautioned against premature conclusions. One week later the facts were established: a contractor working on the Illinois water plant had been traveling in Russia on personal business and had remotely accessed the plant’s computer systems. The information was not entirely wrong: the plant had a history of malfunction, a pump failed, and somebody at an IP address in Russia accessed the system. Yet the incident and the misunderstanding are illustrative in several ways. They show that it is malicious intent that turns a malfunction into an attack, and in the case of the contractor logging into the Springfield water plant from Russia that intent was absent. The incident also shows how urban legends about successful SCADA attacks are created. The problem of falsely reported ICS attacks is so common that the British Columbia Institute of Technology’s Industrial Security Incident Database used to have a separate category for “Hoax/Urban Legend.”50
But it would have been premature, and dangerous, to dismiss the risk of a devastating attack against critical infrastructure and utility companies, as one hacker demonstrated in the aftermath of the Springfield water hack story. One reader of the British IT news site The Register was so incensed by the statement of a government official that he decided to take action. “My eyes were drawn, nary, pulled, to a particular quote,” the angry hacker wrote in a Pastebin post a day later. A US Department of Homeland Security official had commented that, “At this time there is no credible corroborated data that indicates a risk to critical infrastructure entities or a threat to public safety.”51 This statement was controversial, perhaps even naïve, especially as it came from an official. “This was stupid. You know. Insanely stupid. I dislike, immensely, how the DHS tend to downplay how absolutely fucked the state of national infrastructure is.”52 So he decided to prove the government wrong by showing how bad the situation actually was. Using the handle pr0f, the angry reader proceeded to penetrate the human–machine interface software of a SCADA system used by a water plant in South Houston, which supplies water to 16,000 Texans. With the help of the public Shodan search engine that looks for fingerprints of SCADA systems online, pr0f allegedly found that the plant in South Houston was running the Siemens Simatic HMI software, connected to the Internet, and protected by a simple three-character password. The twenty-two-year-old unemployed hacker then made five screenshots of the human–machine interface and posted links to the files on Pastebin. The break-in took barely ten minutes. Pr0f did not do any damage and did not expose any details that could make it easy for malicious hackers to do damage, expressing his dislike of vandalism. The still unknown intruder allegedly favors hoodie sweatshirts and lives in his parents’ home somewhere overseas.53 The city of South Houston had upgraded its water plant to the Siemens system long before 11 September 2001, before the debate about industrial control systems as targets had caught on. “Nobody gave it a second thought,” Mayor Joe Soto told The Washington Post. “When it was put in, we didn’t have terrorists.” Soto knew that pr0f had chosen his target more or less randomly. “We’re probably not the only one who is wide open,” the mayor said later. “He caught everyone with our pants down.”54
A comparable incident occurred in February and March 2012. One or several users from unauthorized IP addresses accessed the ICS network of an unidentified New Jersey air conditioning company, according to a memo published by the FBI.55 The intruders used a backdoor to access the company’s Tridium Niagara system, enabling them to control it remotely. It is not known whether the intruders actually changed the system settings or caused any damage. But they could have caused damage. The Niagara AX framework is installed on over 300,000 systems worldwide, in applications such as energy management, building automation, and telecommunications; installations include heating, fire detection, and surveillance systems for the Pentagon, the FBI, and America’s Internal Revenue Service.56 In the case of the New Jersey company, the intruder was able to access a “Graphical User Interface, which provided a floor plan layout of the office, with control fields and feedback for each office and shop area,” the FBI reported. “All areas of the office were clearly labeled with employee names or area names.” The incident could be traced back to two messages posted on Pastebin in January 2012.57 A user with the Twitter handle @ntisec, for “anti-security,” had posted a list of IP addresses, one of which led to the unidentified company. @ntisec identified himself as an “anarcho-syndicalist” who sympathized with Anonymous. He had found the vulnerability through Google and Shodan by searching for “:|slot:/,” he reported. @ntisec seemed surprised by the ease with which he could get to various meter readings and control panels online:
Don’t even need an exploit to get in here. Don’t even have to be a hacker. No passwords what so ever.
So how is the state of your other #SCADA systems like your electrical grid? Or traffic management?
What about chemical industry? Or can hackers switch some stuff that sends trains to another fail?58
Yet the anarcho-syndicalist seemingly didn’t want to live up to his declared ideology, explicitly warning fellow amateur hackers not to do anything illegal, or, as it were, anarchical:
Be careful and don’t cause rampant anarchy. They might trace you and I have warned you not to alter control states. Just have a look around to see [for] yourself how these systems affect our everyday life.59
The unidentified intruders apparently did exactly that, using @ntisec’s public backdoor URL to gain administrator-level access to the company’s industrial control systems: no firewall to breach, no password required, as the FBI noted. It is unclear whether the hackers also took @ntisec’s advice and refrained from altering the system’s control states. Either way, it seems that no harm was caused.
These incidents (Maroochy, Springfield, Houston, Carrell) are a far cry from “cyber war.” None harmed any human beings, and none had a tangible political goal. Yet they are among the most serious control system intrusions on record. It would nonetheless be shortsighted to dismiss the threat of serious computer attacks: the future of computer sabotage looks bright, and the phenomenon is on the rise. In 2012, the number of malicious programs that were able to “withdraw efficiency” from companies and governments multiplied quickly. Stuxnet set a new standard for what is possible. Shodan, the search engine, has removed some obscurity by exposing a vast number of Internet-facing control systems. The details of the various installations certainly remain obscure, which limits what an attacker could accomplish, but by no means prevents a successful attack. A physically harmful attack on an industrial control system is a highly likely future scenario. “Eventually, somebody will get access to a major system and people will be hurt,” pr0f, the hacker who penetrated the Houston water plant, told The Washington Post. “It’s just a matter of time.” But it is important to keep these risks in perspective. Almost all acts of computer sabotage to date have been non-violent, harming neither machines nor human beings (Stuxnet, which harmed an unknown number of machines in Iran’s Natanz nuclear enrichment plant, seems to be the only known exception). Such non-violent acts of sabotage seem to be on the rise; the Saudi Aramco incident discussed at the opening of this chapter is an ideal example, not least because the attack’s ICS component failed. These attacks clearly have the capability to undermine the trust and the confidence that consumers and citizens place in companies and governments, and in the products and services that they offer. They can also undermine the trust that executives place in their own organizations. Increased digitization and automation offer more and more opportunities for attackers to withdraw efficiency without actual physical destruction. In that sense sabotage in the age of computer attack is becoming less violent, not more violent.