5

ESPIONAGE

The second offensive activity that is neither crime nor war is espionage. Cyber espionage is an attempt to penetrate an adversarial computer network or system for the purpose of extracting sensitive or protected information. Two major distinctions dominate the organization as well as the study of intelligence. The first concerns the scope of activity: intelligence agencies may restrict themselves to collecting and analyzing information, remaining largely passive observers—or they may also engage in operations, almost always covert operations, with the intention of concealing either the entire operation (clandestine operations) or at least the identity of the sponsor. The second distinction concerns the nature of intelligence collection: it can be either social or technical in nature. That division of labor is old. In the intelligence community it is reflected in the distinction between human intelligence, HUMINT, and signals intelligence, SIGINT. Sensitive information transmitted by telecommunication is often encrypted. Espionage that takes advantage of SIGINT therefore requires specialists in decryption and cryptanalysis. The field of signals intelligence is wide: it includes intercepting civilian and military radio signals, satellite links, telephone traffic, and mobile phone conversations, and of course intercepting communication between computers through various data protocols, such as email and voice-over-internet-protocol. Gaining illicit access to computer networks is therefore only one, albeit a fast-growing, part of signals intelligence. Some conceptual clarity can be achieved by applying both distinctions to cyberspace: cyber espionage, for the purposes of this study, refers to the clandestine collection of intelligence by intercepting communications between computers as well as breaking into somebody else’s computer networks in order to exfiltrate data.
Cyber sabotage, by contrast, would be the computer attack equivalent of covert operations: infiltrating an adversarial computer system with malicious software in order to create a desired physical effect or to withdraw efficiency from a process. The level of technical sophistication required for cyber espionage may be high, but the requirements are less demanding than for complex sabotage operations. This is because espionage is not directly instrumental: its main purpose is not to achieve a goal, but to gather information that may be used to design more concrete instruments or policies. The novel challenge of code-enabled sabotage has been discussed in the previous two chapters on cyber weapons and sabotage. This chapter will focus on cyber espionage.

The most widespread use of state-sponsored cyber capabilities is for the purposes of espionage. Empirically, the vast majority of all political cyber security incidents have been cases of espionage, not sabotage. And an ever more digitized environment is vastly increasing the number of actors in the espionage business. Professionally and expensively trained agents working for governments (or large companies) have new competition from hackers and private individuals, sometimes acting on their own initiative yet providing information for a larger cause. This chapter will explore the extent of the problem and the major notable cases where details are available in the public domain: what are the most spectacular network breaches on record? How significant is the threat of electronic espionage? (Or, from an offender’s point of view, how big is the opportunity for cyber espionage?) And what does this mean for intelligence agencies struggling to adapt to a new set of challenges?

The argument put forward on the following pages holds, in sharp contrast to the cyber security debate’s received wisdom, that three paradoxes are limiting the scope of cyber espionage. The first is the danger paradox: cyber espionage is not an act of war, not a weapon, and not an armed attack, yet it is a serious threat to the world’s most advanced economies. Experts in the use of code, to a degree, are replacing experts in the use of force—computer spying, in short, is entirely non-violent yet most dangerous. But there’s a caveat: those who are placing their bets on stripping their adversaries of a competitive advantage should be careful not to overestimate the possibilities of large-scale data exfiltration. Cyber espionage’s second characteristic is the significance paradox: although cyber espionage is perhaps the most significant form of cyber attack, it may not represent a fundamentally game-changing development for intelligence agencies—cyber espionage is a game-changer, but not for the best spy agencies. This is explained by the third seeming contradiction, which I call the normalization paradox: an intelligence agency taking cyber operations seriously will back these operations up with human sources, experienced informers, and expert operatives, thus progressively moving what the debate refers to as “cyber espionage” out of the realm of “cyber” and back into the realm of the traditional tradecraft of intelligence agencies, including HUMINT and covert operations. The outcome may be surprising: the better intelligence agencies become at “cyber,” the less they are likely to engage in cyber espionage narrowly defined. Something comparable applies in the arena of commercial espionage.

The argument is presented in four steps. Understanding the challenge of espionage, especially industrial espionage, requires understanding the nature of transferring technical expertise. The chapter therefore opens with a short conceptual exploration: at closer view, personalized expert knowledge about complex industrial or political processes cannot be downloaded as easily as is generally assumed. Secondly, some of the major cases of cyber espionage will be explored in detail, including Duqu, Flame, and Shady RAT. Thirdly, the growing role of social media in the cyber espionage business will be examined briefly. The chapter concludes by discussing some of the inherent difficulties associated with cyber espionage: distinguishing it from cyber crime, defending against it, doing it, and estimating the damage it causes as well as its benefits.

Some conceptual help is required to understand these limitations. We can get this help from Michael Polanyi, a highly influential philosopher of science.1 Polanyi’s work inspired one of the most influential books on creativity and innovation ever written, Ikujiro Nonaka’s The Knowledge-Creating Company, published in 1995.2 One of Polanyi’s core distinctions is that between tacit and explicit knowledge. Tacit knowledge is personal, context-specific, and difficult to formalize and to communicate, Nonaka pointed out. It resides in experience, in practical insights, in teams, in established routines, in ways of doing things, in social interactions. Such experiences and interactions are hard to express in words. Video is a somewhat better format for transmitting such knowledge, as anybody who has tried to hone a specific personal skill—from kettlebell techniques to cooking a fish pie to building a boat—intuitively understands. Explicit knowledge, on the other hand, is codified and transmittable in formal, systematic language, for instance in an economics textbook or in a military field manual. Explicit knowledge, which can be expressed in words and numbers, is only the tip of the iceberg. That iceberg consists largely of tacit knowledge.

A bread-making machine is one of Nonaka’s most instructive examples. In the late 1980s, Matsushita Electric Company, based near Osaka and now Panasonic, wanted to develop a top-of-the-line bread-making machine. The company compared bread and dough prepared by a standard machine with that of a master baker. X-raying the dough revealed no meaningful differences. Ikuko Tanaka, the head of software development, then embedded herself with a well-known chef at the Osaka International Hotel, famous for its delicious bread. Yet merely observing the head baker didn’t teach Tanaka how to make truly excellent bread. Only through imitation and practice did she learn how to stretch and twist the dough the right way. Even the head baker himself would not have been able to write down the “secret” recipe—it was embedded in his long-honed routines and practices. Japanese dough and bread making holds an important lesson for Western intelligence agencies. Tacit knowledge is a major challenge for espionage, especially industrial espionage. A Chinese company that is remotely infiltrating an American competitor’s network will have difficulty—metaphorically speaking—in baking bread to its customers’ delight, let alone in manufacturing a far more complex product like chloride-route processed titanium dioxide, as in one of the largest China-related conventional corporate espionage cases, involving the chemical company DuPont.3

No doubt: economic cyber espionage is a major problem. But remotely stealing and then taking advantage of trade secrets by clandestinely breaching a competitor’s computer networks is more complicated than meets the eye. This becomes evident if one tries to list the most significant cases where cyber espionage caused real and quantifiable economic damage of major proportions—that list is shorter and more controversial than the media coverage implies. Among the most high-profile cases are three. The first is a remarkable hack that involved the Coca-Cola Corporation. On 15 March 2009, FBI officials quietly approached the soft drink company. They revealed that intruders, possibly the infamous “Comment Group,” had hacked into Coca-Cola’s networks and stolen sensitive files about an attempted acquisition of China Huiyuan Juice Group. The deal, valued at $2.4 billion, collapsed three days later.4 If the acquisition had succeeded, it would have been the largest foreign takeover of a Chinese company at the time. A second, seemingly similar British case was revealed by MI5’s Jonathan Evans in mid-2012, when one UK-listed company allegedly lost an £800 million deal as a result of cyber espionage, although the details remain unknown.5 Possibly the most consequential, but also highly controversial, example is the demise of Nortel Networks Corp, a once-leading telecommunications manufacturer headquartered in Ontario, Canada. After the troubled company entered bankruptcy proceedings and then liquidation in 2009, Nortel sources claimed that Chinese hackers and a nearly decade-long high-level breach had caused, or at least contributed to, Nortel’s fall.6 But again, details about how precisely the loss of data damaged the firm remain mysterious. Other cases of real-life costs are discussed later in this book.

Yet these brief examples already illustrate how difficult it is to analyze computer espionage cases and arrive at general observations. The nature of the exfiltrated data is critical: process-related knowledge (think: bread making) may reside more in routines and practices than in reports or on hard drives, and therefore seems to be more difficult to steal and to replicate remotely—whereas confidential data about acquisitions and business-to-business negotiations may be pilfered from top executives and exploited more easily. Only a close empirical analysis can shed light on the challenges and limitations of cyber espionage. But too often what is known publicly are merely details about the exfiltration method, not details about the exfiltrated data and how it was used or not used. The following pages will introduce most major cases of cyber espionage and often push the inquiry right to the limit of what is known about these cases in the public domain.

Perhaps the earliest example of cyber espionage is Moonlight Maze, which was discussed in chapter one. A more consequential example is Titan Rain. Titan Rain is the US government codename for a series of attacks on military and governmental computer systems that took place in 2003, and which continued persistently for years. Chinese hackers had probably gained access to hundreds of firewalled networks at the Pentagon, the State Department, and the Department of Homeland Security, as well as defense contractors such as Lockheed Martin. It remains unclear if Chinese security agencies were behind the intrusion or if an intruder merely wanted to mask his true identity by using computers based in China. Whoever was behind Titan Rain, the numbers were eye-popping. In August 2006, during an Air Force IT conference in Montgomery, Alabama, Major General William Lord, then the director of information, services and integration in the Air Force’s Office of Warfighting Integration, publicly mentioned the extent of what he believed was China’s state-sponsored espionage operation against America’s defense establishment. “China has downloaded 10 to 20 terabytes of data from the NIPRNET already,” he said, referring to the Pentagon’s non-classified but still sensitive IP router network. At the time the cyber attackers had not yet breached the Pentagon’s classified networks, the so-called SIPRNET, the Secret Internet Protocol Router Network.7 But the unclassified network contains the personal information, including the names, of every single person working for the Department of Defense.8 That, Lord assumed, was one of the most valuable things the attackers were after. “They’re looking for your identity so they can get into the network as you,” Lord said to the airmen and Pentagon employees assembled at Maxwell Air Force Base.

Twenty terabytes is a lot of information. If the same amount of data were printed on paper, physically carrying the stacks of documents would require “a line of moving vans stretching from the Pentagon to the Chinese freighters docked in Baltimore harbor 50 miles away,” calculated Joel Brenner, a former senior counsel at the National Security Agency.9 And the Department of Defense was certainly not the only target, so there was more than one proverbial line of trucks stretching from Washington to Baltimore. In June 2006, for instance, America’s Energy Department publicly revealed that the personal information of more than 1,500 employees of the National Nuclear Security Administration had been stolen. The intrusion into the nuclear security organization’s network had happened in 2004, but the NNSA only discovered the breach a year later.
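Brenner’s image can be checked with a rough back-of-the-envelope calculation. Every constant below is an illustrative assumption rather than a figure from the text: roughly 2 KB of plain text per printed page, 5 grams per sheet of office paper, and a 10-tonne payload plus 12 meters of road occupied per moving van.

```python
# Back-of-the-envelope: what does 20 terabytes of plain text look like on paper?
# All constants are illustrative assumptions, not figures from the chapter.

bytes_total = 20 * 10**12       # 20 terabytes
bytes_per_page = 2_000          # assume ~2 KB of text per printed page
grams_per_sheet = 5             # assume ~5 g per sheet of office paper
van_payload_kg = 10_000         # assume a ~10-tonne payload per moving van
van_length_m = 12               # assume ~12 m of road per van, bumper to bumper

pages = bytes_total // bytes_per_page             # ten billion pages
paper_kg = pages * grams_per_sheet / 1000         # ~50,000 tonnes of paper
vans = paper_kg / van_payload_kg                  # ~5,000 vans
line_km = vans * van_length_m / 1000              # ~60 km of vans

print(f"{pages:,} pages, {vans:,.0f} vans, a line roughly {line_km:.0f} km long")
```

Under these assumptions the printout comes to about ten billion pages and a line of vans roughly sixty kilometers long, the same order of magnitude as Brenner’s fifty miles.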

In November 2008, the US military witnessed what could be the most significant breach of its computers to date. An allegedly Russian piece of spyware was inserted into a flash drive on a laptop at a base in the Middle East, “placed there by a foreign intelligence agency,” according to the Pentagon’s number two.10 It then started scanning the Internet for dot-mil domain addresses. In this way the malware gained access to the Pentagon’s unclassified network, the NIPRNET. The Defense Department’s global secure intranet, the SIPRNET, designed to transmit confidential and secret-level information, is protected by an air gap or air wall, meaning that the secure network is physically, electrically, and electromagnetically separated from insecure networks. So once the piece of malware was on a hard drive in the NIPRNET, it began copying itself onto removable thumb drives. The hope was that an unknowing user would carry it over the air gap into SIPRNET, a problem known as the “sneakernet” effect among the Pentagon’s security experts.11 That indeed seems to have happened, and a virtual beachhead was established. But it remains unclear if the software was able to exfiltrate information from the classified network, let alone what and how much.

“Shady RAT” is another well-known and well-executed case. It is the selection of targets that points to a specific country, but not to a specific actor within that country, and in the case of Shady RAT China is the suspect. RAT is a common acronym in the computer security industry which stands for Remote Access Tool. McAfee, the company that discovered and named the operation, ominously hinted at the enterprising and courageous features of the rat in the Chinese horoscope. The attack is relatively well documented, so it is instructive to look underneath the hood for a moment.

The attackers operated in a sequence of four steps. First they selected specific target organizations according to economic or political criteria. The second step was the actual penetration. To penetrate a company’s or an organization’s computers, the attackers chose specific individuals within those target organizations as entry points. The contact information and email addresses for these employees could sometimes be gleaned from LinkedIn. Based on all available information, the attacker then tailored emails to their specific recipients, complete with attachments in commonly used Microsoft Office formats, such as .PPT, .DOC, or .XLS, but also PDF files. The files contained exploit code which, when opened, would execute and compromise software running on the recipient’s computer. This spear phishing ploy was remarkably sophisticated at times. One such email, sent to selected individuals, had the subject line “CNA National Security Seminar.” CNA referred to the Alexandria-based Center for Naval Analyses. The email’s body was even more specific:

We are pleased to announce that that Dr. Jeffrey A. Bader will the distinguished speaker at the CNA National Security Seminar (NSS) on Tuesday, 19 July, from 12:00 p.m. to 1:30 p.m. Dr. Bader, who was Special Assistant to the President and Senior Director for East Asian Affairs on the National Security Council from January 2009 to April 2011, will discuss the Obama Administration and East Asia.12

The phishing email’s content was not plucked out of thin air, but actually referred to an event that was scheduled at the CNA, and was therefore highly credible. The attached file, “Contact List.XLS,” contained a well-known exploit that was still effective due to Microsoft’s less-than-perfect security practices, the so-called Microsoft Excel “FEATHEADER” Record Remote Code Execution Vulnerability (detected by Bloodhound.Exploit.306).13 If the recipient’s computer had not installed Microsoft’s latest security updates, a clean copy of the Excel file would open as intended by the user, in order to avoid suspicion. But by clicking the file the user also opened a Trojan. One possible tell-tale sign of this particular exploit, Symantec reported, was that the MS Excel application would appear unresponsive for a few seconds and then resume operating normally, or it might crash and restart.

Shady RAT’s third step followed suit. As soon as the Trojan had installed itself on the targeted machine, it attempted to contact a command-and-control site through the target computer’s Internet connection. The web addresses of these control sites were programmed into the malware. Examples were:

http://www.swim[redacted].net/images/sleepyboo.jpg

http://www.comto[redacted].com/Tech/Lesson15.htm

http://www.comto[redacted].com/wak/mansher0.gif

Curiously, the addresses pointed to ordinarily used image files or HTML files, among the web’s most common file formats. This tactic, Symantec explained, was designed to bypass firewalls. Most protective firewalls are configured so that .JPG, .HTM, or .GIF files can pass without arousing suspicion. The Trojan’s image and text files looked entirely legitimate, even if superficially inspected by a human operator. One file, for instance, was headed “C# Tutorial, Lesson 15: Drawing with Pen and Brush,” pretending to be a manual for a specific piece of software. The text went on:

In this lesson I would like to introduce the Pen and the Brush objects. These objects are members of GDI+ library. GDI+ or GDI NET is a graphics library …

And so on. Yet, at closer examination, command-and-control code could be found behind the files’ façade. The .HTM file, for instance, contained hidden HTML comments. Programmers and website designers can use HTML comments to make notes within HTML files. These notes will be ignored by the browser when turning the file into a visually displayed website, but are visible to anybody reading the entire HTML file, be it a human or an artificial agent. The beginning of such a comment is marked with “<!--” and its end with “-->”. Shady RAT hid the coveted commands in these HTML comments. An example:

<!--{685DEC108DA731F1}-->

<!--{685DEC108DA73CF1}-->

<!--{eqNBb-OuO7WM}-->

<!--{ujQ~iY,UnQ[!,hboZWg}-->

Even if a zealous administrator opened an .HTM file in a simple text editor, which is normally used to write or modify legitimate code, these comments would be unsuspicious and harmless. Many programs that are used to design websites leave such unintelligible comments behind. But the Shady RAT Trojan would be able to decipher the cryptic comments by “parsing” them, as computer scientists say. Once parsed, the actual commands appear:

run:{URL/FILENAME}

sleep:{20160}

{IP ADDRESS}:{PORT NUMBER}

The first command, for instance, would result in an executable file being downloaded into a temporary folder on the target computer’s hard drive and then executed, much like clandestinely installing a malicious app from an illegitimate app store. What the app would be able to do is not specified by the Trojan. The second command, “sleep,” would tell the Trojan to lay dormant for two weeks—counted in minutes—and then awake to take some form of action. The third command is perhaps the most useful for the designers of the Shady RAT attack. It takes the attack to the next level. It does so by telling the compromised machine to open a remote connection to another computer, identified by the IP address, at a specific port.
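The extraction-and-parsing pattern described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Shady RAT’s actual code: the real Trojan used a proprietary encoding for the comment payloads, so the `decode` step below is a stand-in, and the sample HTML shows the commands undisguised for clarity.

```python
import re

# Illustrative sketch: pull payloads out of hidden <!--{...}--> HTML comments
# and route them to a command handler. The real Shady RAT encoding scheme is
# not reproduced here; decode() is a stand-in that passes payloads through.

COMMENT_RE = re.compile(r"<!--\{(.*)\}-->")

def extract_payloads(html: str) -> list[str]:
    """Return the strings hidden inside <!--{...}--> comments, one per line."""
    return COMMENT_RE.findall(html)

def decode(payload: str) -> str:
    # Stand-in for the Trojan's decoding step, which turned opaque strings
    # like "685DEC108DA731F1" into commands such as "sleep:{20160}".
    return payload

html = """
<html><body><p>C# Tutorial, Lesson 15: Drawing with Pen and Brush</p>
<!--{run:{URL/FILENAME}}-->
<!--{sleep:{20160}}-->
</body></html>
"""

for command in map(decode, extract_payloads(html)):
    verb = command.split(":", 1)[0]
    print(verb, "->", command)
```

A browser rendering this file would show only the harmless tutorial text; the parser recovers the two hidden commands.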

That final step of the Shady RAT attack enables the attackers to control the target computer directly. The Trojan establishes what is called a “remote shell” with the machine that holds the desired information. A hidden remote shell is a bit like plugging in a distant screen with a separate keyboard, clandestinely, all hidden from the user who in the meantime may be working on a document in Microsoft Word or writing an email in Outlook. To install the attacker’s hidden screen and keyboard, the Trojan waits for a handshake from its controller through the freshly established port connection. To identify themselves, the attackers would whisper a password to the hidden Trojan. The password looked somewhat like the following seemingly random string of characters:

“/*\n@***@*@@@»»\*\n\r”

Once the Trojan received the password it sprang into action by copying a specific file, cmd.exe, into a folder reserved by the Microsoft operating system. The espionage software then used the newly copied file to open a remote shell, that is, the remote screen and keyboard, giving the attackers significant control over the files on the compromised machine. Below is a list of commands that the attacker may use to get to work:

gf:{FILENAME} retrieves a file from the remote server.

http:{URL}.exe retrieves a file from a remote URL, beginning with http and ending in .exe. The remote file is downloaded and executed.

pf:{FILENAME} uploads a file to the remote server.

taxi:Air Material Command sends a command from the remote server.

slp:{RESULT} sends the results of the command executed above to the remote server to report the status.14

These commands are quite comprehensive. The gf command, for instance, allows the attacker to infiltrate additional packets of malware, say to do a specific job that the Trojan is not equipped for in the first place. The most coveted command may be pf, which was used to exfiltrate specific files from a targeted organization to a hidden attacker, of course clandestinely.
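The command table above implies a simple dispatch loop on the compromised machine: read a verb, route it to a handler. The sketch below is schematic, not the Trojan’s code; the handlers only return descriptions of what the real malware would do, and nothing is downloaded, uploaded, or executed.

```python
# Schematic sketch of the command dispatch such a RAT implies. Each verb from
# the command-and-control channel maps to a handler; here the handlers merely
# describe the action instead of performing it.

def handle_gf(arg):   return f"download {arg} from the remote server"
def handle_http(arg): return f"fetch {arg} over http and execute it"
def handle_pf(arg):   return f"upload (exfiltrate) {arg} to the remote server"
def handle_slp(arg):  return f"report status: {arg}"

HANDLERS = {"gf": handle_gf, "http": handle_http,
            "pf": handle_pf, "slp": handle_slp}

def dispatch(command: str) -> str:
    """Split a 'verb:argument' command and route it to the matching handler."""
    verb, _, arg = command.partition(":")
    handler = HANDLERS.get(verb)
    return handler(arg) if handler else f"unknown command: {verb}"

print(dispatch("pf:design_documents.xls"))
print(dispatch("slp:OK"))
```

The `pf` case is the one that matters most for espionage: a single short command turns the hidden shell into an exfiltration channel.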

McAfee was the first to make the attack public in a report in early August 2011.15 The report was led by Dmitri Alperovitch, McAfee’s vice president for threat research. Alperovitch’s team was able to identify seventy-one organizations from the log files of one command-and-control server. Among the targets were thirteen defense contractors, six agencies that belonged to the US Federal Government, five national and international Olympic Committees, three companies in the electronics industry, three companies in the energy sector, and two think tanks, as well as the Canadian and Indian governments and the United Nations.16 Forty-nine targets were in the United States, and the rest were in Western Europe and leading Asian countries, including Japan, South Korea, Taiwan, and India. That the Olympic Committees as well as the World Anti-Doping Agency were targeted was especially curious. Beijing hosted the Games in 2008, just when the attacks seemed to peak. Alperovitch concluded that this fact “potentially pointed a finger at a state actor behind the intrusions,” especially because the Olympics-related intrusions were unlikely to result in any immediate economic benefit. Some attacks also continued for an extended period of time. McAfee reported that one major American news organization headquartered in New York City was compromised for more than twenty-one months.

McAfee called the attacks “unprecedented.” Alex Gostev, chief security expert at Kaspersky Lab, one of McAfee’s competitors, disputed this finding. “Until the information in the McAfee report is backed up by evidence, to talk about the biggest cyberattack in history is premature,” he told Computerworld shortly after the attack became public.17 Others agreed. “Is the attack described in Operation Shady RAT a truly advanced persistent threat?” asked Symantec researcher Hon Lau in a blog post. “I would contend that it isn’t.”18 Whatever the operation’s best description—details about the volume and the nature of the exfiltrated data remain largely unknown. It is also unclear if and how the attackers were able to take advantage of the stolen information. Unfortunately this lack of knowledge is the rule rather than the exception.

Oak Ridge National Laboratory in Tennessee is the largest research institution focusing on energy-related science and technology under the umbrella of the Department of Energy. The lab, with a workforce of more than 4,200 and approximately 3,000 guest researchers a year, is one of America’s leading neutron science and nuclear energy research institutions. It houses some of the world’s most powerful computers. On 7 April 2011, unknown attackers set their sights on the lab. The attack was shrewd. A spoofed email purportedly from the lab’s human resource office contained a zero-day exploit, a previously unknown vulnerability, possibly in Microsoft Internet Explorer or Adobe Flash Player. The fake email was sent to 573 employees, informing them about benefit-related alterations by inviting them to follow a link for more detailed information. This trick succeeded. Department of Energy officials specified that the attacker had managed to steal approximately 1 gigabyte of data, the equivalent of a few thousand photos, or 1/64 the memory size of a standard smart phone. “When I think about how many pictures my daughter has on her iPhone, it’s really not a significant amount of data,” said Barbara Penland, the deputy director of communications for the Oak Ridge National Lab.19 Thom Mason, Oak Ridge’s director, suspected the attackers were after scientific data. Yet they seem to have failed to penetrate the lab’s classified network. In the aftermath of the attack, Oak Ridge lab turned off Internet access, including emails, to cut off possibly ongoing exfiltrations as well as follow-on attacks.

The attack was not the lab’s first. On 29 October 2007, Oak Ridge had already suffered a serious attack, along with other federal labs, including Los Alamos National Laboratory in New Mexico and California’s Lawrence Livermore National Laboratory. An unknown group of hackers had sent email messages with compromised attachments to a large number of employees, with some staff members receiving seven phishing emails designed to appear legitimate. One email mentioned a scientific conference and another phishing email contained information about a Federal Trade Commission complaint. In total, the attack included 1,100 attempts to penetrate the lab. In Oak Ridge, eleven employees opened a dodgy attachment, which allowed the attackers to exfiltrate data. The data were most likely stolen from a database that contained personal information about the lab’s external visitors, going back to 1990. Although the information contained sensitive personal details, such as Social Security numbers, it was probably the coveted research results or designs that the attackers were after. In Los Alamos, one of only two sites in the United States specializing in top-secret nuclear weapons research, hackers also successfully infiltrated the unclassified network and stole “a significant amount of data,” a spokesman admitted.20 DHS officials were later able to link that attack to China. The US Computer Emergency Readiness Team, US-CERT, backed up that claim with a list of IP addresses registered in China that were used in the attack. Yet the details were not granular enough to link the attack to any particular agency or company. Ultimately the US was unable to attribute a wave of sophisticated attacks against some of the country’s most sensitive research installations.

A comparable case was “Duqu.” In early October 2011, the Laboratory of Cryptography and System Security, geekily abbreviated as CrySyS Lab, at the Budapest University of Technology and Economics discovered a new and exceptionally sophisticated malware threat which created files with the prefix “~DQ,” and so the Hungarian engineers analyzing it called it Duqu.21 The threat was identified as a remote access tool, or RAT. Duqu’s mission was to gather intelligence from control systems manufacturers, probably to enable a future cyber attack against a third party using the control systems of interest. “The attackers,” Symantec speculated, “are looking for information such as design documents that could help them mount a future attack on an industrial control facility.”22 Duqu was found in a number of unnamed companies in at least eight countries, predominantly in Europe.23 The breaches seem to have been launched by targeted emails, “spear phishing” in security jargon, rather than by mass spam. In one of the first attacks, a “Mr. B. Jason” sent two emails with an attached MS Word document to the targeted company, the name of which was specifically mentioned in the subject line as well as in the email’s text. The first email, sent on 17 April 2011 from a probably hijacked proxy in Seoul, Korea, was intercepted by the company’s spam filter. But the second email, sent on 21 April with the same credentials, went through and the recipient opened the attachment. Duqu had a keylogger, was able to take screenshots, exfiltrate data, and exploit a Windows kernel vulnerability, a highly valuable exploit. The threat did not self-replicate, and although it was advanced it did not have the capability to act autonomously. Instead, it had to be instructed by a command-and-control server. In one case, Duqu downloaded an infostealer that was able to record keystrokes and collect system data. 
These data were encrypted and sent back to the command-and-control server in the form of .jpg images so as not to arouse the suspicion of network administrators. The command-and-control server could also instruct Duqu to spread locally via internal network resources.

All these attacks seemed to follow the same pattern. Duqu’s authors created a separate set of attack files for every single victim, including the compromised .doc file; they used a unique control server in each case; and the exploit was embedded in a fake font called “Dexter Regular,” including a prank copyright reference to “Showtime Inc.,” the company that produces the popular Dexter television series about a crime scene investigator who is also a part-time serial killer.24 Symantec and CrySyS Lab pointed out that there were “striking similarities” between Stuxnet and Duqu and surmised that the two were written by the same authors: both were modular, used a similar injection mechanism, exploited a Windows kernel vulnerability, had a digitally signed driver, were connected to the Taiwanese hardware company JMicron, shared a similar design philosophy, and used highly target-specific intelligence.25 One component of Duqu was also nearly identical to Stuxnet.26 But in one crucial way the two threats were very different: Duqu, unlike Stuxnet, was not code that had been weaponized. It was neither intended, designed, nor used to harm anything, only to gather information, albeit in a sophisticated way.

One of the most sophisticated cyber espionage operations to date became public in late April 2012 when Iran’s oil ministry reported an attack that was initially known as Wiper. Details at the time were scarce, and the story subsided. Then, a month later, a Hungarian research group published a report on the attack that quickly led to it acquiring the nickname Flame. Competing names in the initial frenzy were “Flamer” and “Skywiper.” Several parties announcing their finds on the same day created this confusion. On 28 May 2012, CrySyS Lab published a detailed 63-page report on the malware. Simultaneously, Kaspersky Lab in Russia announced news of the malware. The Iranian national CERT, the Maher Centre, had contacted well-known anti-virus vendors to alert them to the threat, which the Hungarian experts described as “the most sophisticated” and “the most complex malware ever found.”27 Other experts agreed. “Overall, we can say Flame is one of the most complex threats ever discovered,” Kaspersky Lab wrote.28 The Washington Post also acknowledged the threat: “The virus is among the most sophisticated and subversive pieces of malware to be exposed to date.”29

The new catch was indeed remarkable. Flame was a highly complex listening device, a bug on steroids: the worm was able to hijack a computer’s microphone in order to record audio clandestinely; secretly shoot pictures with a computer’s built-in camera; take screenshots of specific applications; log keyboard activity; capture network traffic; record Skype calls; extract geolocation from images; send and receive commands and data through Bluetooth; and of course exfiltrate locally stored documents to a network of command-and-control servers. Meanwhile the worm was dressed up as a legitimate Microsoft update. The 20 MB file was approximately twenty times larger than Stuxnet, which made it nearly impossible for the worm to spread by email, for instance. Yet its handlers kept Flame on a short leash. The spying tool was highly targeted. Kaspersky Lab pointed out that it was a backdoor, a Trojan, and the malware had “worm-like features,” which allowed the software’s remote human handlers to give commands to replicate inside a local network and through removable drives. Kaspersky also estimated the number of infected machines to be rather small, around 1,000. Once it arrived on a targeted machine, the spying software went to work by launching an entire set of operations, including sniffing the network traffic and taking screenshots of selected “interesting” applications, such as browsers, email clients, and instant messaging services. Flame was also able to record audio conversations through a computer’s built-in microphone, if there was one, and to exfiltrate the audio files in compressed form. It could also intercept keystrokes and pull off other eavesdropping activities. Large amounts of data were then sent back, on a regular schedule, to Flame’s masters through a covert and encrypted SSL channel via predefined command-and-control servers. One of Flame’s most notable features was its modularity.
Its handlers could install additional functionality into their spying vehicle, much like apps on an iPhone. Kaspersky Lab estimated that about twenty additional modules had been developed. Flame also contained a “suicide” functionality.30 The lab confirmed that Stuxnet and Flame shared some design features.31
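The "apps on an iPhone" analogy can be made concrete with a minimal, purely hypothetical Python sketch of such a modular design; none of the class or module names correspond to Flame's actual internals, which were written for Windows in C/C++ and Lua.

```python
class Module:
    """Base class for an optional capability module."""
    name = "base"
    def run(self) -> str:
        raise NotImplementedError

class Screenshot(Module):
    name = "screenshot"
    def run(self) -> str:
        return "captured browser window"

class AudioRecorder(Module):
    name = "audio"
    def run(self) -> str:
        return "recorded microphone audio"

class Core:
    """The resident core: handlers install modules and issue commands."""
    def __init__(self):
        self.modules = {}
    def install(self, module: Module) -> None:
        self.modules[module.name] = module
    def command(self, name: str) -> str:
        return self.modules[name].run()

core = Core()
core.install(Screenshot())     # pushed remotely later, like an app install
core.install(AudioRecorder())
print(core.command("audio"))   # prints "recorded microphone audio"
```

The point of such a design is operational flexibility: the core can stay small and generic while capabilities are added, removed, or updated per target without redeploying the whole platform.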

Flame’s most impressive feature was not its multi-purpose design, but its success. The quality and the volume of intelligence that the espionage tool dispatched to its masters remain unknown, and that is highly unlikely to change. The development and possibly the deployment of Flame started as early as December 2006, logs from the command-and-control code show. The online sleuths, in other words, may have operated in the dark for as long as five years. A great deal of camouflage was necessary to accomplish this. The logs also indicate that at least four programmers developed the code, and that this team of four devised clever methods to disguise their operation. One is the control panel design. The site with the control interface looked like the early alpha version of a command-and-control panel for botnets, with vintage blue links, purple when clicked, raw table frames, no graphics, no animations. But the attackers, it seems, deliberately chose a simple-looking and unpretentious interface. They also used unsuspicious words like “data, upload, download, client, news, blog, ads, backup,” not botnet, infection, or attack. “We believe this was deliberately done to deceive hosting company sys-admins who might run unexpected checks,” Kaspersky Lab wrote.32 The attackers worked hard to make their operation appear to be a legitimate content management system. Another attempt to camouflage Flame was the unusually strong encryption of the stolen data itself.

On 1 June 2012 The New York Times broke the news that the US government had developed Stuxnet, the world’s most sophisticated publicly known cyber attack to date. The US government still did not officially admit its authorship, but an FBI investigation into the leaks that led the Times to the story can be seen as a tacit statement of fact. Thereafter officials would occasionally comment on various aspects of government-sponsored computer attacks. Once anti-virus companies like Symantec and Kaspersky Lab discovered malware, government-made or not, patches and anti-virus measures were made available relatively quickly in order to counter the threat to their customers. Anti-virus companies, in short, could directly counter a US government-sponsored espionage program or even a covert operation. In the case of Flame, a former high-ranking American intelligence official felt the need to reassure The Washington Post’s readership that America’s cyber attack campaign had not been neutralized by countermeasures against Stuxnet and Flame: “It doesn’t mean that other tools aren’t in play or performing effectively,” the official told the paper. “This is about preparing the battlefield for another type of covert action,” he said. Stuxnet and Flame, the official added, were elements of a broader and ongoing campaign, codenamed Olympic Games, which had yet to be uncovered. “Cyber collection against the Iranian program is way further down the road than this,” as The Washington Post quoted its anonymous source.33 Allegedly, the joint operation involved the National Security Agency, the CIA, and most probably IDF Unit 8200. Meanwhile Iran admitted that Flame posed a new problem but did not offer many details. “The virus penetrated some fields—one of them was the oil sector,” Gholam Reza Jalali, an Iranian military official in charge of cyber security, was quoted on Iranian state radio in May.
“Fortunately, we detected and controlled this single incident.”34 It did not remain a single incident.

For hunters of government-sponsored espionage software, 2012 was shaping up to be the busiest year on record. The summer that year was exceptionally hot, especially in Washington, DC. On 9 August it was again Kaspersky Lab who found the newest cyber espionage platform: Gauss. The malware’s capabilities were notable. Gauss was a complex cyber espionage toolkit, a veritable virtual Swiss army knife. Like Flame, this spying software had a modular design. Its designers gave the different modules the names of famous mathematicians, notably Kurt Gödel, Joseph-Louis Lagrange, and Johann Carl Friedrich Gauss. The last module contained the exfiltration capability and was thus the most significant. The Russian geeks at Kaspersky therefore called their new catch Gauss.

The software’s lifecycle may approximate one year. Kaspersky Lab initially discovered the new malware in the context of a large investigation initiated by the Geneva-based International Telecommunication Union. Gauss was likely written in mid-2011. Its operational deployment probably started in August and September of that year, just when Hungarian anti-virus researchers discovered Duqu, another tool for computer espionage probably created by the same entity that also designed Gauss. The command-and-control infrastructure that serviced the spying operation was shut down in July 2012.

Three of Gauss’s features stand out. The first is that the espionage toolkit specialized in financial institutions, especially ones based in Lebanon. The Gauss code, which came in the file winshell.ocx, contained direct commands that were required to intercept data from specific banks in Lebanon, including the Bank of Beirut, Byblos Bank, and Fransabank.35 Gauss attempted to find the login credentials for these institutions by searching the cookies directory, retrieving all cookie files, and carefully documenting the results in its logs. It specifically searched for cookies that contained any of the following identifiers:

paypal; mastercard; eurocard; visa; americanexpress; bankofbeirut; eblf; blombank; byblosbank; citibank; fransabank; yahoo; creditlibanais; amazon; facebook; gmail; hotmail; ebay; maktoob
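The cookie-scanning logic described above amounts to a simple substring search over the files in a browser's cookies directory. The following Python sketch shows the idea; the identifiers are drawn from the list in the report, but the matching and logging code is an assumption, not Gauss's actual (Windows-native) implementation.

```python
from pathlib import Path

# Identifiers from the published list; matching logic is illustrative.
IDENTIFIERS = ("paypal", "mastercard", "eurocard", "visa", "americanexpress",
               "bankofbeirut", "byblosbank", "citibank", "fransabank",
               "creditlibanais", "gmail", "facebook", "amazon", "ebay")

def matches(cookie_text: str) -> list[str]:
    """Return every identifier that appears in a cookie file's contents."""
    low = cookie_text.lower()
    return [ident for ident in IDENTIFIERS if ident in low]

def scan_cookie_dir(directory: str) -> dict[str, list[str]]:
    """Walk a cookies directory and log which files mention which identifiers."""
    hits = {}
    for path in Path(directory).glob("*"):
        if path.is_file():
            found = matches(path.read_text(errors="ignore"))
            if found:
                hits[path.name] = found
    return hits

print(matches("session=abc; domain=.bankofbeirut.com; user=gmail-login"))
# prints ['bankofbeirut', 'gmail']
```

A hit in the log would tell the operators which machines held credentials worth pursuing, without yet stealing anything beyond the cookie metadata.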

These identifiers denoted global Internet companies with many users in Lebanon, including banks with significant operations in the country, such as Citibank, or purely Lebanese banks such as Banque Libano-Française, BLOM Bank, Credit Libanais, Fransabank, and Byblos Bank, as well as some Lebanese-founded institutions with international outreach. “This is the first publicly known nation-state sponsored banking Trojan,” Kaspersky Lab concluded in their highly detailed 48-page report on Gauss. Gauss’s geographical reach was notable but limited. Kaspersky discovered more than 2,500 infections among its customers, which means the overall number could be in the tens of thousands. That would be significantly lower than many ordinary malware infections, but much higher than the number of infections in the case of the highly targeted Duqu, Flame, and Wiper attacks. The vast majority of victims, more than 66 per cent, were found in Lebanon, almost 20 per cent in Israel, and about 13 per cent in the Palestinian Territories.

The second notable feature was its data-carrying capacity. The software used Round Robin DNS, a technique for handling large data loads. A Round Robin-capable name server would respond to multiple requests by handing out not the same host address, but a rotating list of different host addresses, thus avoiding congestion. Gauss’s command-and-control infrastructure, therefore, was designed to handle a massive load of data sent back from its virtual spies. The authors of Gauss invested a lot of work into that structure, including several servers at the following addresses:

*.gowin7.com

*.secuurity.net

*.datajunction.org

*.bestcomputeradvisor.com

*.dotnetadvisor.info

*.guest-access.net36
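How Round Robin DNS spreads load can be illustrated with a small Python model: each successive query for the same name receives the next address in a rotating list. The domain name below follows the report's list, but the addresses are documentation placeholders and the resolver is a conceptual sketch, not a real DNS server.

```python
from itertools import cycle

class RoundRobinResolver:
    """Conceptual model of a Round Robin DNS server: repeated queries
    for one name rotate through a list of host addresses, spreading
    incoming traffic across several collection servers."""
    def __init__(self, records: dict[str, list[str]]):
        self._rotors = {name: cycle(addrs) for name, addrs in records.items()}

    def resolve(self, name: str) -> str:
        return next(self._rotors[name])

resolver = RoundRobinResolver({
    "gowin7.com": ["203.0.113.10", "203.0.113.11", "203.0.113.12"],
})
print([resolver.resolve("gowin7.com") for _ in range(4)])
# prints ['203.0.113.10', '203.0.113.11', '203.0.113.12', '203.0.113.10']
```

For an exfiltration network, the benefit is the same as for a busy website: no single server becomes a choke point, and the loss of one address does not sever the operators from their data stream.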

These addresses were registered under fake identities, Jason Bourne-style. Examples are: Peter Kulmann, Antala Straska, Prague (in reality a pharmacy); Gilles Renaud, Neugasse 10, Zürich (a nondescript five-storey apartment building); and Adolph Dybevek, Prinsens gate 6, Oslo (a small hotel).

The third notable feature was Gauss’s mysterious components. The malware’s main payload module, named Godel, used seemingly exceptionally strong encryption. Kaspersky Lab was unable to crack the code and took the unusual step of crowdsourcing the task: “If you are a world class cryptographer or if you can help us with decrypting them, please contact us,” the computer scientists wrote. Another mysterious feature was a seemingly superfluous unique custom font, “Palida Narrow.” The purpose of this font is unknown. One remote possibility is that the font could serve as some form of marker for a potential target.

Gauss, in sum, has the look and feel of a state-sponsored attack. Several arguments back up this assumption. One is that Kaspersky discovered the virus when it was looking for commonalities that the software shared with Flame. The researchers found a similar architecture, similar module compositions, similar code bases, similar means of communication with command-and-control servers, and the exploitation of a specific vulnerability, the so-called .LNK vulnerability, which was already used in Stuxnet and Flame. One of Gauss’s modules contains a path c:\documents and settings\flamer\desktop\gauss_white_1, where “flamer” is the Windows username under which the product was created.37 Taking all these clues together, it indeed looks as if “Gauss was created by the same ‘factory’ which produced Flame,” as Kaspersky concluded their analysis.38 A second reason is the software’s sophistication and professional execution. And finally, and most convincingly, the target set indicates a state sponsor. Lebanon is home to Hezbollah, an organization that is politically, criminally, and militarily a major player in the region—and, since 1999, on the US State Department’s list of Foreign Terrorist Organizations. The US administration has long been concerned about Hezbollah money-laundering through Lebanon’s legitimate lenders, but has so far failed to produce evidence of this. “There are a number of articles published in prestigious U.S. newspapers that claim that some of our banks are hoarding illegal cash or getting involved in terrorist funding,” Makram Sader, the secretary-general of the Association of Banks in Lebanon, was quoted as saying in July 2012. “All these allegations were not substantiated by their authors.”39 Gauss could have been designed to change this.

Stuxnet, Duqu, Flame, and Gauss have one other thing in common: most likely, some of these complex espionage tools operated clandestinely for years before security researchers in private companies detected them. That is a remarkable and conspicuous failure on the part of the anti-virus industry, whose business model is developing and publishing products that mitigate precisely such threats. Whether the world’s finest signals intelligence agencies—excluding those that potentially developed the attack tools—have also failed is more difficult to say, because spies don’t publish what they know. But it is a fair assumption that if McAfee and Symantec miss a threat, the Bundesnachrichtendienst, the DGSE, and GCHQ could do so as well—not to speak of smaller, less well-resourced intelligence agencies. Unless, of course, these agencies are themselves the authors of an attack. That, increasingly, seems to be the case.

The German government’s first known use of computer espionage was the so-called Bundestrojaner. On 8 October 2011, the Chaos Computer Club caused a political uproar in Berlin. Germany’s famous hacker club broke news by publishing a report that accused the federal government of using a backdoor Trojan to spy on criminal suspects inside Germany. Bund means federal in German, so the press started referring to the malware as Bundestrojaner. The software was able to take screenshots of browser windows and Skype, to record VoIP conversations, and even to download more functional modules that were yet to be defined.40 The CCC hackers accused the federal government of “state voyeurism” and, because the Trojan’s security precautions were allegedly faulty, of enabling third parties to abuse the software. In the following days several German states admitted using the spyware, although, officials insisted, under strict legal limitations. Noteworthy for spyware ordered by the German government was the location of its command-and-control server: the commercial ISP Web Intellects, based in Columbus, Ohio.41

On 7 September 2012, closer to Ohio than Berlin, Debora Plunkett, director of the information assurance directorate at the US National Security Agency, gave a lecture at the Polytechnic Institute of New York University. She spoke about defending cyberspace. “We’re starting to see nation-state resources and expertise employed in what we would characterize as reckless and disruptive, destructive behaviours.”42 Plunkett, surprisingly, did not speak about the United States and its allies. Nor did she mention Stuxnet, Flame, Gauss, or the umbrella program Olympic Games. She spoke about America’s adversaries. It is therefore important to face a stark reality: Western countries are leading the charge in cyber espionage. Stuxnet, Flame, and the Bundestrojaner are only among the best-documented cases. So it should not come as a surprise, as with many other Western tactical innovations, that less developed states are trying to catch up and develop their cyber espionage capabilities. Non-democratic states, naturally, are not limited by the same institutional, legal, and ethical constraints as liberal democracies. One case from the Middle East is especially curious.

The case in question is known as “Mahdi.” As mentioned previously, 2012 proved a busy year for malware analysts, and this was especially the case for those with an interest in the Middle East. In July of that year a most curious incident became public. In Islam, the Mahdi is a messiah-like figure, a redeemer. Belief in the Mahdi is an especially important concept in Shia Islam, where the return of the Twelfth Imam is seen as the prophesied coming of the savior. And Iran is a predominantly Shia country. The malware’s name comes from its dropper, which also executed a text file, mahdi.txt. The file would in turn open a Word document that contained a particular news article, published by Eli Lake of The Daily Beast in November 2011, “Israel’s Secret Iran Attack Plan: Electronic Warfare.” The article described how Israel had developed “multi-billion dollar electronic weapons” that could be deployed in the event of Israel attacking Iran’s nuclear installations.43 The Mahdi malware was not as powerful as other espionage packages that ricocheted through the region that year. But it was still remarkable.

Most remarkable of all were Mahdi’s ornaments and social engineering, rather than the technology itself: for instance, the tricks the attackers used to get their victims to open malicious email attachments. To infiltrate victims specifically located in Israel, the Mahdi attackers sent an email that contained a PowerPoint presentation, Moses_pic1.pps, with text in English as well as Hebrew. The attackers had embedded executable code as an “activated content” in one of the slides. The presentation started by asking “Would you like to see the Moses?” in English and broken Hebrew (receiving such bilingual content is not unusual in Israel, where many immigrants from English-speaking countries—“Anglos”—may still be working on their Hebrew, hence broken Hebrew is not necessarily suspicious). The text was set against a series of tranquil and peaceful nature-themed images of snow-capped mountains, forest lakes, and tropical beaches. When the presentation had reached the slide with the embedded executable code, the text instructed the viewer—who by now may have been daydreaming about the next holiday or spiritual experience—to “look at the four central points of the next picture | for 30 seconds … please click this file.” The attackers had carefully crafted their text and anticipated that Microsoft Office would now display a pop-up window with a yellow exclamation mark, annoyingly interrupting the user’s joy with the religiously themed album: “You are about to activate an inserted object that may contain viruses or otherwise be harmful to your computer. Make sure the object is from a trustworthy source. Do you want to continue?” Dozens of Israeli users wanted to continue, the attack statistics show. Once the malware was installed, it was able to perform a number of information-stealing operations: keylogging, screenshot capture, audio-recording, and data exfiltration.
Mahdi included a screenshot capture functionality that was triggered by communication through Facebook, Gmail, Hotmail, and other popular platforms. Yet technically, from a programmer’s point of view, the attack was simple and inelegantly designed. “No extended 0-day research efforts, no security researcher commitments or big salaries were required,” commented Kaspersky Lab.44 It seems that the attack continued for eight months.

As with almost all cases of political malware, attribution was highly difficult and incomplete. So far it has not been possible to link Mahdi to a specific actor or agency. But several clues make it highly plausible that Mahdi, despite its unsophisticated code, was an Iranian operation: an Israeli security firm discovered that some of the malware’s communication with its handlers, routed through command-and-control servers partly located in Canada, contained calendar dates in Persian format as well as code strings in Farsi, the language spoken in Iran.45 Another indicator is Mahdi’s conspicuous list of targets. The 800 victims included companies that provide critical infrastructure, financial firms, and embassies. All targets were geographically located in the wider Middle Eastern region, the vast majority in Iran, followed by Israel, Afghanistan, Saudi Arabia, and the Emirates—the latter countries are either open enemies or regional rivals of Iran (with the exception of Afghanistan, which in 2012 still hosted the armed forces of many countries Iran considers adversaries).

Mahdi was certainly not as impressive as high-powered Chinese intrusions. But cyber espionage does not necessarily have to be technically sophisticated to be successful. Israel offers two interesting cases, this time not as a high-skilled attacker switching off air-defenses or stealthily sabotaging nuclear enrichment plants—but as a victim of cyber attack. This is not despite, but because of its technological prowess. More than any other country in the region, the Jewish State is a veritable high-tech nation. By 2009, Israel had more companies listed on the tech-oriented NASDAQ index in New York than all continental European countries combined.46 Naturally, online social networks grew rapidly in Israel, including in the armed forces, with Facebook proving to be especially popular. In early 2011, the country’s Facebook penetration had grown to nearly 50 per cent of Israel’s overall population. Israel was one of the most connected countries on the social network, with 71 per cent of all users being under the age of thirty-five.47 This created a novel espionage opportunity for Israel’s enemies and it was only a question of time until they would try to exploit this public, or at least semi-public, trove of information. Indeed, Hezbollah soon started collecting intelligence on online social networks. Military officers had long worried about operational security; many were especially concerned about the spotty risk awareness of draftees and young recruits. The IDF, somewhat surprisingly, was slow to include Facebook awareness in its basic training. Israeli officers had good reason to be concerned.
As early as September 2008, Israeli intelligence allegedly warned of Facebook-related enemy infiltrations: “Facebook is a major resource for terrorists, seeking to gather information on soldiers and IDF units,” a report in the Lebanese news outlet Ya Libnan said. “The fear is soldiers might even unknowingly arrange to meet an internet companion who in reality is a terrorist.”48 Around that time, Hezbollah was probably already testing the waters and starting to infiltrate the Israeli Army via Facebook. One operation became public in May 2010, more than a year after it had been launched. Reut Zukerman was the cover name of a fake Facebook persona allegedly created by Hezbollah operatives. The girl’s profile photo showed her lying on a sofa, smiling innocently. Hackers use the term “honeypot” for a lure designed to draw a target into a trap, although usually the method is not used in the context of social networks. In Zukerman’s case the honeypot was an attractive young woman, but not so salacious as to be suspicious or lacking in credibility. Approximately 200 elite soldiers and reservists responded to Zukerman’s friendship requests over the course of several months. Once the profile had accumulated a visible group of contacts on Facebook, newcomers assumed that Reut would be just another Special Forces soldier herself. “Zukerman” allegedly succeeded in gaining the trust of several soldiers who volunteered information about the names of other service personnel, along with explanations of jargon, detailed descriptions of military bases, and even codes. Only after one full year did one of Zukerman’s “friends” become suspicious and alert the IDF’s responsible unit.49

In July 2010, one of the Israel Defense Force’s most serious security breaches became known. Soldiers serving at one of the country’s most highly classified military bases opened a Facebook group. Veterans of the base could upload photos and videos of their shared time in the IDF. The group boasted a motto, in Hebrew, “There are things hidden from us, which we will never know or understand.” The group had grown to 265 members, all approved by an administrator. But the administrators apparently did a sloppy job. A journalist from the Israeli daily Yedioth Aharonot got access to the group, did some research, and wrote a story. “Guys, we were privileged to get to be in this fantastic place,” one veteran wrote, “Keep in touch and protect the secret.”50 The group members posted pictures of themselves on the base. Yet, according to Yedioth, the material made available by the group did not contain any compromising information. Some of the group’s members had repeatedly warned on the page’s wall not to upload classified or sensitive information.

A common trait in almost all high-profile cyber espionage cases is the use of some form of social engineering, or spear phishing. Emails or websites that trick users into unwittingly installing malware highlight the human dimension of cyber espionage. This human dimension is, on closer examination, more important than commonly assumed. Two of the most high-profile examples on record illustrate this significance of human sources for computer espionage: the first is a Chinese operation, the second an American one.

A recent row between an American and a Chinese maker of wind turbines is instructive. AMSC, an American green energy company formerly known as American Superconductor Corp., based in Devens, Massachusetts, sought $1.2bn in damages, thus making the case the largest intellectual property dispute between the US and China on record. Sinovel, China’s biggest manufacturer of such turbines, is known for building the country’s first offshore wind farm. Sinovel used to be AMSC’s largest customer, accounting for about 70 per cent of the American company’s revenue. But on 5 April 2011, the US company informed its shareholders that Sinovel was refusing delivery of its products and had cancelled contracts during the previous month.51 “We first thought that this was an inventory issue, and we were understanding,” Jason Fredette, AMSC’s vice president of communications and marketing, told IEEE Spectrum. “Then in June we discovered this IP theft, and that changed things quite a bit.”52 The coveted design that the Chinese company was after was a new software package that enabled so-called “low-voltage ride through,” a way of allowing the wind turbines to maintain operations during a grid outage. AMSC reportedly gave Sinovel a sample for testing purposes, but the brand-new software had an expiry date, just as some commercial trial software requires a purchase after thirty days or so. So when an employee in China discovered a turbine operating with “cracked” software beyond its expiry date, AMSC became suspicious. Somebody from inside the firm must have helped the Chinese to remove the expiry date, the company reckoned. The Massachusetts-based firm started an investigation. Only a limited number of AMSC’s employees had access to the “low-voltage ride through” software in question, and even fewer people had traveled to China. The firm quickly identified one of its staff based in Austria, Dejan Karabasevic.
At the time of the leak this 38-year-old Serbian was working as a manager at a Klagenfurt-based subsidiary of American Superconductor, Windtec Solutions. The suspect confessed while waiting for his trial in an Austrian prison. On 23 September, the engineer was sentenced to one year in jail and two years of probation. The Klagenfurt district court also ordered Karabasevic to pay $270,000 in damages to his former American employer. As it turned out during the trial, Karabasevic had used a thumb drive to exfiltrate “large amounts of data” from his work laptop at Windtec in April 2011, the Austrian judge Christian Leibheuser-Karl reported, including the entire source code of a crucial program in the most recent version. He then allegedly sent the relevant code via his Gmail account to his sources in Sinovel. This source code enabled the Chinese company to modify the control and supervisory program. Thanks to the rogue engineer’s help, the Chinese were able to copy and modify the software at will, thus circumventing the purchase of new software versions as well as new licences. The Austrian prosecutors estimated that Karabasevic had received €15,000 from Sinovel for his services.53 Florian Kremslehner, an Austrian lawyer representing American Superconductor, revealed that his client had evidence that Sinovel had lured its valuable spy by offering him an apartment, a five-year contract that would have doubled his AMSC salary, and “all the human contact” he desired, the attorney said, “in particular, female co-workers.”54 The affair was economically highly damaging for American Superconductor. The company’s yearly revenue dropped by almost 90 per cent; its stock plunged from $40 to $4; it cut 30 per cent of its workforce, around 150 jobs; and it reported a net loss of $37.7 million in the first quarter after the Sinovel affair.55

A second instructive example is offered by the lead-up phase to Operation Olympic Games. Even the saga of Stuxnet, the only potent cyber weapon ever deployed, demonstrates the continued significance of human agents in getting coveted, actionable intelligence. The “holy grail,” in the words of one of the attack’s architects, was getting a piece of espionage software into the control system at Natanz. The designers of what was to become the Stuxnet worm needed fine-grained data from inside the Iranian plant to develop their weaponized code.56 The problem was that the control system was air-gapped, as it should be. But the American intelligence operatives had a list of people who were physically visiting the plant to work on its computer equipment, therefore traveling across the air gap. The list included scientists as well as maintenance engineers. Anybody could carry the payload into the targeted plant, even without knowing it. “We had to find an unwitting person on the Iranian side of the house who could jump the gap,” one planner later told Sanger.57 The list of possible carriers involved Siemens engineers, who were helping their Iranian colleagues in maintaining the programmable logic controllers. The work of engineers would often involve updating or modifying bits of the program that ran on the programmable logic controllers, but because the controllers don’t have a keyboard and screen, the work had to be done on the engineers’ laptops. And the laptops needed to be connected directly to the PLCs to modify their software. Siemens was reportedly helping the Iranians to maintain their systems every few weeks. Siemens, it should be noted, had been dealing with Iran for nearly one-and-a-half centuries. In 1859, the company’s founder Werner von Siemens emphasized the importance of business in Iran in a letter to his brother Carl in Saint Petersburg.
In 1870 Siemens completed the construction of the 11,000-kilometer Indo-European telegraph line, linking London to Calcutta via Tehran.58 Some 140 years later, Siemens engineers were again carrying a novel piece of IT equipment into Iran, but this time without their knowledge: “Siemens had no idea they were a carrier,” one US official told Sanger. “It turns out there is always an idiot around who doesn’t think much about the thumb drive in their hand.”59 American intelligence agencies apparently did not infiltrate Siemens, as they sought to avoid damaging their relationship with Germany’s Bundesnachrichtendienst, the country’s foreign intelligence service. But Israel was allegedly not held back by such considerations. Another version of events is that Siemens engineers willingly helped infiltrate the malware into Natanz. One recent book on the history of the Mossad, Spies Against Armageddon, claims that the Bundesnachrichtendienst, an agency traditionally friendly to Israel out of a habit of trying to right past wrongs against the Jewish people during the Holocaust, “arranged the cooperation of Siemens.”60 Executives at Siemens may have “felt pangs of conscience,” the Israeli-American authors suspected, or they may have simply reacted to public pressure. Ultimately the Iranians became suspicious of the German engineers and ended the visits to Natanz.61 But by then it was too late.

The precise details of Stuxnet’s penetration technique remain shrouded in mystery. Yet some facts have been established. We may not know who ultimately bridged the air gap and made sure the worm could start its harmful work. But it now seems highly likely that a human carrier helped jump that gap, at least at some stage during the reconnaissance or the attack itself. An earlier assumption was that Stuxnet had a highly aggressive initial infection strategy hardwired into its software, in order to maximize the likelihood of spreading to one of the laptops used to program the Siemens PLCs.62 The two possibilities do not necessarily stand in contradiction, as the attack was a protracted campaign that had to jump the air gap more than once.

The evolution of the Internet is widely seen as a game-changer for intelligence agencies. Appropriate historical comparisons are difficult to make.63 But in terms of significance, the Internet probably surpasses the invention of electrical telegraphy in the 1830s. The net’s wider importance for human communication may be comparable to Johannes Gutenberg’s invention of the printing press in the 1440s, a time that predates the existence of the modern state with its specialized intelligence agencies. The more precise meaning of this possibly game-changing development will remain uncertain for years to come. Isolating three trends may help clarify the picture.

The first unprecedented change is an explosion of data. Individuals, companies, non-commercial groups, and of course states produce a fast-growing volume of data in the form of digital imagery, videos, voice data, emails, instant messages, text, metadata, and much more besides. The digital footprint of almost any individual in a developed country is constantly getting deeper and bigger, and so are the digital fingerprints that all sorts of transactions are leaving behind in log-files and metadata. The same applies to companies and public administrations. More data is produced at any given moment than at any time in the past. Vast quantities of this information are instantly classified. Yet, perhaps counterintuitively, the ratio of information that is actually secret is shrinking, even as what remains secret becomes better protected, argues Nigel Inkster, a former British intelligence official now working at the International Institute for Strategic Studies.64 Non-classified data is growing faster than classified data. The ongoing rise of social media epitomizes this trend: social media generate gigantic amounts of data; much of it is semi-public, depending on privacy settings, and thus constitutes potentially collectable intelligence.

The second novel change is the rise of the attribution problem. Acts of espionage and even acts of political violence that cannot be attributed to a perpetrator are of course not new, but their number and significance certainly are. In September 2010, Jonathan Evans, the director-general of MI5, Britain’s domestic intelligence service, gave a speech in London to the Worshipful Company of Security Professionals, one of the younger livery companies (institutions descended from medieval guilds). Evans highlighted the emerging risks: “Using cyberspace, especially the Internet, as a vector for espionage has lowered the barriers to entry and has also made attribution of attacks more difficult, reducing the political risks of spying,” he said.65 More actors were spying, it was easier for them to hide, and the risk they were taking was lower. As a result of falling costs and rising opportunities, he argued, the likelihood that a firm or government agency would be the target of state espionage was higher than ever before. The range and volume of espionage that can be accomplished without attribution has probably never been so great. The same applies to the amount of goods that can be pilfered clandestinely.

The third trend partly follows from the first two: the blending of economic and political espionage. In late 2007, Evans sent a confidential letter to the top executives of 300 large companies in the United Kingdom, including banks, financial services firms, accountants, and law firms. Evans warned that they were under attack from “Chinese state organizations.”66 This was the first time that the British government had directly accused the Chinese government of cyber espionage. The summary of the letter on the website of the UK’s Centre for the Protection of National Infrastructure warned that the People’s Liberation Army would target British firms doing business with China in an effort to steal confidential information that may be commercially valuable. The foreign intruders had specifically targeted Shell Oil and Rolls-Royce, a leading producer of jet engines. Evans’s letter allegedly included a list of known “signatures” that could be used to identify Chinese Trojans, as well as a rundown of URLs that had been used in the past to stage targeted attacks. Economic espionage is more than just a subset of the wider espionage problem. Especially in highly developed knowledge societies—the West, in short—the most profitable and the biggest companies have integrated global supply chains with many exposures to the Internet. One agency in Washington, the Office of the National Counterintelligence Executive (NCIX), is in charge of integrating the defense against foreign spies across various government agencies. In October 2011, the outfit published a report on foreign spying operations against US economic secrets in cyberspace.
“Cyber tools have enhanced the economic espionage threat,” the report states, “and the Intelligence Community judges the use of such tools is already a larger threat than more traditional espionage methods.”67 The report clearly stated that Chinese actors were the world’s most active and persistent aggressors, noting an “onslaught” of computer intrusions against the private sector—yet even the intelligence agency in charge of detecting such intrusions noted that it was unable to confirm who within China was responsible.
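The lists of Trojan “signatures” and known staging URLs that Evans reportedly circulated are an early form of what is now called indicator-of-compromise (IOC) matching: screening logs against shared lists of known-bad artifacts. A minimal sketch, using entirely hypothetical indicator values and log entries (nothing here reproduces a real signature):

```python
# Hypothetical indicators of the kind a warning letter might share:
# a set of known-bad staging URLs and a set of known-bad file hashes.
KNOWN_BAD_URLS = {"http://update-check.example.net/payload"}
KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}

def screen(log_entries):
    """Return the log entries whose URL or file hash matches a known indicator."""
    hits = []
    for entry in log_entries:
        if (entry.get("url") in KNOWN_BAD_URLS
                or entry.get("file_hash") in KNOWN_BAD_HASHES):
            hits.append(entry)
    return hits

# Illustrative log data: the first entry matches a known-bad URL.
logs = [
    {"host": "workstation-7", "url": "http://update-check.example.net/payload"},
    {"host": "workstation-9", "url": "http://intranet.example.org/home"},
]
print(screen(logs))  # flags the first entry only
```

The point of the sketch is the asymmetry Evans exploited: once indicators are shared, even a defender with no attribution capability of its own can detect a known intrusion with a simple set-membership check.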

These trends create a number of novel challenges for intelligence agencies, especially those agencies or subdivisions traditionally focused on signals intelligence, SIGINT. The first challenge is selection: identifying and exploiting the most relevant sources for the most relevant information. This challenge is a consequence of big data. The problem is best illustrated by modifying the old quip about the drunk looking for his car keys underneath the streetlight, because that’s where the light is. Big data means the drunk is now searching for the car keys in a sprawling and brightly lit field of streetlights, stretching out in all directions as far as the eye can see. Finding the keys there is a problem in itself. But the selection problem is that the keys may still be in the dark, beyond that field of streetlights. Just because a signals intelligence agency has a lot of data does not necessarily mean it has the right data.

The second challenge is interpretation and analysis (i.e. finding the keys within that field of streetlights). Big data means that turning data into intelligence has become harder, and turning that intelligence into “actionable” intelligence even more so. Pure cyber espionage—that is, remote infiltration of a system, remote reconnaissance, and remote exfiltration of data—comes with a number of problems built in. A lack of insider knowledge almost always means that putting data and information into context is far harder. The story of Tanaka’s head baker epitomizes this problem. If a company sets out to steal and replicate an entire industrial process, doing so requires a great deal of tacit knowledge, not just the explicit knowledge stored in data. Data can be downloaded; experience, skills, and hunches cannot, and all three are crucial in order to understand complex processes as well as complex decisions. Access to insiders may be necessary to put a deluge of information into context. Big data, in short, also means that intelligence agencies can collect far more data than they can sensibly analyze.

The third challenge is reorienting and reconnecting human intelligence. The specter of cyber espionage, especially from the point of view of the attacked, is threatening to drive a wedge between SIGINT and HUMINT, with the former receiving lots of funds, even in times of scarce budgets, and the latter receiving queries about its continued relevance. “[H]uman spies are no longer the whole game,” observed Brenner, formerly at the NSA. “If someone can steal secrets electronically from your office from Shanghai or Moscow, perhaps they don’t need a human spy.”68 Yet to see a cleavage between the two forms of intelligence would be a mistake. Stuxnet and the Sinovel case, two of the most high-profile cyber espionage operations, highlight the crucial relevance of human operatives within cyber espionage operations. As Evans noted to the Worshipful Company of Security Professionals, “Cyber espionage can be facilitated by, and facilitate, traditional human spying.”69 The two prime challenges induced by the explosion of data and the attribution problem—selection and interpretation—may be dealt with only through old-fashioned HUMINT work, albeit sometimes technically upgraded, and not merely by “pure” cyber espionage and data-crunching. One crucial job of intelligence agents is to recruit and maintain a relationship of trust with informants, for instance Iranian scientists working on the nuclear enrichment program clandestinely passing on information to the CIA. The recruitment and maintenance of informants is a delicate task that requires granular knowledge of an individual’s personality and history, and establishing a personal relationship between the handler and the informant. It has become possible to establish and maintain such relationships online, although online-only recruitment presents significant challenges to intelligence agencies.
Making sure that a person on Skype, Facebook, or email is a bona fide member of a specific group or profession is more difficult than it is in a face-to-face conversation or interrogation.70 Human intelligence is still needed, and the notion of tacit knowledge makes clear why. Neither the value of tacit knowledge nor the value of personal trust and face-to-face connections has diminished in the twenty-first century.

The fourth challenge for secretive intelligence agencies is openness: not openly available data, but the need to be open and transparent in order to succeed. On 12 October 2010, Iain Lobban, the head of the Government Communications Headquarters (GCHQ), the UK’s equivalent of the National Security Agency, gave a noteworthy speech. It was noteworthy if only because it was the first major public address by a serving head of the secretive agency, known in Britain as “the Doughnut” because of its ring-shaped headquarters near Cheltenham, a large spa town in Gloucestershire, two hours west of London. Lobban made a few important points in the speech. Perhaps the most crucial was that he highlighted an opportunity the United Kingdom could seize if only the government, telecommunication companies, hardware and software vendors, and service providers would “come together”:

It’s an opportunity to develop a holistic approach to Cyber Security that makes UK networks intrinsically resilient in the face of cyber threats. And that will lead to a competitive advantage for the UK. We can give enterprises the confidence that by basing themselves here they gain the advantages of access to a modern Internet infrastructure while reducing their risks.71

It is no longer enough that the government’s own networks are safe. Securing a country’s interest in cyberspace requires securing a far larger segment of the entire domain than just the public sector within it. But such a holistic approach to cyber security, and the intelligence agency–private sector cooperation needed for that approach, comes with built-in difficulties. Giving enterprises confidence before they even come to the UK implies some form of international marketing to highlight the security benefits of Britain as a new home for financial firms and high-tech entrepreneurs. But GCHQ was formed in 1919 as the Government Code and Cypher School and is known for secretly inventing public key cryptography, a tool to keep information secret. This long-fostered secretive culture may now turn from virtue to vice. Only by being significantly and aggressively more open will intelligence agencies be able to meet their new responsibilities, especially the economic dimension of those responsibilities.

This fourth challenge leads to a fifth one that is even more fundamental. The Internet has made it far more difficult to draw the line between domestic and foreign intelligence. The predominant administrative division of labor in the intelligence community is predicated on a clear line between internal and external affairs, as is the legal foundation of espionage. That line, which was never entirely clear, has become brittle and murky. The attribution problem means that an agency that intercepts an email of grave concern, for instance, may find it impossible to locate the sender and the receiver, making it impossible to identify a specific piece of intelligence as foreign or domestic. But the intelligence may still be highly valuable. In 1978, US President Jimmy Carter signed the Foreign Intelligence Surveillance Act (FISA) into law. Introduced by Senator Ted Kennedy, the act was designed to improve congressional oversight of the government’s surveillance activities. The backdrop of the new law was President Richard Nixon’s abuse of federal intelligence agencies to spy on opposition political groups in America. FISA, as a consequence, imposed severe limits on the use of intelligence agencies inside the United States, including intercepting communication between foreigners on American soil.

In sum, intelligence agencies engaged in some form of cyber espionage are facing an ugly catch-22: on the one hand, taking cyber espionage seriously means unprecedented openness vis-à-vis new constituencies as well as unprecedented and borderline-legal surveillance at home. This means change: changing the culture as well as the administrative and possibly legal setup of what intelligence agencies used to be in the past. Such reforms could amount to a veritable redefinition of the very role of intelligence agencies. On the other hand, taking cyber espionage seriously means reintroducing and strengthening the human element, in order to penetrate hard targets and to cope with big data and the wicked attribution problem. But by recruiting, placing, and maintaining human intelligence sources, an agency may effectively move an operation outside the realm of cyber espionage narrowly defined, thus removing the “cyber” prefix from espionage.