CHAPTER FIFTEEN

THE LIGHTS GO OFF

A phone call at 4.45 a.m. woke Oliver Hoare, the head of cyber security for the London Olympics, in July 2012. An early wake-up call was especially unwelcome when it was the day of the opening ceremony and the call was from GCHQ. ‘There was a suggestion that there was a credible attack on the electricity infrastructure supporting the Games,’ Hoare recalls.1 Attack tools had been found in the possession of a hacker, along with what were thought to be schematics of the systems underpinning the Olympics. If the lights had gone off that evening and the Queen plunged into the dark on global TV, the reputational damage to the UK would have been enormous. Emergency meetings were held in the Cabinet Office to try to bottom out the threat and work out how to respond. ‘We effectively switched to manual – or had the facility to switch to manual,’ Hoare says, explaining how technicians had to be stationed at various points to keep the power flowing in case it was switched off remotely. An hour before the opening ceremony he was reassured that if the lights went down they would be back up within thirty seconds. But thirty seconds of dark during the Olympic opening ceremony with billions around the world watching would still have been a disaster. In the end, the feared attack turned out to be a false alarm: it would eventually emerge that the plans the hackers possessed were similar to but not the same as those of the Olympic systems. However, as when the lights went out at the US Super Bowl in February 2013 and everyone wondered why, the incident revealed how jittery officials have now become about the danger of cyber threats to infrastructure. ‘It is just too serious a matter to ignore,’ says Hoare.

One of the reasons officials are so worried is that they understand how vulnerable infrastructure is and they have seen what Stuxnet can do. Many industrial control systems (known as SCADA, for supervisory control and data acquisition) are decades old and often have minimal security measures. In the past this did not matter, as an engineer needed to be physically present to manage them and they were not accessible from the outside. But companies have increasingly hooked them up to the internet for convenience: for instance, a manager may want to monitor and manage the flow through a gas pipeline remotely or know what reserves there are in order to quickly buy extra capacity on the market. That might all be done from the same laptop on which the manager sends his or her emails. It makes life easy, but also dangerous. By putting public-facing front-end computers on top of old, insecure systems you immediately have a major problem. Replacing or updating them would be expensive, since they are embedded within large industrial plants. Now, if hackers can get into your system, they can also get into the controls, which are ‘sitting ducks’. Researchers have been able to find half a million SCADA systems accessible over the internet.
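To make the scale of that exposure concrete: the half-million figure comes from researchers doing little more than checking which internet addresses answer on the ports used by industrial protocols. A minimal sketch of that kind of check, written here in Python and assuming the well-known Modbus/TCP port 502 (the addresses shown are reserved documentation examples, not real plants), might look like this:

# Illustrative sketch only: checks whether a host accepts connections on TCP port 502,
# the standard port for the Modbus industrial protocol. A machine answering here is
# exactly the kind of internet-facing SCADA front end the researchers were counting.
# The addresses are documentation examples; probing systems you do not own is illegal.
import socket

MODBUS_PORT = 502  # well-known Modbus/TCP port used by many industrial controllers

def modbus_port_open(host, timeout=2.0):
    """Return True if the host accepts a TCP connection on the Modbus port."""
    try:
        with socket.create_connection((host, MODBUS_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in ("192.0.2.10", "192.0.2.11"):  # hypothetical example addresses
        status = "exposed" if modbus_port_open(host) else "not reachable"
        print(host, status)

Real surveys layer protocol fingerprinting and indexing on top, but the first step is little more sophisticated than this.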

The ‘human factor’ of insiders was most apparent in one of the earliest attacks on infrastructure in February 2000 when 800,000 litres of raw sewage were released into parks and rivers in Queensland, Australia. This turned out to have been the work of someone who had failed to get a job with the company and issued commands to the computers controlling the sewage over an unsecured network. A more serious wake-up call came with a test by the US Department of Energy’s Idaho lab in 2007 which showed that remote hacking into the operating cycle of a power generator could send it out of control to the point where it effectively blew up.

These infrastructure systems are often in private hands, so whose responsibility is it to defend them? Government or industry? Industry has often proved itself either incapable or unwilling to spend the money. It has also fought against ideas to impose security standards, fearing the cost will make companies uncompetitive globally. Government is reluctant to get into the business of protecting anything but the most core national assets in the private sector because the job is so vast. Infrastructure is so complex and interconnected now that no one really understands the points of connection or the vulnerabilities or what is actually critical.2 The private and public sectors are interlinked, often across national borders, with foreign companies running parts of a country’s infrastructure. A dense mesh of cyberspace is emerging which is vital to the functioning of our world but also poorly understood. As with financial systems, the danger is that there is no one who fully understands the vulnerabilities and the way actions can ripple out and cause a crash.

When Unit 61398 of the PLA targeted natural gas pipeline operators, the spies were not stealing corporate data but seeking information on how the controllers that run the systems operate.3 The fear is that this could open the way for Stuxnet-type attacks. Stuxnet showed just how much work is required to carry out an effective act of sabotage, but there are plenty of signs that the kind of reconnaissance needed is being done.4 Cyber reconnaissance of infrastructure even made its way into President Obama’s February 2013 State of the Union address. ‘Our enemies are also seeking the ability to sabotage our power grid, our financial institutions, our air traffic control systems,’ he said. China, as ever, gets most of the attention but it is not the only actor.

A Russian spy named Oleg Lyalin was arrested in 1971 careering down Tottenham Court Road in his car, drunk and with a blonde at his side. Ostensibly a knitwear representative for the Soviet Trade delegation, Lyalin was in fact an expert in hand-to-hand combat and part of the ultra-secret Department V of the KGB. This dealt with sabotage in the event of war, the latest incarnation of the ‘stay behind’ networks of the Second World War and the early Cold War whose job was to activate when a conflict started and do as much damage as possible. As a defector, Lyalin revealed plans to land teams of Spetsnaz Special Forces in Britain, flood the London Underground and blow up Fylingdales radar station. This was a classic aspect of Russian espionage that again takes spying beyond the narrow field of gathering information: it is also preparing for and carrying out covert action. Just as they invested heavily in this kind of spying during the Cold War, so the Russians are also believed to have become masters of similar activity in cyberspace, expertly probing infrastructure for weaknesses which can be targeted if the order is given. It should surprise no one that American and British spies have also hacked into the infrastructure of Russia and China. This may be partly for deterrence – to send a message to your opponent that you can do to them what they can do to you – to create a form of mutually assured destruction. But it can also be to prepare for war.

The penetration of these systems is a form of intelligence-gathering much as states have carried out in the past when preparing themselves for conflict. For hundreds of years they did this by making maps of a potential adversary’s key facilities, perhaps after despatching spies or interviewing people who returned from far-off lands. In the Cold War it would be done through more technical means, such as satellite reconnaissance or signals intelligence to try to identify enemy military units and associated infrastructure. Now this is done in cyberspace. And, crucially, it is done against the private sector and not just government and military networks, since that is often where national power resides. But does the act of reconnaissance actually constitute an ‘attack’? It may involve penetrating networks and even leaving behind implants and backdoors to allow a future attack. But it is not the same as actually pulling the trigger. In that sense, this kind of activity is closer to traditional military intelligence and reconnaissance. It only becomes sabotage when deployed covertly (as with Stuxnet, and therefore still a traditional clandestine intelligence activity) and only cyber war when used overtly as an open act of aggression. This is something states have yet to do. And why would China do this to America or Britain do this to China? Only if the two countries were at war or about to go to war. In which case cyber attack would be the least of people’s worries, given the presence of far more lethal weapons. How likely are the US and China to go to war when their economies are closely connected – and far more interwoven, for instance, than those of the US and USSR were in the Cold War? Seen in this way, cyber war is merely a new route that warfare will take in the unlikely event of an actual conflict.

There is one problem, though. Cyber reconnaissance is hard to distinguish from warfare. The act of getting into a network and leaving a backdoor to be able to carry out an offensive action in the future is 99 per cent of the work required to take a network down or switch off the power – all that may be missing is a command. That makes it different from traditional intelligence-gathering and much harder to distinguish from attacking. ‘You’ve got to know about an adversary’s network before you want to work your will on it,’ says Michael Hayden. ‘But in a very interesting way the reccie [reconnaissance] in the cyber domain is actually the higher-order action. It’s actually operationally and technically more challenging to penetrate someone’s network, live on it undetected and extract large volumes of information from it – far more difficult – than it is to do something once you’re inside that network. And so when you see someone in a SCADA network, one that controls industrial processes . . . power grids or banking systems, what’s really scary is that “foreign” – whatever that means – presence in that network tells you that that agent already has the ability to do harm because they’ve penetrated the network and have lived on it undetected. That’s what makes “foreign” – read Chinese – presence on these industrial networks quite scary. That already indicates the ability to do harm. It’s not like in the physical domain where okay, I get it, they’re conducting espionage, they’re learning about targets. In this case they’ve already mastered the target.’5 This type of espionage may therefore create a sense of vulnerability and fear that is itself destabilising.

What if a country could use even non-classified knowledge – gathered by cyber espionage – of what supplies are being ordered, whether food or oil, not just to work out where military units might be moved but also, in times of crisis, to disrupt those supplies in order to prevent troops or ships being deployed? What if penetrating defence companies allowed you not just to steal designs but also to implant vulnerabilities which could be turned on in time of war? ‘My nightmare scenario is that the United States tries to use force or is contemplating using force in a region of the world and when it trots out its military nothing works because there are Trojan horses inside the software in the American military arsenal,’ says Richard Clarke. ‘If you look at something like the F-35 fighter plane, there are tens of thousands of computer chips in it and very few of them made in the United States, very few of them made under secure conditions. And the software that we rely on is also filled with errors that can be exploited, so the supply chain for American weapons is very vulnerable.’ Kill switches hidden in the hardware of guided missiles are the ‘ultimate sleeper cell’, others fear.

This means that cyber reconnaissance is not just drawing up maps of your opponent’s terrain; it is more like sneaking in and leaving a few satchel bombs hidden in air ducts and underneath the floorboards, ready to be triggered remotely if you ever need to. The act itself involves interfering with a network and can be misinterpreted as hostile, even if the purpose is only reconnaissance. In this way there is a greater danger of escalation in cyberspace, both because intrusion is so easy and because it can be misread. Cyber reconnaissance exists in a new place, sitting uneasily and dangerously between traditional espionage and real warfare.

Fort Meade, the long-time home of the NSA, sprawls across a chunk of Maryland. An old signals intelligence collection aircraft sits near the museum that houses America’s cryptologic history, a reminder of the past. Through the gates and into the ominous black building the sense is clear, not least from the number of uniformed personnel, that visitors are entering an institution which is firmly part of the military (unlike Britain’s GCHQ, which is civilian). The sign outside also tells a story. As well as National Security Agency, it reads US Cyber Command. The US has created an almost (but not quite) seamless join between espionage in cyberspace and military action. This is reflected institutionally in the fact that Cyber Command, whose job is to carry out military attacks, is joined at the hip with the NSA, whose job is to carry out intelligence missions, with the same military man running both. The NSA has always been close to the military, growing in size to support it in Vietnam; but the computer age added a new factor – that of actual offensive work rather than intelligence support. This came partly because the deep understanding of computer networks resided in the NSA and the idea of replicating that level of capability in a separate organisation was seen as making little sense. But it also reflects a US view that the two activities are closely intertwined: the same skills needed to penetrate a network to gather intelligence are required for the reconnaissance and execution of a military attack. Chris Inglis, former NSA Deputy Director, puts it this way: ‘What is needed is finding, fixing, holding in your mind’s eye the thing that you would either defend, or exploit or attack. And then and only then do you make the final choice about what you are going to do with that.’6

The closely bound nature of military and espionage work in the NSA is reflected in the way the military men who have led it have thought about cyberspace as simply another domain in which to wield power. This was a way of thinking that the air force had popularised in the 1990s, and one which was explained to Air Force General Michael Hayden when he was briefed about the NSA on his arrival as its head in 1999.

They introduced me to this thought of a domain – land, sea, air, space, cyber. Once you are in that place a man of my background begins to understand that, just as in the other domains – land, sea, air and space – the United States wants to be able to freely use that domain and to deny its use to others who would will us harm . . . the language we use to describe what we want to do in cyberspace, it feels an awful lot like air force doctrine of air superiority and air dominance . . . We want to control the space and then, after we control the space, we will work our will there. Now look, that sounds very aggressive. What I’m talking about is in a wartime situation . . . now unfortunately American law, American congressional oversight, divides what you want to do in cyberspace into attack, and defence and exploitation, the espionage thing. But those of us who work in that space know they are all the same thing. They are all about controlling the space.7

From the 1990s there had been a division in the US military. On the one hand were those who saw information warfare and then cyber as a revolutionary new form of warfare – a game-changer akin to nuclear weapons – and who talked about cyber Pearl Harbors. On the other hand there were those who saw it just as a way of exploiting vulnerabilities in systems during conflict, much like the old techniques of electronic warfare. So far, it has only rarely been deployed. In the late 1990s there was discussion about shutting down the Serbian banking system during the conflict over Kosovo. But the US decided against, fearing it would set a precedent and open the way for hackers to take their revenge on the much larger American system. Similar fears expressed by the US Treasury Secretary stopped the manipulation of Saddam Hussein’s bank accounts in 2003 as well.

Apart from the targeted Stuxnet attack, there seem to have been mainly smaller-scale attacks against machines. One leaked document claimed that the CIA and NSA carried out 231 offensive cyber operations in 2011 – different from espionage because they might involve shutting down someone’s network or scrambling the data on a machine. Nearly three-quarters were reported to be against top-priority targets like Iran, Russia, China and North Korea.8 A 2012 Presidential Directive noted that cyber attack ‘can offer unique and unconventional capabilities to advance US national objectives around the world with little or no warning to the adversary or target and with potential effects ranging from subtle to severely damaging’.9

Cyber Command has grown rapidly – to at least 6,000 personnel, almost all military. It consists of more than a hundred teams, some assigned to support each regular combat command, others to focus on defending – or attacking – particular sectors or countries (like China and Iran).10 This involves exploiting NSA spying skills. Insiders say there are differences in culture. Cyber Command is, as noted above, almost entirely military and the chain of command is rigid. The NSA is less than 50 per cent military and, within it, people will ignore seniority to defer to the smartest person in the room.

The rapid advance of Cyber Command looks scary to the outside world. That perhaps is the point. ‘No offence to my friends in Cheltenham, the greatest concentration of cyber power on the planet is at the intersection of the Baltimore-Washington Parkway at Maryland Route 32,’ Michael Hayden says. The fact that he refers to a place rather than the NSA and Cyber Command is important. Also located at the intersection are a vast array of defence contractors feeding off the growing trough of money associated with the buzzword ‘cyber’. In recent years this has included private companies being contracted to carry out offensive hacking. Sometimes they are asked to race to see who can get inside a target system first or find a vulnerability, with the pot of prize money going to the winner. These private contractors provide the US with its version of plausible deniability (as well as profit and pay packets for the people who move back and forth with government). Russia may engage criminal and underground gangs; the US uses companies. When he left office, President Eisenhower warned of the scale and power of a military-industrial complex. Today there is a cyber-industrial complex.

Contractors are also involved in buying up computer vulnerabilities that can be exploited for attack – creating a market, critics say, in which private hackers sell exploits to contractors and middlemen rather than tell the software companies so the flaws can be patched (the companies can easily be outbid if required, with some vulnerabilities going for over $100,000 a time). These arms brokers (a bit like the shady intelligence brokers who operated in places like Istanbul and Brussels a century ago) now hand out business cards at hacker conferences, trying to recruit new staff who are expert at finding vulnerabilities. This is another step in the industrialisation and commercialisation of hacking, extending even into the intelligence space. Intelligence agency recruiters are now looking for the successors to the Hanover crew whom Cliff Stoll found working for the KGB for money in the late 1980s. They were the first, but now this is business. The balance between the old defensive mission of protecting computers and the offensive one of breaking into them has been tilted, critics fear, far too much towards offence by the increasing desire of the US military to ‘stockpile’ vulnerabilities so that it has an arsenal bigger than anyone else’s.

The UK is also developing cyber weapons for use in the event of war. However, their exact utility is not always clear. One person who attended meetings on the subject remembers that discussions were reminiscent of a particular scene in an Austin Powers movie. In the scene, the villain Dr Evil explains that he is going to dispose of the captured British superspy Austin Powers by coming up with an overly elaborate and exotic scheme which will lead to his death. At which the villain’s son, Scott Evil, asks the obvious question everyone who has seen a James Bond movie always wants to know: ‘Why don’t you just shoot him?’ He offers to get a gun but Dr Evil then threatens to ground him. The British official had exactly the same thought as Scott during the meetings on cyber weapons. In other words, it might well be a lot easier to drop a real bomb to do the job than go for some elaborate and destructive cyber weapon which may or may not work (and, as in the Austin Powers and Bond movies, give your opponent the chance to get away). The only time you might prefer a cyber attack on a computer network to a real bomb is – as with Stuxnet – when you want it to be an act of sabotage that is covert and not immediately traceable back to you.

A row took place behind closed doors between GCHQ and the military. The military wanted to wrest more control of cyber weapons from GCHQ, the generals and their officials arguing that cyber weapons are increasingly a core part of fighting wars. They may have feared becoming irrelevant if they lost control of the one part of the budget that seemed to be growing while the rest of their empire was shrinking. GCHQ argued that cyber attack was one end of a spectrum of capabilities that begins with espionage, rather than something that could be isolated and separated off. They also argued that the number of times it would be used overtly by the military would be low: once this happened you would blow your capability by showing your hand. The military tend to work more by having overt capabilities that can be used to deter opponents, but in cyber, the spooks argued, this did not apply. As soon as you reveal what you can do your hand is blown, and your opponent will patch up the vulnerabilities or Zero Days that you had exploited. Far better to keep the capability as intelligence-led and therefore clandestine, they said, with some in GCHQ even suggesting that if the military wanted their own overt capacity, then they should build it in parallel.

In the Second World War, penetrating the opponent’s systems was most valuable in deception; senior intelligence officials say that this may still prove to be the most valuable aspect of computer network espionage in the future. It might be electronic warfare to take out certain enemy systems, but it could more fruitfully involve inserting false information to confuse the enemy – like Operation Fortitude, when dummy communication networks made the Germans think that the main thrust of D-Day would be in Calais and not Normandy. Even if your opponent knows you are inside their network, that in itself can lead them to not trust their own communications and sensors and undermine their ability to act (turning the red dots on screens blue so you attack your own people or don’t know who is friend and who is foe, as one retired American general puts it).

Cyber attack may well become integrated with regular warfare to the point where they are indistinguishable, just as cyber espionage becomes entirely interwoven with regular espionage. The comparisons often made to nuclear weapons are misleading: everyone knew what a nuclear weapon could do – they had seen them used in Japan. The truth is that no one knows what other countries can do when it comes to cyber weapons. People who have worked at the highest levels of the British effort concede they do not really know what the US is capable of doing. Military thinkers are struggling to define what constitutes an attack (as opposed to espionage) and therefore what a proportional response would be. Should a cyber attack be countered by shutting down the computer responsible, wherever it is? What if you strike back and your enemy diverts your attack to shut down a hospital’s computers and then blames you? Can you – as the Pentagon suggests – return fire from a cyber attack with a real-world missile? A missile comes with a return address in a way a cyber attack does not; in a cyber attack the problem is knowing who is attacking you, and whether it is even a state.

In January 2008 a CIA analyst surprised a gathering of infrastructure protection engineers from the US and Europe with a candid statement. ‘We have information, from multiple regions outside the United States, of cyber intrusions into utilities, followed by extortion demands. We suspect, but cannot confirm, that some of these attackers had the benefit of inside knowledge. We have information that cyber attacks have been used to disrupt power equipment in several regions outside the United States. In at least one case, the disruption caused a power outage affecting multiple cities.’11 He made it clear that there had been a debate about whether to disclose this, but the agreement was that it was better that the experts assembled at the conference understood what was really going on. The ability to carry out cyber reconnaissance and attack is not restricted to states. For criminals, the threat of destructive cyber attacks offers a route to extortion by holding to ransom companies that rely on websites – threatening, for instance, to take gambling sites offline on the day of the Grand National. The first reports date as far back as the mid 1990s, when Britain’s Department of Trade and Industry said that it was investigating reports that firms in the City of London had been extorted to pay millions to avoid their computer systems being wiped. However, it said it had not seen hard evidence.12

The dog that has not yet barked is destructive cyber attack by terrorist groups. Such an attack has been a staple of Hollywood thrillers and alarmist briefing papers from the 1990s onwards, but there has been relatively little evidence of it so far. The fears of attacks on infrastructure grew from the 1990s and after the Oklahoma City bombing. Intrusions into utilities in California in the summer of 2001 were traced to Asia, with some wondering if it was Al Qaeda, others thinking it was the Chinese. Laptops found in Afghanistan after the fall of the Taliban in 2001 showed that Al Qaeda may have carried out reconnaissance over the internet, but only in the sense of searching the web for the schematics and engineering designs of things like nuclear power plants and water systems, rather than actually planning to attack them over computer networks. Al Qaeda and related groups have not managed to use the internet to carry out a destructive cyber attack. What no one is sure about is whether this is a calm before the storm. It may be because of a lack of capability (as Stuxnet showed, it takes real effort and work), but it may also be because Al Qaeda is a group that prizes real death over online disruption. That may change, and other groups may develop more effective capabilities as cyber attack techniques proliferate faster than anyone had expected. ‘It’s got to be a worry, and speaking personally I think it’s only a matter of time,’ MI5’s head of cyber explained in 2013. ‘The intent is already there, the capability can only follow in a few years’ time.’

The internet may be more fertile territory for those wishing to spread fear and confusion rather than cause mass casualties. But the border between hacktivists, states and ‘terrorists’ is often in the eye of the beholder (or the accuser). ‘In the morning a person could be a hacktivist, but at the end of the day he needs money. So it is very difficult to draw the line and I’m afraid that criminals and hacktivists will be employed by terrorists,’ argues Eugene Kaspersky. Michael Hayden sees three threat actors: states, criminals and a third group with an agenda. ‘I haven’t developed a good word for them yet, but “hacktivists”, “anarchists”, “nihilists” – people living in their mums’ basements who haven’t talked to the opposite sex in five years,’ he says. ‘Now you’ve got this third group, blessedly the least capable. But I don’t know what motivates them and I certainly don’t know what deters them. I don’t know what kind of demands they’ll make in the future and I’m not so sure they care much about collateral damage.’13 The term hacktivist, though, can easily be applied broadly to those using cyber tools to dissent – definitions are rarely simple and often contested. And separating hacktivists from states is also getting harder.

Spear-phishing emails arrived in the inboxes of employees at the Sony film studio in September 2014. Once they were in, hackers began exploring the network carefully. Finally, in late November they were ready to act. When employees logged on they saw a message from a hacking group calling itself ‘Guardians of Peace’ and found their systems not responding – a similar experience to that of Saudi Aramco. Next came a data dump. Masses of personal information and corporate emails exposed on the internet. Movies the film studio was working on were published, and some of the emails, in which studio executives talked in none-too-flattering terms about celebrities like Angelina Jolie, were there for everyone to see. The attack was linked to the release of a film called The Interview, a comedy which featured a CIA plot to kill the North Korean leader. ‘They came in the house, stole everything, then burnt down the house,’ Michael Lynton, the movie studio’s CEO, told Associated Press. ‘They destroyed servers, computers, wiped them clean of all the data and took all the data.’ Staff had to be paid with paper cheques and dig out old phones to communicate. Lynton admitted he had ‘no playbook’ to deal with the crisis.

The exposure of corporate data was embarrassing, but it took a threat of real physical violence to escalate the crisis. A message suggesting that cinemas might face some kind of terrorist attack if they showed the film was enough for them to back out, leading to criticism from the White House. Was it just a group of hackers? The US authorities were confident in attributing the attack to North Korea. Some computer security experts questioned that, and wondered if an insider was involved. But the North Koreans had made mistakes, and the US administration could also be confident because they could use their wider intelligence machine. The US had been spying on North Korean activity for a few years, implanting malware in its computers from at least 2010. It knew what they were up to.14 That focus had intensified after 2013, when it became clear that North Korean hackers were capable of destructive attacks after they targeted South Korean media and banks.

Look at a global map of internet activity and the northern part of the Korean peninsula looks almost entirely dark – in stark contrast to the south. But while the country’s citizens may be almost entirely cut off from the global World Wide Web and instead relegated to a domestic walled garden, a select few hackers working for the state have honed their skills. From the early 1990s, North Korea seems to have wised up to the idea of using the internet to gather intelligence from its enemies: like many other countries at the time, it saw its value as an equaliser against the technologically advanced West, especially in the wake of the US operation against Iraq in 1991. According to one report, this realisation came after some of the country’s computer experts visited China and saw that it was already undertaking intelligence collection. A team of fifteen were reportedly soon sent over the border to a military academy in Beijing to learn the tricks of the trade. Over the years a steady stream of hackers were sent to China and Russia; defectors said they were envied for their experience of the outside world and the luxuries they were allowed. South Korea now reckons there are 6,000 North Korean hackers working for military and intelligence agencies, some using infrastructure over the border in China. North Korea, like other states, was coming to value computers as not just a means of espionage but also of ‘asymmetric warfare’ – levelling the playing field with the US and others.

US spying on North Korea had enabled it to apportion blame for the attacks, although some asked why, if the intelligence was so good, the US had not warned Sony. But that goes back to the wider question of how far the NSA is there to protect corporate America and how far it wants the secrets of its work exposed. The case was one of the first, though, where one state directly accused another state of attacking a corporate network. A few days after that happened in December 2014, the entire internet and mobile phone data network for North Korea went down – only for a short period, but it was perhaps a signal of capability. The North Koreans blamed the Americans. America certainly has the capacity to attack, but it also knows how vulnerable it is.

The film Dr Strangelove satirised the Cold War desperation to ensure that you could obliterate your enemy before they obliterated you (including even building a doomsday device run by a network of computers to destroy the earth as a last resort). The attacker has a huge advantage in striking first and knocking out an enemy’s systems so they cannot respond – something contemplated in the early, dangerous days of the Cold War with nuclear weapons. The fear of not being able to respond fast enough is driving research in computers today as it did with SAGE in the nuclear past. And defending countries in cyberspace is becoming increasingly automated. Taking humans out of the loop, defence officials claim, is the only way of stopping an attack at network speed (before perhaps it confuses you or knocks out your systems). The only way of blocking malicious cyber attacks, they say, is by monitoring all the traffic and analysing its patterns to understand what looks dangerous and then stopping that coming in – a kind of automated defensive monitoring. The next question, though, is whether you also want to return fire automatically. A US system, almost comically named MonsterMind, was reportedly being considered to do just this.15 Even during the dark days of the Cold War, the President was expected to have a few minutes to decide whether or not to retaliate against a suspected Soviet missile strike. In the cyber world there might be only milliseconds to judge whether or not to shut down a machine that is sending malicious code at you, wherever it may be. But in an online world in which so much can be obscured and confused (hosting an attack from a third country’s computers, for instance) and in which deception (or at least anonymity) is a fundamental tenet of the internet and the work of spies, could we really be sure we were striking back at the right computers in the right country? In a sense this returns us to the era of nuclear-tipped missiles ready to be launched automatically against Soviet bombers, and Roger Schell’s question to the US Air Force back in the 1960s: do we really trust computers with decisions of life and death?
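A minimal sketch of that ‘automated defensive monitoring’ idea, written in Python, gives a sense of how little judgement sits in the loop. The traffic figures, addresses and ten-times threshold are invented purely for illustration; no real system is being described.

# Toy illustration of automated defensive monitoring: compare the traffic each
# source is sending now against a learned baseline and flag sharp deviations,
# with no human in the loop. All numbers and addresses below are hypothetical.

def flag_anomalies(baseline, current, multiplier=10.0):
    """baseline: typical bytes per interval for each known source.
    current: bytes seen from each source in the latest interval.
    Returns sources sending far more than usual, or never seen before:
    the candidates an automated defence would block (or 'return fire' at)."""
    flagged = []
    for src, volume in current.items():
        usual = baseline.get(src)
        if usual is None or volume > usual * multiplier:
            flagged.append(src)
    return flagged

if __name__ == "__main__":
    baseline = {"198.51.100.7": 1200, "198.51.100.8": 1350}   # learned in quieter times
    current = {"198.51.100.7": 1250, "198.51.100.8": 90000,   # one source has spiked
               "203.0.113.66": 40000}                          # one source is brand new
    for src in flag_anomalies(baseline, current):
        print("would block", src)   # the contested step: acting in milliseconds, automatically

Even in this toy form the problem the chapter closes on is visible: the list of addresses to act against says nothing about who is really behind them, or in which country an automatic response would land.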