8 Conflict, security and computer ethics
John Arquilla
In the long history of human development, almost all technological advances have been quickly harnessed for purposes of war-making.
The hunter-gatherer's atlatl – a handheld rod-and-thong with a socket into which a spear was inserted – greatly extended his
throwing range, and was good for bringing down prey and handy for fighting competing tribesmen. The same was true of the bow
and arrow as well. A kind of dual-use phenomenon – with the same advances having both economic and military applications –
soon became evident. For example, the wheel helped move goods in carts, but also led to the war chariot. And so on, since
antiquity. The pattern held right up through the industrial revolution, which began over two centuries ago, with steam revolutionizing
both land and maritime trade, and giving far greater mobility to armies and navies. Next came aircraft, whose uses seemed
to apply to conflict before they did to commerce, since very soon after humankind took to the air a century ago, bombs began
to rain down from the sky. The same pattern also held for the atom, with the very first use of nuclear power in 1945 being
to kill some hundreds of thousands of innocent civilians. ‘Atoms for peace’ came later. Today, early on in the information
age, it is clear that the computer too has been yoked to serve Mars, vastly empowering the complex manoeuvres of modern militaries
and the disruptive aims of terrorist networks. Whatever the social and commercial benefits of computing – and they are enormous
– silicon-based intelligence is fast becoming a central element in military and strategic affairs.
Still, throughout all these developments over the past several millennia, there has often been some light to be glimpsed around
the edges of the darkness of war. It has taken the form of ethical guidelines for conducting conflicts, strictures that are
almost as old as the ways of war themselves. These behavioural norms have shown a great deal of consistency across time and
culture: fight in self-defence, or to protect others; employ proportionate levels of violence; honour the immunity of noncombatants;
and, finally, go to war only as a last resort, and on the basis of decisions taken by duly constituted authority (Walzer
1977, Howard
et al.
1994). Notions of
‘just war’, though seldom fully observed, have thus accompanied many, perhaps most, conflicts for millennia, sometimes ameliorating
the effects of what can only be labelled organized savagery. In this way, self-extinction of the species has, so far, been
avoided. Indeed, it is hard to think of our having successfully negotiated the nuclear rapids of the
Cold War confrontation in the absence of an ethical conscience continually reminding both sides of the inherently disproportionate
destructiveness – largely aimed at the innocent – of atomic weapons. But, just as the end of the Cold War was overturning
the old international system and holding out new prospects for a deep, lasting peace, the coming of computers may have unwittingly
helped to put war and terror very much ‘back in business’.
Computers have contributed to revitalizing the realm of conflict in three principal areas. First, in terms of conventional
military operations, computers have completely revolutionized communications, making complex new modes of field operations
possible. Next, computers have made it possible to analyse oceans of sensor data quite swiftly, enabling the military, intelligence
and law enforcement communities to take action in ever more timely and targeted ways. This powerful new analytic capacity,
it must be noted, can serve aggressors or defenders equally well, whether they are nations or terrorist networks. Last, the
growing dependence of societies and their militaries on advanced information and communications technologies has given birth
to cyberspace-based forms of strategic attack, designed to cause costly, crippling disruptions. These may also be undertaken
by either nations or networks, perhaps even by individuals or very small groups, given the increasing ability to maintain
one's anonymity while covertly scanning the target. Indeed, the veil of anonymity may prove hard to pierce, both during and in the
wake of a cyber attack.
All these emerging possibilities for engaging in new, information-technology-mediated modes of conflict may pose a variety
of puzzling ethical dilemmas: for war-fighting militaries in the field, for intelligence-gathering services, perhaps even
for terrorists. And thinking these matters through from an ethical perspective may produce some surprising results. The most
troubling may be that both war and terror have grown more ‘thinkable’ and ethically acceptable due to the rise of disruptive,
cyberspace-based means of attack. However, the same cannot likely be said for individual liberty and privacy, which may come
under sharp, devaluing pressure as a result of efforts to detect, deter or defend against the various forms of cyber attack.
8.1 Computers in battle: ethical considerations
In terms of the first area of inquiry, computers on the battlefield, the principal worry is that somehow communications links
will be severed and armies, blinded in this manner, will be seriously weakened and much less able to ward off physical attacks.
From the time that computers first came to be appreciated
as potential battlefield tools, concerns have been expressed about whether they would actually work in dusty, wet or otherwise
dirty environments, where they would be routinely banged around (Bellin and Chapman
1987). These worries were effectively dealt with by robust designs and much redundancy, and advanced militaries have come to depend
heavily upon computers for battle coordination, logistics and other essential aspects of their operations. They are also embracing
an increasingly wide range of
unmanned, and sometimes autonomous, vehicles, aircraft and weapons systems – a trend that began over two decades ago (Barnaby
1986, De Landa
1991), but which is accelerating.
With this dependency comes vulnerability. If an army's computers are disrupted by cyber means, or intruded upon with the intention
of spying out their next moves in the field, the consequences can be grave. An advanced military could be crippled due to
a loss, even temporary, of its information flows (Arquilla and Ronfeldt 1993). Would it be ethical to use explosive weapons
to wipe out a force that had lost much of its functionality because of a disruptive computer attack? While it would almost
surely be acceptable to use cyber means to cause such confusion among enemy forces, in order for this use of computers to
stay on the ethical high ground, the side with the advantage should show some mercy for the helpless, much as the US military did in Kuwait in February 1991, when the slaughter of retreating Iraqi troops along the ‘Highway of Death’
was quickly curtailed (Hallion
1992, pp. 235–237). Though their disarray had not been caused by computer attacks – their discomfiture was due to gravity bombs
rather than logic bombs – the Iraqis were nonetheless in a helpless position, and American forces responded, after engaging
in some lethal pummelling, with restraint. A similar circumspection would be called for if the chaos among enemy forces had
been caused by computer network attacks.
All this said, the possibility of disabling an adversary's forces in the field by means of cyberspace-based attacks has to
be viewed as quite attractive. The need for bloody attritional struggles recedes if one side in a conflict might lose much
of its cohesion as a result of just a few smart ‘clicks’. It is even possible to think in third-party terms, where a power
(or a group of powers, or even a peacekeeping entity authorized by the United Nations) would act from the outside to disable
the military of an aggressor threatening to start a conflict, or would disable both sides’ forces. The sheer life-saving value
of such action might outweigh traditional just war strictures about using force only as a ‘last resort’. Yet, even in this
area care would have to be taken, lest one side blame the other and become even more resolved to continue the fight by all
means available. In the case of nuclear-armed rivals like
India and Pakistan, for example, it might be more prudent to forgo cyber disruption, given the countervailing risk of escalation.
In this particular instance, allowing a clearly limited conventional (or even unconventional) conflict to play itself out – rather than trying to engage in cyber deterrence – might be ethically preferable.
8.2 ‘Cyber-snooping’ by nation-states and terrorist networks
Consider now the ethics of obtaining information regarding one's adversaries by means of cyber intrusions and then mining
and parsing the data in ways that might allow their vulnerabilities to be exploited in the ‘real world’. This kind of cyber
activity is, to date, the most prevalent. It is reflected in commerce, under the rubrics of industrial espionage and advanced
marketing techniques, both of which can be quite dodgy in ethical terms. In the realm of crime, almost all the traditional
ruses – i.e., ‘con games’ – and other sorts of long-practised deceptions have migrated into cyberspace. It is hardly surprising,
then, that professional militaries, intelligence operatives, insurgents and terrorists have all come to rely heavily upon
cyber-snooping on each other. But these actors, unlike, say, criminals, are more likely to justify their illicit data-mining
operations in terms of serving their cause in time of war. Knowing what is happening on the ‘other side of the hill’ has long
been considered a critical element in the formulation of effective strategies during times of conflict.
Despite their obvious utility, such activities could be deemed unethical in some settings. Yet when national military services hack into an adversary's information systems, learn its ways and plans – and then act upon this knowledge – such undertakings should not be considered unethical per se, certainly not when the cyber-snooping is aimed at another national military. But if the fundamental strategic problem is that the opponent is no longer a nation, the ethical
situation grows more complex. For example, a well-hidden network, requiring much patient trawling for tracks and signs – and
a great deal of sifting through of information on private individuals that exists in cyberspace – poses ethical problems aplenty.
The terrorists may be very dangerous, and their activities may be designed to cause the deaths of noncombatants; but this
still may not justify intrusive, perhaps even illegal, searches that sweep up vast amounts of data on the innocent (Brown
2003, Heymann
2003, Leone and Anrig
2004, Rosen
2004). These sorts of issues can probably best be answered in an ethical way by hearkening to one of Thomas Aquinas' notions about ‘fighting justly’ (
jus in bello): strive to do more overall good than harm. As Aquinas put it so well in the
Summa Theologica, in
Part II, the reply to Question 40: ‘For the true followers of God even wars are peaceful if they are waged not out of greed or cruelty
but for the sake of peace, to restrain the evildoers and assist the good.’
Thus, one might reason that the overall, enduring harm done by engaging in unethical search procedures in cyberspace could
be outweighed by
the benefit of detecting and preempting an impending terrorist attack. Still, this poses a question about the value of such
intrusions in the event that no attack plans are detected or thwarted. And if attempts are made to mitigate this problem by
forcing the watchers to obtain ‘probable cause’ warrants – or their equivalent – then the ability to detect and track an incipient
terrorist conspiracy might be impaired or seriously slowed, with grave consequences. There is no simple answer, just the general
sense that most people in most societies would probably be willing to accept increased intrusions upon their information privacy
– as they do with regard to their privacy in other contexts – in return for enhanced physical security. The art here will
be to find the equilibrium between compromises to
privacy and enhanced security – and then monitor continuously for any drift in or disturbance to this equilibrium.
In the spirit of doing more good than harm, it may also be possible to argue that cyber-intrusions, even very aggressive and
sustained ones, if accompanied by a drop in older forms of surveillance, might produce a situation in which the overall violations of individual privacy rights would be diminished. During
the last decade of the Cold War, technical surveillance capabilities expanded greatly, in particular those that empowered
automated search agents to scan various forms of communication for sensitive key words that might indicate illicit activity.
They came with names like ‘Echelon’ and ‘Semantic Forests’ and other code words that cannot be mentioned openly. This sort of mass-sifting approach
guaranteed that a great deal of presumably private communications would be intruded upon. In the wake of 9/11, pressures mounted to step up such efforts – to little apparent effect, given the persistence, even the growth, of al Qaeda and its affiliates over the years. However, with the rise of sophisticated new cyber-surveillance techniques, like
keystroke reconstruction and other proprietary ‘back hacking’ capabilities, the prospect of learning more in this way – rather
than by simply trawling through telephone calls, for example – suggests an approach that will be both ethically superior and
more practically effective in the fight against terrorism. By allowing and engaging in more of this sort of cyber-snooping,
while at the same time reducing the level of more indiscriminate intrusions into telephone and other forms of communications,
it may be possible both to do good and to do well.
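To see why the mass-sifting approach described above is inherently indiscriminate, consider a deliberately simplified sketch of a keyword-scanning ‘search agent’. This is an illustration only – not a description of how Echelon or any actual system worked – and the watch list and message traffic are invented for the example.

```python
# Illustrative sketch of a naive keyword-scanning 'search agent'.
# NOT how Echelon or any real system worked; the watch list and the
# message traffic below are invented for this example.

KEYWORDS = {"detonator", "transfer", "safe house"}  # hypothetical watch list

def flag_message(text: str) -> bool:
    """Return True if any watch-list keyword appears in the message."""
    lowered = text.lower()
    return any(kw in lowered for kw in KEYWORDS)

# A tiny invented traffic sample: one suspicious message amid
# ordinary private chatter.
messages = [
    "The wire transfer for your tuition cleared this morning.",
    "Pick up milk on the way home.",
    "Move the detonator to the safe house tonight.",
]

flagged = [m for m in messages if flag_message(m)]
print(f"{len(flagged)} of {len(messages)} messages flagged for human review:")
for m in flagged:
    print(" -", m)
```

Even this toy filter sweeps up the innocent tuition message along with the genuinely suspicious one, because ‘transfer’ appears in both. Scaled to millions of communications, that false-positive pattern is precisely why mass sifting guarantees intrusion upon presumably private traffic.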
If governments and their military, intelligence and law enforcement arms are being forced to thread their way through an ethical
thicket in order to combat terror, the same cannot be said for the terrorists themselves. At first blush, their data-mining
activities seem to fall completely in the unethical realm. After all, they are seeking information so that they can attack
and kill innocents. Yet there may be a way to view their obtaining targeting information via cyberspace-based operations as
ethical. Certainly, one has to think very creatively to conjure up any sense of how an inherently unethical phenomenon like
terrorism – the deliberate targeting of noncombatants as objects of symbolic acts of violence – can possibly be morally mitigated
by cyber-spying. Yet, on reflection, there do appear to be a few ways in which terrorism might be made less odious, if guided
by cyberspace-based intelligence-gathering operations.
For example, terrorists engaging in skilful data mining might be able to generate enough detailed information that they would
be able to mount their strikes more precisely – and might even be able to shift towards military rather than civilian targets
– allowing them to achieve significant psychological effects, while shedding little noncombatant blood. Perhaps they could
operate without any killing at all, if the kind of precision obtained allowed them to go directly after infrastructure targets.
To the extent to which cyber-targeting allowed this, a shift in terrorism from lethal destruction to costly but (mostly) nonlethal
disruption could be viewed as a lesser evil. Indeed, if terrorists embraced cyberspace as a platform for a major shift to
disruptive
‘infrastructure warfare’, they would be moving more in the direction of the stated aims of strategic aerial bombardment than
of traditional terror (Rattray
2001). And they might end up doing better than aerial bombers, as the latter have always inflicted a lot of destruction – including
the killing of countless innocents on the ground, over a century of largely ineffective air campaigns – in order to achieve
their disruptive effects (Pape
1996). Skilfully guided terrorists could, with a minimum requirement for explosives or other ordnance, achieve high levels of
disruption to power, water and transportation infrastructures – causing their opponents to suffer huge economic losses, but
with little attendant loss of life.
8.3 The ‘strategic attack paradigm’
This last point leads us to consider the third major realm of
‘information warfare’: its use as a form of strategic attack (Adams
1998). So far, the ideas advanced about the use of cyberspace in conflict and terrorism have limited the use of the virtual domain
to information gathering as an enabler of ‘precision targeting’ in the physical world. But there is a very real prospect,
in the coming years, that forms of cyber warfare and cyber terror will make use of computers as much more than just ‘guidance
systems’. Instead, bits and bytes may soon be used directly as weapons. If this were to happen, then it would be possible
to think of both war and terror evolving in ways that would be less violent and destructive, even though they would at the
same time be very disruptive and would impose heavy economic costs. The new ethical dilemma that would arise might follow
these lines: cyber warfare and cyber terror, by allowing conflict to be conducted in much less bloody ways, might make war
and terror far more ‘thinkable’. Thus, efforts to wage war more ethically – from the virtual domain – might have the perverse
effect of making war and terror more prevalent. As Confederate General Robert E. Lee once put it, during the
American Civil War, ‘It is well that war is so terrible, otherwise we should grow too fond of it.’ So it could be
if cyberspace-based forms of attack, which can be mounted at small cost and achieve disproportionate results, were to become
the new norm in conflict.
Frederik Pohl, the great writer of speculative fiction, neatly captured this dilemma in his dystopian classic,
The Cool War (
1981). In this tale, mutual fear that big new weapons had made war far too destructive led many nations to cultivate
covert capabilities for mounting costly, disruptive attacks on others’
infrastructures – from disturbing financial markets to introducing disease viruses into the food chain. Pohl's world quickly
becomes a Hobbesian one of constant warfare, economic and environmental degradation, and growing hopelessness. The moral of
his story is that efforts to sharply reduce war's catastrophic potential may, in an unintended way, make things much worse – even to the extent of goading some angry nations to respond to such small attacks with their arsenals of traditional weapons.
So, there is not only the ‘horizontal’ threat of a rapid growth in small, covert wars, but also a ‘vertical threat’ of escalation
to even deadlier forms of conflict. Thus, it is possible to think in terms of violent physical responses to virtual attacks
becoming the norm.
In the setting of our real world, the ethical dilemmas that would attend the rise of this ‘cool war’ ethos are completely
contingent upon there being an actual ability to mount attacks in this fashion. And today there is still much debate about
the reality of this potential threat (Libicki
2007), as there has never been a crippling ‘digital Pearl Harbor’ or ‘virtual Copenhagen’ – the latter incident refers to Lord
Nelson's surprise onslaught against the Danish fleet during the Napoleonic Wars that sharply raised aspiring naval powers’
fears (in Germany, especially) of Britain waging preventive war from the sea. Yet, there have been several troubling signs
that have confirmed advanced societies’ real vulnerabilities to such attacks. Some of this evidence has come in the form of
exercises or tests, like the
US military's ‘Eligible Receiver’ a decade ago, in which the cyber attack team, using only tools openly available on the Web,
is said to have crippled major military elements’ abilities to fight (see Verton
2003, pp. 31–34). More recent exercises, under such code names as
‘Silent Horizon’ and ‘Cyber Storm’, have confirmed the great and growing vulnerability of both military and civilian infrastructures
to grave disruptions from cyber attack. And, in the realm of real attacks, the sustained intrusions into sensitive systems
over the past dozen years, known under such names as
‘Moonlight Maze’ and ‘Titan Rain’, provide even more chilling confirmation. These incidents go well beyond even the darker
aspects of the phenomenon of
‘hacktivism’ – when computers are sometimes used, instead of to mobilize demonstrators, to mount ‘denial of service’ attacks
that signal displeasure with the foreign or domestic policies of some government entity. To date, the most serious incidents
of this sort have come, apparently, out of
Russia. In
2007, hackers caused some brief, limited disruptions in Estonia in the wake of the removal of a statue of a Russian soldier
from a prominent location. Then, in the summer of 2008, a wide range of Georgian critical information systems were brought
down during that country's week-long war with Russia over the fate of South Ossetia.
8.4 Prospects for cyber deterrence and arms control
Whatever the actual pace of development of ‘strategic warfare in cyberspace’ (Gregory Rattray's term) – and there is much
debate about this – all can agree that vulnerabilities to cyberspace-based forms of attack are real and growing. However,
there has as yet been no ‘existential proof’ of a serious capability, as noted above, given the absence of a major attack
mounted either by a nation or a network. Assuming that nations do have some capabilities along these lines – and the Moonlight Maze and Titan Rain intelligence-gathering intrusions suggest that Russia and China, respectively, may well have
access to such offensive means – the reasons why they have been reluctant to mount actual attacks should be carefully considered.
Certainly they have not refrained because of a lack of study or failure to invest in crafting the requisite offensive capabilities.
Most advanced countries’ security establishments have been obsessed with the problem of cyber attack since at least the end
of the Cold War. Many have invested billions in learning how to defend themselves; and good defence implies a deep understanding
of how to mount offensives. No, sheer incapacity cannot be the reason for such restraint.
The emergence of a kind of ‘cyber deterrence’ may explain the wariness of nations about waging war in and from the virtual
domain. Very simply, it may be that Frederik Pohl's argument about the false attractiveness of such covert warfare – and the
deleterious consequences of the rise of this new way of war – has been well understood by leaders capable of foreseeing the deep, troubling endgame that Pohl described. In the real world, mutual vulnerability to ‘cool war’ – or to an angry, escalatory return to more traditional violence – may be the driving force behind the kind of
durable ‘cyber peace’ that seems to be holding, for now, among nations. And there may be a very serious reluctance to be known as
the nation that took the first step down this path.
The author's own experiences in this area tend to confirm this point – the result of having been asked by the US government
to co-chair a meeting between
Russian and American specialists in ‘information warfare’ in the mid-1990s. The week-long session remains largely ‘not for
attribution’, but what can be said is that there was clearly much concern on both sides about the great and growing vulnerability
of each society – and each military – to cyber attacks. In the wake of this meeting, the Russians introduced a measure in
the
United Nations
(General Assembly Resolution 53/70) calling for all nations to refrain from making cyberspace-based attacks. The United States
blocked this resolution – just as the Soviets had blocked the American-proposed post-World War II Baruch Plan to cease the development of nuclear
weapons and put those that already existed under international control. Still, the Russians continue to raise this issue in
the United Nations from time to time – and it may behove the Americans to behave less obstructively at some point.
Despite the American rebuff of the Russian call for a kind of ‘behaviour-based’ cyber arms control regime – think of it as
being akin to the pleas a generation ago for a ‘no-first-use’ pledge in the nuclear realm – the author was witness in 1999
to some very prudential behaviour. Little can be said openly, save that, during the Kosovo War, the United States may have had a considerable capacity for mounting cyber attacks against the hidden assets of
Serb leader Slobodan Milosevic and his cronies. Yet these capabilities were not employed. At the time, the author was working
directly with the ‘director of information warfare’ in the Pentagon, and participated in strategy and policy discussions at
high levels, coming away with the clear sense that the commander-in-chief at the time, President Clinton, was unwilling to
signal to the world, by ordering such attacks, that this was an acceptable form of war-making. For such a precedent might
lead adversaries in future conflicts to resort to cyberspace-based attacks – and the US system would provide the richest and,
in many ways, most vulnerable set of targets in the world.
In the decade since the (first?) Kosovo War – tensions rose once again in 2008, in the wake of the Kosovars’ self-declared
separation from Serbia – the failed Russian effort to advance the concept of some sort of behaviour-based arms control regime
has nevertheless continued to resonate in important ways. For example, scientists, lawyers and ethicists all tend to agree
that a behaviour-based solution should be sought because there is simply no way to control the diffusion of cyber attack technologies
and tools (Hollis
2007). Ironically, the Russians themselves have come under much suspicion that they are the principal malefactors in a series
of troubling intrusions into various nations’ infospheres. The Americans, for their part, just as ironically, have continued
to oppose the creation of an international ethical or legal control regime over cyber warfare – yet they have also continued
to be extremely circumspect about not actually engaging in such attacks against other countries. And so it seems that nations
are going to continue to tread with care when it comes to strategic warfare in the virtual domain.
8.5 Will terrorists wage war from cyberspace?
The behaviour of nations, likely driven by concerns about reputation and vulnerability to retaliation in kind – or worse,
to escalatory violence, even
to the nuclear level as a Russian strategist once threatened (Thomas
1997, p. 77) – cannot begin to explain why terrorist networks have failed to become serious cyber warriors. Unlike nations, networks
– particularly those such as al Qaeda and its affiliates, which operate covertly in scores of countries – have no territorial
‘homeland security’ to worry about. They have little or no infrastructure of their own to protect. And, as people whose basic
concept of operations is to kill innocent noncombatants, coming to be known for their disruptive cyber skills could hardly
be viewed as deepening the damage to their reputation. Indeed, a shift from destruction to fundamentally disruptive – and
mostly nonlethal – acts would seem to indicate a great ethical improvement.
With this in mind, it is important to consider the conditions under which al Qaeda or other terrorist networks might begin to emphasize cyberspace-based attacks, and whether they would continue to pursue
the same level of violence in the physical world even as they began to disrupt the virtual domain. There is a possibility,
however, that the need for traditional violence could diminish; and that cultivating a capacity for ‘mass disruption’ might
even reduce the terrorists’ desire to develop or acquire nuclear, biological or chemical weapons of mass destruction. These
are the central issues that define the discussion of any possible shift towards cyber terrorism; and they should all be considered
– from both a practical and an ethical perspective.
Perhaps the single greatest impediment to terror networks’ development of top-flight cyber skills lies in the area of human
capital. It takes many years – at least a decade – to become a first-rate computer scientist, as opposed to just a disaffected ‘techie’ or script kiddie capable of downloading (but not fully exploiting) attack tools from the Web. And this
presupposes study at a leading institution, or under the tutelage of a very experienced master operator. This requires a long-term
investment strategy on the part of the terror network, and a willingness to risk the discovery of the asset – or his or her
disillusionment, failure or disaffection – during the training period. The alternatives to a traditional computer science
education for one's operatives would be either to pursue the private tutorial route, or to consider the recruitment of
hacker mercenaries. Both of these options carry serious risks, as either the tutor or the cyber mercenaries could be part
of a counter-terror infiltration ‘sting’, or they could be under surveillance by law enforcement or intelligence services.
If such operatives were recruited from or through a criminal organization, there would also be the possibility of betrayal by the crime lords,
who might find it profitable to sell them out. So, in the face of high ‘entry costs’ and serious operational security risks,
terror networks might be self-deterred from pursuing a serious capability for cyber warfare. It may be that dispersed terrorist
cells and nodes, which rely heavily on cyberspace for communications, training, propaganda and operational control (Weimann
2006), are loath to compromise, perhaps even lose, these virtual capabilities by running the risk of bringing a counter-terrorist
predator into their midst.
Yet there are conditions that could possibly justify bearing such costs and undertaking such risks. For example, the prospect
of fundamentally transforming terrorism itself – changing it from a bloody exercise in compellence to a mode of conflict driven
by economic and psychological costs imposed on hapless, often helpless targets – might prove very attractive. Pursue, however
briefly, a thought experiment in which al Qaeda or some other terrorist group with an ambitious agenda has developed or acquired a serious capability for mounting
cyberspace-based attacks. Power grids are taken down, financial markets are disrupted, and military command and control is compromised at critical moments.
All in the name of the group's call for some policy shift, e.g. removal of American troops from some or all Muslim countries;
agreement to hold democratic elections in Saudi Arabia or another authoritarian monarchy; or formal establishment of Palestine
as a state. However understandable these goals might be, their intrinsic worth has always been undermined – and overwhelming
public opposition has been sparked, in Muslim countries as well – by terrorists’ deliberate infliction of deadly violence
upon noncombatants. In this thought experiment, though, the deadly violence would be replaced by disruptive acts that could
actually serve to put the spotlight more on the terrorists’ goals rather than on their means of achieving them. The ethics
of the terrorists would have improved – at least at the margins – and the likelihood of attaining their objectives might also
rise a bit, or perhaps a lot.
The foregoing suggests that such a state of affairs could both increase the effectiveness of terrorist campaigns and reduce
the level of international opprobrium that usually accompanies their activities. In the first
Russo-Chechen war (1994–1996), the rebels showed how this could be done in the real world by hijacking aircraft and later
a Black Sea ferry – but then releasing their captives after having called attention to their cause. The Chechen insurgents
won this round against the Russians (Lieven
1998, Gall and de Waal
1998), but returned to more traditionally violent acts of terror in their second war – including hostage seizures at such diverse
targets as primary schools and opera houses (each of which ended with large numbers of innocents killed) – and lost most of
the world's sympathy for their cause. A cautionary tale that terrorists should heed, perhaps.
8.6 Three paths to cyber terror
Coming back to the current terror war, it seems clear that al Qaeda's development of a capacity for cyberspace-based warfare
would also track very nicely with Osama bin Laden's calls, in videotaped messages that aired on 24 January 2005 and 7 September
2007, for his jihadist adherents to wage ‘economic warfare’ against the United States and its allies. Yet there is a risk
that traditional terrorists – more alpha males than alpha geeks – might reject such a shift, much as the Chechens apparently
did. They could feel emasculated by the imposition of any serious restrictions on their violent acts. And along with this
psychological antipathy, worries about the serious costs and potentially grave security risks of pursuing a cyber warfare
programme could forestall the emergence of this sort of terrorist innovation. But there are at least three paths ahead towards
a cyber capability that would still leave plenty of room for committing old-style acts of violence.
First, a terrorist group, such as al Qaeda, could continue its existing level of violent activities – for example, the insurgencies
in Iraq and Afghanistan, coupled with occasional ‘spectaculars’ (e.g. 9/11, Bali or Madrid) in other parts of the world
– and simply encourage new cells to form up around cyber capabilities. This strategy requires only a ‘call to virtual arms’.
In a loose-jointed network – again, al Qaeda is a good example – sympathizers are sure to hear and heed such a call, and a
capacity for cyber warfare could emerge that in no way imperilled the network core or compromised its security. But such an
approach is fairly passive, requiring faith in the likelihood that those with the requisite skills will sign up for the cause,
or the jihad. Further, this ‘grafting on’ approach would do nothing to reduce the international opprobrium that would be heaped
on the organization for its continuing bloody attacks on noncombatants.
A second approach would be for a terrorist group to deliberately scale back on the number of acts of physical violence that
it commits, closely integrating such strikes as would be allowed with cyber capabilities. An example of this strategy would
be to use some type of cyber attack – aimed at, say, causing a massive power outage – that would put people ‘on the street’, making them vulnerable to attacks with explosives, chemicals, even a
radiological ‘dirty bomb’. While this might make moving into the virtual realm more attractive to the terrorists, the fact
that cyber attacks were being mounted in order to enhance the effects of physical attacks would probably increase the world's
antipathy towards the terrorists.
A third option for the terrorists would be to make the strategic choice to emphasize cyber attacks in the future, while restricting
the network's acts of physical violence to clearly military targets. This poses the prospect that the rise of cyber warfare
could improve a terrorist organization's ‘ethical health’. And, as to physical attacks, they would be conducted in a more
‘just’ fashion, given the emphasis on military targets. Some terrorist groups – notably the
IRA – have tried to be very discriminate about their targeting choices, demonstrating a clear concern about ethical matters
that helped make it possible for them to be seen as viable partners in peace negotiations. It is hard to think of al Qaeda
in this fashion today; but the vision of a terrorist group that struck only military targets with lethal violence and then
caused
costly but nonlethal cyber disruptions with the rest of its energies is an odd, intriguing one.
The principal point of the
preceding section is that the rise of a real capability for cyber warfare might ‘make terrorism better’, in ethical terms. Not giving it quite
a Robin Hood patina, but redefining its image in far more palatable ways. And if this sort of cyberspace-based rehabilitation
were possible for terrorists, it might also be a path that traditional militaries could consider. Why shouldn't a nation-state
strive for a capacity to knock out power with logic bombs rather than iron bombs? Classical interstate war itself might be
‘made better’ by such means. Probably not, however, if such wars were waged secretly, conjuring visions of Pohl's ‘cool war’
world. But if cyberspace-based warfare were conducted in a more openly acknowledged manner – say, in the wake of an actual
declaration of war – and followed the laws and ethics of armed conflict, then a shift towards disruptive rather than destructive
wars might well take place. Yet, even in this instance, there would be the ethical concern that war had been made not only
better, but too attractive – and so would increase the penchant for going to war in the first place.
Thus a world with more, but less bloody, armed conflicts might emerge in the wake of cyberspace-based warfare's rise. In some
respects, this information-age effect would parallel the impact of the early spread of
nuclear weapons over a half century ago. As Kenneth Waltz, dean of the ‘neorealist’ school of thought in international relations,
put it so well at the dawn of that era, ‘mutual fear of big weapons may produce, instead of peace, a spate of smaller wars’
(
1959, p. 236). But there is a big difference between the deterrent effect of the threat of escalation from conventional to nuclear
weapons, and that engendered by a shift from virtual to physical violence. Where prudence seemed to guide the careful avoidance
of escalatory actions when nuclear war loomed ahead, there may be less worry that cyberspace-based actions might lead to some
sort of more traditional military confrontation. And, for all its chest-thumping quality, any threat to use weapons of mass
destruction in response to cyber disruption may simply not be credible. Indeed, during the heyday of the Cold War, both the
notions of ‘massive retaliation’ and ‘flexible response’, which posited first uses of nuclear weapons in response to conventional
acts of armed aggression, were also viewed sceptically. As the great strategic theorist Thomas Schelling once observed, massive
retaliation as a doctrine was ‘in decline from the moment it was enunciated’ (Schelling
1966, p. 190). Back then, this scepticism proved healthy, and put a premium on developing solid traditional military capacities
as the backbone of deterrence. Today, and tomorrow, a similar sort of flinty-eyed scepticism about massive retaliatory responses
to cyber attacks, coupled with ethical concerns about proportionality, may point just as clearly to the need to emphasize
cyber defensive means – rather than punitive threats – as the best way to deter cyber attacks.
8.7 Conclusion: cyber war is more ‘thinkable’, more ethical
When it comes to notions of ‘just wars’ and ‘fighting justly’, the information revolution seems to be creating a new set of
cross-cutting ethical conundrums for nation-states. The rise of strategic warfare in cyberspace makes it easier for them to
go to war unjustly – especially in terms of starting a fight as an early option rather than as a last resort. Yet, at the
same time, fighting principally in cyberspace could do a lot to mitigate the awful excesses that have led so many wars in
the ‘real world’ to be waged in an unjust manner. Against this, however, is a greater ethical concern about escalation risks.
A nation that has been attacked by another in cyberspace may choose either to respond in kind – setting off something like
Frederik Pohl's ‘cool war’ – or may opt for a self-defence strategy of retaliating with physical military force. Either way, the conflict
escalates, and ethical considerations of noncombatant immunity, proportionality and ‘doing more good than harm’ are likely
to be trampled.
So thought must be given to what is to be done. Soldiers can now contemplate ‘clicking for their countries’ in ever more effective
ways (Dunnigan
1996, Alexander 1999). Virtual mobilization periods might come to be reckoned in microseconds, putting enormous stress on crisis
decision-making. Imagine a latter-day Cuban missile crisis that unfolded in thirteen minutes rather than thirteen days. Add
to these considerations the fact that rich, inviting, poorly protected civilian targets abound, and one can easily see how
a cyberspace-based mode of conflict may indeed become a more attractive policy option, upending traditional wariness about
war-making. Further, this attraction may be reinforced by the notion that, while inflicting potentially huge economic costs
on one's adversaries, this can be done with little loss of life. Thus, a rationalistic attempt might be made to offset going
to
war unjustly by actually waging the war in a more just – that is, far less lethal – fashion. A most curious and contradictory
development, unparalleled, it seems, in the history of conflict.
These ethical issues are further complicated by the rise of nonstate actors, ranging from individuals to small groups, and
on to full-blown networks of actors. The capacity to wage war is no longer largely under the strict control of nations. Although
we have seen previous eras in which violence migrated into the hands of smaller groups – witness the parallel histories of
pirates and bandits (Hobsbawm
1969, Pennell
2001) – the sheer accessibility of cyber warfare capabilities to tens, perhaps hundreds, of millions of people is a development
without historical precedent. And where piracy, banditry and
early examples of terror networks have sometimes proven nettlesome, they have only rarely – and briefly – won out over their
nation-state adversaries. This may not be the case in cyberspace. If so, then the ethical dimensions of acts of war and terror
conducted by networks of individuals, operating via the virtual realm, will become just as important as the considerations
for nation-states.
At the level of policy making and international norm-setting, it seems that all the aforementioned ethical concerns are driving
nations and networks towards critical choice points. For nations, there is the question of whether to encourage the rise of
behaviour-based forms of cyber arms control or to encourage the development and use of virtual warfare capabilities – in the
latter case fuelling a new kind of arms racing. The answer, in this case, is not at all clear, as simply refraining from the acquisition and use of cyber warfare
capabilities may doom nations to keep fighting their wars, when they do break out, in old-fashioned, highly destructive ways.
The great potential of cyber warfare, by contrast, is that it might lead to the swift and relatively bloodless defeat of enemy
forces in the field, and the disruption of war-supporting industries at home. Even if this prospect might make war somehow
more ‘thinkable’, the prospect of waging one's conflicts in less bloody fashion is something the concerned ethicist cannot
dismiss. Perhaps the answer lies in crafting such cyber capabilities – since they hold out the prospect of fighting more justly
during a war – while at the same time maintaining the very strictest adherence to ethical strictures that should govern choices
about going to war in the first place.
As to terror networks, they too seem to be facing a strategic choice about whether to embrace cyber warfare fully. A big difference
for the networks is that the risks of being discovered and tracked down during the development and acquisition process are
considerable – something of a lesser worry for most nations. Beyond the matter of operational security, there may also be
the problem that terrorists will simply have a hard time making the shift in mindset from being all about the symbolic use
of destructive violence to embracing notions of inflicting costly but largely nonlethal disruptions upon their targets. The
foregoing analysis of their situation, however, suggests that some clear-headed ethical reasoning might impel them to make
a serious shift in the direction of cyber-based disruption. For a terrorist group that is capable of causing huge amounts
of economic damage while killing few – perhaps none – would be a lot harder to hate. Among the
mass publics that made up the terror network's core constituency, those who sympathized with the group's goals would be far
less likely to be alienated by this kind of cyberspace-based mode of operation. And even the network's targeted enemies would
have a less visceral response than has been evinced to such physical attacks as 9/11, or the Bali, Madrid and London bombings.
In sum, whether nation or network, there may be sound ethical reasons for embracing cyber warfare and/or cyber terror. A most
curious finding, as one would expect any ethical examination of conflict to lead inexorably towards an affirmation of peace
and nonviolence. But when one considers the endless parade of slaughters humans have visited upon each other since they became
sentient – and when one notes that the recently concluded century was the bloodiest of all in human history – then perhaps
it is not so odd to find ethical value in exploring a very different new kind of warfare.