© The Author(s) 2019
Lawrence Freedman and Jeffrey Michaels, The Evolution of Nuclear Strategy
https://doi.org/10.1057/978-1-137-57350-6_15

15. The Strategy of Stable Conflict

Lawrence Freedman (1) (Corresponding author) and Jeffrey Michaels (2)

(1) Department of War Studies, King’s College London, London, UK
(2) Department of Defence Studies, King’s College London, London, UK

The question which dominated the work of the formal strategists was whether or not the balance of terror was really delicate and what could be done to stabilize it. A delicate balance would demand hair-triggers and cool nerves, offering the possibility of overwhelming victory or an equally overwhelming defeat. The answer to this question would have to come largely from technical considerations, concerning the feasibility of counter-force attacks and ballistic missile defence. But complementing an evaluation of the properties of the weapons themselves was an assessment of the sort of strategies that could be sustained by the societies of the contending parties. The significance of this issue was captured by Brodie in 1954:

We have a number of alternative possibilities for the future oriented around a single criterion—that is, the expected degree of success of a blunting mission. Now it should be clear that a political or military strategy suitable to one condition may be most unsuitable to another. If, for example, we are living in a world where either side can make a surprise attack upon the other which destroys the latter’s capability to make meaningful retaliation (which is almost a minimum definition of ‘success’ for the enterprise), then it makes sense to be trigger-happy with one’s strategic air power. How could one afford under those circumstances to withhold one’s SAC from its critical blunting mission while waiting to test other pressures and strategies? This would be the situation of the American gunfighter duel, Western frontier style. The one who leads on the draw and the aim achieves a good clean win. The other is dead. But if, on the other hand, the situation is such that neither side can hope to eliminate the retaliatory power of the other, that restraint which was suicidal in one situation now becomes prudence, and it is trigger-happiness that is suicidal!1

Those developing the first US ICBMs saw this issue with great clarity. Bernard Schriever, leading the programme for the USAF, spoke to RAND at the start of 1955, extolling the invulnerability of the new weapon and its virtue as having the ‘highest probability of not being used’. There would be no grounds for the Soviets miscalculating the American ability to retaliate.2 At the time the Convair Corporation (later part of General Dynamics) was being considered, ultimately successfully, as the Air Force contractor for the ICBM programme. An operations analyst at the company, Warren Amster, wrote an influential paper developing this idea. It impressed C.W. Sherwin, a former Chief Scientist for the Air Force, who publicized Amster’s conclusions in the Bulletin of the Atomic Scientists in May 1956.3 No other article anticipated so succinctly so much of what was to become conventional wisdom by the early 1960s. Amster argued: ‘We may well expect that the conversion to intercontinental missiles will be followed shortly by the development of military strategies that are fundamentally deterrent.’ He identified the key features of missiles. They would not be very good at fighting each other, being too well-hidden and protected to be caught on the ground and too fast to be caught in the air. Compared with missiles, cities were extremely vulnerable—soft, immobile and sprawling. Sherwin drew the conclusions:

If forces are very costly to attack, and cities are very cheap to attack, the optimum force will not be very large. If the forces become more vulnerable, and the economies more effectively protected, security is reduced. If this development is carried far enough … an arms race will develop, and—considering the nature of the new weapons—great advantage will thus accrue to the initial attacker.4

Anticipating the introduction of missiles, Amster and Sherwin looked forward to a stable deterrent. To illustrate the consequences of this stable state, Sherwin provided a simple ‘game’ involving two tribes living in close proximity and armed with poison darts. This was an interesting contrast with Brodie’s metaphor of an ‘American gunfighter duel, Western frontier style’. With poison darts the two tribes could not disarm each other; there was no defence and the poison was fatal. However, the poison took time to act, so a tribe that was struck would still be able to retaliate. The only rational outcome was for neither tribe to start shooting.

Transposing this game to East-West relations, Sherwin reported Amster’s rules for maintaining this steady state if it was threatened by accident, design or some expanding local conflict. The rules were to engage in strategic bombing only if attacked in this manner oneself; to respond to attacks on cities with comparable attacks on the opponent’s cities; and to respond to attacks on bomb-carriers by sending off against the opponent’s cities a number of carriers equivalent to those destroyed. The retaliation would thus be measured, its purpose ‘not to win, but to prove to the attacker that his losses are likely to be incredibly large, in the hope that by this demonstration the war will be stopped before both sides are irreparably destroyed’.5

Deterrence would thus work through the punitive threat of irresistible hurt to the enemy’s social and economic structure, rather than through the prospect of victory in combat. Even after the outbreak of war the attempt would be made to contain it through nuclear attacks that would emphasize what might happen in the future rather than punish past aggression. This would be possible because new technologies were liable to make stability in deterrence almost ‘natural’.

One of those influenced by Amster was Tom Schelling. Schelling, an economist with a background in trade policy and oligopoly theory, thought naturally in terms of combinations of confrontation and cooperation. His theoretical structure was much more elaborate than Amster’s, yet many of its precepts were strikingly similar. He defined deterrence as being concerned with the exploitation of potential force, using it to persuade a possible enemy that in his own best interest he should avoid certain courses of action. Schelling realized that nuclear strategy would have to be based on the essential properties of nuclear weapons (their power to cause immense pain and destruction) rather than on properties more relevant to previous generations of weapons. In the first chapter of his book, Arms and Influence, Schelling spelled out the consequences of this recognition.

He distinguished between two ways of employing military strength—as brute force and as the power to hurt. Brute force provided a means of overcoming the enemy and of acquiring his territory. The power to hurt affected the adversary’s interests and intentions rather than his capabilities, and could now be employed whatever the other’s capabilities. Because it was concerned with pain, its value lay in its potential, as ‘latent violence’, rather than in its actual use, though it might have to be used on occasion to communicate the potency of the violence at hand. Used carefully, this power could influence the adversary. By making the implementation of a threat contingent upon future behaviour, actions could be manipulated. It was thus a form of bargaining, albeit dirty, extortionate and reluctant; it was diplomacy based on coercion. Brute force could be used as an alternative to bargaining, but threatening hurt involved bargaining.

Instances of emphasizing the threat of pain could be found in military history, but they had tended to come at a late stage in warfare. A military victory had been needed before the population could be put at risk. Now, with nuclear weapons, the threat had come to the fore while the quest for military victory had become less relevant. If ‘there is no room for doubt how a contest in strength will come out, it may be possible to bypass the military stage altogether and to proceed at once to coercive bargaining’. Military strategy could no longer be the science of military victory. Rather it would be the art of coercion, intimidation and deterrence.6 If anything, he suggested, a stable balance of terror could simply be viewed as ‘a massive and modern version of an ancient institution: the exchange of hostages’. The exchange provided a guarantee of good behaviour; an unpleasant device, but effective in the absence of trust, good faith and mutual respect.

The notion of military victory was alien to Schelling’s scheme. Bargaining strength required retaining the power to hurt and success depended on skill in exploiting it. Striving for a military victory could unsettle the whole relationship, because of the ‘reciprocal fear of surprise attack’. In such an aggravated situation, the bargaining aspect was lost: the controlled and considered use of strength for the promotion of political interests would be displaced by the scramble to avoid military defeat. In order to ensure that military strength was used solely for the purposes of bargaining over political interests, efforts had to be made to avoid war through miscalculation. Both sides had to feel confident in their second-strike capabilities. This could be assisted if each took steps to reassure the other that it was not planning a first strike.

Schelling used this criterion of stability to classify weapons. The best weapons were protected from a first strike and insufficiently accurate for counter-force attacks. Their invulnerability provided no incentive to launch on warning and would give their owner the confidence that there would always be weapons available for a second strike. Their incompetence at counter-force attacks obviously made them inadequate for first strikes. This inadequacy would demonstrate to an opponent that only a second strike was a serious option. Cities would be threatened, not weapons. A capability to kill millions of people became morally neutral because it was reactive. A first strike, killing hundreds of weapons, was the more heinous crime because that would, almost automatically, trigger a second strike. The crime was to start a nuclear war, not to prosecute it with murderous intensity.

A weapon that can hurt only people, and cannot possibly damage the other side’s striking force, is profoundly defensive; it provides its possessor no incentive to strike first. It is the weapon that is designed or deployed to destroy ‘military’ targets—to seek out the enemy’s missiles and bombers—that can exploit the advantage of striking first and consequently provide a temptation to do so.7

Such reasoning led one to conclusions that seemed quite bizarre to the military mind. The argument was to abstain from the most advanced and militarily useful weapons, while encouraging the enemy to improve his defences. For example, Schelling discussed the ‘nuclear-weapon submarine’. Because of the difficulty of detecting and destroying submarines, they were considered the most invulnerable launch platforms for nuclear weapons and so admirable for second-strike purposes. Schelling noted that Americans ought not to want a monopoly of these submarines for ‘if in fact we have either no intention or no political capacity for a first strike it would usually be helpful if the enemy were confidently assured of this’.8

Assuming that the conditions for stable conflict were in place, with neither side confident that they could strike the other without a risk of being struck back, might there still be circumstances in which the behaviour of the two sides introduced a dangerous instability? In exploring this issue two games became influential.

The first was Prisoner’s Dilemma, invented by Merrill Flood and Melvin Dresher at RAND but given its most famous formulation by Albert Tucker in 1950. Two prisoners were under interrogation, unable to communicate with each other. Their fates depended on whether or not they confessed and whether their answers coincided. If both remained silent they were prosecuted on a minor charge and received light sentences. If both confessed they were prosecuted but with a recommendation for a sentence below the maximum. If one confessed and the other did not then the confessor got a lenient sentence while the other was prosecuted for the maximum sentence. The two players were left alone in separate cells to think things over. The prediction was that they would both confess, limited by their inability to communicate and so unable to go for the solution that would be of the greatest mutual benefit. Confessing reduced the risk and promised some benefit if the other foolishly stayed silent. This was the minimax strategy guaranteeing the best of the worst possible outcomes. A key feature of this game was that the two players were forced into conflict.
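
The logic can be set out in a minimal, purely illustrative sketch in Python; the sentence lengths below are hypothetical, chosen only to reproduce the ordering of outcomes described above. Whatever the other prisoner does, confessing yields the shorter sentence, so both confess even though mutual silence would have served them better.

# Illustrative Prisoner's Dilemma: sentences in years (lower is better for a prisoner).
# All numbers are hypothetical; only their ordering matters.
SILENT, CONFESS = "silent", "confess"
sentences = {
    (SILENT, SILENT): (1, 1),     # both silent: light sentences on a minor charge
    (SILENT, CONFESS): (10, 0),   # the silent prisoner gets the maximum, the confessor goes free
    (CONFESS, SILENT): (0, 10),
    (CONFESS, CONFESS): (5, 5),   # both confess: sentences below the maximum
}

def best_reply(other):
    # The first prisoner's best choice given the other's choice (fewer years is better).
    return min((SILENT, CONFESS), key=lambda mine: sentences[(mine, other)][0])

# Confessing is the best reply whatever the other does, so both confess: the minimax
# outcome, even though mutual silence would have been better for both prisoners.
assert best_reply(SILENT) == CONFESS and best_reply(CONFESS) == CONFESS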

Another game, which came along later, was Chicken. The 1955 movie Rebel Without a Cause featured Californian teenagers driving stolen cars towards a cliff edge. The one who jumped out first was branded as ‘chicken’. A few years later the philosopher Bertrand Russell, then campaigning hard for nuclear disarmament, compared the destructive practices of the superpowers with this game played by ‘youthful degenerates’, except that he described it as involving two cars being driven at speed at each other. The one who swerved first would be the chicken. ‘The game may be played without misfortune a few times’, observed Russell when talking about international crises, ‘but sooner or later it will come to be felt that loss of face is more dreadful than nuclear annihilation.’9 In 1960 Herman Kahn picked this up in his book On Thermonuclear War.10 The prediction was that both players would swerve, because humiliation would be preferable to death. This again was the minimax solution, of greatest mutual benefit but not one achieved by an act of co-operation or out of any sense of shared interests. Russell linked the game to the ‘brinkmanship’ of John Foster Dulles. A crisis could take on the appearance of a game of chicken if each side was anxious to persuade the other that it was willing to go to the brink of war while both were desperately anxious to avoid an actual nuclear exchange. The possibility of miscalculation was inherent in the game (perhaps underlined by the fact that in the movie one driver goes over the cliff edge because his sleeve gets caught in the car door, and that its star, James Dean, died before the film was released while driving his car at an estimated 100 mph).
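
A comparable sketch, again with hypothetical payoffs (higher is better), shows why mutual swerving is the minimax outcome: swerving guarantees at worst a small loss of face, while going straight risks the catastrophe of collision.

# Illustrative Chicken payoffs (higher is better); the numbers are hypothetical.
SWERVE, STRAIGHT = "swerve", "straight"
payoff = {
    (SWERVE, SWERVE): (0, 0),            # both swerve: shared, mild loss of face
    (SWERVE, STRAIGHT): (-2, 2),         # the swerver is the 'chicken', the other wins
    (STRAIGHT, SWERVE): (2, -2),
    (STRAIGHT, STRAIGHT): (-100, -100),  # head-on collision: catastrophic for both
}

def worst_case(mine):
    # The worst payoff the first driver can receive from a given choice.
    return min(payoff[(mine, other)][0] for other in (SWERVE, STRAIGHT))

# Swerving guarantees at worst -2; going straight risks -100. The minimax choice
# (the best of the worst outcomes) is therefore to swerve: humiliation over death.
assert worst_case(SWERVE) > worst_case(STRAIGHT)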

Comparing these two games showed how the rules of Prisoner’s Dilemma forced two potential collaborators into conflict, while those of Chicken encouraged a mutually tolerable result despite conflict.11 If these games were considered as single events between players who knew nothing of each other but only the rules of the game, the results were predictable. Working within the rules, there was no way out of the Prisoner’s Dilemma. The rules could only be circumvented by setting up external mechanisms for establishing and confirming trust and agreeing on co-operative behaviour. By contrast, the rules of Chicken could be manipulated. Even if both players knew that they would swerve some time before collision, it was possible to win by delaying swerving to the last possible moment. If both players were equal in the quality of their cars and their driving skills, then what was in question was each player’s nerve, and their assessment of the other’s nerve.

There were a number of ruses available to each player to create the impression of dedication to a collision course—they could swagger, boast, feign drunkenness, or pretend to lack the option of swerving. Throwing the steering wheel out of the window or dismantling the brakes would be taken as a daring commitment to go straight ahead regardless. The point about such ruses was that a rational player, desirous of victory, had to put on a display of irrationality. Strategic advantage could come by appearing to take leave of one’s senses. This would increase the probability of prevailing in a head-on clash. The obvious difficulty when translated into nuclear confrontations was that the relevant leaders had to limit their bluffs. They could not commit to a patently irrational course of action without some escape. Whatever the show put on for the benefit of the other players (and spectators), in practice one foot would hover close to the brake pedal and the hands would stay firmly on the steering wheel. It was hard for a leader to appear completely mad, but an element of madness could provide an advantage over an opponent who was determined to appear sane.12
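
Why such ruses could pay can be seen in a further self-contained sketch, again with hypothetical numbers: once one driver has visibly discarded the option of swerving, the other’s best reply on any rational calculation is to swerve.

# What a credible commitment does in Chicken (hypothetical payoffs, higher is better).
SWERVE, STRAIGHT = "swerve", "straight"
# The uncommitted driver's payoffs against an opponent who can no longer swerve
# (steering wheel thrown away, brakes dismantled):
my_payoff = {SWERVE: -2,       # branded the 'chicken', but alive
             STRAIGHT: -100}   # head-on collision

# Against a commitment that is believed, the rational reply is to swerve, which is
# why a calculated display of irrationality could 'win' the game.
best_reply = max(my_payoff, key=my_payoff.get)
assert best_reply == SWERVE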

The problems of playing this game became even greater with repetition. How a player performed in one game would affect perceptions of future performance. If a player capitulated, an even greater test would be faced next time round. The other player would be more confident. If over time an impression of weakness was created then the required adjustments to behaviour could create a dangerous instability, either because this player was seeking to compensate for an unfortunate image or because the opponent was seeking to exploit it. Added to the particular payoffs for each individual game were the higher stakes for a larger super-game. This super-game had the characteristics of Prisoner’s Dilemma. The question was whether to co-operate to call off the endless games of Chicken (with the consequent risk of a sudden challenge) or to persist in these games in the belief that both would, in the end, swerve.

There was a further problem of the tempo at which these games were played. The processes of decision-making and decision-implementation varied in time for each player. In an arms race both might decide to produce a certain type of weapon, but one might get it before the other and so enjoy a significant if temporary superiority. Two powers set on a collision course might both be determined to swerve, but one might be slower to implement the decision and so find itself an inadvertent victor. The problems of timing added to the instabilities.

Both games presented stalemate as the predominant result. Using Prisoner’s Dilemma as an analogy for an arms race, each player strove for strategic advantage because the other was likely to be doing the same, so they ended up neutralizing each other. The policy imperative that flowed from this was to explore means of co-operation to secure the stalemate at a lower level. In the game of Chicken, both sides started equal and neither, if both acted rationally, could ‘win’. When the game was played between two nuclear powers, the rationality of swerving was increased because the consequences of collision were so horrendous. But if regular confrontations could not be avoided, the policy imperatives that flowed from this game were disturbing and ambiguous. There was a temptation to pretend to be all those things that Game Theory assumed you were not—non-calculating and ‘non-utility maximizing’. These temptations were accentuated by the knowledge that the game was to be repeated. Beyond the paradox that one game encouraged a search for co-operation while the other a tough and uncompromising stand, both encouraged some amendment to political relations—either in the extension of areas of co-operation or in some shift in what Snyder labelled the ‘balance of resolve’.13

How well would these games play in real life? However rational the decision-makers, would they fully grasp what was going on at a time of crisis? Would they be led into folly by the devices introduced to give deterrence credibility? Could stability be based on fragile systems? The issue was raised in two best-selling novels. In former RAF officer Peter George’s Red Alert,14 an Air Force general, described as being simultaneously delusional and a ‘paragon of rational thinking about the inner workings of deterrence policy’, orders an attack.15 As the President tries to work with Moscow to prevent catastrophe, US aircraft get past Soviet defences. Although the recall code for the bombers is found, one bomber continues to its target and the President offers up Atlantic City to compensate for the destruction of a Soviet city. Fortunately only one hydrogen bomb partly detonates, and that in open countryside. In Fail-Safe, which came out in 1962 (as the Cuban Missile Crisis was unfolding), a technical malfunction unleashed a US nuclear attack.16 Attempts to stop all the bombers getting to Moscow failed, and the city was destroyed. Only by the reciprocal sacrifice of New York—the ‘Sacrifice of Abraham’—was general war avoided.17 So influential was this novel that the philosopher Sidney Hook felt obliged to issue a refutation, arguing that the technical dangers were exaggerated and that such writing encouraged a greater fear of the US nuclear deterrent than of the Soviet desire for world domination.18

Soviet leaders had begun to comment extensively on the dangers of accidental war once it became known that the Americans had nuclear-armed aircraft in the air at all times. They pointed to the dangers of an accidental nuclear war as a result of misunderstandings or of crews that were intoxicated, tired or otherwise deranged. They also saw propaganda potential in the possibility that American aircraft could have accidents over friendly territory.19 In March 1958 a member of the crew of a B-47E bomber flying over South Carolina accidentally released a nuclear bomb, which fortunately did not have its nuclear core inserted at the time. Khrushchev took the opportunity to ask what would have been the consequences if the release had led to an explosion. Would it have triggered a world war? Other accidents followed. In one of the most serious, in 1966, four thermonuclear weapons fell out of a B-52 after a mid-air collision over Palomares in Spain. Three fell on the ground and one had to be recovered from the sea.20

Whatever the use made of this fear by Soviet propagandists or nuclear pacifists, there was still a real basis for concern. The spectre raised by Red Alert was of a terrible crisis developing independent of any political conflict. Kahn praised George’s novel for demonstrating how a general might negate ‘the elaborate system set up to prevent unauthorized behaviour’.21 When Stanley Kubrick made a film based on Red Alert, he named it after a character he introduced into the plot—Dr. Strangelove—who was supposedly modelled on Kahn and worked for the ‘Bland Corporation’. The plot was of a deranged general sending a wing of nuclear-armed B-52 bombers to attack Russia. All but one is recalled, but the consequences of one nuclear explosion become truly catastrophic because, unbeknownst to the Americans until it is too late, the Soviets have installed a doomsday device consisting of many buried bombs to be detonated automatically should the country suffer a nuclear attack. Kahn had written about a doomsday machine in On Thermonuclear War, describing it as being

protected from enemy action (perhaps by being put thousands of feet underground) and then connected to a computer which is in turn connected, by a reliable communications system, to hundreds of sensory devices all over the United States. The computer would then be programmed so that if, say, five nuclear bombs exploded over the United States, the device would be triggered and the earth destroyed.

He did explain that such a device was never likely to be adopted by a government, although this appears to have been for reasons of expense as much as of operational considerations. However, the principal objection was that such a device was not controllable—the consequences of a failure would be so high that there would be little appetite for taking the risk of building one.22

Schelling advised on the script for Dr. Strangelove. The question of avoiding accidents was a major preoccupation. In a 1960 article he stated that what might appear as accidents reflected past choices that then made possible the loss of control. ‘The point is that accidents do not cause war. Decisions cause war’. Schelling never doubted the possibility of rushed and foolish decisions or supposed that they could be precluded through ever more rational processes. We need deterrence, he explained, not only to get at the ‘rational calculator in full control of his faculties’ but also at the ‘nervous, hot-headed, frightened desperate decision that might be precipitated at the peak of a crisis, that might be the result of an accident or false alarm, that might be engineered by an act of mischief’. To do that it was necessary to make it self-evident that starting war would be unattractive in all circumstances, even if an enemy attack was feared. The sort of policy choices this affected included whether it was better to spend on improved warning or to ensure that a retaliatory capability could survive without warning, and whether it was better to have fewer weapons to reduce the risk of accidents or more to make clear that one could not easily be disarmed. What Schelling was urging was thinking about the structure of the nuclear relationship to make these decisions less dangerous. As he emphasised in the article, this would also require cooperation with the Soviet Union.23

The basis for the strategy of stable conflict was that there was a shared interest in avoiding mutual destruction. This required coming to terms with a nuclear stalemate and accepting that this was preferable to both sides striving for victory. This was anathema to Strategic Air Command. Its commanders were already irritated by the stress on the vulnerability of their forces to a surprise attack. They disliked a focus on defensive measures rather than on how best to conduct an offence, and they had no intention of being caught in a surprise attack. Yet it was an issue that was hard to ignore and was generally accepted as a high priority by the end of the decade. As it was best to assume that what was a priority for the United States would become one for the Soviet Union, what would it mean when Soviet forces were also invulnerable to surprise attack? This issue was at the heart of the questions of whether a nuclear war could ever be won in a meaningful sense, the character of the developing arms race, and the sort of risks political leaders might be expected to run. The key judgement was whether there was any premium in getting in the first blow in an effort to knock out the enemy’s retaliatory capability. If a first strike could result in a decisive advantage then a dangerous edginess might develop at times of crisis that could lead to war through miscalculation. If there was no premium in a first strike then both sides would be more cautious and concentrate on diplomacy in a crisis.