World War II set the stage for the evolution of American conventional war thinking in two very different ways. The first was the way in which it had been fought, with an unprecedented reliance on air power and on amphibious operations, with each of these innovations inevitably changing how Americans and everyone else thought about normal or “conventional” war.1 The second was the way it was so abruptly ended, when in August of 1945 American atomic bombs fell on Hiroshima and Nagasaki, producing a surrender that most had not expected until 1947 at the earliest.2
The very phrase “conventional war” may indeed owe most of its meaning to the introduction of nuclear weapons; when anyone speculated about a war after 1945 in which nuclear weapons would not be used, the term that came to describe it was “conventional” war. In later decades, “conventional” has also sometimes been used more narrowly to mean something other than terrorism, or something other than guerrilla war, but for much of the Cold War it referred simply to any war fought as wars had been fought before Hiroshima.
The United States military establishment before World War II had still been directed by two separate cabinet-level Departments, somewhat illogically named the War Department and the Department of the Navy. While the two obviously had a great deal of necessary interaction during this period, that interaction might not have felt qualitatively different from the dealings between the Departments of Justice and the Treasury, or between Agriculture and Interior.3
One general lesson extracted from the World War II experience was thus that this degree of autonomy and loose coordination could not be retained, for at least two major reasons. The necessity for amphibious operations was one obvious factor. In the Pacific, many such operations were conducted by the Marine Corps, which had grown much larger in preparing for this specialty and then in engaging the Japanese across the Central Pacific. The Marine Corps was closely tied to the Navy, under the Secretary of the Navy, with many of its officers being Naval Academy graduates, which eased the problems of land-sea coordination. But all the amphibious operations in the Atlantic, and even many of those in the Pacific, involved U.S. Army units.
The problems of coordination between the Army and Navy were overcome successfully, with the Allied landing at Normandy being a great achievement. Yet Normandy seemed a precursor to even more complex operations to come. For example, the final invasion and defeat of Japan was expected to entail an operation four times the size of the Normandy landings. Even closer integration between naval and ground forces, on training, equipment, and tactics, appeared essential to achieve maximum operational effectiveness.4
The role of aircraft in warfare also loomed much larger in the experience of World War II, as the aircraft carrier almost totally replaced the battleship as the capital ship for naval combat, and as air power advocates in the U.S. Army Air Force envisaged winning the war by aerial attack alone. Advocates of air power within the Army had been pressing for a separate U.S. Air Force ever since World War I, with the separate Royal Air Force in Britain being a model. While most naval aviators did not favor joining such a separate air force (although the RAF had for a time incorporated carrier-based aircraft), a few saw this as the only way to secure recognition of the paramount role of airpower in future combat at sea, as well as for the coordination of land-based and carrier-based aircraft.5
It is fair to say that the U.S. Navy and its affiliated Marine Corps were the most reluctant to see tighter integration or unification of the American defense establishment at the end of World War II. The Marine Corps feared that it would simply be merged into the U.S. Army, while the Navy and Marine Corps both feared that they would lose their aviation to a newly created and independent Air Force.6 Yet the general sense of the American public and the Congress, drawing on the lessons of how the war had been fought before Hiroshima, was that some unification was now “of course” required. Rather than conventional war being fought mostly separately in two distinct theaters, land and water, it would now have to be fought in three, land, water, and air, and very often along the interfaces of these theaters.
When military traditionalists resisted such unification and reassignment, this could have been dismissed as the simple inertia of nostalgia and tradition, or it could be made to seem more formidable when dressed up with theories of “bureaucratic politics” subsequently espoused by political scientists. Yet in another sense, the final chapter of the Pacific War might justify skepticism about the supposed imperative of unification. When World War II ended before any amphibious invasion greater than Normandy had to be launched, amphibious operations would soon appear to be all but obsolete. Skeptics about the air power theories promulgated by the U.S. Army Air Force might note that the conventional bombings of Germany had not induced Hitler to surrender, and that similar attacks on Japan prior to August 1945 had not noticeably weakened Japanese resolve; but the ability of a single B-29 to inflict surrender-inducing destruction seemed qualitatively different.7
“LIMITED WAR”
If nuclear weapons were now to be the “absolute weapon,” in Bernard Brodie’s phrase,8 there would be a question about whether any more ordinary warfare or “conventional warfare” was to be fought. One possible consequence of the introduction of nuclear weapons was that war would now be so horrible that it would never be fought again. A very different possibility was that the next war would end human life as we know it. Air power enthusiasts might see nuclear weapons as finally realizing what they had predicted all along, that airplanes had become so dominant that other weapons might hardly matter.
If these initial reactions to the introduction of nuclear weapons, “weapons of mass destruction,” had been correct, the issues of integration of Army and Marine Corps ground combat capabilities would have been largely irrelevant, as would preparations for amphibious warfare, and the bulk of the defense budget ought to have been directed to the newly independent and nuclear-equipped Air Force. If the Soviet Union now retained much larger conventional forces than the United States, and was in a geopolitical position from which it could threaten to occupy Western Europe, it was presumably American nuclear forces that deterred Moscow from launching such an invasion. For as long as the United States was the sole possessor of nuclear weapons, these would offset and outweigh the Soviet advantage in traditional military force. Just as atomic bombs had been used to force Japanese soldiers to leave China two years earlier than anyone would have predicted, they could be counted on to keep Russian soldiers from rolling into Frankfurt and Paris.
But the American nuclear monopoly would last for only four years, and the phrase “conventional war” would soon become synonymous with the somewhat heretical concept of “limited war,” a possibility that became a probability once the Soviet Union or another adversary acquired nuclear weapons of its own. In June 1950, when the Korean War broke out, that probability became reality. A “limited” war was one in which each side—fearing for the safety of its cities—held back its own nuclear weapons, as long as the other side did not use theirs. As the two nuclear forces deterred each other, they might then not be usable to deter the tank forces of the opposing side. The implications of Korea seemed clear: the same Russian-made T-34 tanks that rolled into Seoul in 1950 might soon enough be rolling toward Paris, unless American and allied conventional forces were strengthened to repel them.
“Limited war” was a strange concept for most Americans. It imposed a new set of rules by which one used only part of one’s arsenal against an enemy in a war.9 Such constraints were manifestly unpopular in the Korean War, when President Truman relieved Douglas MacArthur of command,10 and they were still unpopular at the time of the Vietnam War. Unlike Europeans, who had often seen wars fought because their monarchs were competing for the ownership of one province or another, the citizens of the American democracy had tended to see war either as a holy fight against evil or as something to be stayed out of altogether.
“Limited war” thus brought “conventional war” back into relevance, as nuclear air power might not be able to do much more than deter itself. From some perspectives this was bad news, and from others it was good. For those in 1945 who had thought that war would never be fought again, because of its sheer nuclear horror, this was bad news: peace was not inevitable; wars would break out again. For those who in 1945 had thought that the next war would see the destruction of every city and an end to civilized human life, “limited war” (and “conventional war”) was good news: even if war were still inevitable, the result would fall short of Armageddon.
Politically, for those who were status quo-oriented, like most Americans content to hold the line of containment, not really aspiring to any military liberation of Communist-controlled Eastern Europe, limited war was bad news, for an aggressive enemy could launch wars on the Korean model, trying to expand the Communist empire one small bite at a time. For those opposed to the status quo, such as the Communist leaders in Moscow and Beijing, limited war was good news, as the Chinese openly endorsed this phenomenon as the means to what they called “liberation.”11
Within the U.S. armed services, the Strategic Air Command of the U.S. Air Force understandably regarded the notion of limited war as bad news, because it suggested that nuclear weapons would not suffice to solve the entire American defense problem. “Conventional war” had returned to significance. The Army, the Navy, the Marine Corps, and the tactical portions of the Air Force conversely welcomed the concept of limited war, because it meant that their role in the defense of the United States was secure. This is not to claim that any military leader or any service looked forward to actual conventional wars, any more than Americans in general looked forward to being attacked. Rather, these portions of the U.S. defense establishment could aspire to deter such wars by being prepared for them.
Overall, the advent of “limited war” suggested that American strategic nuclear forces alone could not effectively deter a Soviet bloc tank attack. If so, it might take American ground forces to forestall such an attack, deterring it by the prospect of “denial,” denying the enemy battlefield success, rather than by the prospect of “punishment,” the prospect of the nuclear destruction of Soviet cities.12
EXTENDED NUCLEAR DETERRENCE
The prospect of limited war, illustrated so painfully in the Korean War, initiated a long debate that would last for the rest of the Cold War as to whether “conventional” defense preparations were really the only way to deter Communist conventional attacks. The extreme version of such an analysis argued that the only role for American nuclear weapons was to keep the Soviets from using theirs, and that “extended nuclear deterrence,” threatening American nuclear escalation to deter Soviet armored attacks, was always too dangerous, and perhaps inherently unworkable. This was basically the argument offered by Secretary of Defense McNamara within the Kennedy administration, and stated publicly by him afterward in the 1980s.13
But many West European leaders, and also the American strategic planners of the Eisenhower administration, had been reluctant to accept such a wide-ranging shift back to conventional warfare preparations, if only because of the enormous economic cost of matching the Soviets tank-for-tank. Since the days of the Czar, as Mackinder described, Moscow had enjoyed the advantage of a central geopolitical position. The power at the center can direct its forces in any direction, into any of the peninsulas sticking out from the Eurasian land mass: into Korea (as in 1950), into Turkey, into South Asia, or into Western Europe. Although the Soviets could attack in any of these places, the United States could not afford to defend them all simultaneously with strong conventional forces. The possibility of American nuclear escalation was thus never renounced, even in the Kennedy administration. Even today, long after the Cold War has ended, the United States has not relinquished the option of employing nuclear weapons to defend valuable areas such as the NATO countries of Western Europe or South Korea.
But such policies of exploiting the American nuclear arsenal to protect American allies always ran a double risk, a risk that explained why McNamara and many others advocated enhancing U.S. capabilities for conventional warfare. First, there was the fear that the United States might be bluffing about being willing to escalate to nuclear war if, say, West Germany came under conventional attack. If the Soviets called that bluff, the territory attacked would quickly have come under Communist rule, and the United States would suffer a global humiliation, calling into question its commitments everywhere else.
Second, there was the fear that the United States might not have been bluffing, but might not have made its commitment sufficiently credible. In that case, if the Soviets mistakenly concluded that Washington was bluffing and attacked anyway, the result would be a thermonuclear World War III.14
The alternative, proposed by some, would have been for the United States to renounce extended nuclear deterrence altogether, to endorse Soviet and other proposals for a policy of “no first use” of nuclear weapons, and instead to take conventional war very seriously, massively expanding the resources assigned to preparing for a Soviet attack. It is easy to belittle the argument against this—that maintaining an army big enough to defeat the Soviet army using conventional weapons alone would have cost too much—for it suggests risking a nuclear holocaust simply to lower American and West European tax rates. Yet it has to be noted that Western economic prosperity was a major factor in the ultimate winning of the Cold War, and that a great deal of the West’s quality of life today is the result of having avoided excessive military spending during the Cold War. If the United States had renounced nuclear escalation on behalf of NATO and of South Korea, many more young men would have spent much longer periods of time in military service. Much more Western steel would have gone into tanks rather than automobiles, and preparations for conventional warfare would have slowed the capital growth central to the expansion of the Western economies.15
Indeed, it can even be argued that U.S. acceptance of Soviet proposals for nuclear-free zones, as in the Rapacki Plan, and of a no-first-use policy, would have forced an increase in Soviet defense spending as well; larger arrays of NATO tanks might have called forth larger arrays of Warsaw Pact tanks, as perhaps neither side could have felt secure if the other side were secure. Deterrence without nuclear weapons would have been a far costlier proposition all around.
“PREPARATIONS” FOR CONVENTIONAL WARFARE
For all the years of the Cold War, the years of “extended nuclear deterrence,” the question of how one made such deterrence credible remained central. How could Washington convince the Kremlin that the United States would escalate to the nuclear level, with the prospect of an all-out World War III thereafter, if Moscow merely rolled forward its conventional armored forces and advanced into West Germany?
One way to make the linkage more believable was to issue declarations, “jawboning,” that dismissed the option of limited war and spoke instead of “defending our allies with every weapon we have.” The Eisenhower administration regularly issued statements of this genre, and later years saw NATO alliance declarations of “flexible response” amounting to the same warning: the United States and its allies might rely on conventional defenses initially, but if such defenses did not hold, nuclear escalation would follow.
A related link was the deployment into the prospective battle area of “tactical” or “theater” nuclear weapons. Such “battlefield” nuclear weapons were nominally intended to blunt the Soviet armored advance by destroying advancing tank columns. Whether or not they would have actually achieved this result, however, their employment would certainly have crossed the nuclear/conventional line, with the prospect of massive damage to the surroundings. Again, there was the larger prospect of escalation to all-out nuclear war.16
A third way to establish this linkage was to deploy some conventional forces along the front, not with an eye toward repulsing the massive arrays of Warsaw Pact armor, but to engage the United States irrevocably and therefore make the prospect of all-out war real. To be sure, the “conventional forces” deployed on the European central front were never told that they were merely to be a “tripwire” or “plate-glass window” triggering nuclear escalation (the shoulder-patch that was never designed or worn would have included the Latin for “plate-glass window”). Their assigned mission was to stop a Soviet conventional advance—a mission that they would most probably not have been able to accomplish. Had they been overrun and killed or captured, however, American opinion would have been sufficiently outraged to make a nuclear response plausible. Furthermore, in the process of being overrun, these forces would have employed their “tactical” nuclear weapons, consistent with the artillery tradition that ammunition be expended to prevent its capture by the enemy.
The sum total of these nuclear-conventional linkages made it extremely difficult for war planners in Moscow to conceive of ways for Soviet armored forces to “liberate” Western Europe without inducing World War III in the process. It was most probably not the prospect of a conventional defeat that deterred the Kremlin from exploiting its advantage in geography and in conventional forces, but the prospect of a nuclear escalation.
INCONSISTENT CONVENTIONAL WAR AIMS
If the prospect of U.S. nuclear escalation thereby underwrote the security of NATO and South Korea, it followed that plans for conventional defense were not entirely serious. The “conventional” forces deployed on the European central front genuinely believed that they were there to do battle. This conviction actually increased their effectiveness as a tripwire. The nuclear weapons deployed to South Korea and NATO may have been labeled as intended for the “battlefield,” but they were more surely a link to all-out escalation.
If the Truman administration had not achieved real integration in war planning, neither did the Eisenhower administration. Eisenhower reduced the defense spending levels that the Truman administration had approved after the outbreak of the Korean War, shifting back to a greater reliance on threats of nuclear escalation while investing less in conventional capabilities. As part of an effort to assuage the separate military services, which were naturally unhappy about budget reductions, he allowed them autonomy again in their war planning. Indeed, it was a common criticism of defense planning in the Eisenhower years that the various services were structuring themselves for very different versions of any future (conventional) war. The Army was investing substantially in airborne units, although the Air Force was not investing correspondingly in the troop-carrier aircraft needed to transport them. The Navy and the Marine Corps were planning for still other forms of a new conventional war.17 If one accepts the argument, however, that allied security depended on extended nuclear deterrence rather than on a genuinely effective conventional defense, then such mismatches in American military planning were not so much of a problem. At least through the 1950s, the United States, for the most important parts of the world, was not seriously planning for a conventional defense. An Army unit mismatched with an Air Force unit in West Germany might still work very well as a tripwire.
THE ROLE OF CIVILIAN PLANNERS
If war after 1945 was always to lie in the shadow of the possibility of all-out nuclear war, it followed, for various reasons, that civilians would be much more involved in the shaping of strategy than before. When asked why civilians played such a large role in writing on post–World War II military strategy, today’s military officers often cite the United States Constitution and its stipulation that the civilian President of the United States is the Commander-in-Chief. Yet most of the American writing on strategy and the conduct of war before 1939 was done by people who were or had been military officers, with the man on the street being reluctant to read a book on this subject that was not authored by a “professional.” After World War II this changed, quickly and dramatically. A good illustration is the title of Bernard Brodie’s first book. During World War II, Brodie, a young civilian political scientist, published the apologetically titled A Layman’s Guide to Naval Strategy.18 Some years later, after the nuclear age had begun and after Brodie’s pathbreaking The Absolute Weapon had vaulted him to prominence as a defense intellectual, his first book reappeared, updated and now called simply A Guide to Naval Strategy.19
The post-1945 intrusion of civilians into what had been the professional domain of Army and Navy officers came about not just because of the U.S. Constitution, but also for at least three other reasons. First, the scientists who had developed nuclear weapons in the Manhattan Project felt a certain amount of guilt and responsibility for what they had unleashed on the world. As every issue of military policy and strategy now had to take the nuclear factor into account, they became active participants in seminars on campus and in many other entry points to the policy process.20
Second, the entire issue of escalation and limitation, of guessing whether one’s adversary would be deterred or not, brought into play the special insights of psychology, of political science, and of economics and game theory, as military strategy overall had become an exercise not simply in defeating an enemy on the battlefield, but also in bargaining and negotiating with that enemy at the same time, lest unnecessary escalation produce disaster for all concerned.21
Third, even where war could be contemplated and planned in its more traditional “conventional” form, the application of new technologies, as amphibious and aerial and other techniques were brought to bear, might require special expertise not commonly found among career military officers.
To summarize, even if nuclear weapons had never been invented, there would have been a greater civilian input into planning for a “conventional war.” But Hiroshima and Nagasaki had ordained that no war in the future could be thought of only as a “conventional” war, since the possibility of a nuclear escalation was always with us. This brought in the physicists, since they had invented the big bombs. And it brought in the social scientists with relevant insights on managing the interface between the “conventional” and the “nuclear.”
CHANGES UNDER KENNEDY AND MCNAMARA
The United States had fought a major conventional war against Germany and Japan in World War II, ended abruptly by the beginning of the nuclear age. It fought another conventional war in Korea, shortly after the Soviet Union had acquired its own nuclear arsenal, with the big question thereafter being whether Korea was to be the model for many more “limited” conventional wars in the future. As noted, the Eisenhower goal (criticized, but never totally abandoned, by the Kennedy administration and those that followed) was to prevent any recurrence of Korean-model conventional wars. As it happened, no such wars occurred.
During the Kennedy era, Defense Secretary McNamara devoted considerable attention to enhancing preparations for conventional war, in the face of great resistance among the European NATO allies, who were reluctant to shift away from extended nuclear deterrence.22 The Kennedy administration’s increased emphasis on conventional combat, which entailed a 20 percent increase in the defense budget, soon enough found a war. But the action instigating the employment of U.S. forces turned out to be not a conventional armored attack on the 1950 pattern of Korea, but a Communist guerrilla offensive in South Vietnam.
If all one means by “conventional war” after 1945 is non-nuclear, then the phrase is almost synonymous with “limited war.” Guerrilla war and terrorism and even MOOTW (“military operations other than war”) might be included. Some analysts prefer to distinguish among these categories and define them separately, of course, but it is fair to note that the enhanced preparations for conventional war launched by Secretary McNamara in the Kennedy administration found their most extensive employment in the guerrilla war in Vietnam.
Why did American preparations for a war in the style of World War II or of the Korean War remain untested after 1953? Most probably it was because the devices of extended nuclear deterrence proved effective and plausible enough. Or perhaps it was because conventional preparations themselves offered some plausible capability for repulsing a Communist armored attack, so that what held back Moscow and Beijing was a “deterrence by denial” rather than a “deterrence by punishment.” Even with the opening of Soviet archives and the interviewing of former Soviet leaders, it may always be difficult to tell which factor accounted for the military success of containment—a “soft” conclusion that emphasizes the uncertainty inherent in strategies of deterrence.23
GUERRILLA WAR
If containment showed signs of failing in the latter stages of the Cold War, it was not because Soviet-built tanks were plunging into new territories to “liberate” the workers and peasants of the world, but because the techniques of guerrilla war were enjoying some success in Southeast Asia, and then in Africa and in Latin America. Guerrilla war, and the “counterinsurgency techniques” designed to oppose such guerrilla attacks, constitute a variant of “conventional” war, of course, because nuclear weapons are not involved on either side, and because the same American ground, air, and naval forces originally designed for more ordinary warfare are employed in fighting against the guerrillas.24
Yet guerrilla warfare differs in several important ways, so much so that Americans tended to regard such conflicts as other than “conventional.” Despite the historical blindness of a few military analysts, who saw guerrilla warfare somehow as an invention of the Chinese Communists under the leadership of Mao Zedong, this kind of warfare had been around for far longer, winning its name in the Spanish resistance to Napoleon’s armies at the beginning of the nineteenth century, and having already been used at times by the American patriots in the American Revolution.25
The Spanish word “guerrilla” is the diminutive of “guerra,” seeming to imply that guerrilla warfare is simply a “small” version of ordinary warfare. Yet this would be quite misleading, for some guerrilla wars have involved very large forces on each side, and some ordinary wars have been fought without committing large numbers of forces. What the term really describes is a technique of not holding a line and not wearing uniforms. Instead of declaring “They shall not pass!” one lets the enemy pass, and then ambushes him afterward.
The advantage of such an approach was that it denied the “conventional” enemy information about whom to shoot at. The disadvantage, of course, was that one could not protect one’s hinterland against invasion, and could not maintain a national capital, schools and hospitals, and the other trappings of a normal country.
To give Mao his due, his analysis and advocacy of guerrilla war did include at least two crucial innovations.26 First, Marxists and other advocates of guerrilla war asserted a correlation between success in guerrilla war and popular support, with the most-quoted phrase from Mao being “the guerrilla is to the people as a fish is to water.” One could find here the basis of a sound counterinsurgent strategy: If you want to defeat a guerrilla uprising, you must prevent the people from going over to the guerrillas’ side. But the elementary logic possessed a broader political significance, for by implication a victory by any guerrilla, by Mao in China, or Ho in Vietnam, or Castro in Cuba, proved that the people were on that guerrilla leader’s side. The logic here was very upsetting for most Americans, who wanted always to be on the side of democracy and the people’s will in the conflicts of the world.
If A is required to achieve B, then the occurrence of B proves that A was present. This amounted to a major logical trap for many Americans, for guerrilla wars came to look like a plebiscite, like a free election, an election in which it would be unnerving to find American support on the wrong side, on the side of the minority rather than the majority.
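Put schematically, and only as a restatement of the inference just described rather than anything in the original argument, the logic runs:

\[
(B \rightarrow A) \;\wedge\; B \;\;\Longrightarrow\;\; A
\]

with $A$ standing for “the people support the guerrillas” and $B$ for “the guerrillas succeed”; the premise $B \rightarrow A$ is simply the Maoist claim that popular support is necessary for guerrilla success. The inference is formally valid, but only if one grants that premise; the skeptical counter-analyses discussed below, such as Rostow’s, denied precisely that popular support is necessary in this way.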
The ties of warfare to mass support are an old subject in the analysis of military history. The armies of revolutionary France made up for their lower professional military competence by being able to mobilize larger fractions of the total population in a levée en masse, and by presumably harnessing revolutionary fervor in big battles and contests of endurance. The large popular armies of World War I and World War II similarly reflected the more total mobilizations of an entire nation at war.
Yet even where wars after 1789 were no longer fought by smaller bands of mercenaries and professionals, it would have been absurd to claim that battlefield victory in a tank versus tank encounter reflected European popular will, or that the outcome of the Battle of Britain proved that the British cared more for democracy than the Germans cared for Nazism. Popular involvement in warfare is relevant on many dimensions, including the willingness to pay costs in warfare, the issues of military subordination to civilian control, the balance between human and material inputs in combat, and the more general questions of military competence; but none of these quite make the outcome of a campaign seem somehow equivalent to an election contest.
Mao’s second contribution, in addition to the alleged ties to popular support, was to project a dialectical analysis of guerrilla warfare, somewhat parallel to Marx’s original dialectical vision of Communist revolution. According to Mao, what might have seemed a conventional battlefield victory for the ordinary armed forces in the initial stages of a guerrilla encounter was only the sure sign of their ultimate defeat, and what might have seemed to be a total defeat for the guerrillas, as they had to flee the battlefield, was a sure sign of their ultimate victory. This certainly was a great morale-booster for Communist guerrillas at the outset of a war. And it was also a great morale-sapper for Americans and others enmeshed in a long, drawn-out guerrilla onslaught. To drive the enemy from the field and to claim a great victory was seemingly to fall into a trap; once dispersed, the guerrillas planned new ambushes, thereby drawing U.S. forces deeper into the morass and ever closer to exhaustion and defeat.
The two parts of the Maoist interpretation of guerrilla war were thus very demoralizing for any typical American. This was the kind of war where an initial success was not a sign of ultimate victory, but instead of ultimate defeat. This was the kind of war which by its very outbreak suggested that the opposing side had the people behind it.
Some analysts of guerrilla war saw all of the above as merely adding some important caveats to the means employed for conventional warfare. In addition to deploying better tanks and helicopters, one had to “win the hearts and minds” of the people. This surely was good advice, in that one presumably always does better in a ground war if the local people are friendlier to one’s own side than to the enemy’s. Yet, if one ever concluded that this popular support was decisive in such a contest, one had rounded the corner to a point where the entire political legitimacy of one’s campaign became open to question.
Other analysts, standing a little further back from the problem, questioned the entire Maoist premise that popular support determined the outcome of guerrilla wars as if such a war simply amounted to a violent form of democratic election. The advice given to President Johnson by W. W. Rostow, as the United States persisted in the Vietnam War, was an example of such a counter-analysis. Rostow argued that guerrilla tactics are in effect alternative methods for waging ordinary warfare, effective here and ineffective there, just like armored warfare or amphibious warfare, in no sense proving the wishes of the people.27 Supporting such an analysis, one could cite the cases of two opposing sides engaging successfully in guerrilla tactics in the same territory, each disrupting normal life, each ambushing the other. One example would come from the Arab underground and the Jewish Haganah taking turns attacking British forces at the end of the British mandate in Palestine, and another from the IRA facing off against the Protestant Ulster underground in Northern Ireland. Even with the “new math,” it was not possible that both sides had the majority of the people behind them.
Yet such a skeptical analysis of the popular validity of guerrilla attacks did not carry the day in Vietnam, as a large slice of Americans came to conclude that the war was imposing too many casualties (guerrilla war is almost always an endurance contest, a contest of who can stand the pain longer, with the more stubborn side prevailing), and that the United States had backed the wrong side, the unpopular side.28
The United States did not hold on in the Vietnam endurance contest, but rather lost as students and others rallied to oppose the war, as the morale and military effectiveness of Army units fell, as draftees sometimes threw fragmentation grenades at their officers, and as the professional military more generally felt betrayed by a political establishment that had radiated an enthusiasm for “counter-insurgency” as the key to enhancing U.S. conventional warfare capabilities.
Democrats had criticized the Eisenhower administration and the Republicans for being too reliant on extended nuclear deterrence, and not taking conventional warfare seriously enough. When many of the same Democrats then turned to denouncing Vietnam, and to blaming the military for the damage that a protracted guerrilla war inevitably entailed, the military’s feeling of betrayal laid the groundwork for a substantial identification in the future with the Republican Party.29
THE ALL-VOLUNTEER FORCE
As a related consequence of the Vietnam War, the draft was terminated, in favor of the All-Volunteer Force (AVF) system that has governed military recruitment ever since. The shift to the AVF drew the support of a very unlikely coalition, uniting people like Milton Friedman on the right and Noam Chomsky on the left—people who would never otherwise have had a cause in common.
After Vietnam, leftists tended to condemn all military preparations as evil. A great deal of campus opposition to the Vietnam War had indeed been tied to the prospect of being drafted to fight in that war, or to the guilt of having evaded being drafted (as someone else was then forced to go), with student deferments being key to escaping the war.
American rightists were what the rest of the world would style “liberals” (a term that has a very different meaning within the United States), people who favor voluntarism over coercion for all social decisions, who trust free choice and markets over government “command-economy” decisions at every bend of the road. Someone like Milton Friedman would qualify as a “liberal” almost anywhere outside of the United States, while being called a “conservative” at home.30 When Americans speak of a “conservative” approach to military-manning issues, they thus sometimes draw together two extremely different analytic viewpoints: the free-choice backers of the AVF system, and “social conservatives” (Charles Moskos might be one example) who see military service (and perhaps other forms of “national service”) as a “duty,” something that young Americans “should” take on.31
The ending of the draft made sense on many grounds, but was debatable on others. For the shorter term, as conceived by the Nixon administration, it might have been the only chance the president had of maintaining an American willingness to stay in Vietnam. Opinion polls showed that student and other opposition to the war correlated closely with the answer given to the question “Do you expect to be drafted to fight in Vietnam?” As positive answers to that question diminished, the vehemence of the opposition to the war also decreased. If the war were henceforth to be fought by volunteers, by people who had chosen to serve rather than being coerced into the military, the immediate felt cost of the war for the American public declined. If the Navy and Air Force could now assume the main burden of sustaining the American commitment, punishing Hanoi with air attacks, while the South Vietnamese themselves handled the bulk of the ground fighting, there might be a chance of maintaining some resistance to the Communist guerrilla offensive—at least so Nixon calculated. Until Watergate brought an end to the Nixon presidency, the calculation was not a preposterous one.
When the final collapse of the Saigon regime came, it was hardly an episode of guerrilla war. As Soviet-built tanks smashed through the gates of the Presidential Palace in Saigon, they marked the culmination of a classic conventional offensive, complete with pincer attacks and armored breakthroughs, with U.S. aircraft unable to punish Hanoi as before, and unable to meaningfully reinforce the South Vietnamese defenders. If the Vietnam War began as something that U.S. policymakers chose to see as other than a conventional war, it certainly ended in a conventional victory for the North Vietnamese Communists.
American popular involvement in warfare, in terms of economic sacrifice and in terms of personal risk in exposure to combat, had been an important variable for the entire twentieth century. World War I and World War II had both required mass mobilization and compulsory military service—for a time seemingly hallmarks of the American way of war. The shift to the AVF might have reflected some positive hope that newer technologies could yield a new way of war, thereby reducing the need to expose large numbers of young men to the hazards of combat. But in the near term it also reflected a sense that, at least so far as an ambiguous guerrilla war was concerned, Americans were simply unwilling to pay such a price.
CHANGES UNDER REAGAN
The end of the Vietnam War thus gave rise to widespread doubt about whether the United States was willing to accept the casualties and other costs of any future wars, and whether it could compete effectively in contests of resolve with Communist opponents. The immediate aftermath of the fall of Saigon in 1975 suggested falling dominoes around the world, with Communism spreading, containment not holding, and the United States losing rather than winning the Cold War.32
If the election of Ronald Reagan as President in 1980 reversed this perception, it was hardly because Americans had suddenly gotten over their aversion to casualties, or their reluctance to commit forces to conventional combat. The Reagan Republicans continued the Nixon experiment with the AVF. Although U.S. defense spending increased under Reagan, and although, in a manner that no one really predicted, the Communist hold on Eastern Europe collapsed, leading to the eventual breakup of the Soviet Union and the termination of Communist rule in Moscow, the effects of the so-called “Vietnam Syndrome” lingered. Under the terms of the Weinberger and Powell “doctrines,” even the discussion of a prospective intervention needed to be accompanied by an agreed-upon “exit strategy.”
Fearing that the Soviets were gaining a decisive edge in strategic weapons, the Reagan administration had responded with the Strategic Defense Initiative, which promised to protect American cities against ballistic missile attack but could probably never have done so. What SDI did deliver, however, was a tremendous enhancement of high technology—especially information technology—applicable to all manner of warfare, including conventional warfare. As the Soviets sought to keep pace with this technological competition, their economy basically cracked under the strain.
Reagan might thus well be credited with rallying the West after its post-Vietnam malaise, leading to a surprising Western victory in the Cold War, a mere fourteen years after what had seemed like such a colossal defeat. But the victory was achieved in terms of technology and economic resources, rather than by enduring the horrors of combat. For Americans, it was a bloodless victory—and all the more welcome for that. When it came to conventional war, Americans remained committed to approaches that promised to minimize the number of their young men and women likely to be killed or wounded.33
DESERT STORM
The immediate aftermath of the surprising Western victory in the Cold War saw another surprise, as Saddam Hussein’s regime in Iraq invaded Kuwait. Saddam probably expected that a world celebrating the end of the Cold War and the presumed end of threats of thermonuclear war would shrug this off as a simple rectification of a historical injustice. In a sense, the challenge to international law and order posed by the invasion of Kuwait very much recalled what the League of Nations had faced in the actions of Mussolini and Hitler during the 1930s. A world committed to peace is very vulnerable to an aggressor that claims to be beset with grievances and demands their rectification, through any means necessary. If the alleged perpetrator of injustice does not resist the aggression, the common sense intuition of the world is that war has not yet broken out, as when the Kuwaitis did not resist the Iraqi invasion, or as when the Czechs in 1939 did not resist the German occupation of Prague. To defend the peace, truly peace-loving nations must thus paradoxically choose to go to war.
Clausewitz captured the essence of the logical problem best in his comment that “the aggressor is always peace-loving.”34 President George H. W. Bush very effectively maneuvered around the paradox here, in mobilizing American and global resistance to the Iraqi invasion. And the advanced technologies that had been developed in the previous decade—precision-guided munitions, greatly improved reconnaissance and intelligence capabilities, and “stealthy” aircraft chief among them—allowed the United States and its allies to win a conventional war quickly and at a very low cost in casualties.
Defeat in Vietnam, the surprisingly swift turnabout that produced victory in the Cold War, the startling challenge by Saddam Hussein, and the surprisingly quick and effective solution in Desert Storm: all of these combined to transform American thinking about conventional war. A new American way of war had emerged, based on several expectations: that future conventional wars would not require the mass mobilization of the younger population; that wars would be fought by volunteers and would entail low casualties; and that they would emphasize the application of high technology. If this new way of war allowed many American warriors in uniform to remain more remote from the battlefield, all the better. If it allowed all of the American public to feel remote from that battlefield, that was what Americans felt they needed.
CONVENTIONAL WAR AT SEA
The naval component of conventional warfare has had an interesting evolution of its own over the years since World War II. The United States Navy after the defeat of Japan was larger than all the other fleets of the world combined, a degree of preeminence comparable to what the Royal Navy had achieved after Trafalgar. For a considerable portion of the Cold War, this American superiority was so well established that little attention had to be devoted to naval strategy. The Soviet Navy seemed a minor adversary, expected mainly to be confined to supporting the flanks of the advancing Soviet ground forces, as it had been during World War II.
The years from 1945 to the end of the Cold War, and indeed to the present, saw almost no conventional warfare at sea, even while the U.S. Navy at times was heavily engaged in supporting warfare ashore, in the Korean War and in Vietnam. With the interesting exception of the South Atlantic War between Britain and Argentina over the Falklands, there would be no attacks on warships on the high seas. The largest ship sunk in combat since the end of World War II was the Argentine cruiser General Belgrano, a former U.S. Navy vessel and survivor of Pearl Harbor transferred to Argentina after World War II.35
“Conventional war” involving naval vessels, and naval aircraft flying from those vessels, was confined to the land and to the immediate coastal waters, perhaps out to the twelve-mile limit of sovereignty. But on the high seas and over the high seas, a bizarre peace prevailed throughout the Cold War, even as Soviet and American warships circled around each other, and as Soviet and American aircraft flew past each other.
Anyone predicting this absence of war on the high seas in 1944 would have been dismissed as wildly out of touch with reality. Throughout history, when warships have been built and deployed, they have come into use. Moreover, when conventional wars have been fought on land, the antagonists have typically carried their quarrel onto the high seas.
One could explain this absence of conventional conflict on the high seas in a variety of ways. One obvious explanation was the postwar U.S. monopoly of naval power (although that monopoly did not extend under the seas: the Soviets in 1945 already had a considerably larger number of submarines than the Germans had possessed in 1914 or in 1939). In any event, this explanation became less persuasive by the 1960s and 1970s, as the Soviet Union invested heavily in surface combatants, and after a time, even in aircraft carriers.36
A different explanation might cite the “tactical” nuclear weapons deployed on warships of the United States Navy, and also the Soviet Navy. Attacks on such ships carried the risk of “going nuclear,” just as with “tactical” nuclear artillery on land, with a risk then of all-out escalation to a nuclear holocaust. In this sense, the presence of nuclear weapons onboard may have inhibited attacks on U.S. warships. Notably, even when lucrative targets such as American aircraft carriers were regularly conducting air operations against Communist forces during the Korean War or during the Vietnam War, none came under attack.
Yet another explanation might note that the entire process of fighting “limited” conventional wars after 1945 was counter to all of military tradition and intuition. Once engaged, limited wars established their own rules, with the adversaries learning how such wars could be fought again, what weapons the other side would use, and what weapons it would not. This was the process of “tacit bargaining” described so well by Thomas Schelling.37
It might thus simply be a major historical accident that the antagonists in the Cold War learned how to fight limited conventional wars with each other on land, without very much risking World War III in the process, and did not learn how to do this at sea. As an interesting counterfactual, one can imagine a post-1945 world in which just the reverse pattern of accidents had occurred, where all the wars between the opposing sides were fought at sea, as a pattern tried out once could be reused again with less risk of all-out escalation, and where none of the fighting occurred on land.
As things actually happened, the Navy’s experience with conventional war since 1945 has not been comparable to the battles against the Japanese Navy or German submarines in World War II, but has entailed a partnership in the conventional wars fought on land and over the land, and perhaps just off-shore or in inland waterways. This remained true even after the United States ceased to hold the degree of naval superiority it had attained in 1945, and beginning in the 1970s faced a burgeoning Soviet naval presence in every ocean of the world.
U.S. Navy spokesmen utilized the mid–Cold War growth of the Soviet Navy to conjure up scenarios of naval battles on the high seas, thereby bolstering their call for renewal and enhancement of American naval capabilities after Vietnam. As always, such preparations for contingencies were entirely plausible, but no such battles occurred. As a consequence, some of the theories of naval warfare and the technical preparations remained untested. Navy officers took fire when flying missions over the land, or when taking small “riverine” boats up channels in combat with an enemy ashore. But they could sleep well at night when they were back on board the major ships of their fleet.
BUREAUCRATIC POLITICS
In the agony and the aftermath of the Vietnam War, many political scientists in the United States became quite attracted to theories of “bureaucratic politics,” and to analyses of the defense policy process that made the interplay of the separate armed services look pathological and wasteful, rather than an attempt to genuinely serve the national interest. By some such theories, each of the military services was inclined mainly to look for increases in its budget, and for accompanying increases in mandate, with this explaining the procurement of unnecessary weapons and the otherwise seemingly irrational plunge into Vietnam. According to other versions of such theories, the services were simply wedded to their own traditional ways of doing things, their own “standard operating procedures,” clinging to outdated fighting platforms—strategic bombers, tanks, large aircraft carriers—in ways that conflicted with the true national interest.38
Such theories of bureaucratic behavior, of course, found application in the civilian side of government as well. Indeed, they could be employed to explain the infighting among academic departments at any state university or even a private university. Yet the enthusiasm for such theories stemmed heavily from military examples, including many examples from the Vietnam War.
On an ad hominem basis, one could discount these theories by noting that their proponents tended to be people who regarded John F. Kennedy as their favorite president since Franklin Roosevelt, and who at the same time hated the Vietnam War,39 in part because many of them had feared having to fight in it. Reconciling one’s love of Kennedy, and one’s loathing for the disastrous war that had loomed up on his watch, was no easy task; one solution was to devise theories by which the bureaucracy was actually functionally insubordinate, pushing its own ventures, refusing to give the president real advice on options, offering only options that suited the services’ purposes, and leaking information to the press whenever the elected president ruled against service interests. Theories of bureaucratic politics let the president off the hook. Some of the motivation imputed to the lower-level policymakers was inevitably real, and not such a new story. An expanding budget or agency usually increases the prospects of promotion and produces more gratification. In the private sector, it is similarly well understood in microeconomics that individual firms wish to pursue profits, and serve the consumer only because the workings of a competitive market force them to take consumer preferences into account as the way to earn those profits.
The real question for the military, or for the entire government, was always whether generals, admirals, and bureaucrats were really so free to pursue their private or parochial goals and thereby to ignore the national interest. Although the “command economy” of government could most probably not address consumer demand as effectively as the market does for the private sector, the processes of the executive branch and the oversight of the legislative branch nonetheless provided many mechanisms by which the wishes of the voters, the will of the people, and the national interest could still be implemented.
Most of the examples chosen by bureaucratic politics theorists came from the military, and indeed from the American military. There were a number of possible explanations for this. First, anecdotes were easier to obtain from American examples than from foreign countries, because the entire defense policy process in the United States has always been leakier and less respectful of the classification rules of secrecy. Related to this, it was easier to pick up stories told in English rather than in some foreign tongue.
Another explanation is a bit more complicated, however. It was entirely possible that bureaucratic politics theories also explained the behavior of enemy military services, as the Soviet Navy or the Chinese Communist navy would likewise be looking for excuses to build ships or purchase anti-ship missiles. But if such potentially adverse navies were thus, for no good reason, augmenting their war-fighting capabilities, the United States Navy would then be facing a real threat, and not just an imaginary one. For bureaucratic politics theorists to illustrate their analyses with adversary examples would thus have been to undercut their basic argument, that American defense spending was higher than it needed to be, that there was no real threat to which the U.S. armed services were responding.
Bureaucratic politics analyses thus probably captured more attention than they really deserved, in large part because they reflected personal feelings about the Vietnam War and the prospect of having to fight in that war. The arguments made were certainly fair, that military bureaucrats (and other bureaucrats) might want to serve personal agendas rather than the national interest; but what was left out were all the counter-levers that can be applied, in any government around the world, to steer results back toward the national interest.
FURTHER INTEGRATION
Theorists of bureaucratic politics were hardly the only opponents of the Vietnam War who shared the view that the separate military services preparing for conventional war were too concerned with their own independence and separate agendas. This criticism had been voiced before, during, and right after World War II. Its revival after Vietnam became a major driver of the congressional intervention that produced the Goldwater-Nichols Act.
As noted from the very outset of this chapter, the United States came out of its victory in World War II believing that a major problem of future conventional warfare was likely to be one of coordination—drawing together the Navy and the Army, the very much expanded Marine Corps that had emerged during that war, and the various component air forces. Some such problems of coordination are the inevitable concomitant of exploiting specialization. For a navy or an air force to be most effective at what it does, it inevitably must offer less than the maximum of coordination with another military service doing what it does, and this is no different from the inherent problems of coordination between the Department of Justice and the Department of Agriculture, or, at a university, between the departments of English and Economics.
To paint all insufficiencies of coordination as selfish bureaucratic intrigues or mindless “organizational process” is to make the entire process look too pathological. Yet, as noted at the outset, the revolutionary lesson of most of World War II (before Hiroshima and Nagasaki seemed to impart a very new and different lesson) was that the conventional warfare balance between coordination and specialization had to shift, that more coordination was now becoming essential.
Whether insufficient progress in this direction was a major explanation for the American failure in Vietnam is debatable. As noted, the special feelings of political scientists about Vietnam may taint their perspective on that war.
Yet, to pick a much smaller example, the 1983 American intervention in Grenada showed glaring examples of excessive autonomy among the four American military services participating in the operation.40 In what seemed almost analogous to the Soviet–American lines drawn across Germany in 1945 to avoid friendly-fire casualties as the last Nazi resistance was being overcome, the Pentagon had to draw a line across the tiny island of Grenada, a country geographically smaller than the District of Columbia, to separate the Navy and Marine Corps sector in the north from the Air Force-Army sector in the south. The lack of interoperability was so bad that at one point telephone calls had to be patched through AT&T switchboards back in New York to allow communications between the two halves of the American effort.
Senator Barry Goldwater (R, AZ) was neither a dove nor a radical. He had never doubted the justice of the American cause in Vietnam. He had a well-deserved reputation for being generally quite sympathetic to the American military. But after Grenada, he reflected the feelings of many Americans: the time had come for legislative action to require greater coordination and “jointness” across the armed services.
The services resisted this effort. It is a fair statement that services will always seem to be resisting pressures for coordination and integration. Some of this will reflect nothing more than the fact that coordination can always be overdone, that the benefits of specialization can be lost in the process. Where resistance reflects more than this elementary caution, where it is the product of bureaucratic turf-protection or simple inertia, it is more easily condemned. But some of the accusations of such motivations may indeed be misplaced.
The Goldwater-Nichols Act was a major legislative attempt to overcome the barriers to interservice coordination noted above. At the very top of the structure, it added to the powers of the Chairman of the Joint Chiefs of Staff, as compared with the separate service chiefs, adding the post of Vice Chairman, and designating the Chairman as the principal source of military advice to the Secretary of Defense and to the President. At lower levels, it imposed requirements of joint-service experience for the promotion of officers to higher rank. Analysts would generally agree that the Act contributed to the goal of “jointness” and integration, while they would at the same time agree that the problems of separation have not been totally solved.
THE “REVOLUTION IN MILITARY AFFAIRS”
Some of the demand for greater operational coordination of the military services in conventional warfare stemmed from a demonstrable need for such coordination, evident ever since World War II and affirmed in subsequent conflicts. By the 1980s, the emphasis on ever greater coordination—the need to integrate and synchronize—took on a somewhat different rationale. In the eyes of some military theorists, the greater possibility of such coordination, emphasizing the use of computers and remote sensors and enhanced communication systems, held the promise of reducing—almost to the point of eliminating—the inherent confusion of the “fog of war.” Much of the enthusiasm for what became known as the “revolution in military affairs” (RMA)41 stemmed from the calculations of the future of conventional war produced by both sides during the last decade of the Cold War.
In fact, the Soviets were out in front of the Americans in theorizing about the RMA, with the concept of a “military revolution” stemming from their apprehensions of what might become available to the United States and its NATO allies as the Reagan administration invested heavily in the electronics and other technologies of what was labeled the Strategic Defense Initiative (SDI). Even if the chance of successfully defending American cities against Soviet nuclear missiles was very slim, Soviet military leaders feared that the application of these technologies to the conventional battlefield in central Europe might provide the West with a quantum leap in its conventional capabilities, tilting the military balance decisively in favor of the U.S. and its allies. The fears outlined by Soviet Marshal Nikolai Ogarkov, once translated into English, became a source of instruction for American defense planners. In 1991, some of these Soviet fears seemed to be realized (and the full potential of the RMA seemingly demonstrated) as the American-led coalition in Operation Desert Storm easily defeated the Soviet-equipped Iraqi tank forces of Saddam Hussein.
Bringing to bear virtually real-time knowledge of the battlefield, in precisely targeted attacks coordinating missiles, aircraft, and ground forces, the RMA promised to transform conventional warfare. To begin with, the United States and its allies might now actually have a clear edge in conventional warfare, after all the Cold War decades in which this advantage had been conceded to the Soviet bloc because of the sheer numbers of tanks and troops the Communists could deploy and because of the central geopolitical position they held. The advantage belonged especially to the United States, which was far ahead not only of the Soviets and the Chinese in the battlefield application of information technology, but also of Britain, France, and Germany. All the earlier reasoning had required the addition of threats of nuclear escalation as a compensating equalizer to respond to the supposed Communist advantage in conventional war. But now with the introduction of the RMA in the 1980s, and especially in the 1990s after the dissolution of the Warsaw Pact and the breakup of the Soviet Union, it appeared that U.S. conventional forces held the advantage.
Second, and closely related, the logic was that the United States might thus want to move away from postures of “flexible response” involving an implicit threat of nuclear escalation, with the Russians conversely backing away from their endorsement of “no first use” of nuclear weapons. The side with the conventional disadvantage was necessarily reluctant to forfeit the nuclear card, while the side with the conventional advantage might wish to exclude nuclear weapons from the game.42 After Desert Storm, for the first time since 1945, the United States found itself in a position where, viewed from an American perspective, nuclear weapons were losing their rationale.
The United States did not totally shift to an endorsement of “no first use” of nuclear weapons or formally renounce all threats of nuclear escalation, even though many American analysts contended that it was high time to do so. For some (like former Secretary of Defense McNamara), the inherent risks of an all-out escalation to thermonuclear war had always seemed to outweigh any benefits; now the RMA seemingly made such risks altogether unnecessary, since U.S. conventional forces had become so dominant.
In actual practice, even after the Cold War, the United States, perhaps because of sheer inertia, or because of the inherent uncertainties of predicting conventional war outcomes, retained the nuclear escalation option. But the shift in emphasis within the military services was unmistakable: conventional war was once again the track to promotion; nuclear warfare assignments were not. If this had always been true for the Marine Corps and the Army, it was now similarly true for the Air Force, where duty with the Strategic Air Command, succeeded after the Cold War by Strategic Command, lost most of its luster compared with involvement in tactical support for the conventional wars of the future. As confidence in achieving a decisive conventional victory increased, the Army and Navy were quite happy to divest themselves of “tactical” nuclear weapons.
EVOLVING AMERICAN POPULAR ATTITUDES
As a third major consequence of the RMA, the promise of elegantly applied information technology, as demonstrated so aptly in Desert Storm, reinforced the general American leaning to low-cost war. The American desire to avoid casualties had been evidenced already in the protests against the Vietnam War, and in the elimination of compulsory military service. The advent of the RMA now seemingly endowed the United States with a greater ability to avoid such casualties.
It is very hard to be opposed to lowering casualties in warfare, since all our humane instincts support this. Our sense of morality may be troubled a bit more, however, when only the other side suffers casualties, as when the retreating Iraqi Army was decimated on the “highway of death” in 1991 while we and our allies escaped the human costs of war; the aftermath of Desert Storm thus induced a fair amount of speculation about the possibilities of nonlethal warfare.
More broadly, the new technological possibilities embodied in the RMA made all of warfare seem a bit more remote, reducing the involvement of American human beings generally and raising concerns about moral detachment, as Americans could have their country fight wars without becoming very much involved themselves, except as spectators.43
There are many moral and practical reasons for sparing one’s population the horrors of being involved in war. But there are some political and moral problems that may emerge when, because of a reliance on a smaller number of professionals rather than a larger sample drawn from the entire country, and because of a reliance on the latest in technology, a country can go to war without the bulk of the population having their lives disrupted in the process.
A fourth consequence of the breakthroughs of the RMA, and the resulting enhancement of American conventional military capabilities, was to reinforce older American prejudices against multilateralism, against international organizations like the United Nations, and against allies in Europe or elsewhere. If American ground and air forces were now more than a match for the former Soviet Union, or anyone else for that matter, then the assistance of allies like France or Germany was no longer necessary. If the governments in Paris or Berlin were not inclined to go along with American policies in some crisis, so be it. From the 1950s through the 1970s, the United States might have needed allied help to hold back a Warsaw Pact onslaught. But in the post–Cold War world, with the application of breakthroughs in information technology, many Americans began to feel much less need for the support of such allies.44 After 9/11, this resurgent unilateralism emerged in full form.
“MOOTW”
In the aftermath of the Cold War, military planners in the United States saw some risk of conventional war along the lines of Saddam Hussein’s aggression into Kuwait. But the logic of “collective security” was that a resolute resistance to aggression ought to deter such actions in the future. America’s apparently unprecedented conventional dominance raised the prospect that the Pentagon’s vast conventional military capabilities might remain unused for longer periods of time. After Operation Desert Storm, who would presume to mount a direct military challenge to the United States?
An armed force that thus has to be ready for conventional war, but does not fight such wars much of the time, must always concern itself, if for no other reason than to serve the public well, with the other missions it can perform. This concern to demonstrate the utility of conventional U.S. forces manifested itself during the last decade of the twentieth century in “military operations other than war” (MOOTW).45
The end of the Cold War, and the breakup of the Soviet Union, promoted a great deal of speculation about whether wars and military preparations in general might become a thing of the past. As the threat of a thermonuclear holocaust had been largely lifted from the world, so presumably was the prospect of tank battles and amphibious invasions, with some analysts seeing a much reduced role for the sovereign state more generally, and thus a much reduced role for military forces.
A debate about whether the end of the Westphalian system of separate sovereignties was at hand promoted speculation that “soft power” was about to transcend “hard power” in importance, with trans-boundary economic interactions and cultural interactions becoming more important than military power, just as “people power” was ostensibly becoming more important than the decisions made by states.
These post–Cold War expectations were challenged immediately, of course, by Saddam Hussein’s invasion of Kuwait. The resolute world response to this invasion demonstrated that military forces were still required to fend off the aggressors of the world, and one saw the bizarre spectacle of the Pentagon reducing the size of its conventional forces in 1990 even while it had to work overtime to prepare the counter to Iraqi aggression. If the precedent set by George H. W. Bush’s “New World Order” was established well enough, however (the precedent that Woodrow Wilson’s League of Nations had been designed, unsuccessfully, to maintain), the actual punishment for aggression would rarely have to be imposed, and the conventional military would still be a lot less in play than before.
Facing budget cuts and such predictions of a diminished relevance of military force, the search for “military operations other than war” would have seemed entirely predictable to anyone familiar with theories of “bureaucratic politics” by which every agency scrambles to maintain its budget and mandate, even when the real need for its services is diminishing.
In fact, however, the proposed new array of missions was not simply the result of the Pentagon’s looking for an excuse to stay in existence. The array of real problems and issues emerging with the end of the Cold War indeed suggested a number of new functions that uniformed forces alone could accomplish. Many of the material preparations for conventional war turned out to have great value in the other roles now proposed for the military. Beyond these warfighting capabilities were other qualities inherent in a competent military organization, not least the ability to receive an order dictating that people “be there at 0400” and to have everyone show up, possessing the right kit, and ready to move out in unison.
So the end of the Cold War did not render U.S. conventional forces obsolete. Instead it confronted them with a host of new tasks. The various missions here were sometimes all collected under the broad label of “peacekeeping,” as many of them were conducted under the auspices of the United Nations Security Council. “Peacekeeping” in its narrower sense, as first practiced in the Suez Canal zone in 1956, had involved the use of conventional forces for interposition. Bringing along very little in the way of armament, but exercising their ability to move in unison and to establish a visible uniformed presence, battalions offered from the armies of nine UN members deployed between the opposing sides in the 1956 war between Egypt and the forces of Britain, France, and Israel. The aim of this unique gambit was to make it difficult for the opposing sides to continue fighting, for fear of hitting the symbols of the world community.46
Unlike the model of collective security, international military forces did not align with either side, did not defend a “victim” against an “aggressor,” but instead merely made war difficult or impossible by getting in the way. Rather than wearing a maximum of camouflage for effectiveness in combat, “peacekeeping” units altogether abjured camouflage by wearing United Nations light blue headgear and patrolling in vehicles that were painted white. Rather than achieving their mission by shooting (this would indeed have been the sign of the failure of the mission), they would achieve it by remaining in place.
While no United States forces (and no Soviet forces either) were to be so deployed during the years of the Cold War, NATO forces from Canada, Norway, Denmark and the Netherlands participated in such missions. Behind the scenes, the United States military’s capacity for conventional war did play a substantial role, as the logistical requirements of such “peacekeeping” missions were handled in large part by the United States Air Force.
“Collective security” had amounted to a continuation of conventional warfare, with the important difference that the missions would be launched under an international mandate in response to an aggressor whose identity could not be known in advance. But “peacekeeping” (a very useful misnomer) had amounted to something very different: the interruption of a conventional war already underway by the internationally sanctioned deployment of a third “army” between the two that were at war, a third army not very prepared to fight, but symbolizing the world’s desire for peace. A glance at color photographs of the participants in these new missions would presumably tell which was which. The “peacekeepers” would wear blue hats or blue helmets. The participants in collective security would be in full camouflage and helmets for battle.
Other new missions for U.S. conventional forces in the aftermath of the Cold War exploited unique American logistical and command and control capabilities. In the wake of earthquakes or typhoons, “humanitarian assistance” frequently benefited from the nearby presence of an American aircraft carrier or air base. U.S. forces were employed to rush food and water to large numbers of people. Few governments or people objected to this use of American military forces.
But some of the new problems in the world after 1990 were more man-made. Ethnic strife erupted in many of the areas freed from Communist rule, destroying any illusions that Marxist indoctrination might somehow have eradicated the rivalries of the past. Ethnic conflict, most notably resulting from the breakup of Yugoslavia, produced new responsibilities for outside military forces, including those of the United States. The result would be labeled “humanitarian intervention” rather than humanitarian assistance. The aim was to prevent one group from massacring another, to shelter minorities and escort them to safety, and, if the only way to end ethnic strife was through ethnic separation, to police the plebiscites held to determine whether a majority of any particular district wanted to belong to one country or the other. Some of these new missions required an ability to engage in conventional violence when international authority was challenged, while others still depended heavily on the military’s capacity for concerted movement and efficient logistics. Some entailed functions at so small and local a scale that they would have been better handled by military police than by armored units, or by normal police instead of the military.
All in all, the attention paid to MOOTW in the 1990s thus reflected real needs, as well as bureaucratic searches for missions. The collapse of the Warsaw Pact and the Soviet Union had indeed made tank battles less likely in general. The needs of collective security nonetheless dictated that some conventional warfare capability be maintained. In truth, with many traditional U.S. allies eager to trim their military budgets after the Cold War, being prepared for any large-scale conventional war fell more and more on the United States. But the day-to-day business lay elsewhere, in addressing the disappointments to which the end of the Cold War gave rise and which kept the world in considerable turmoil.
RESPONDING TO 9/11
The persistence of terrorism qualified as one of those disappointments. During the 1990s, U.S. conventional forces were already finding use in the preliminary skirmishing with violent radical Islam, as in the 1998 cruise missile attacks on al-Qaeda training camps in Afghanistan. But these operations remained peripheral, not even rising to the magnitude of normal guerrilla war. In efforts to thwart terrorists, military units would always be of some value, for example in guarding sensitive targets, but few analysts saw anti-terrorism as a major mission for conventional forces before 2001.
Before the al-Qaeda attack on New York and Washington, the dominant view was that terrorists would never wreak mass destruction. Their presumed purpose was to win sympathy for their causes, or to become governments themselves. Employing chemical or biological weapons, or nuclear weapons, against innocents seemed unlikely to advance such aims.
When confronting such non-WMD terrorist attacks, states put considerable effort into locking up the weak spots, making it more difficult for terrorists to hijack airliners. As noted, these protective efforts might generate some work for the “conventional” military as well. But until 9/11, the preferred solution was to ignore the discomfort imposed by such terrorism as much as possible, i.e., to frustrate the terrorist attempt to win attention and sympathy.
This all seemed a little analogous to the problem many parents face when their children are being bullied in the schoolyard. One advises the child not to break into tears when harassed by the bully, but to pretend not to be bothered, which, after a time, will lead the bully to shift to someone else as a target. If all the students behave this way, the bully may give up his sadistic inclinations, and perhaps do something useful like trying out for the football team. One’s child will protest that it is difficult to hide the pain when being bullied, but one coaches the child on the importance of not giving away one’s true feelings in the world of politics and strategy.
Societies were imperfectly capable of minimizing the visible discomfort they felt in the face of terrorism, just as they were imperfectly capable of locking up all the entry points at which terrorists could inflict pain, but, prior to September 11, 2001, the consensus might well have been that the problem was manageable, that terrorism was losing its ability to influence the flow of politics, and that no great military effort would be entailed here.
All of this changed drastically, of course, with the attack on the World Trade Center and on the Pentagon. Airplanes were hijacked here not to hold passengers hostage in order to get attention for one political grievance or another, but to employ such airliners as guided missiles to destroy major buildings, taking thousands of lives. Any notion that there was an inherent upper limit to the damage terrorists were willing to inflict was now disproved in a matter of minutes.
The terrorists now seemed immune to the normal deterrence of retaliation, as underscored by the fact that the actual skyjackers all committed suicide by crashing the hijacked airliners directly into buildings. If terrorists could not be totally stopped by defenses (previous assessments of the terrorist challenge had assumed that skyjackers would, once in a great while, be able to bypass airport checks and seize an airliner), and if they could not be deterred, the concomitant was that the United States might have to initiate conventional warfare to penetrate the country from which they were launching their attacks, so as to root out their bases. If the outside world was later to question the American prerogative of launching preventive wars and preemptive attacks, the shock of the 9/11 attack had actually seen most of the world backing the American right to invade Afghanistan, since the Taliban regime there was openly hosting Osama bin Laden and the al-Qaeda organization.
Rather than entailing “military operations other than war,” the terrorist menace, once it had moved to this higher scale of mass destruction and casualties imposed, thus substantially reactivated the primary mission of conventional forces: once again, as had been the case in World War II, the object of the exercise was to seize territory and terminate political threats to the American homeland. The subsequent debate on whether Iraq was also a part of the “war on terrorism,” about whether the launching of a preventive war by the United States was again justified, did not address the general question of whether or not after 9/11 the U.S. conventional forces needed to be configured to engage in such invade-and-occupy missions. The only question was whether Iraq was really in this category of necessary target.
Some analysts of international politics had seen the rise of terrorism as one more illustration of the weakening of the state and by extension of the declining relevance of preparations for war. After all, the argument went, terrorism reflected the same social injustices and sense of deprivation that also drove forward the popular demonstrations of “people power.” As transnational forces weakened and marginalized the prerogatives of Westphalian sovereignty, the same processes of economic interdependence that made separate countries less able to deal with economic issues also made the world more vulnerable to terrorist attacks.
But, when the terrorist attacks began to threaten the lives of tens of thousands of citizens, the response of the American government, or indeed of any other government so threatened, became very traditional again: it was to “rediscover” conventional war. What was new about much of the response to 9/11 was not the form it took, a use of air and ground forces, with support from the sea, in conventional warfare. What was new was the provocation, not conventional aggression by the armed forces of an opposing country, but a major terrorist attack. What was new, for Americans, was that United States conventional forces henceforth would have to initiate a conventional war, rather than responding in such a war. The political novelty of this might impose a major strain on how Americans see themselves, and on how the world sees the United States, but it did not really change the mission of conventional forces, for it was indeed a return to traditional military operations.47
In what came to be labeled the “global war on terrorism,” the phraseology was not as metaphorical or hyperbolic as in the “war on poverty.” In fact, the exercise of warfare as more narrowly understood was back.
SOME SUMMARY COMMENTS
To summarize some of the tendencies outlined here, conventional war has become much more complicated, in its interactions of weaponry, than it was before World War II, with the application of information technology eventually emerging as the preferred means of addressing such complications. During most of the Cold War, the deterrent implications of nuclear war overshadowed conventional warfare. As a consequence, such wars were fought much less frequently than they would otherwise have been—the period becoming, at least in some respects, the “Long Peace.” Yet it was the mere preparations for the contingency of such wars that played a large role in sorting out the political confrontation between Communism and liberal democracy.
As the Cold War ended, the risks of nuclear war seemingly declined, but predicting the likelihood of conventional war proved elusive. According to some analyses, the physical capabilities for such warfare could find use in missions other than war. The struggle against guerrilla war and against more minor terrorism had already demonstrated the potential for such adaptations, even as the Cold War continued. Yet in Vietnam especially, such demonstrations had come at a very high political cost. Because of the pain and frustration of guerrilla war, and for other reasons, Americans by the latter part of the Cold War had developed a fixed preference for minimizing the human exposure to combat and casualties that had traditionally characterized conventional warfare. At the end of this trail, the sudden increase in the level of destruction that terrorists were willing to inflict brought conventional warfare back to center stage, rather than leaving it as something that seemed as outmoded as Westphalian sovereignty.
The response to the new threats of terrorism thus hardly amounted to a marginalization of conventional armed forces, or of the traditional governments commanding them. Instead, the international organizations and nongovernmental organizations that had been touted to be such a major factor in the post–Cold War world became marginalized. Terrorism may indeed be one more transnational factor, but eliminating al-Qaeda’s ability to launch strikes from Afghanistan entailed a national exercise in conventional warfare by the armed forces of the United States.
Notes