Actual war is a very messy business. A very, very messy business.
—Captain James T. Kirk, Star Trek
Let me tell you a story. Partly it is a story about how what we today call defense analysis came to be. It is not the complete story; indeed, the complete story may be beyond our poor ability to relate or to comprehend. This simple version of the story is woven from three main strands: what we have come to term operations research, systems analysis, and wargaming. (In this context, however, I use the term wargaming to refer to its “serious” uses by the military and defense-analysis communities, rather than the hobby uses described earlier in this book.) Each of these strands, at one time or another, has been seized upon by decision makers as the key tool for understanding the critical realities of warfare—to make predictions about the future direction of war so that today’s decisions would lead to tomorrow’s success.
The rest of the story is this: war is too complicated and too critical for us to bet our future on the process or product of any one of those tools. The best we can do is to use all of our tools in an integrated way in order to understand the past, investigate the present, and prepare for the future. Decision makers need to learn how to learn—to learn about how to deal with an uncertain and unpredictable future; to learn about how to understand its complexities; and to learn about how to make good decisions today and tomorrow in spite of those complexities, uncertainties, and unpredictability. To do that, researchers, analysts, and professional wargamers must learn to use operations research, systems analysis, and wargaming, complemented and supplemented by a deep understanding of history and current experience, in a continuous cycle of research to educate, advise, and support those decision makers.
There are two theories, however, evolved by Mr. Lanchester to which I may safely draw attention. The first he has called the N-Square Law, and it is, to my mind, a most valuable contribution to the art of war. It is the scientific statement of a truth which, although but dimly perceived, has been skillfully used by many great captains, both Naval and Military, but it is now for the first time stated in figures and logically proved.
—Maj-Gen. Sir David Henderson, KCB, Aircraft in Warfare
The physical sciences have had a long and checkered relationship with the practice of the military arts. The mythic role of Archimedes in the defense of Syracuse against the Roman war machine, and his death at the hands of one of its legionaries, is one of the earliest stories. Because military force is a highly distilled application of physics and chemistry, the harnessing of scientific knowledge and engineering expertise to create, produce, and apply better and more destructive weapons has been at work from pre-biblical—probably prehistoric—times. The application of scientific techniques to the higher levels of warfare, operations and strategy, took some halting first steps in the geometric formalisms of Jomini’s analysis of Napoleon’s methods in Précis de l’Art de la Guerre (1838). Despite the criticisms and ultimate discrediting of that work, the lure of applying scientific and quantitative reasoning to the problems of military operations proved to be overwhelming.
At the height of the slaughter on the Somme during World War I, one of the legendary foundational pieces of modern scientific analysis of warfare made its appearance. Written by British engineer Frederick W. Lanchester, Aircraft in Warfare: The Dawn of the Fourth Arm (1916) proposed some simple mathematical models of combat in the form of two sets of simultaneous equations, which became known as the linear and square laws.
Lanchester characterized the linear law as representative of ancient combat, in which battle could be thought of as a series of individual duels. In this case, a numerical advantage could not be exploited fully because it was difficult for more than one warrior to engage a single enemy simultaneously. As a result, a side with a qualitative advantage in the skill of its individual fighters could offset a numerical advantage of its enemy. This is seen most clearly in the basic equation of the linear law: a linear equation describing the state of the combat in terms of the number of losses on both sides at any point during the combat and the effectiveness of each side at defeating its opponent. The state equation takes the form
(Effectiveness of Blue) × (Blue losses) = (Effectiveness of Red) × (Red losses).
If both sides fight to the finish, then the Blue army will win (that is, have at least one survivor when Red is wiped out) as long as
(Effectiveness of Blue) × (Initial Blue army size) > (Effectiveness of Red) × (Initial Red army size).
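To make the arithmetic concrete, here is a minimal numerical sketch of the linear law; the effectiveness values and force sizes are my own illustrative assumptions, not figures from Lanchester. It steps a duel-based fight to the finish and recovers the prediction of the state equation.

```python
# A toy numerical check of the linear law (all figures are illustrative assumptions).
eff_blue, eff_red = 1.5, 1.0   # per-fighter effectiveness
blue, red = 800.0, 1000.0      # initial strengths

dt = 0.001
while blue >= 1 and red >= 1:  # stop when one side is down to its last fighter
    duels = min(blue, red)     # only paired fighters are actually engaged
    # In one-on-one duels, each side's loss rate depends on the enemy's skill and
    # on how many duels are under way, not on the enemy's total numbers.
    blue, red = blue - eff_red * duels * dt, red - eff_blue * duels * dt

print(f"blue survivors ~ {blue:.0f}, red survivors ~ {red:.0f}")
```

Because 1.5 × 800 = 1,200 exceeds 1.0 × 1,000 = 1,000, the linear law predicts that Blue outlasts Red; the step-by-step attrition bears this out, leaving roughly 130 Blue fighters when Red is effectively destroyed.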
On the other hand, Lanchester characterized his square law as representative of modern warfare, in which effective long-range firearms would allow several soldiers to concentrate their fire on a smaller number of the enemy. In this case, the relevant state equation analogous to that above takes the form
(Effectiveness of Blue) × [(Initial Blue strength)² – (Current Blue strength)²] =
(Effectiveness of Red) × [(Initial Red strength)² – (Current Red strength)²].
Under the square law, Blue wins the fight to the finish if
(Effectiveness of Blue) × (Initial Blue strength)² > (Effectiveness of Red) × (Initial Red strength)².
Here, the size of the army plays a disproportionate role in determining the winner because its effect is squared while the effect of individual fighting power is not. Quantity has a quality all its own.
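The same toy figures used in the linear-law sketch above make the contrast vivid. Under square-law attrition each side's loss rate is driven by the size of the opposing force, so the outcome flips: the larger but individually weaker force now wins. Again, every number here is an invented illustration.

```python
# The same illustrative figures under square-law (aimed, concentrated) fire.
eff_blue, eff_red = 1.5, 1.0   # per-fighter effectiveness
blue, red = 800.0, 1000.0      # Blue is better man for man, Red is bigger

dt = 0.001
while blue >= 1 and red >= 1:
    # Under the square law, each side's loss rate is proportional to the size
    # of the opposing force, so numbers count twice.
    blue, red = blue - eff_red * red * dt, red - eff_blue * blue * dt

print(f"blue survivors ~ {blue:.0f}, red survivors ~ {red:.0f}")
```

Here 1.0 × 1,000² = 1,000,000 narrowly exceeds 1.5 × 800² = 960,000, so the square law predicts about 200 Red survivors when Blue is reduced to its last fighter, despite Blue's 50 percent edge in individual effectiveness.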
Lanchester’s discussion of the principle of concentration and this N-Square Law looks very like the mathematical equations later common in the operational research of the Second World War. Though he supported his “law” by an insightful analysis of Lord Horatio Nelson’s battle plan for, and the actual course of, the battle of Trafalgar, Lanchester’s analysis was largely of the a priori type; that is, he derived his mathematical relationships from deductive reasoning, rather than basing them on inductive analysis of data from actual battle outcomes.
Before Lanchester, the primary emphasis of scientific support to warfare lay in the creation of new weapons. What made Lanchester’s work so groundbreaking was the fact that his “law” pointed to how technology could better be used in combat. Lanchester’s equations argued that if one side in a fight could concentrate a large fraction of its killing power against a smaller fraction of the enemy force, it would achieve far better results than it would by spreading itself out to engage all the enemy forces at once. Indeed, focusing on aircraft, the core subject of his book, Lanchester argued from the basis of the square law for the fundamental importance of numbers in air operations, whether against aerial or terrestrial targets. In the latter case, he argued that an attacker should force the anti-aircraft defenses of the enemy to disperse their fire over a large number of simultaneously attacking aircraft rather than send smaller numbers of aircraft in sequential attacks, which would allow the defenders to concentrate their fire against those smaller numbers. This principle, saturation of the defense, remains a core tactical concept of air and missile combat today.
Soon after the start of World War II, P. M. S. Blackett, “widely regarded as ‘the father of operations research,’” became involved in the organized application of scientific principles of study to support military operations (McCloskey 1987, 454). Blackett was a professor of physics at the University of Manchester when called upon to support the war effort more directly. He organized small groups of scientists to conduct what he called “operational research.” His earliest work was for the Army, in support of the pressing business of improving the anti-aircraft defense of London under the German bomber blitz of 1940. He later moved to RAF Coastal Command and the Admiralty to deal with a range of other military issues, including the Battle of the Atlantic against the German U-boat menace.
Blackett wrote two fundamental and important reports in 1941, which he later updated and included in his 1962 book Studies of War. He proposed assigning scientists to support operational military staffs to provide the operators with “scientific advice on those matters which are not handled by the service technical establishments” (Blackett 1962, 171). The data to support their analyses included the usual reports provided to such staffs, such as weather, after-action reports, and similar administrative information. The basic methodology used by the scientists revolved around “variational methods.” These methods were common tools in scientific fields such as biology and economics, which were characterized by the study of complex phenomena based on only limited amounts of relevant quantitative data. This state of affairs he contrasted with the situation in physics, “where usually a great deal of numerical data are ascertainable about relatively simple phenomena.” As a result, Blackett saw the object of operational research to be the eminently practical one of helping operators to find “means to improve the efficiency of war operations in progress or planned for the future. To do this, past operations are studied to determine the facts; theories are elaborated to explain the facts; and finally the facts and theories are used to make predictions about future operations” (177).
In addition to Blackett, the first generation of operations researchers counted among their number several other future Nobel laureates in hard science, including E. A. Appleton, A. H. Huxley, J. C. Kendrew, and C. H. Waddington. They supported real operations by applying the thinking of real scientists. Their practitioner’s view of science was centered on learning from experience. But that experience was organized and codified by an appropriate and rigorous “combination of observation, experiment, and reasoning (both deductive and inductive)” (199–200).
Once the United States entered the war late in 1941, American scientists involved themselves in the war effort as well, and were soon learning from and cooperating with their British forebears. Philip Morse and George Kimball later documented some of the principles and techniques of the American effort in another classic of operations research literature, Methods of Operations Research (Morse and Kimball 1946). In its original edition, Morse and Kimball define operations research (OR), as the Americans would call the field, using precisely Blackett’s words quoted earlier.
Another prominent American practitioner was Doctor Charles Kittel. Kittel is credited with one of the most widespread definitions of the field: “Operations research is a scientific method for providing executive departments with a quantitative basis for decisions. Its object is, by the analysis of past operations, to find means of improving the execution of future operations” (Kittel 1947, 150; emphasis in original).
Morse and Kimball elaborated on some of the foundational ideas of the technique, explaining that “certain aspects of practically every operation can be measured and compared quantitatively with similar aspects of other operations. It is these aspects which can be studied scientifically” (Morse and Kimball 1951, 1). Because even the smallest military operations are extraordinarily complex in their execution, the first step in the analytical process was to “ruthlessly strip away details” so as to identify “very approximate ‘constants of the operations’” and to explore how those constants varied from one operation to another. The key point Morse and Kimball make is that such constants “are useful even though they are extremely approximate: it might almost be said that they are more valuable because they are very approximate” (Morse and Kimball 1946, 3; emphasis in original).
This important distinction between practical utility and theoretical precision they termed “hemibel thinking.” A bel represents a factor of ten on a logarithmic scale, and a hemibel is the square root of 10, or about a factor of 3. Morse and Kimball argued that getting within a factor of 3 of some theoretical “actual” value for operational data and constants was good enough. This was so, they argued, because there is usually a large discrepancy between the theoretical optimum of any operation and its actual outcome.
If the actual value is within a hemibel (i.e., within a factor of 3) of the theoretical value, then it is extremely unlikely that any improvement in the details of the operation will result in significant improvement. In the usual case, however, there is a wide gap between the actual and theoretical results. In these cases, a hint as to the possible means of improvement can usually be obtained by a crude sorting of the operational data to see whether changes in personnel, equipment, or tactics produce a significant change in the constants. In many cases a theoretical study of the optimum values of the constants will indicate possibilities of improvement. (Morse and Kimball 1946, 38)
As an example of this sort of thinking, Morse and Kimball presented some data related to submarine sightings of merchant ships during patrols. The data table they presented is given in table 15.1 (Morse and Kimball 1956, 2164).
They rounded all the numbers to one or two significant figures, “since the estimate of the number of ships present in the area is uncertain, and there is no need of having the accuracy of the other figures any larger.” The operational sweep rate (Qop) tabulated here is a calculated figure encapsulating the efficiency of the sweeping force. The difference between Q for regions B and E is considered insignificant because it is less than a hemibel. But that between B and D is greater than a hemibel and demands further investigation. This investigation “shows that the antisubmarine activity on region B was considerably more effective than in D, and, consequently, the submarines in region B had to spend more time submerged and had correspondingly less time to make sightings.” The analysis thus suggested that it would make sense to transfer subs from region B to region D (assuming no constraints about doing so). Furthermore, additional calculations comparing the operational sweep rate to its theoretical maximum showed that “no important amount of shipping is missed because of poor training of lookouts or failure of detection equipment. … The fact that each submarine in region D sighted one ship in every five that passed through the region is further indication of the extraordinary effectiveness of the submarines patrolling these areas” (Morse and Kimball 1956, 2165).
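The spirit of that comparison can be reproduced in a few lines of code. The sweep-rate figures below are invented stand-ins chosen only to mirror the qualitative relationships just described (regions B and E close together, region D several times larger); they are not the actual values from table 15.1.

```python
# A minimal sketch of hemibel screening: flag only those differences in the
# operational sweep rate that exceed a factor of sqrt(10), about 3.
import math

sweep_rate = {"B": 300.0, "D": 1200.0, "E": 450.0}   # assumed Q_op values by region

def differs_by_hemibel(q1, q2):
    """True when two operational constants differ by more than half a bel."""
    return abs(math.log10(q1) - math.log10(q2)) > 0.5

for a in sweep_rate:
    for b in sweep_rate:
        if a < b and differs_by_hemibel(sweep_rate[a], sweep_rate[b]):
            print(f"regions {a} and {b} differ by more than a hemibel: investigate")
```

With these assumed values only the gap between regions B and D exceeds half a bel, so only that difference would be flagged for further investigation, just as in Morse and Kimball's discussion.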
Thus, the core of operations research is its strong emphasis on learning from experience. This scientific perspective is a source of both strength and weakness in the approach. The scientists themselves could gain only limited direct experience of war—especially the chaos of combat. For the most part, they had to rely on secondhand reports from the military. As a result, the scientific experience of the OR practitioners had to find a modus vivendi with the practical experience of military commanders and staffs. Fortunately, the hard school of real war and the reality of the relationship between the scientists and the operators helped prevent a strictly mechanical and scientific view of operations from dominating actual combat decisions. The founding practitioners of operations research were sensitive to the prerogatives of the military command, and sensible about their supporting role; as a result, they delineated a sharp dividing line between their own responsibilities and those of the commanders. Both sides came to recognize that the results of an operational analysis formed only part of the basis for executive planning and decisions. The operations researcher had the responsibility of recommending the course of action that his scientific and quantitative analysis concluded was best (if such did in fact exist). But it was the executive officer’s job to incorporate that recommendation with others from different sources, particularly qualitative ones such as his own knowledge and experience, to make the final decision.
Blackett cautioned the OR analyst to avoid splitting hairs too finely when advising action based on his analysis:
Though the research workers should not have executive authority, they will certainly achieve more success if they act in relation to the conclusions of their analysis as if they had it. I mean by this that when an operational research worker comes to some conclusion that affects executive action, he should only recommend to the executives that the action should be taken if he himself is convinced that he would take the action, were he the executive authority. It is useless to bother a busy executive with a learned résumé of all possible courses of action leading to the conclusion that it is not possible to decide between them. Silence here is better than academic doubt. (Blackett 1962, 203)
There is a sense of balance here, one that is critically important because, unlike the physical sciences with which most of the original practitioners were familiar, the operations of war are subject to the often chaotic and unpredictable behavior of human beings and their creations. There is also the seed of danger here, a seed that would bear fruit later as the postwar analysts’ desire to believe the analysis provided the best, if not the only, basis for action would lead to their forgetting the wisdom of hemibel thinking.
During the war, however, despite the delicacy of the relationships and the occasional tension between scientists and warriors, the sheer intellectual quality of the original operations researchers, and their astute ability to work within the military establishments of World War II, helped them produce both well-supported and practical results. A litany of OR contributions to the Allied war effort would be out of place here, but a couple of examples might illustrate the general shape of those efforts.
The course of anti-U-boat operations in the Bay of Biscay is a prime example; it is described by Morse and Kimball in the “How to Hunt a Submarine” article referenced earlier (Morse and Kimball 1956) and is also discussed in fuller detail in a book by currently practicing OR analyst Doctor Brian McCue (2008). During much of the war, the vast majority of German U-boats operating in the Atlantic had to cross the bay as they moved to and from their bases in France. Long-range Allied bombers conducted antisubmarine warfare (ASW) operations against them. A key measure of effectiveness (MOE) developed by the OR analysts was the number of sightings made by the Allied aircraft. By monitoring changes in this MOE over time, Allied scientists were able to detect when German countermeasures—in the form of radar warning receivers—were becoming effective. This in turn led to equipping Allied aircraft with different, less detectable, radars. The competition between searchers and submarines was tracked and abetted by the MOEs developed and monitored by the OR analysts.
Another example illustrates how an analyst’s willingness to question common operational practice—coupled with some basic arithmetic applied to carefully collected operational data—could make a significant contribution. This is what happened in the case of Cecil Gordon’s work for Coastal Command in early 1942 (Budiansky 2013, 203–206). Operating a fleet of aircraft requires careful juggling of the amount of time aircraft spend in each of four possible states: actually flying missions; serviceable (ready to fly) but not flying; being serviced by the maintenance and repair shops; and waiting to be serviced. Common RAF practice was that at least 75 percent of all aircraft were to be ready for action (flying or serviceable) at all times.
Gordon looked at this policy and decided that it was responsible for unnecessarily limiting the number of flight hours that units could produce. After much debate, in which Prime Minister Churchill himself became involved, Gordon was allowed to conduct a test of his ideas using an operational squadron. He demonstrated that a better policy was to “fly enough to ensure that the maintenance shops were fully employed at all times. To get more flying hours, in other words, you had to increase the breakdown rate. That would mean more aircraft needed repair at any given time, but the total throughput of the maintenance shops would increase” (Budiansky 2013, 203; emphasis in original). Indeed, Gordon’s test squadron nearly doubled its flying hours. What’s more, the data analysis also revealed that routine inspections of working systems “in many cases increased breakdowns, apparently the result of disturbing components that had been working fine” (Budiansky 2013, 206). The result was that the new flying policy recommended by the scientists was implemented across Coastal Command.
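The logic of Gordon's argument can be caricatured with a simple steady-state calculation. Every number in the sketch below (fleet size, shop capacity, repair time, hours between breakdowns) is an invented illustration rather than Coastal Command data; the point is only that when the maintenance shops are the bottleneck, a minimum-readiness rule that keeps aircraft out of the shops also keeps them out of the air.

```python
# A hedged, steady-state sketch of Gordon's throughput argument (figures invented).
FLEET = 40                    # aircraft in the squadron
SHOP_SLOTS = 20               # aircraft the shops can work on at once
REPAIR_DAYS = 4               # average days to repair one aircraft
HOURS_PER_BREAKDOWN = 30.0    # average flying hours before an aircraft needs the shop

def sustainable_flying_hours(ready_fraction_required):
    """Daily flying hours the squadron can sustain under a minimum-readiness rule.

    The shops return at most SHOP_SLOTS / REPAIR_DAYS aircraft per day, so in
    steady state the squadron can afford only that many breakdowns per day, and
    each breakdown "buys" HOURS_PER_BREAKDOWN hours in the air. A rule that keeps
    a high fraction of aircraft ready caps how many may sit in the shops, idling
    repair capacity and throttling flying below what the shops could support.
    """
    shop_limited_hours = (SHOP_SLOTS / REPAIR_DAYS) * HOURS_PER_BREAKDOWN
    aircraft_allowed_down = (1.0 - ready_fraction_required) * FLEET
    shop_utilisation = min(1.0, aircraft_allowed_down / SHOP_SLOTS)
    return shop_limited_hours * shop_utilisation

for label, ready in [("75% ready-for-action rule", 0.75), ("keep the shops saturated", 0.0)]:
    print(f"{label}: ~{sustainable_flying_hours(ready):.0f} flying hours per day")
```

With these assumed figures, saturating the shops roughly doubles sustainable flying hours, which is the shape, if not the precise magnitude, of the result Gordon's test squadron achieved.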
These and other wartime successes convinced scientists, civilian administrators, and the military establishment alike that the new science of operations research should be expanded upon to address the increasingly complex problems of an uncertain peacetime. (Not to mention its application to nonmilitary fields, which I will not consider here.)
In the afterglow of that wartime success, Morse and Kimball argued that the increased mechanization that characterized World War II had created the conditions that allowed operations research to come to prominence, and presumably to flourish in an increasingly mechanistic future.
Another reason for the growing usefulness of the application of scientific methods to tactics and strategy lies in the increased mechanization of warfare. It has often been said, with disparaging intent, that the combination of a man and a machine behaves more like a machine than it does like a man. This statement is in a sense true, although the full implications have not yet been appreciated by most military and governmental administrators. For it means that a men-plus-machines operation can be studied statistically, experimented with, analyzed, and predicted by the use of known scientific techniques just as a machine operation can be. The significance of these possibilities in the running of wars, of governments, and of economic organizations cannot be overemphasized. (Morse and Kimball 1946, 2; emphasis in the original)
But overemphasized it was. The temporary wartime expedients evolved into permanent peacetime positions within the defense establishment unlike anything seen before. In the past, civilian scientists had been called upon to provide peacetime advice to the Army and Navy by serving on such committees as General Boards, but now civilian scientists, soon to be known by the sobriquet “analysts,” became an integral part of the nascent “military-industrial complex.” As the Cold War with the Soviet Union shifted into high gear, and the administration of President John F. Kennedy took office in early 1961, a new kid appeared on the block, the “whiz kid.” And the whiz kids brought with them a new idea; it came to be called systems analysis.
The contribution of analysis was so clearly positive that military officers urged its continuation into peacetime, when, paradoxically, defense analysis is largely deprived of its empirical footing. In the absence of evidence that might falsify their hypotheses, analysts have too often felt free to propound the hypotheses as truths.
—from “The Defense Debate and the Role of Analysis,” CNA 1984
President Kennedy’s new Secretary of Defense, Robert S. McNamara, introduced both new faces and new ideas to the department. Among those new faces was Charles J. Hitch. While working at the RAND Corporation during the 1950s, Hitch had developed new ideas about applying concepts from economics to defense matters. When he was appointed assistant secretary of defense (comptroller), a position he held from 1961 to 1965, he found himself in a position to implement many of those ideas. Together with his colleague and collaborator Roland McKean, Hitch had articulated the foundations of the new field of systems analysis in a 1960 book titled The Economics of Defense in the Nuclear Age. They described systems analysis as “a way of looking at military problems … [as] economic problems in the efficient allocation and use of resources” (Hitch and McKean 1960, v; emphasis in the original).
They argued that
Economy and efficiency are two ways of looking at the same characteristic of an operation. If a manufacturer or military commander has a fixed budget (or other fixed resources) and attempts to maximize his production or the attainment of his objective, we say that he has the problem of using his resources efficiently. But if his production goal or other objective is fixed, his problem is to economize on his use of resources, that is, to minimize his costs. These problems may sound like different problems; in fact, they are logically equivalent. For any level of budget or objective, the choices that maximize the attainment of an objective for a given budget are the same choices that minimize the cost of attaining that objective. (Hitch and McKean 1960, 2)
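Their point about the logical equivalence of the two views can be illustrated with a toy choice among hypothetical force mixes; the options, costs, and effectiveness scores below are mine, invented purely for illustration.

```python
# A toy illustration of Hitch and McKean's equivalence of "efficiency" and "economy"
# (all options and numbers are invented).
options = {                  # candidate force mixes: name -> (cost, effectiveness)
    "A": (10, 4.0),
    "B": (12, 7.0),
    "C": (15, 7.0),          # as effective as B, but dearer
    "D": (18, 9.0),
}
budget = 12

# Efficiency view: budget fixed, maximize attainment of the objective.
efficient = max((name for name, (cost, eff) in options.items() if cost <= budget),
                key=lambda name: options[name][1])
goal = options[efficient][1]

# Economy view: objective fixed at the level just attained, minimize cost.
economical = min((name for name, (cost, eff) in options.items() if eff >= goal),
                 key=lambda name: options[name][0])

print(efficient, economical)   # both views select "B"
```

Within the twelve-unit budget the most effective option is B, and among all options that reach B's level of effectiveness, B is also the cheapest: the efficiency framing and the economy framing identify the same choice.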
Hitch and McKean argued that their way of thinking economically about the problems of defense was the single best way to integrate all points of view so that discussion and agreement could be reached on common terms—that it was nothing less than a lingua franca of defense decision making. Systems analysis became the means for building consensus about the details of defense programs, once some general decisions were made, primarily by congressional budget decisions, about how many resources should be committed to defense. In this view, the goal of rationalizing choices about military matters devolves into decisions about efficiency. Furthermore, Hitch and McKean argued that there were only three “interrelated and interdependent” approaches to achieving the sought-for efficiency:
The systems analysis philosophy embodied in item number three above thus did not inherently depend on the quantitative techniques of item two. Rather, the approach required only five key elements, which together form what may be called the systems analysis paradigm:
Hitch and McKean argued that systems analysis “is a way of looking at problems and does not necessarily depend upon the use of any analytic aids or computational devices.” Nevertheless, sometimes such tools “are quite likely to be useful in analyzing complex military problems, but there are many military problems in which they have not proved particularly useful where, nevertheless, it is rewarding to array the alternatives and think through their implications in terms of objectives and costs.” In any case, such quantitative analyses “are in no sense alternatives to or rivals of good judgment; they supplement and complement it. Judgment is always of critical importance in designing the analysis, choosing the alternatives to be compared, and selecting the criterion” (Hitch and McKean 1960, 118–120).
One of the earliest—and perhaps splashiest—examples of how McNamara intended to apply systems analysis to defense decisions may be found in his 1963 decision not to agree with the Navy’s recommendation that its next aircraft carrier (which would become the CV-67) should be nuclear powered. McNamara initially rejected that recommendation as being based on “inadequate information,” leading him to request “a full study of the whole ‘nuclear-propulsion’ question” (Murdock 1974, 80–81). The Navy’s response “merely listed the advantages of nuclear propulsion and recommended adoption.” McNamara again rejected the analysis, listing the failures of the study, notably “the failure to weigh added cost against added effectiveness.” Turning to the Center for Naval Analyses (CNA) to provide an analytical argument the Secretary might accept, the Navy was disappointed when the CNA study failed to support its case. So the Navy did its own in-house study, attempting to play the cost-effectiveness game—but not very well. The new Navy study “listed the factors determining effectiveness, ranked both types of propulsion for each factor, weighted the factors (for example, ‘other factors,’ which included the ‘advancement of technology,’ constituted eight percent of total effectiveness—the nuclear carrier was 1.25 times better on ‘other factors’) and concluded that a nuclear task force was 1.21 times better than a conventional force and cost only 3 percent more.” McNamara was not impressed. He “carefully destroyed the final Navy effort to justify nuclear propulsion. He concluded that since his information was inadequate for a decision on the future of nuclear propulsion, expedience dictated the choice of conventional power for the time being.”
This incident shows that
McNamara’s insistence on a cost effectiveness approach is clearly demonstrated, as well as his initial unwillingness to let the military provide the analysis. The Navy’s inability to do so (reinforced undoubtedly by the suspicion that analysis would not provide a rationale for the desired position) led McNamara to reject the consensual judgment of the military. This example illustrates McNamara’s determination to make “rational” decisions—a determination which resulted in an increased reliance on the Systems Analysis Office. Whether the OSA’s greater role brought more analysis into the making of decisions is another question. (Murdock 1974, 81)
Despite the emphasis systems analysis theoreticians placed on judgment and perspective over quantitative techniques, systems analysis practitioners soon came to apply more and more frequently the sort of a priori mathematical modeling and analysis that is anathema to practitioners of traditional operations research. As early as 1943, Blackett had warned against such methods:
One possible method of procedure is to attempt to find general solutions to certain rather arbitrarily simplified problems. In times of peace, when up-to-date numerical data on war operations are not available, this method may alone be possible. This procedure is to select, out of numerous variables of a real operation of war, certain important variables which are particularly suitable for quantitative treatment, and to ignore the rest. Differential equations are then formed and solutions obtained.
Certain results obtained by this method are of great interest. An example is Lanchester’s N² Law. … But it is generally very difficult to decide whether, in any particular case, such a “law” applies or not. Thus it is often impossible to make any practical conclusions from such an a priori analysis, even though it be of theoretical interest. (Blackett 1962, 179)
Some thirty years later, J. A. Stockfish echoed Blackett in his book Plowshares into Swords: Managing the American Defense Establishment (1973). He argued that
model-building designed to treat combat and evaluate existing and, especially, conceptual weapons systems is used (or more accurately, abused) in the bureaucratic setting. This abuse occurs because models and model building tend to become equated with the scientific method itself. But scientific endeavor also requires that models (or theories) be validated, which necessitates recourse to empirical methods. It is this latter part of the scientific method that is largely absent in the existing military study and evaluation system. (Stockfish 1973, 190)
A fundamental problem that plagues analysis of future hypothetical systems and combat is the lack of real operational data to form the basis for developing the underlying principles of a priori models of future operations. Even more important from a scientific perspective, there is no way to disprove the theories and conclusions embodied in the models. Not surprisingly, alternative solutions to the problem emphasize one or the other of these issues. Stockfish identified “a basic philosophical difference” between those he called “structuralists”—analysts who sought more and more detailed microscopic “realism” to overcome the perceived structural shortcomings of their models—and the empiricists, who argued that
a “realistic theory” is a contradiction. The purpose of theory is to distill from the mass of data that constitutes “reality” the facts and variables that are relevant. The criteria for evaluating theory is relevance, and the hallmark of relevance is predictive value. Without independently derived evidence to support the assertion that follows from theory, the most sophisticated theory (or model) will still be judged against common sense. (Stockfish 1973, 199–200)
There seemed to be a sort of hubris inspiring many of the postwar (and, sadly, current) generation of systems analysts. (I call them the true believers.)1 Such analysts had come to believe that their a priori models—implemented on more and more powerful computers but spun from whole cloth with virtually no real data to calibrate and test them against—could solve ever more complex problems, not only of military operations but also of human motivation and the effects of culture (see also Alt et al. 2009). The care with which Hitch and McKean had originally described their ideas was not always visible in the way later practitioners of the art of systems analysis spoke about and used it and its results. By the 1980s, the whiz kids and their philosophy had lost much of their charm.
Even bureaucrats such as R. James Woolsey, a former undersecretary of the Navy and later the director of Central Intelligence, argued that the approach had, at best, outlived its usefulness. In 1980, Woolsey left the Department of Defense; in his book of that year, he pointed his finger squarely at the analysts who, to him, seemed to have fallen into an almost mindless pattern: doing calculations primarily to spark debate about the calculations, and building a consensus about just what inaccuracies everyone involved could agree upon as the basis for moving forward, regardless of their relationship to anything even vaguely real.
Over the course of the last two decades, planning military forces, particularly for the navy, has become a matter of concocting [such a great word!] rather elaborate scenarios for specific geographic areas of the world. These scenarios are boxed in by innumerable assumptions, and force options are created and then tested in the scenarios using complex computer simulations—campaign analyses and the like. … The interesting question about … most scenario-dependent navy force planning, is not “Why don’t we do this slightly differently?” but “Why are we doing this at all?” (Woolsey 1980, 5–8; emphasis in the original; interpolation mine)
Woolsey suspected that the entire process was more about the interests of those who managed it than about the truly substantive issues. He even found it at times counterproductive: “it has fostered the idea that we can predict the scene and nature of future conflicts, even if we do not plan on being the ones who start them, and that we should not proceed with weapons programs until there are agreements about such scenarios and such analyses” (ibid.). He argued that the fundamental raison d’être of systems analysis—helping to define the future DoD programs—was, or had become, bogus. As
a tool for designing forces, tools of marginal analysis frequently are themselves useful only in a rather marginal way. … The lead time for weapons design and production is vastly greater than our ability to forecast where war might occur or even what countries, such as Iran, might or might not be on our side. (ibid.)
Woolsey’s words reflected the existence of a growing debate about the roles and practice of defense analysis. About the same time as his book was published, the General Accounting Office (GAO) released its own report on defense analysis, Models, Data, and War: A Critique of the Foundation for Defense Analysis (GAO 1980). In the cover letter releasing the report, the Comptroller General of the United States wrote, “This report critiques the management and use of quantitative methodology in the analysis of public policy issues, focusing on the inherent limits of the methodology as a tool for Defense Decision, and the essential role of human judgment in any such analysis.”
Four years after the attacks of Woolsey, the GAO, and others, the Center for Naval Analyses—the direct lineal descendant of World War II’s first US operations research organization—formally entered the lists of the debate. CNA presented the OR counterattack against the dominance of systems analysis through two essays in its annual reports for the years 1984 and 1986. These essays, entitled respectively The Defense Debate and the Role of Analysis, and Systems Analysis in Perspective, threw down a gauntlet challenging the largely deductive approach that had come to characterize systems analysis, and calling for a renewed emphasis on the inductive approach that had originally characterized operations research.
The 1986 Systems Analysis in Perspective prominently featured an extract from Elting E. Morison’s essay The Parable of the Ships at Sea. I have always found this story fascinating, and it is worth reproducing Morison’s capsule summary of his main point at length.
Things went on well enough for the men in the naval service as long as they worked with familiar and limited means. Then gross and continuing expansion of the means threw them into a considerable confusion. At first it was simply a matter of trying to figure out how all the new apparatus worked, but then the uncertainty over the novel means extended to the ends. What were they to do with all these things that so enlarged their own capacities? For some time they hoped to solve this problem of ends by making the means work better—improvement in the technology, as it is now called. But that simply added to the confusion. Then Mahan explained what the purpose of a navy that had all these new things should be. Given such a defined and recognized end in view, the men in the service then found they had a way to put all the forces and materials that had distracted them into a sensible system that served the intended purpose. It was a system they could manage in an informed way. (Morison 1977, 151)
Spelling it all out: If you know the kind of war you want to fight, you don’t have much trouble designing and controlling the machinery. Building on Morison’s point, the CNA essay argued that “the power of an organizing idea magnifies the apparent power of analysis. Once the ends of defense are clear, the selection of means becomes amenable to analysis” (CNA 1986, 17).
But there is an important paradox here—a solid consensus on a policy and the programs to implement it can be helpful to defense planning, but it can also prove fatal when that consensus gets it wrong.
In sum, a war might unfold in many ways, but it will unfold in only one way. Before the fact, it is impossible to know how one or another piece of hardware will affect the outcome. For war is decided by men and luck; the machinery is almost incidental. Of course, it is better to have the machinery than not. But in war, the demands of the immediate situation and the user’s ingenuity will determine how well a particular weapon is used. (CNA 1986, 12–15)
The forty years since Morse and Kimball argued that man-plus-machine warfare behaves more like a machine had seen the industrial-style warfare of World War II replaced by the threat of nuclear annihilation on the one hand, and guerrilla-style “wars of national liberation” on the other. To CNA and other operations analysts who considered themselves lineal descendants of Morse and Kimball, it seemed as if the masters might have got it wrong. But if wartime uncertainties dominate and overwhelm our ability to predict them using any models and techniques we can hope to create, what do we do? “Denied the solace of a ‘rational’ approach to defense planning, how do we attain a military posture that is robust enough to deal successfully with an unpredictable future?” (CNA 1986, 17).
CNA argued that there were two answers to this question. The first of these was consistent with both Morison’s parable and the systems analysis philosophy. That approach was “to forge a consensus about the ends of defense and about a military strategy compatible with those ends. But such a consensus is hard enough to reach in wartime, let alone in the demipeace we have ‘enjoyed’ since World War II.” The second answer was “to design organizations [and] institutions … capable of rapid, effective learning and adaptation to changing internal and external conditions” (Ackoff 1977, 39).
The essay concludes with a statement that, in the wake of the 9/11 terror attacks, is all too much on point: “This paradigm may be unsatisfactory to those who seek certainty in an uncertain world. But it is better than the false certainty offered by analysts and critics of analysis who would fine tune the future with inadequate tools and visions” (CNA 1986, 17). Of course, neither answer alone is sufficient, just as neither operations research nor systems analysis alone can solve all defense problems; we must pursue both lines at once, and in a complementary fashion. No real progress can be achieved without the building of consensus. But if we are to avoid the dangers of agreeing to be precisely wrong, that consensus must take a clear-eyed view of the limitations of operations research and systems analysis when applied in the real world.
We must recognize, as Jim Woolsey did, that the systems analysis “revolution” (if we may be so bold as to call it that) was about more than simply quantification; it was about more than applying some of the scientific (and pseudo-scientific) principles of the World War II operations researchers to the dangerous new world of the Cold War. It was about using a formal approach to thinking about problems in order to help articulate policy decisions and build consensus using a new language, a language of economics. Unfortunately, too often the conversation devolved into an “irreconcilable clash of competing theories of combat” (CNA 1986, 11). But, as Woolsey himself put it, “the intellectual tradition that has produced program analysis and systems analysis, is an important one. It is a tradition reaching back probably before, but certainly to, Locke, Mill, Adam Smith, Ricardo, and the roots of modern economics. But that tradition may not have cornered the market on reality” (Woolsey 1980, 14). Indeed.2
This is not a game! It is training for war! I must recommend it to the whole army.
—Prussian Chief of the General Staff Karl von Müffling, on watching a demonstration of the Reisswitz kriegsspiel
And so at last we come to wargaming. As the third principal tool or approach or philosophy of thinking about defense issues, it continued a slow but inexorable coevolution with both operations research and systems analysis. Wargaming’s modern roots can be traced at least as far back as the Prussian 1824 kriegsspiel of the elder and younger Reisswitz (see Jon Peterson’s chapter in this volume). Since then it has experienced cycles of popularity and obscurity throughout the past two centuries.
Reisswitz’s kriegsspiel grew out of a tradition of board games representing essential aspects of warfare for the education and edification of the nobility and warrior classes. But it went one step further. Unlike the abstractions of chess and other such games, kriegsspiel attempted to represent real military operations on a detailed topographical map of real terrain, such as might be used during actual military operations. Reisswitz emphasized that the game presented the players with a realistic basis for making tactical and operational decisions. Furthermore, he created a system of rules and charts that purported to determine the results of those decisions and the activities of the military units involved based on actual experience and data from field trials. The Prussian, and later German, army leadership saw such value in these sorts of games for educating staff officers and leaders, as well as for studying potential conflicts, that the various forms of kriegsspiel became major elements of their system. The successes they enjoyed in the wars of the late nineteenth century sparked interest and imitation in other Western nations, as well as Japan (see Perla 1990 for a fuller discussion of the history of kriegsspiel and military wargaming).
From its initial emphasis on tactical decision-making, wargaming by military professionals expanded into operational and strategic dimensions. Naval wargaming, too, developed in the post-Mahanian days at the end of the nineteenth century, experiencing an important period of development at the US Naval War College in Newport, Rhode Island. Prior to World War I, all the major European powers wargamed out the various war plans. For example, the Russians played out their initial campaign plan for the invasion of East Prussia. In an incident to be repeated many times in the future of wargaming, the Russian game strongly indicated the difficulties they would face with two widely separated armies, commanded by generals who were disinclined to cooperate with each other, against a more agile and concentrated German force. Ignoring the insights the game might have given them, the Russians followed the plan to the disasters at Tannenberg and the Masurian Lakes in August 1914.
Wargaming continued to be used as a method of exploring potential future conflicts between the world wars, and even during World War II. As discussed later, the US Navy made extensive use of wargaming between the wars to help develop the tactics and operational concepts that would prove successful in the Pacific War against Japan. After that war, new techniques took advantage of electronic systems and computers to build ever more complex games, in the hope of creating more realistic environments in which to explore and test new warfighting concepts, including concepts for nuclear warfare.
Unlike operations research and systems analysis, however, wargaming did not enjoy much attention from academics beyond some small groups of hobbyists until the near-simultaneous growth of political-military gaming and hobby board wargaming during the 1950s (see Peterson 2012). Because of its less-than-academic origins, wargaming’s credibility remained somewhat suspect, especially among analysts weaned on the McNamara systems analysis orthodoxy.
Part of the reason for this almost deliberate disdain for wargaming among many analysts was their tendency to view wargaming as nothing other than poor, unrigorous analysis rather than a distinct tool. The standard DoD “official” definition didn’t help matters: “a simulation involving two or more opposing forces using rules, data, and procedures designed to depict an actual or assumed real-life situation” (JCS 1987, 28). Instead of that definition, I proposed the following in 1990: “a warfare model or simulation whose operation does not involve the activities of actual military forces, and whose sequence of events affects and is, in turn, affected by the decisions made by players representing the opposing sides” (Perla 1990, 164). The key elements of this definition are to be found in the words “players” and “decisions.”
Wargaming is not in itself analysis—although it draws on analytical techniques. It is also not real—although good games strive to create the illusion of reality. Neither is wargaming duplicable—although you can play a game repeatedly, no two games can ever be identical. A “wargame is an exercise in human interaction, and the interplay of human decisions and the simulated outcomes of those decisions makes it impossible for two games to be the same” (Perla 1990, 164). As a result, wargaming does not pretend to—indeed, is simply not able to—address all problems associated with defense, as systems analysis claimed to do. Its focus is on human interaction, human knowledge, and human learning.
The essence of wargames is found in their basic nature. They are about people making decisions and communicating them in the context of competition or conflict, usually with other people—all the while plagued by uncertainty and complexity. Through these processes, the players live a shared experience and learn from it.
Modern “professional” wargames take many forms and use many different instrumentalities. Seminar wargames exhibit some surface similarities to the hobby game Dungeons & Dragons: game controllers present players with situations and call for them to decide what to do; actions and outcomes are discussed and debated, and the situation is updated to advance to the next critical decision point. Other types of games can look very like a commercial hobby board game. The hypothetical war on NATO’s Central Front as played out in a series of Navy Global War Games during the 1980s was represented by a large paper map with an overlay of hexagons and a set of unit counters that would have been familiar to any wargame hobbyist of the period. Rather than simple paper combat results tables, however, controllers used minicomputer combat models to adjudicate battle outcomes.
Whether played as seminar-style discussions or rigidly controlled tabletop or computerized map games, modern professional wargames continue to serve as both educational and training tools and as analytical research resources. In every case, however, wargames are helping their creators and participants to learn something useful and important about the decision-making environments they represent. Those environments cover the range of issues facing defense today.
In the past few years, games designed and conducted by my colleagues and me at CNA have ranged from explorations of the broad shape of the US defense program over the next thirty years to the types of systems and tactical concepts that the US Marine Corps might need to develop to face hybrid-warfare threats in the Middle East and elsewhere. We have done games to explore the broad scope of logistical issues that might arise in a potential major war with a peer competitor fifteen or thirty years in the future, and we have applied wargame techniques to explore political-military issues in Africa, as well as problems associated with managing and sharing water resources in south Asia.
Regardless of their form or subject, games motivate players to become engaged in the simulated world of the game. They provide the players some immediately applicable education in terms of facts and analysis, and they encourage the players to act on those facts to make decisions and to deal with the consequences of those decisions. These activities, in turn, help the players learn about themselves and about how they make decisions by allowing them to practice decision-making in a protected environment, or “safe container” (Brightman and Dewey 2014). Games help us organize information in meaningful and memorable ways; they help us see how and why things happen as they unfold before our eyes. Games help us explore what I call the five knows: what we know, what we don’t know, what we don’t know we know, and (the most difficult ones) what we don’t know we don’t know, and what we know that ain’t so—all through the mechanisms of discovery learning.
Learning games are all about change. Their goal is to change the learner—at least, to change the learner’s mind. Those of us who design and use games build a synthetic, working world, and help the players enter that world and bring it to life. Most of all, we help them change that world through their decisions and actions, and in the process they change themselves (the educational use of games) and us (the research use of games).
Recent scholarship has emphasized the idea that the proper source of insight from wargaming lies beyond the mere decisions made by the players. Based on social science research that seems to indicate that most humans are poor judges of how they would behave if a hypothetical situation became real, Professor Stephen Downes-Martin at the US Naval War College has argued that using game decisions as the key information source for wargaming insights is unreliable (Downes-Martin 2013). Pursuing this line of thought, Naval War College professor Hank Brightman and student Melissa Dewey proposed that the true source of useful information and insight available in a game derives from the conversation among the players as they communicate by both word and action (Brightman and Dewey 2014). Indeed, “Wargaming is an act of communication” (Perla 1990, 183).
In all its varied dimensions, wargaming works to create a shared synthetic experience among its participants. To do so effectively, it draws on the inherent human propensity for telling, and learning from, stories. Its power derives
from its ability to enable individual participants to transform themselves by making them more open to internalizing their experience in a game. … [Indeed,] gaming, as a story-living experience, engages the human brain, and hence the human being participating in a game, in ways more akin to real-life experience than to reading a novel or watching a video. By creating for its participants a synthetic experience, gaming gives them palpable and powerful insights that help them prepare better for dealing with complex and uncertain situations in the future … [and] is an important, indeed essential source of successful organizational and societal adaptation to that uncertain future. (Perla and McGrady 2011, 112)
As a vibrant hobby wargaming community, one that understood wargaming intuitively and from its own experience, grew and expanded its reach in the last half of the twentieth century, it produced new generations of policy, systems, and operations analysts familiar not only with the techniques of wargaming but also with their important strengths and dangerous weaknesses. Most importantly, the growing community of professional DoD wargamers has begun to demonstrate the inherent power and persuasiveness of combining the full range of information and tools available into what I have termed the cycle of research (Perla 1990, 273–290). This cycle integrates systems analysis, operations research (particularly through its analysis of exercises and real-world experience), and wargaming into an active collaboration to paint a more complete picture of the problems we face in the future, and to identify more creative potential solutions to those problems.
Alone, wargames, exercises, and analysis are useful but limited tools for exploring specific elements of warfare. Woven together in a continuous cycle of research, wargames, exercises, and analysis each contribute what they do best to the complex and evolving task of understanding reality.
—Peter P. Perla, The Art of Wargaming
In his book The Logic of Failure (1986), German psychologist Dietrich Dörner explores human decision-making in complex and uncertain situations. He argues that there is “no universally applicable rule, no magic wand, that we can apply to every situation and to all the structures we find in the real world. Our job is to think of, and then do, the right things, at the right times, and in the right way” (Dörner 1986, 287). I argue in The Art of Wargaming that analysis, exercises, and wargaming, while sharing some common characteristics, are distinct tools for studying and planning for potential future conflict. When that latter book was published, our practical experience of real warfare was too limited to include the non-exercise aspects of operations research in my concept. Today, however, we are in the unenviable position of having an extensive body of operational experience and research to factor into our thinking—and into the background, context, and databases of our other tools.
Just as there is no “universally applicable rule,” neither is there a universally powerful tool. We cannot continue to make wise decisions in the face of the increasingly complex world we must navigate if we rely on only one of our tools—or even on all of them but in isolated “cylinders of excellence.” Instead, we need to apply all our tools—operations research, systems analysis, and wargaming—to address those aspects of our problems for which they are best suited. Then we need to integrate and interpret their results to paint a more complete picture of both the problems and their potential solutions.
This cycle of research approach is one that has worked in the past. During the 1920s and 1930s, the US Navy integrated an extensive program of analysis and wargaming at the Naval War College with an equally extensive program of large-scale fleet exercises, titled for the most part “Fleet Problems” (see Nofi 2010). The process the Navy used followed this prescription:
Ideas developed or problems encountered on the game floor were analyzed by students and often tried out in the Fleet Problems, usually after some practical experimentation in the fleet and during routine exercises. Likewise, questions that arose during the Fleet Problems were often incorporated in an NWC game, of which there were some 200 in the period. As the process developed, the rules for both the Fleet Problems and the NWC wargames were continuously revised and updated. This kept the gaming process honest, because, as Rear Adm. Edward C. Kalbfus, President of the Naval War College, cautioned in 1930, “we can make any type of ship work up here, provided we draw up the rules to fit it.” (Nofi 2010, 296)
The results of this tightly spun cycle of research included most of the operational concepts, tactics, and systems employed by the US Navy so successfully during the war against Japan. Perhaps even more importantly, the process helped produce the mindsets and habits of thought used by the men who led and fought during that conflict.
A similar, though less widespread, less dramatic, and less influential example is one in which I actually participated during the heyday of the Cold War in the mid-1980s. Based on ideas generated in the fleet, CNA conducted a series of technical analyses designed to explore the potential tactical advantages the Navy might gain against the Soviets by operating aircraft carriers from within the fjords of northern Norway. Other analyses tried to quantify the effects such operations might have on defeating a Soviet attack in the region by providing close air support and battlefield interdiction to support defending NATO forces. The idea captured the imagination of Vice Admiral Henry “Hank” Mustin, then the commander of US Second Fleet and the NATO Striking Fleet Atlantic. To explore the full range of operational and strategic implications of adopting such an aggressive forward stance, VADM Mustin sponsored, and himself played in, a wargame at Newport. I was privileged to participate in that game as an observer/analyst.
VADM Mustin also directed at-sea exercises to explore the practicality of such fjord operations, and to identify requirements to make it work and obstacles to its success. Partly as a result of the game and the exercises, CNA and others embarked on more studies and analyses under the umbrella term “Targets that Count,” to explore what other Soviet targets carrier air wings might be able to attack or hold at risk from those areas to enhance deterrence or apply warfighting pressure.
Creating an integrated cycle of research like those described above does not happen automatically or by magic—there is no “magic wand” available to conjure with. It requires an integrated approach in which
each of the tools strengthens and supports the others. Analysis provides some of the basic understanding, quantification, and mathematical modeling of physical reality that is required to assemble a wargame. The game presents some of the data and conclusions of the analysis to its participants and allows them to explore the implications that human decision-making may have for that analysis. It can illuminate political or other non-military, non-analytical assumptions and points of view, raise new questions, and suggest modifications to existing or proposed operational concepts. (Perla 1990, 290)
Exercises, and to some extent real-world operations, provide the military opportunities to test the concepts with real people and real systems in real environments. When studied and analyzed carefully and rigorously, such exercises and operations can be used “to measure the range of values that mathematical parameters may actually take on, to verify or contradict key analytical assumptions, and to suggest even more topics for gaming, analysis, and follow-on exercises, thus continuing the cycle of research and learning” (Perla 1990, 290).
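As a concrete, if artificial, illustration of what “measuring the range of values that mathematical parameters may actually take on” can look like in practice, here is a minimal sketch of my own; the attrition-style relationship, the variable names, and every number in it are hypothetical assumptions, not data from any actual exercise.

```python
# A minimal illustrative sketch (my own, not drawn from any real exercise):
# fitting an attrition-rate coefficient to observations collected during
# exercises, the kind of parameter a cycle of research would feed back
# into subsequent models and games. All names and numbers are hypothetical.

# (attacking force size, losses it inflicted per hour) observed in exercises
observations = [
    (10, 0.8),
    (20, 1.9),
    (40, 4.1),
]

# Under a simple Lanchester-style assumption, losses per hour ~= k * force size.
# A least-squares fit through the origin gives k = sum(x*y) / sum(x*x).
k = sum(x * y for x, y in observations) / sum(x * x for x, _ in observations)
print(f"Estimated attrition coefficient: k ~= {k:.3f} losses per unit per hour")
```

The interesting analytical questions, of course, are whether the assumed relationship holds at all and how widely such a coefficient varies across conditions, which is precisely the kind of assumption-checking that exercises and real-world operations can feed back into the cycle.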
So it can be done; the cycle can be created and used. But it requires some person, some group, or some organization in a position of authority and influence to make it happen and to make use of its output to affect current and future decisions, concepts, and plans.
Most planners ride into the future facing the past. It’s like trying to drive a train from its caboose. … Top-down planning is usually initiated when senior executives go into hiding in the Bahamas. … Bottom-up planning is taken no more seriously than promises made in church.
—Russell L. Ackoff, The Corporate Rain Dance
So, where does that leave us?
Analysis is fundamentally about providing scientific, and especially quantitative, advice to support decision makers. Those decision makers may be operators in the field, conducting real actions against real enemies, or preparing for that real possibility; or they may be Pentagon bureaucrats concerned more about what to buy with the next budget—whether acquiring tools for future operations or balancing support for current operations against investments for the future. Making such decisions entails thinking about that unknown (and sometimes unknowable) future as well as about our own organization and its overarching operating environments.
Two related but distinguishable approaches came into existence to help decision makers address these issues: “classic” operations research and “modern” systems analysis. Operations analysis (as it is also called) typically relies on real data from real-world operations to identify alternatives and recommend changes in tactics, and perhaps in technology, that should help to improve the performance of operations in the field. Systems analysis, on the other hand, was created as a common language, one that uses precise terminology supplemented by mathematical models to build a shared paradigm for mustering and evaluating evidence, and to facilitate the formation of a consensus on issues not subject to the harsh tests and dictates of real-world performance data.
Wargaming is not analysis in the same sense as OR and SA. Unlike OR and SA, it is not so much about the reductionist disassembling of problems into their component and quantifiable parts. Instead, it is about the holistic integration of problems and the human beings who have to confront and act to overcome them. Wargaming is, or at least can be, predictive, but not in the absolute sense: it does not tell us the future with certainty, but rather shows us its possibilities. In a presentation given at the Connections conference in Baltimore several years ago, Professor Robert “Barney” Rubel of the Naval War College described this idea in terms of wargaming being “indicative”—indicative of the potentials inherent in situations and of the hidden relationships that a game, especially a series of related games, can help us discern.
Here is where most of the classic forms of modeling and simulation fall down. They cannot forecast outcomes that are not already embedded in the underlying mathematical constructs of the model or simulation. At best, such techniques pick apart and illuminate outcomes that are consequences of what we already know well enough to embed in the models. They do not, in fact, generate new knowledge, but they can reveal the sometimes complicated, overlooked, and surprising consequences of old knowledge.
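To make that point concrete, consider a deliberately simple sketch of my own; the Lanchester-style equations, coefficients, and force sizes below are illustrative assumptions, not anything drawn from a real model or study. In a closed construct like this, the outcome is fixed before the first step is computed: whichever side has the larger product of attrition coefficient and squared starting strength must win, because that is what the embedded equations dictate.

```python
# A minimal illustrative sketch (my own, not from the text): a closed
# Lanchester "square law" engagement stepped forward with simple Euler
# integration. Everything the model can "discover" is already embedded
# in its equations: the side with the larger of alpha*A0^2 or beta*B0^2
# is guaranteed to win before the simulation even runs.

def lanchester_square(a0: float, b0: float, alpha: float, beta: float,
                      dt: float = 0.01) -> tuple[float, float]:
    """Attrit two forces until one is exhausted.

    alpha: rate at which each A unit destroys B units;
    beta:  rate at which each B unit destroys A units.
    """
    a, b = a0, b0
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)

if __name__ == "__main__":
    # Hypothetical inputs: Blue outnumbers Red 3:2 with equal effectiveness.
    # The square law (alpha*A0^2 > beta*B0^2) already dictates a Blue win,
    # with roughly sqrt(300**2 - 200**2) ~= 224 survivors.
    blue, red = lanchester_square(a0=300, b0=200, alpha=0.01, beta=0.01)
    print(f"Survivors -- Blue: {blue:.0f}, Red: {red:.0f}")
```

Running such a model simply confirms the arithmetic; it can surprise us only about details we have not bothered to work out by hand, never about anything outside the equations themselves.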
Wargaming is a far better tool for going beyond old knowledge and exploring unforeseen consequences. This power of gaming to illuminate dark corners of future possibilities makes it especially important in light of the concept of the Black Swan. Popularized by Nassim Nicholas Taleb in his book of the same name (2007), a Black Swan is an event with three defining characteristics: (1) it is unpredictable; (2) it has massive impact on the course of events; and (3) after the fact, we can convince ourselves that we could have foreseen it if only we had been more astute. Black Swans became connected to the infamous concept of “unknown unknowns” described by former Secretary of Defense Donald Rumsfeld. Wargames can be an effective tool for exploring Black Swans and other such off-axis paths toward the future because a wargame, played most productively, can go beyond what is possible in a closed model: one or more working human brains are engaged in conflict with others, and those brains generate a wealth of ideas beyond those created by modelers working in more static environments.
Wargames frequently help us identify where and how we can make improvements to what we plan or do, and one of the most important of those improvements lies in learning how to adapt to change. We can see this highlighted in the use of wargaming by the Naval War College during the interwar period. An important—perhaps the most important—outcome of that long series of games was that the students and future leaders of the wartime Navy learned adaptive techniques. In a famous letter to the Naval War College after the war, Admiral Chester Nimitz stated that “the war with Japan has been [enacted] in the game room here by so many people and in so many different ways that nothing that happened during the war was a surprise—absolutely nothing except the Kamikaze tactics toward the end of the war; we had not visualized those” (quoted in Wilson 1968, 39).
It was not so much that War College gamers had gamed out everything the Japanese ultimately did in the war, but rather that the officers who would ultimately lead the US Navy during that conflict had had to adapt to changes in how the Japanese systems modeled in the games worked, in the tactics the different players used, and in the relative effectiveness of weapons as represented by changing model inputs. Remember that not everything the War College gamers did during that series of games was correct in the sense of representing actual Japanese tactics, strategy, or capabilities. (A prime example is our consistent underestimation of the Long Lance torpedo and of how the Japanese would use it.) But by changing assumptions from game to game and year to year, the students were forced to learn how to discover crucial facts and adapt to them, not only in response to specific events but also in general terms.
Another of Rubel’s examples from the interwar period was the development of the US Navy’s carrier doctrine and capability. The wargames at Newport showed Admiral Reeves, then at the War College, the importance of putting large numbers of aircraft over the enemy fleet in short periods of time. But the operational doctrine used by the Navy’s experimental carrier, USS Langley, limited her to operating only about a dozen aircraft at a time. Reeves passed the games’ insights to Admiral Moffett, chief of the Navy’s Bureau of Aeronautics, and Moffett arranged for Reeves to take command of Langley. Very quickly Reeves and his team developed the arresting gear and barrier combination that allowed Langley to operate more than fifty aircraft at a time. Thus, one of the crucial steps toward developing the carrier techniques that helped win the war in the Pacific can trace its lineage to Naval War College gaming. By communicating the results of those games effectively to the key decision makers, the games and Reeves’s experiments helped Moffett shape the future.
The history of wargaming is full of examples like those above. It is clear that wargaming creates the opportunity for analysts, operators, and decision makers to have synthetic experience of rare events so that we may become more open to considering them in our thinking, operating, and planning. But just as our limited real experience constrains our view of what is possible, a single wargame can also produce a “cognitive lock” on the specific events of that game. Participants may become just as vulnerable to overestimating the likelihood that game (or gamelike) events will occur in the real world because of the immediacy of the gaming experience.
This danger argues for a broader and more formal application of gaming to decision making, one that operates in full partnership with modeling and simulation as part of the analyst’s toolkit. It is only by incorporating wargaming as an equal partner with operations research and systems analysis that we can exploit the capabilities of all three techniques to the fullest in our quest to learn, adapt, and avoid the pitfalls of a complex and uncertain future.
Such a combination, the cycle of research, is a necessary and natural element for applying what management scientist Russell Ackoff called “interactive planning.” Ackoff described this type of planning as one based on the belief that the future of an organization is largely under its own control. “It depends more on what we do between now and the future than on what has happened up until now.” The future, in other words, depends on decisions yet to be made. This type of planner “focuses on all three aspects of an organization—the parts (but not separately), the whole, and the environment. He believes that often the most effective way of influencing the future of an organization is to change its environment. He may not have as much control over the environment as he does over the organization, but he uses as much as he has up to the hilt” (Ackoff 1977, 38–41).
Ackoff contrasts interactive planning with reactive and preactive planning. Reactive planners try to fix the problems within an organization so that it can once again become what it was during the “good old days.” Preactive planners try to forecast the future and create “programs” based on their forecasts. Unfortunately, forecasts of the future can be truly accurate only when that future is fully determined by the past. “It turns out, then, that the only conditions under which the future can be predicted accurately are the determined ones that nothing can be done about. Then why forecast?” (ibid.)
Unlike those far more typical approaches to planning, in which planning is the exclusive job of the planners, interactive planning encourages all elements of an organization to participate in the planning process. They do so by articulating how things would be if they could “replace the current system with whatever they wanted most … subject to only two constraints: technological feasibility and operational viability.” It also focuses on learning and adapting to change. The process “is built on the realization that our concept of the ideal is subject to continuous change in light of new experience, information, knowledge, understanding, wisdom, and values … it is an ideal-seeking system, unlike a utopia which pretends to be beyond improvement” (ibid.).
The cycle of research is—dare I say it—an ideal means for implementing such an interactive planning system. Implementing an organization-wide cycle of research integrates all our tools in the process of understanding reality and making decisions. Within DoD, the Planning, Programming, Budgeting, and Execution System (PPBES) pays lip service to the idea of involving all elements of the organization by soliciting inputs from operational commanders, program sponsors, and others across the combatant commands, the services, and the defense agencies. But the system is a bureaucratic one, not an interactive one.
What appears to be needed to make it more interactive, and more effective at integrating the relevant facts and points of view, is a way to bring together the operational and system perspectives—not only the OR/SA analysts, but also the operators and the technical experts—in a way that allows them to share and learn from one another’s perspectives. Wargames can play a central role in this process. Within the safe circle of a game, all of these perspectives can engage in focused conversation to expose the full range of options and points of view, build a shared experience and understanding, and create the catalyst for new ideas. From the wargame, the participants can take that shared overarching experience back to their own bureaucratic niches and carry out their distinct duties with that shared view, that “organizational intent,” if you will, in mind. The cycle of research thus begun can gather energy as those individuals and organizations conduct new analyses, new exercises and experiments, and new wargames, all focused on seeking to reach the current ideal and on adapting that ideal to circumstances as they change.
We have all the pieces, if only we would stop seeing OR, SA, and wargaming as competitors for influence over decision makers rather than as complementary tools to help them make better decisions. P. M. S. Blackett organized a group of scientists to create operations research and help win the biggest war ever fought. Robert McNamara introduced systems analysis as the lingua franca of the defense debate and helped overhaul one of the largest bureaucracies in the country. It remains to be seen whether and how interactive planning and the cycle of research can take us forward into the future.
Peter P. Perla has been involved with wargaming, both hobby and professional, for over fifty years. A lifelong interest in military history and games of strategy led him to the world of commercial wargames before his teen years. As a youngster, he had already published articles in the hobby press before becoming an undergraduate mathematics major at Duquesne University. After earning a PhD in probability and statistics from Carnegie-Mellon University with his thesis on Lanchester mathematical combat models, he joined the Center for Naval Analyses in 1977 as a naval operations research analyst. By the early 1980s he had worked on several Navy campaign studies, designed naval games, documented existing Navy wargames, and led a study to define the principal uses of wargaming and to identify some of its fundamental principles. Over the next few years, he conducted and led research projects on a wide range of issues of importance to the US Navy and Marine Corps. In addition, he participated in nearly a dozen major classified Navy wargames, including the Global War Game. In 1990, the US Naval Institute published the first edition of his book, The Art of Wargaming. This book became a fundamental international reference on the subject (including a Japanese-language edition) and a standard text at US military schools. Since that time, he has continued his work on Navy wargaming and has branched out into analysis and gaming for other US government agencies, including the Centers for Disease Control and Prevention, the Department of Health and Human Services, and the US Army’s Training and Doctrine Command. He is regarded as one of the nation’s leading experts on wargaming and its use in defense research. The History of Wargaming Project published a second edition of Dr. Perla’s book, including new material, in 2011.