Disagreement Is Not Disrespect

British military historian Sir Michael Howard wrote, “I am tempted to say that whatever doctrine the armed forces are working on now, they have got it wrong. I am also tempted to declare that it does not matter…. What does matter is their ability to get it right quickly, when the moment arrives.”1 Howard was exactly right. As we, the leaders, deal with tomorrow, our task is not to try to make perfect plans. It is not possible to make perfect plans, but we will not be held to a standard of clairvoyance. Our task is to create organizations that are sufficiently flexible and versatile that they can take our imperfect plans and make them work in execution. That is the essential character of the learning organization.

The Army began its journey to becoming a learning organization in the 1970s. In 1973, Chief of Staff General Creighton Abrams created the Training and Doctrine Command (TRADOC) to pull together the Army’s school system, training centers, and development activities—to put individual training and education and the responsibility for modernization2 under a single organization. The first TRADOC commander was General William E. DePuy.

As a captain just four years out of the Reserve Officers’ Training Corps (ROTC) at South Dakota State University, Bill DePuy had landed in France with the 90th Infantry Division on June 8, 1944 (D-Day plus 2), and participated in some of the toughest fighting breaking out of Normandy. The 90th Division had not been trained for the kind of fighting it had to do and was not well led. Ultimately, with new leadership and hard training, it distinguished itself; but it was the hard reality of having to learn the basics under fire that molded DePuy’s personality. He became one of the Army’s most outspoken disciples of tough, realistic small-unit training as the cornerstone of a sound, effective Army. Thus, in 1973, he was the perfect choice for the new Training and Doctrine Command. He devoted himself to restoring discipline to the Army training system based upon the fundamental precept that the Army’s training program must be uncompromising in preparing the Army to fight and win the nation’s wars.3

DePuy set out in two principal directions. First, he undertook to rewrite the basic war-fighting doctrine of the Army as a “key integrating medium for an increasingly complex military bureaucracy.”4 In other words, he recognized that change in how the Army thought about war must come first, to give the Army an intellectual context within which it could create a coherent and rational future.

At the same time, DePuy’s experience had taught him that Army training needed to be focused on the performance of well-defined tasks directly related to performance in combat, especially for the individual soldier and small-unit leader. Improvement of that process did not need to wait for a doctrinal revolution; all it lacked was a disciplined approach. DePuy and those who followed him created such a training system. The most important element was standards, without which quality performance is meaningless. The Army had had such standards for many years for things such as road marching and rifle marksmanship; DePuy’s inspiration was to broaden that approach to virtually every task taught in the training centers, in the schools, and eventually in the units.

For example, an infantry battalion task force must be able to defend; that is its task. (The tasks derive from doctrine and are very specific.) Conditions can vary; in my example, the condition might be doing so at night. The standard can also vary; in this case it might be defeating the enemy forward of the main defensive position. That is not a task that can or should be performed with the precision of eighteenth-century drill. Every combination of units, terrain, equipment, weather, and so on will result in a different outcome. So performance, beyond some gross metrics, cannot be understood and evaluated by simple means. Thus, the next step was to create a structured way of facilitating learning from complex experiences that are often very ambiguous.
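
The task, conditions, and standard described above form a simple triple. As a rough sketch only (the class and field names here are hypothetical illustrations, not part of any Army system), the structure might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingTask:
    """A doctrinal task, the conditions under which it is performed,
    and the standard against which performance is judged."""
    task: str
    conditions: list[str] = field(default_factory=list)
    standard: str = ""

# The infantry battalion example from the text:
defend = TrainingTask(
    task="Defend",
    conditions=["at night"],
    standard="Defeat the enemy forward of the main defensive position",
)

print(f"Task: {defend.task}; "
      f"conditions: {', '.join(defend.conditions)}; "
      f"standard: {defend.standard}")
```

The point of the structure is that the task and standard are fixed by doctrine while the conditions vary from one training event to the next, which is exactly what makes performance comparable across events.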

The answer came in what is called an After Action Review. An AAR takes place after every training event. Its purposes are simple: learning, improving, doing better the next time. The participants sit down with a facilitator called an “observer-controller” who has been with them throughout the event, and they discuss what happened. To do this effectively requires several things. First, there must be a fairly good basis for understanding what actually happened. In the training centers, electronic data collection enables high-fidelity recording and playback of events. It is like looking at football game films on Monday morning; you may think you know where you were at such-and-such a time, but in an environment where one hilltop can look pretty much like the next, you may or may not be correct. Thanks to unobtrusive sensors, the database can pinpoint exactly where you were and what you were doing. Soldiers call this “ground truth.” Combined with ground truth, there must be a fairly unambiguous understanding of what should have happened, and that comes from having standards derived from doctrine.

Given those elements, it is possible to talk about an event in a way that focuses on improving team performance without getting caught up in individual performance, rank, position, or personality. By asking questions such as “What did you think I wanted you to do?” (as opposed to questions such as “How did you screw that up?”), one can get to the roots of both success and failure. This is not an easy process, and it generally takes a lot of time, maybe two to three hours to “AAR” a major event. The cost in time alone is heavy, but the outcome is a much more in-depth understanding of what happened. The return on investment, measured by improved performance, is very high.

The most difficult challenge is developing a culture that values this kind of learning. A colleague in industry once described an attempt to initiate a similar program in his company. He told me of a dialogue with a loading dock foreman who, in great frustration, finally said to him, “Look, I can either ship product or talk about it. Which do you want me to do?” The answer can only be “Both,” but it is hard to make that answer a reality. It took a decade for the AAR process to become respected in the Army, for us to learn that you can do both—ship product and talk—and that carefully structured talking leads to more effective shipping or whatever. It is an investment that no one can afford not to make.

Over the decade it has taken for the AAR process to become embedded in the Army’s culture, its value has been accepted and it has spread to activities other than training. In the Gulf War, commanders conducted AARs after each rehearsal and each battle. Today AARs take place in garrisons, on staffs, and in headquarters—everywhere soldiers gather to perform some task. My personal staff would hold AARs for me after a major event in which I had participated. I did not especially enjoy discussing the gaps in my own performance—especially when I felt pretty good about what I had done—but these AARs helped me improve, and they helped my staff learn to support me better.

For America’s Army, the AAR was the key to turning the corner and institutionalizing organizational learning. You probably never become a learning organization in any absolute sense; it can only be something you aspire to, always “becoming,” never truly “being.” But, in the Army, the AAR has ingrained a respect for organizational learning, fostering an expectation that decisions and consequent actions will be reviewed in a way that will benefit both the participants and the organization, no matter how painful it may be at the time. The only real failure is the failure to learn.

—GRS

THE LEARNING ORGANIZATION

Peter Senge, director of the Systems Thinking and Organizational Learning Program at the Massachusetts Institute of Technology, defines a learning organization as “an organization that is continually expanding its capacity to create its future. For such an organization, it is not enough merely to survive. ‘Survival learning,’ or what is more often termed ‘adaptive learning,’ is important—indeed it is necessary. But for a learning organization, ‘adaptive learning’ must be joined by ‘generative learning,’ learning that enhances our capacity to create.”5 Generative learning is not about amassing a body of knowledge so much as it is about amassing a body of experience, interpreting that experience, and changing behavior as a result. Thin threads give an organization a way of learning by sponsoring and executing specific, very focused experiments about the future. In a larger sense, becoming a learning organization involves learning from everything you do, not just thin threads. Achieving and sustaining this kind of learning requires communicating openly, sharing information, and developing a culture in which team members share the responsibility for team performance and growth.

A structured, open process of sharing information about events is the basis for this kind of learning. In the Army, the After Action Review was the key step. Initially, creating this kind of feedback results in what Senge terms adaptive learning, but over time it does much more than that. An effective feedback process fosters trust throughout a team. Once an organization grows comfortable with dialoguing about performance after an event, it is a small step to dialogue more effectively about plans and preparations before an event. This fosters greater innovation and risk taking, which in turn lead to greater sharing of information—hence, continuous generative learning and better performance. Creating and participating in a structured feedback and innovation process (see Figure 11-1) is an effective first step toward growing a learning organization.

Figure 11-1—Feedback and Innovation

DEVELOPING THE AFTER ACTION REVIEW

To be successful, the feedback process must be structured. It cannot simply be a group of people talking about what they “think” happened and what they “feel” should be done next. The process of writing doctrine had enabled the Army to define the complexity of ground combat in terms of tasks, conditions, and standards that, while undoubtedly imperfect, were universally accepted. Feedback could therefore be structured around identifiable events and against measurable standards. “Performance last quarter” is not an identifiable event; it is too vague, too complex. “Delivery to such-and-such an account last quarter” or “Opening the new plant in India” is an identifiable event.

After Action Review Questions

Next, there must be a common understanding of what was supposed to have happened. In the Army, this is accomplished by reviewing the higher headquarters’ orders and the unit commander’s orders. All the commanders involved participate; these normally include those on at least three levels: the commander of the unit being exercised (the principal), the commander of the parent unit (the principal’s boss), and the commanders of the subordinate units. Because it is essential that everyone who contributed to the outcome participate, the leaders of supporting units will normally be there as well. Thus we have at least three levels of direct reports and the principal’s counterparts from adjacent “stovepipes.”

In a dialogue, these leaders discuss their various understandings of what was supposed to happen, reinforcing their effective communications patterns and identifying misunderstandings and weak communications patterns. This dialogue is facilitated by an especially competent officer whose experience normally makes him slightly senior to the commander of the unit being exercised. This facilitator is called an “observer-controller”; he or she has been with the commander of the unit being exercised throughout the entire exercise. His or her observations, while supported by data collected by other observer-controllers and by electronic means, are thus firsthand observations. His credibility derives from his experience, his access to information, and his skill as a facilitator.

The third key element in the AAR is knowing what actually happened: the “ground truth.” The observer-controller team is able to replay the exercise with a high degree of accuracy. Having reviewed the intent of the plans and orders and knowing the standards for each task, the participants can now evaluate their performance, discussing each action to discover why things happened the way they did.

As we look at the three AAR questions, it is in asking “Why” that opportunities for learning—for reinforcing successful behaviors and for improving unsuccessful behaviors—are discovered. And while it is not true that we learn only from our mistakes, our shortcomings, highlighted in such a process, give us the most fruitful basis for improvement. Mistakes made in this environment are not to dwell on but to learn from.

The AAR is not a critique. A critique is merely an assessment of success or failure. In the AAR process, the establishment of success or failure, sometimes in a very precise (and painful) way, is only a tool with which to learn. Nor is the AAR intended to fix blame; it is a process designed to improve performance. It will not work if the leader lets it become a scorecard or a basis for public executions. Sparky Anderson, the legendary manager of the Detroit Tigers, said it this way: “I love to make mistakes. How are you going to become a ballplayer unless you make mistakes? I’ve made more mistakes than I’ve done things right. But then they’re gone. Over.”6

The final element that must be in place for an AAR to be successful is a learning culture. Each team member must be doing his or her best to contribute to the team’s success. The environment must be nonthreatening on a personal level, and team members must be willing to take risks both individually and collectively, to learn, and to improve their performance.

Elements of the After Action Review

Seeking Insights

Let’s join an AAR in progress to see how it works.

Imagine yourself at the Army’s National Training Center, deep in the Mojave Desert between Las Vegas and Los Angeles. Our unit, an infantry task force, has been “in the box” exercising for about a week. We are part of a contingency operation, participating, with real or assumed forces from other nations, in the defense of a friendly third-world nation that has been invaded by a neighbor. Things have been going pretty well for us overall, but we have had some bad days. We are tired but not exhausted; we feel good about our successes and concerned about our shortcomings. In the engagement that ended about four hours ago, we were defending a position and were attacked by a much larger force. It was a tough fight; we lost a lot of people, but so did the enemy. In our minds, it was less than a win, especially in Team Charlie’s sector, where the enemy made its main attack and some of the bad guys got through. We have been talking about it for about an hour and are getting down to the real meat of the dialogue.

OBSERVER-CONTROLLER (OC) TO THE TASK FORCE COMMANDER (CDR): “Let’s go over your concept for this part of the battle one more time.”

CDR: “I expected that the attack would be in the northern part of our sector, against Team Bravo. The terrain seemed to indicate that, and so did our intelligence. So that’s where I put our main effort—the strongest minefield, the heaviest artillery fires, and the greatest preparation. That’s where I was prepared to commit the reserve. But I knew he [the enemy] might come south [toward Team Charlie] or even do something unexpected, so it was important that I not move too quickly—we were prepared to go either way.”

OC: “How were you going to decide?”

CDR: “When he came through the gap in the mountains to our east [pointing to map] he would have to commit north or south. At that point, he was still nearly twenty kilometers out, but he would be beginning to move fast. Once he committed, I could begin to take him out with long-range fires at the same time that we were adjusting back here. The scouts were out there to tell me which way he turned.”

OC TO SCOUT: “What happened?”

SCOUT: “We got off to a late start because I had one element with some battle damage from the fight two days ago. We finally got into position—two positions really, one in the gap and one overlooking the gap—by midnight, but we found his [enemy] people already up there. My team on the high ground spent the night fighting his infantry, and by morning my people up there were dead. The team in the gap ran into a mine and lost their communications capability.”

OPERATIONS OFFICER (S3): “I didn’t know any of that. How come we didn’t know you still had battle damage?”

SCOUT: “I thought it would be operational. It seemed like no big deal yesterday morning, but we never got the right repair parts.”

INTELLIGENCE OFFICER (S2): “I stopped getting scout reports about 0200 [2 A.M.], but I figured you would come back on the net at dawn.”

OC: “Let’s talk about that. What could you have done?”

SCOUT: “I had people still alive in the gap. We just couldn’t talk back. I should have had another radio or even a flare.”

S2: “I should have realized that you might be in trouble when you did not check in. I could have gotten something else working to back you up. We should have talked this over before you went out.”

CDR: “Damn, guys, we’ve been through some of this before. My plan depended on early warning. Next time, I want to review this more carefully myself and talk to the scouts before they go out. Deuce [intelligence officer, S2], you need to get your whole team wired more tightly into my head and with the Three [operations officer, S3]. XO [executive officer], the staff needs to give me some ideas about redundancy in a situation like this—look, we all lost the bubble, okay?”

OC: “Good, but now let’s move on. What happened next? Let’s review the tapes” [ground truth].

The OC now displays a series of computer-generated images of the battlefield on which the players can see the enemy force coming at Team Charlie. The enemy is unimpeded by fire because the task force was slow to realize that they were through the gap and even slower to realize that they had taken the unexpected route south to Team Charlie’s sector. On the large video screen, the nearly twenty kilometers between the enemy and the outer limits of Team Charlie’s fires are quickly filled by advancing enemy.

CDR: “By this time, I was hearing from the aerial scout and I could see that they were going to hit Team Charlie hard, so I ordered things into action—artillery, shifting the reserve, getting the gunships on target—but we had not slowed them down because I was unsure of where they were. I had expected to have twenty to thirty minutes and that we would have taken out twenty percent or more of his elements with the artillery. Suddenly, I had five minutes and he was at full strength with lots of momentum.”

OC: “In fact [plays audiotape time-synchronized to the large screen], at the time you were giving your orders, the enemy advance element was already beginning to breach Team Charlie’s minefields. It was Team Charlie’s forward observer who was beginning to engage them.”

FORWARD OBSERVER: “From my vantage point, I could see their lead elements as they came in, and so I knew where they were. Our plan was to reinforce our barriers with scatterable mines at the enemy’s breach site, and I had a quick fire channel open to shoot that as an emergency mission—we had talked about it and rehearsed it. Even then there was some delay, because the guns were still set up and waiting for the mission to fire deep; but they came on target pretty quickly.”

OC: “Just in time, in fact [plays computer-generated battle map], to begin stacking them up. That was the first time they slowed down. Charlie’s fire mission also helped to create a good target mass for the helicopter gunships the task force commander was bringing into battle position.”

And so it goes, for about another hour. The high level of detail enables people to discover their own roles: what they did right, what they did wrong, but, most important, how working together more effectively as a team leads to success. In this vignette, one team (the scouts and the intelligence officer) failed because they had prepared poorly, failed to develop options, and lost their focus at a critical time. One team (the forward observer and the howitzer batteries) succeeded because they had rehearsed, had a high level of trust, and were very focused. Both teams were small teams well down in the organization, but at the critical moment their actions were decisive.

The Army’s AAR guide suggests that the time spent in the AAR be divided 25-25-50 in answering the three questions: 25 percent reviewing what happened, 25 percent reviewing why it happened, 50 percent dialoguing about what to do to improve. That division of time is a good rule of thumb, but, as we saw in the vignette, the questions are not discrete; rather, they tend to run together in a stream of consciousness. The role of the observer-controller is very important; he or she guides the dialogue to the most important and most generalizable lessons.
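
That 25-25-50 rule of thumb is easy to turn into a rough agenda. As a toy illustration only (not an official planning tool; the function name is mine), splitting a session’s time might look like this:

```python
def aar_agenda(total_minutes: int) -> dict[str, int]:
    """Split an AAR's available time per the rule of thumb:
    25% on what happened, 25% on why, 50% on how to improve."""
    return {
        "what happened": round(total_minutes * 0.25),
        "why it happened": round(total_minutes * 0.25),
        "how to improve": round(total_minutes * 0.50),
    }

# A two-hour AAR for a major event:
print(aar_agenda(120))
# → {'what happened': 30, 'why it happened': 30, 'how to improve': 60}
```

In practice, as the vignette shows, the three questions run together; the split is a budgeting discipline, not a script.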

The long-term legacy of the AAR is that the Army learned how to apply it beyond the training center, where the requirements for a good AAR can be carefully controlled. Conducting an AAR where there are only imprecise standards, where there is no thorough understanding of “ground truth,” or where there are no highly skilled observer-controllers is possible in a mature team, so long as everyone keeps in mind the weaknesses incurred by relaxing the framework. The leader may act as the facilitator, or someone else may perform that role. The objective or goal of the project or event may be taken as a standard. The participants can decide, as they conduct their review, whether or not they are comfortable with the level of information available. In this relaxed format, the AAR can be the basis for robust generative learning.

Desert Storm—A Look Back

It was this kind of less rigorous process that was involved in March 1992, when the Army’s division, corps, and major logistical unit commanders gathered to conduct an AAR of the ground operations in Operation Desert Storm. This meeting of senior generals had two objectives. The first was to look at the current systems and processes and to affirm those that had been satisfactory while identifying those that needed to be changed. The second objective was to try to look into the future to see how land warfare was changing. It was a complex AAR involving both learning how to improve the current paradigm and attempting to learn about the next.

Out of that meeting came a series of recommendations that formed the basis for immediate improvements to the force. As a result the 1st Armored Division that went into Bosnia was slightly different from and more effective than the 1st Armored Division that had fought in the Persian Gulf. Out of that meeting also came important insights for the Army’s new field manuals (writing new doctrine); for reengineering the process of developing new equipment and tactics (creating the Battle Laboratories); and for what became the Force XXI experiments (creating a more flexible Information Age force). That meeting also made a powerful statement to the participants and to the Army as a whole. In the words of General Fred Franks, who hosted the meeting, it said emphatically that “We were not going to stand still.”7

Such a complex AAR was possible because this was a team that had long ago bought into the process. Because they were comfortable with the process and with one another, they could have an effective AAR even though many of the normal aspects of the AAR structure were not in place.

Any team that has a clear task can use the AAR process to improve its performance. You must be able to focus on some discrete event (or at least be able to isolate the event). You must also be able to identify fairly clearly what was supposed to happen (intent and concept; task and standard) and what did happen (ground truth). It is normally helpful to have a facilitator of some kind, although a mature team can accomplish a good AAR without much help.

The process will not come automatically; you must structure your own learning for success. The following hints may be helpful.

  1. Do not start your AAR experience with an enormous, complex task; build the skills with simple but not inconsequential tasks.

  2. Make sure you have as much information as possible about what really happened, and make sure every participant has access to that information as you go through the process.

  3. Ensure that the leader endorses the ground rules.

  4. Finally, set aside enough time to really get into things. If you are reviewing a major project at a critical milestone, with twenty or thirty team members, it could easily take an afternoon to work through the most important issues. If you don’t allow enough time, you will be unlikely to get beyond the “measurables” and into the “unmeasurables,” where the most significant learning can take place.

The ultimate result of this kind of process, whatever you call it and however it is structured, is that people not only learn but become more engaged as leaders, sharing the responsibility for a team’s success. Formal leadership does not change hands, but the organization comes to see the leading roles of its members in a better perspective. It is a common experience at the training centers for plans to break down because some small element failed—a gap in a minefield was not closed properly, a scouting report did not get back to the commander, a resupply operation was not accomplished on time, or a unit got lost. The AAR process allows such shortcomings to be uncovered in a way that is as nonthreatening as possible, discovering the cause so that it can be corrected—fixing the problem, not the blame. One outcome is that the organization comes to understand the leadership role of all the team members; the responsibility of each leader to make sure that his or her task is accomplished is highlighted, and the critical path to success becomes more clear.

In Flight of the Buffalo, authors James Belasco and Ralph Stayer suggest that the new leadership paradigm should be a flock of geese. They write,

What I really wanted in the organization was a group of responsible, interdependent workers, similar to a flock of geese…I could see the geese flying in their “V” formation, the leadership changing frequently, with different geese taking the lead. I saw every goose being responsible for getting itself to wherever the gaggle was going, changing roles whenever necessary, alternating as a leader, a follower, or a scout. And when the task changed, the geese would be responsible for changing the structure of the group to accommodate, similar to the geese that fly in a “V” but land in waves. I could see each goose become a leader.8

The metaphor is powerful—leadership distributed among the team members, rotating as necessary so that the best-qualified leader makes the right things happen at the crucial moment, when his or her skill as a leader is most needed.

LESSONS LEARNED:
A STRUCTURE FOR ORGANIZATIONAL LEARNING9

Organizational learning, in a broader sense, can occur only when an organization as a whole is communicating and adopting what is being learned in its various parts. Learning begins in isolation; one individual or one team learns something of value. Turning that into organizational learning requires a mechanism for sharing. Given the success of the AAR, could the entire Army benefit from what was taking place one event at a time in the training centers?

The answer lay in the rebirth of the Army’s lessons learned process. During World War II, Marshall had initiated “lessons learned,” under the direction of the chief of military history, to gather up what was being learned in the far-flung operational theaters and cycle it back into the training base and to other units. The system did not survive the war, but the idea behind it did, enabling it to be revived in both Korea and Vietnam. As the system matured, mechanisms were developed to feed these battlefield lessons into the development of new tactics, procedures, organization, and equipment. With that institutional background, it was a natural step, in 1985, to establish a formal Center for Army Lessons Learned (CALL) at Fort Leavenworth, Kansas, for the express purpose of capturing the learning taking place at the training centers and disseminating it throughout the Army. Since its inception, CALL has expanded its charter to include capturing lessons from actual operations. Furthermore, thanks to the capabilities of information technology to report and disseminate using on-line databases, experience can now be disseminated almost as rapidly as it is collected.

From Somalia to Haiti

The soldiers who went into Haiti in September 1994 were from the same 10th Mountain Division that had gone into Somalia in December 1992, but they were better prepared. Each soldier had a handbook that covered the current situation in Haiti, common phrases in Creole, preventive medicine for the tropics, and tactics and small-unit procedures for the kind of operation they were facing. Their predeployment training had been carefully tailored to include crowd control techniques, dealing with local officials, operating in urban areas, dealing with the media, and other unique challenges posed by the operation.

All that was made possible by a team from the Center for Army Lessons Learned that had been studying the Somalia operation and other, similar operations and had been developing contingency plans in the face of the deteriorating refugee situation in the Caribbean. Once the 10th Mountain was alerted, the CALL team deployed to Fort Drum, New York, the division’s home base, and began working with commanders and unit leaders to transfer all the knowledge at their disposal to the troops who would be on the ground in Haiti.

Other teams, drawing on the same information base, were working around the clock to bring the troops the latest equipment to assist them—still and video digital cameras, life-finder sensors to sense body heat in dark alleys, and laptop computers to downlink all available intelligence. The effort enabled the Army to increase the effectiveness of the division significantly.

CALL’s role did not end there. CALL teams deployed to Haiti alongside the troops of the 10th Mountain and began a collection-and-analysis effort to capture the knowledge that was being gained every day. When the troops who would rotate in behind the 10th Mountain—the 2d Cavalry Regiment and the 25th Infantry Division from Hawaii—were alerted, CALL dispatched teams to those units to deliver knowledge packages directly to deploying units—“real-world” maps, rules of engagement, intelligence, and direct feedback from the troops that had gone in ahead of them. Additionally, on their way to Haiti, these units cycled through the training center at Fort Polk, Louisiana, where the observer-controller teams, also working with CALL, created a training environment in which the troops could rehearse operational tasks under Haiti-like conditions.10

THE LEARNING CHALLENGE

Earlier we argued that, as we face our external environment, “We don’t know what we don’t know.” As we face our internal environment, it seems that the opposite is too often true: “We don’t know what we do know.”11 As an important organizational asset, knowledge is usable only if it can be identified and disseminated so as to contribute value. The challenge is to discover what is known in any part of the organization and, if it is valuable, make it known to all. Success in helping units prepare for Haiti was made possible by the establishment of a learning culture and by the expansive CALL knowledge base that makes the experience of one unit, anywhere in the world, quickly available to all. Thus there are three key elements: the right culture, the knowledge itself, and access to the knowledge.

The Process

The CALL experience also suggests a six-step process: targeting opportunity, collecting data, creating knowledge, distributing knowledge, short-term applications, and long-term applications.

Targeting Opportunity. Deciding what to learn from is the first and most critical step in the process. There will be some easy targets: moving your business into a new part of the world or undertaking a new kind of operation presents a prime opportunity for learning, as does a successful process-improvement or quality program already running in some part of the organization. Other high-payoff learning opportunities may be less obvious. Look for things you do repetitively, where a lesson that seems minor in itself may have a high payoff when replicated many times.

Collecting Data. Collecting data involves observing a targeted event or process and recording what happens—but it is not easy. As much as possible, data collection should target factual events that can be measured against a clear standard or at least an intended outcome.

Observations should be as unambiguous as possible to minimize bias by the collectors, even the best of whom will tend to impose personal judgments as they observe events.

Creating Knowledge. Some of the data will have meaning by itself and can be disseminated with relatively little analysis—especially for quick-fix applications. More often, judgments about the quality of the data and interpretations about its meaning will be needed to realize its full value. This requires expertise, maturity, and a degree of isolation from the organizational hierarchy to preclude filtering, especially of bad news.

Distributing Knowledge. Distribution can be by any number of means but should include both push and pull strategies. In a pull strategy, knowledge is available to planners and students in a central, easily accessible knowledge base. Think of it as a library. In a push system, knowledge teams go on site to assist leaders preparing for similar operations, bringing selections from the library with them.

Short-Term Applications. Short-term applications are quick fixes, things that are relatively simple to diagnose and correct. They have the greatest value to whoever will be doing something similar next. Key short-term issues include failed planning assumptions, clarification of unknowns, invented procedures, identification of unanticipated problems, and on-the-spot fixes of unanticipated problems.

Long-Term Applications. Long-term applications address systemic issues that are identified through repeated observations and that often require interpretation. Long-term applications feed back into basic policies, organizational concepts, the formulation of strategy, and long-range plans.

BUILDING A KNOWLEDGE NETWORK

The Army’s CALL experience provides some other important lessons for anyone attempting to build a knowledge network.

Developing Competent Collection Teams. Collection team members need both data collection skills and subject matter expertise. Multidisciplinary teams can often capture more of an event's complexity. Try forming ad hoc collection teams that combine people expert at the lessons learned process with people drawn into the process on a project basis to provide subject matter depth. Collection teams need to be trained to be as objective and nonjudgmental as possible.

Maintaining a Customer Focus. The process of developing lessons learned must be focused on the needs of the organization, and resources should be concentrated on those issues with the potential of adding the most value. This helps structure the process for success and garners support for expending the resources involved.

Exploiting Technology. The CALL experience produced little of enduring value during its first several years of operation because the data coming out of the training centers overwhelmed our pencil-and-paper system. Analysts received thousands of units of written, audio, digital, and video data and attempted to distill them into bulletins and booklets. CALL found that units often made similar mistakes and provided some insights into how to train to overcome those mistakes, but the results obtained tended to aggregate at such a high level that they were not always useful.

Putting CALL on-line, so that the CALL database was available to any user Army-wide, greatly improved the interpretive process, especially for quick-fix lessons. Quick storage and retrieval with great specificity at a very low level were now possible. In the pencil-and-paper system, for example, a logistics planner might have found “Depending on the light conditions, the unit will need an adequate supply of chemical illumination markers”—helpful but not much more than a reminder. On-line, he can get into specific data points, things like “We thought we would need about 6,000 chemical illumination markers over the two-week period, but because we were there during the new moon, we needed nearly twice that many to be able to manage traffic in the rear area.”

Perhaps even more important, an on-line system enables users to access the knowledge base when they need something. This turned out to be much more effective than a paper-based system, whose lessons were too often not available at the time and place they were needed.

Protecting the Messenger. Some of the most important lessons will involve poor performance; exposing and identifying bad news raises tough issues in any organization. Sometimes lessons can be neutered so that poor performance is not easily identifiable with specific units; but most of the time, simply identifying an event will be tantamount to identifying its participants. Rapid dissemination makes interference or filtering more difficult. But in the final analysis, the data collectors and the lessons learned organization must be protected from editorial heavyhandedness. Over time, protection will flow naturally from a learning culture: people will value the process for its goodness. Initially, however, there is no substitute for high-level sponsorship.

Avoiding Reinventing the Wheel. Avoid targeting events for which the outcomes, whether plus or minus, will be predictable and unremarkable. Much of the challenge in distilling lessons learned is in separating poor performance at tasks the organization knows how (or should know how) to do from new tasks or new methods for doing familiar tasks. Careful targeting of high-payoff issues and selection of mature, experienced collectors and analysts will help you avoid this problem.

Keeping It Simple. Developing a simple system, particularly in the early stages, keeps expectations within bounds and facilitates early success—both of which are important to long-term success.

RULE TEN: LEARN FROM DOING

Learning from doing and sharing the knowledge gained are the essence of organizational learning. By listening to the organization and fostering a dialogue about performance, the leader opens the door to learning, sharing lessons learned, and reducing risk. By stretching the organization to act differently, to do new things in a learning atmosphere, the leader fosters an entrepreneurial spirit of innovation and growth.

More than anything else, learning involves listening and a willingness to change for the sake of growth and improvement. Our organizations are already full of lessons learned; they simply have not been captured or disseminated, and they are of little value until they become part of the knowledge asset and are exploited. Remember, McDonald’s Big Mac, fried apple pie, large fries, McDLT, and Egg McMuffin all came up from the bottom, not from McDonald’s large corporate machinery.12 To be useful, they had to be identified and shared.

A structured program of organizational learning not only helps disseminate best practices and new knowledge, it also reduces risk. The more people and teams know, the more effectively they perform and the less likely they are to fail.