If you have decided to use a team to accomplish a piece of work, then the next question is how to set it up. You will want to minimize its vulnerability to the process losses discussed in the previous chapter and, ideally, to increase the chances of positive synergy among members. Unfortunately, group process difficulties are notoriously hard to stamp out. Merely being aware of them, for example, does not mean that you can avoid them. So what is to be done?
One strategy for heading off group process problems is to structure members’ interaction in ways that minimize the chances that things will go awry. The Nominal Group Technique (NGT), for example, provides a multistep procedure that both guides and constrains group interaction. Intended for tasks that involve eliciting and prioritizing policy alternatives, the technique has been shown to significantly reduce a group’s vulnerability to the kinds of process problems that often develop for such tasks. The Delphi method goes even further—group interaction cannot compromise performance when Delphi procedures are used because members do not interact at all. Instead, they submit their personal views to a coordinator, the coordinator summarizes them and sends the results back to all participants, and that iterative process continues until convergence is achieved. And, of course, there are the numerous structured analytic techniques that have been developed for use by intelligence analysts in managing not just the cognitive processes they use in generating their inferences and assessments, but also the social dynamics of the analytic process.1
Although structured techniques assuredly can be effective in lessening a team’s exposure to possible process losses, they come at a cost. By limiting or constraining group interaction, they also necessarily cap a team’s potential for generating synergistic process gains. What might it take to create a team that is able to limit its vulnerability to process losses while remaining open to the possibility of generating positive synergies? To explore that question, consider the two intelligence teams described next. Although both teams produced acceptable outcomes, one was much more robust than the other and better able to exploit the full complement of its member resources in achieving its objective.
A senior intelligence official tasked a science and technology unit manager to assess the progress that potential adversaries were making in developing a specific technological threat. After meeting with the senior official, the manager stopped by his deputy’s office and asked him to put some people together to look into the matter. The deputy was pleased to get the assignment because he believed that attention to the threat was long overdue. He immediately drafted an e-mail message describing the issue and seeking ideas for pursuing it. A version of the message that included sensitive details went to selected colleagues in the intelligence community, and a more general version went to various scientists in academic institutions and commercial laboratories. The deputy received a number of informative responses to his queries, and he asked those who provided the most promising responses if they would be willing to expand and elaborate on their initial comments. He explained that he would then integrate what they supplied with the contributions of others and put together a summary report for his manager. Almost everyone he contacted agreed to participate.
Meanwhile, the unit manager invited a dozen high-status scientists whose expertise and perspectives he especially respected to meet with him at headquarters to enrich his personal “take” on the matter. The manager deemed the meeting a success. A few attendees volunteered to look further into certain aspects of the problem, drawing on people in their own professional networks. Even those who did not volunteer for follow-up activities participated constructively in the meeting and said they would be available to meet again if needed. The manager wrote up his notes from the meeting and then waited to receive the findings of the team his deputy had convened.
A few weeks before the commissioning official was to be briefed, the unit manager asked his deputy for a written account of what his team had learned. The deputy was ahead of the game: he already had prepared his report and he and the manager spent a couple of hours discussing it in detail. The manager then integrated what the deputy’s team had produced with what he had learned from his interactions with the senior scientists and prepared his slides. The briefing went well. The official learned most of what he had hoped to find out and he advised the manager to expect some follow-on work in the near future. The manager passed the news on to his deputy, who shared his pleasure about how it all had turned out.
Sometimes developing an original solution to a hard problem is best done by a small, diverse group operating outside members’ regular organizations. That is how a continuing problem involving the extraction of certain kinds of data from some challenging locations got solved. The exfiltration team was one of many working groups created and supported by a special intelligence community unit (referred to here as DevOrg) that had been created to take on concrete problems that previously had defied solution.
Long before the exfiltration group was formed, DevOrg staff collaborated with government sponsors to properly frame the group’s task, to select the team leader, and to identify candidates for membership. A recently retired senior intelligence officer was invited to lead the team, and he, along with DevOrg staff, discussed who might be invited to join. Only individuals with high-level expertise in the problem area were considered for membership. But technical qualifications alone were not sufficient—candidates also needed to have demonstrated in their previous work both a learning orientation and respect for people who had expertise different from their own.
Eventually a dozen members were chosen, extracted from their day jobs, and given a specific objective and deadline: the group had six weeks to produce both a briefing for the government sponsor and a written back-up document. The work would be done at an off-site location, and members were warned that there would be no sneaking back to the office—the work would be far too focused, intense, and collaborative for that. The essence of the charge to members, according to one DevOrg staff member, was this: “You’ve all said to yourselves, ‘If only I were free to work on this problem full time, what I could accomplish!’ Well, this is that opportunity. It’s your team’s work, it’s your product, and it can make a real difference.”
The launch of the team was designed with great care. The leader first asked each person to interview another member to learn as much as possible about his or her special capabilities relevant to the exfiltration problem, and then to share that information with the rest of the group. Next, the leader talked about the necessity of collaboration across disciplines, emphasizing how much could be learned if members recognized and used the full extent of their teammates’ knowledge, skill, and experience. Paying attention to the badges people were wearing when they arrived, he said, would only get in the way. That observation struck home for one member: “I’ve spent most of my career hiding from guys like you,” he said to a teammate who came from an agency with which his own had tense relations. His comment jump-started a general discussion of what members might learn from one another that they could use in working on the exfiltration problem.
By the end of the launch meeting members had a good understanding of the team’s task, full awareness of the team’s diverse resources, and agreement about the basic norms of conduct that would guide their work. Within a couple of days, the team had organized itself, identified subtopics and the subgroups that would work on them, and was moving forward under its own steam. As the work progressed, DevOrg staff occasionally challenged the team to think about the problem from a different angle or provided seemingly irrelevant input that occasionally prompted a new way of thinking. Additional contributions and fresh perspectives came from the outside, as members contacted people in their own professional networks for assistance with particular problems or issues.
Despite its fast start, the exfiltration team repeatedly hit dead ends and found itself cycling back through issues that already had been addressed. Eventually, about halfway through the team’s six-week performance period, frustrations boiled over and the team experienced a significant upheaval. Although working through the team’s many difficulties was painful to all, members finally came up with both a new approach to the problem and a reorganization of the team itself. From then on, everyone focused intently on executing the task, and the team completed its briefing and written report just before its deadline.
Although both of the teams described completed an acceptable piece of work, as teams they could hardly have been more different. Indeed, the emerging threats team can perhaps best be described as two groups rather than one. There was the set of respondents to the deputy’s e-mail queries, who clearly fall into the coacting group category described in Chapter 2 (that is, members generated individual products that the deputy subsequently assembled). And there were the senior scientists who participated in the manager’s headquarters meeting, a group that just as clearly falls into the surgical team category (that is, members’ inputs served exclusively to assist the one person, the manager, who was responsible for the product).
Because the commissioning official found the manager’s briefing helpful, the emerging threats team would score well on the first dimension of team effectiveness discussed in Chapter 3, client satisfaction. But neither of the emerging threats subgroups did well on the second and third dimensions of effectiveness. They did not become stronger as performing units (they could not have, since the deputy’s group never even met, and the senior scientists attended only a single meeting), nor did individual participants appear to learn much along the way (the deputy’s participants gave their own views but never even heard those of others, and the senior scientists convened by the manager did not have enough time together to learn much from one another).
By contrast, it was entirely clear who was on the exfiltration team. Moreover, members were interdependent for achieving a common objective for which the team as a whole was responsible, and they worked together closely throughout the team’s entire six-week life. Assessment of this team’s effectiveness requires some inference. We know that the team’s briefing and written report were completed on time, but absent data about the reactions of its client we cannot confidently assess its performance on the first effectiveness dimension. The exfiltration team clearly did well on the second and third dimensions, however: it was stronger as a performing unit at the end of its six-week life than in its early days, and the team experience did contribute to the professional learning and development of individual members.
In sum, the exfiltration team was more of a “real” team than the emerging threats group. Like other real work teams, it had a clear boundary that distinguished members from nonmembers. Members were interdependent in generating the product for which they had collective responsibility and accountability. And the team had at least moderate stability of membership, which gave members time and opportunity to learn how to work well together. As will be seen, these three attributes strongly facilitate competent teamwork.
To work well together, team members need to know who they are. Difficulties are sure to develop if there is so much ambiguity about membership that neither members nor outsiders can distinguish between the people who share responsibility for the team product and others who may help out in various ways but are not on the team.
Having a clearly bounded team does not mean that members must do all their work in the same place at the same time, or that membership cannot change when circumstances change, or that members can draw only on other members’ own expertise. It merely means that members know who their teammates are—a seemingly simple matter, but one that trips up a surprisingly large number of teams. Organizational psychologist Clayton Alderfer uses the term underbounded to characterize social systems whose membership is uncertain or whose boundaries are so permeable that there is a constant flow of people in and out. Alderfer finds that such groups risk becoming “totally caught up in their environmental turbulence and [losing] a consistent sense of their own identity and coherence.”2 It is nearly impossible for an underbounded team to develop and implement a coherent performance strategy.
The reverse state of affairs also can occur. A team with tight, impermeable boundaries is what Alderfer calls an overbounded system. Members of overbounded teams typically have a clear team identity and often develop into a highly cohesive unit. High cohesiveness generally is viewed as a positive state of affairs that helps a team achieve its purposes. That view is quite understandable: when one experiences the coincidence of high cohesiveness and team effectiveness, it is tempting to assume that the former is responsible for the latter. But the opposite actually is just as likely—that is, that performing well engenders team cohesiveness.
Moreover, there is a downside to cohesiveness. Without question, cohesive groups are better able to coordinate and control the behavior of their members than are underbounded groups. They typically generate strong pressures for member conformity, and members are disposed to comply because they place a high value on their teammates’ approval. That combination can result in energetic, focused team behavior. But highly cohesive groups can become so inwardly focused that they overlook potentially significant contextual changes, and they tend not to engage in the kinds of cross-boundary exchanges that can be critical in intelligence work. And, perhaps most important of all, high cohesiveness sometimes inhibits team learning and the correction of errors, resulting in fiascos of the kind described by Irving Janis in his research on groupthink.3
In fact, cohesiveness is neither as pernicious as the groupthink model posits nor as generally advantageous as lay persons and some scholars occasionally have suggested. Is it possible to harvest the benefits of cohesiveness without falling victim to its potential dysfunctions? The key may have to do with the basis of a group’s cohesiveness. If what holds members tightly together is a shared wish to maintain harmony and good interpersonal relationships, the risks of dysfunction are high. But if cohesiveness stems from a shared commitment to accomplishing the team’s task, it can unleash members’ energies and talents to generate synergies that never would be seen in a loosely bounded group.
Managing team boundaries well, therefore, requires maintaining a balance between loose and tight. Too little boundedness, and chaos can result. Too much, and the team can develop an inward focus that blinds members to external realities and opportunities. Teams with the right balance have sufficient cohesiveness to sustain members through tough times without focusing so much on internal harmony and uniformity that team performance is compromised.
Members of real work teams combine their efforts and talents to achieve some common purpose. This feature sharply distinguishes real teams, such as the exfiltration team and the red teams in the PLG simulations, from coactors who are merely performing their own tasks in parallel, such as the emerging threats group. What is critical is not whether members are interdependent for obtaining some reward, as would be the case if recognition were given to every member of a work unit contingent on the simple sum of their independent contributions. Instead, it is that the task requires members to rely on one another in generating a collective product, service, or decision.4
The benefits of interdependence are clearly seen in Michael O’Connor’s and my study of 64 analytic units in six different U.S. intelligence organizations. We observed each participating group, collected survey data from members, and constructed a multi-attribute measure of each group’s performance. Each group in the study was identified as either a coacting group or a work team, depending on whether individual members or the group as a whole had primary responsibility and accountability for performance outcomes. Of the 64 groups we studied, 59 percent were coacting groups and 41 percent were work teams.5
Figure 4-1 shows that work teams significantly outperformed coacting groups on our composite measure of performance effectiveness. What surprised us was the reason why the work teams performed better. Members of work teams engaged in significantly more peer coaching—that is, teaching and learning from one another—than did members of coacting groups. And peer coaching was more strongly associated with performance effectiveness than any other factor we assessed in the research.6 Clearly, interdependent responsibility for a common outcome provided both an occasion and an incentive for team members to coach one another, which is consistent with other research showing that task interdependence fosters mutual learning among group members.7
Real work teams can be small or large, can have wide-ranging or restricted authority, can be temporary or long-lived, can have members who are geographically co-located or dispersed, and can perform many different kinds of work. But if a team is so large, or its life is so short, or its members are so dispersed that they cannot work together interdependently, then prospects for team effectiveness are poor.
FIGURE 4-1 Performance Effectiveness of Work Teams and Coacting Groups. From Hackman & O’Connor (2004).
Conventional wisdom about groups whose members stay together for a long time is pessimistic about their viability and performance. Although teams may become better at working together in the early phases of their lives, the argument goes, the improvements soon plateau and then, at some point, members become too comfortable with one another, too lax in enforcing standards of behavior, and too willing to forgive teammates’ lapses. It is better, therefore, to have a continuous flow-through of members to keep teams fresh and sharp.
Conventional wisdom is wrong. Research findings overwhelmingly support the proposition that teams with stable membership have healthier dynamics and perform better than those that constantly have to deal with the arrival of new members and the departure of veterans. An analysis of National Transportation Safety Board (NTSB) records, for example, showed that 73 percent of the incidents in the NTSB database occurred on a crew’s first day of flying together, and 44 percent of those took place on a crew’s very first flight. These findings were extended in an experimental simulation in which fatigued crews that had flown together for several days caught and corrected more errors than did well-rested crews who were just starting out. Similar results have been obtained for teams as varied as coal miners and construction crews, and these field-study findings have been affirmed in controlled laboratory experiments.8
There are many reasons why stable teams perform better. Members who are familiar with one another and with their work context are able to settle in and focus on the work rather than waste time and energy getting oriented to new coworkers or circumstances. They develop a shared mental model of the performance situation and, with time and experience, one that is more integrative than the individual models with which they began.9 They develop a shared pool of knowledge, including who has special skills for which aspects of the work, thereby building the team’s capability to actually use what members know and know how to do. And, once the team gets moving down a good track, members gradually develop a shared commitment to the team and a measure of caring for one another.
Research and development teams appear to be an exception to the findings just summarized. Organizational researcher Ralph Katz found that the productivity of such teams peaked when members had worked together for about three years, and then began to decline.10 For teams that perform scientific and technical work, a moderate flow-through of new members does help, probably because the new arrivals bring with them fresh ideas and perspectives to which the team might not otherwise be exposed.
Stable teams also need to be aware of the risk that they will become too insular and rely too much on habitual routines in executing their standard tasks. When performance becomes automatic, which can happen in long-tenure groups, the benefits of team longevity can be negated by unexpected and unnoticed contextual changes that render the team’s standard performance strategies irrelevant or inappropriate.11 Even so, the main finding—which merits considerably more attention by those who design intelligence teams than it typically gets—remains valid. Keeping team members together almost always brings nontrivial benefits—in team performance, to be sure, but also in a team’s capacity to learn from its experiences and in the ongoing professional development of individual team members.
As intelligence organizations increasingly come to operate more like networks than stovepipes, it is becoming ever more challenging to maintain team stability. It can seem as if people are flowing by a team in boats on a rapid river, with some hopping off to join the team and others hopping on to leave it, both at hard-to-predict times. One way that some teams deal with this phenomenon is by creating a core team, a set of people who can be counted on to remain in place for a while. Core team members take responsibility for managing the flow-through of others and for making sure that team purposes, values, and work strategies do not get lost in the continuous shuffle. No doubt other ways of dealing with the increasing fluidity of team membership will be invented as network technologies continue to evolve. But no matter what comes to pass, it will continue to be the case that team effectiveness greatly depends on some set of people having enough time together to learn how they can best work together.
Creating a real work team requires thought, effort, and time, as was evident in the investment DevOrg staff made in putting together the exfiltration team. Is there not a simpler way to capture the benefits of teamwork that also will minimize the chances that a team will go sour? Among the possibilities that have attracted widespread interest and attention in recent years are crowdsourcing and collective estimation, described in Chapter 2. These two techniques, as well as the many other problem-solving and decision-making tools proliferating these days, have a common aspiration: the emergence of a high-quality group product from the independently made contributions of separate individuals.12
The logic is seductive. If ants and bees can achieve collective outcomes without having to attend endless meetings, which they assuredly can, then surely humans can do so as well.13 So we have bestselling books such as The Wisdom of Crowds, which show that the collective judgments of regular people can be more accurate than careful assessments made by highly trained experts.14 And we have vast numbers of prediction markets springing up to generate forecasts that, at times, both diverge from and outperform the judgments of trained professionals.15 It does not appear to be a good time to be an expert.
But let’s take a closer look to see how this kind of thing actually works. The iconic example of the wisdom of crowds is estimating the number of beans in a large jar in a booth at the county fair. The person whose guess is closest to the actual number of beans wins, but the jaw-dropper is how close the average of the judgments of all passersby is to the actual number of beans in the jar. Bean-estimating is an instance of what psychologist Ivan Steiner has called a compensatory task. For compensatory tasks, individuals’ errors in one direction (for example, estimates that are too high) are compensated for by others’ errors in the opposite direction (estimates that are too low). So long as the pool of respondents is large enough, individuals’ judgments are entirely independent of one another, and each judgment contains at least a kernel of truth, the overall average will be remarkably accurate.16
When these conditions are not met, group estimates can go badly awry. Rather than capturing the wisdom of crowds, one may get the pooling of ignorance or bias. For example, imagine that you have asked members of an ethnic group that is in conflict with another group how likely it is that the other group will commit a hostile act. The average of respondents’ estimates will be inflated well above reality. Why? Because when the dynamics of intergroup conflict are in play, members of each group share and reinforce negative views about the intentions of the other group. So the great majority of respondents’ errors will be in the same direction (estimates that are too high), which means that those errors will cumulate rather than cancel each other out. Because the assumptions of the compensatory model have been violated, collective estimation will generate systematic error rather than truth.
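The arithmetic behind these two cases can be sketched with a small simulation. This is an illustrative sketch, not drawn from the source: the true bean count, error spread, and bias magnitude below are all assumed values chosen only to show how independent errors cancel while a shared bias cumulates in the average.

```python
import random
import statistics

random.seed(42)
TRUE_COUNT = 1000  # assumed actual number of beans in the jar

# Compensatory case: each of 5,000 passersby makes an independent,
# unbiased guess -- errors scatter randomly above and below the truth.
independent = [TRUE_COUNT + random.gauss(0, 200) for _ in range(5000)]

# Violated case: a shared bias (e.g., intergroup hostility inflating
# threat estimates) pushes every guess in the same direction.
SHARED_BIAS = 300  # assumed size of the shared error
biased = [TRUE_COUNT + SHARED_BIAS + random.gauss(0, 200) for _ in range(5000)]

# Independent errors largely cancel, so this average lands near the truth.
print("Average of independent guesses:", round(statistics.mean(independent)))
# Shared errors cumulate, so this average is systematically inflated.
print("Average of biased guesses:", round(statistics.mean(biased)))
```

The first average comes out close to the true count; the second stays roughly the full bias above it, no matter how many respondents are polled. Adding more judges sharpens a crowd's estimate only when their errors are independent.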
And then there are mobs, panics, and riots—social forms that would seem to express the madness of crowds rather than their wisdom. Once set in motion, the forces of social and emotional contagion reinforce rather than correct for biased individual perceptions and actions, which generates collective disasters. Prediction markets, for example, are no less vulnerable to bubbles than are financial markets. And when a bubble develops, the assumptions of the compensatory model are violated and the model fails.
State-of-the-art techniques for making predictions and solving problems assuredly deserve a place in the toolkits of intelligence professionals. But they should supplement rather than substitute for teamwork and collaboration. As was vividly demonstrated by the red teams in PLG simulations (Chapter 1), well-designed and well-led teams of experts really can generate outcomes that exceed in insight and applicability anything that could be produced either by putting the problem out for bid or by mechanistically combining individuals’ contributions. There is much to be said for thinking deeply about intelligence problems in the company of others.
Fortunately, there is a middle ground between excessive reliance on intact teams of in-house experts on the one hand and the relatively mechanical use of technological or procedural aids on the other. For example, both open source programming and wikis take full advantage of information technologies to help professionals assemble, refine, and share knowledge. They engage, coordinate, and properly weight the contributions of people who have the knowledge or skill required to accomplish a collective task. And they do so in ways that minimize the chances of pooling ignorance or diffusing shared errors or biases. Although sometimes viewed as individualistic free-for-alls, open source programming, wikis, and even online games actually invite active collaboration among participants and quite frequently result in the development of informal groups of participants.17
Technology-intensive tools such as these avoid the extreme versions of both the “cathedral” model of collaboration (which risks insularity because participation is restricted to designated insiders) and the “bazaar” model (which invites the chaos that can come from throwing the doors wide open to anyone and everyone, expert and amateur alike).18 Instead, these tools make it possible for people who actually know something about an issue to work productively with others to create and refine real collective products.
Real teams—whether face-to-face or distributed, and whether member interaction occurs in real time or asynchronously—work best for tasks that require members to work together interdependently to create a collective product or service. In both the exfiltration team described earlier in this chapter and the red team in the PLG simulation described in Chapter 1, a successful solution required a diverse set of high-level experts to draw extensively upon one another’s knowledge, skill, and experience to generate an original idea that no single member could possibly have come up with.
Many intelligence teams have tasks that require the use of high-level expertise; when they do, it assuredly is worth the trouble to create a real work team and support it well. This involves much more than just assigning people to a team and, in the words of one manager, “letting them work out the details.” As will be seen in the chapters that follow, giving a real team a reasonable chance for success also requires careful attention to specifying team purposes, to selecting the right team members, to establishing the norms of conduct that will guide team behavior, and to providing the organizational and leadership supports the team will need.
Creating a work team is akin to laying the foundation for a building. If the foundation is well conceived and solid, the builder can proceed to erect the rest of the structure with confidence. If it is not, the building will never be as sturdy as it could have been, and it may become necessary to find an alternative way to accomplish the function it was to have served. The same is true for work teams. If organizational or political realities make it impossible to establish a solid foundation for a work team—that is, to create a reasonably stable, well-bounded unit whose members are interdependent for some shared purpose—then you may want to find some other way to get the work done. It usually is wiser to forgo any hoped-for benefits of teamwork than to risk the dysfunctions that so often develop in poorly designed groups.
The intelligence community poses special challenges in creating well-designed work teams. These days, intelligence work increasingly is performed by professionals who are dispersed across geographies and time zones. This means that face-to-face interaction among team members is being supplanted by the use of electronic technologies for communication and coordination. The “sneaker net” that merely required an analyst to go down the hall for consultation with a colleague is being replaced by e-mail, chat rooms, videoconferencing, wikis, and other new technologies still on the horizon. Competently managing team processes when members are widely dispersed, even with the aid of sophisticated electronic technologies, can be quite a challenge.
Another challenge is to find ways of sustaining a reasonable level of team stability in increasingly fluid organizational circumstances. Consider, for example, what it would take to maintain stability in a fully integrated counterterrorism team whose responsibilities extend all the way from collection through analysis to intervention—activities that now are dispersed not just across teams but across whole organizations. How could such a team be composed and managed so it had available precisely those capabilities that it needed for each stage of the work without also requiring some members to sit on their hands during those times when their particular contributions are less essential?
More generally, what is to be done about the fact that human resource policies and practices in some intelligence community organizations generate a near-constant flow-through of team members? How should a team that has a relatively long-term assignment deal with the transfer, promotion, or reassignment of members who were centrally involved in planning the work? Would the idea of having a “core” team, discussed earlier in this chapter, be organizationally and politically feasible? What about sand dune teams, described in Chapter 2, in which group purposes and operating routines are established and maintained at the unit level, with specific teams forming and re-forming within that unit as circumstances change? Would that kind of team be useful in settings where personnel fluidity is more the rule than the exception? Or what about giving team leaders explicit training on how to conduct efficient, informative briefings of new members and competent out-briefs of those who are departing? Could such training help teams achieve the benefits of stability even in circumstances where the dominant reality is change?
As the intelligence community continues to evolve from reliance on coacting groups that operate within functionally defined organizations to real teams that cross functional, disciplinary, and organizational boundaries, it will become increasingly important to find ways around organizational policies and practices that currently get in teams’ way. One hopes that eventually community leaders will develop not just skill in creating and supporting real teams but also the capability to form and launch them extremely quickly—because that is what is needed most of all when a team must come together on the spot to deal with an emergency or crisis situation.