Something’s up in Boston. In a few weeks, the World Ecumenical Council will host a conference at which several extremely high-profile religious leaders from around the world will give addresses. The Rockwell Front, a fringe neo-Nazi group named after George Lincoln Rockwell, has issued a credible threat that it will do something highly dramatic to disrupt the conference. Meanwhile, a vial of the deadly hantavirus, which compromises the respiratory system and can cause renal failure, has been stolen from a research laboratory at the Massachusetts Institute of Technology (MIT). There are reasons to believe that members of the Rockwell Front may have been involved in the theft.
Various data have become available that, if properly interpreted and integrated, might make it possible to figure out what is going on. Several cryptic e-mails among Rockwell Front members in the Boston area were captured by a law enforcement electronics team. The e-mail messages clearly include planning details, but they are so riddled with code words as to be impossible to interpret from plain-text reading. Fortunately, a list of word pairs on a personal digital assistant (PDA) recovered from a Front member provides a key that might allow the e-mail messages to be decoded (for example: Bug Dust = Diversions, Crabs = Explosives, Annexia = HazMat Lab). The PDA also contains what appear to be reconnaissance photographs of a building. If matched up with the architectural plans of the five buildings that are the most likely targets (the Hyatt Regency Hotel, the World Religions Center, the Federal Reserve Bank, One Financial Plaza, and the St. Paul’s Church meeting annex), it may be possible to pinpoint the intended location of the attack. Finally, two sets of photographic material may be helpful. There are poor-quality photographs (as well as sketchy biographical data) of those Front members who have some kind of connection to the MIT hazardous materials laboratory. And there is security camera footage of people entering and leaving that lab in the last few weeks.
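The decoding task facing the team is, at its core, a mechanical substitution: each code phrase in a message is replaced by its plain meaning from the recovered key. A minimal sketch of that idea follows; the code phrases are the three examples given above, and the sample message is invented for illustration.

```python
import re

# The three word pairs mentioned in the scenario; any real key would be longer.
codebook = {
    "bug dust": "diversions",
    "crabs": "explosives",
    "annexia": "hazmat lab",
}

def decode(message: str, key: dict[str, str]) -> str:
    """Replace each code phrase with its meaning, longest phrases first,
    case-insensitively, so that multi-word phrases are matched whole."""
    result = message
    for phrase in sorted(key, key=len, reverse=True):
        result = re.sub(re.escape(phrase), key[phrase], result,
                        flags=re.IGNORECASE)
    return result

print(decode("Plant the crabs near Annexia; start the bug dust at noon",
             codebook))
# -> Plant the explosives near hazmat lab; start the diversions at noon
```

Real intercepted traffic would of course be messier than this, but the sketch shows why the recovered key makes otherwise opaque messages readable.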
A four-person team has been created to analyze all these data and determine what is about to happen. It may be nothing. Or it may be something quite devastating. To figure it out, the team will have to draw upon each member’s special expertise and then competently integrate members’ separate contributions into a collective product. And they will have to do it quickly.
As you may have surmised, the situation just described is not real. It is, instead, a simulation we created in our laboratory to test the extent to which member capabilities shape team performance and, importantly, to identify what it takes for a team to recognize and make good use of its members’ resources.1 The previous chapter showed how important it is to have the right people on a team. But is good team composition sufficient? Or, perhaps, is it also necessary to create team norms (that is, agreement about those behaviors that are valued and those that are unacceptable) to help members use their collective expertise well? Indeed, might well-considered team norms compensate even for less-than-ideal team composition?
To answer the questions just posed, we designed a study that experimentally manipulated both member capabilities and team norms about collaborative work, and then assessed the effects of these two factors on team performance. We created several experimental conditions. One set of teams was stacked for success. Based on tests members had taken some weeks prior to the experiment, each team in this set had one person with very strong verbal memory (critical for decoding the e-mail exchanges) and another who had exceptionally high face recognition ability (critical for analyzing the degraded photos).2 All members received their scores on the ability tests when the simulation began, although it was not revealed that verbal memory and face recognition ability were key to cracking the plot. In addition to good composition, teams in the stacked-for-success condition also received a social intervention intended to establish a norm encouraging members to actively assess task requirements and members’ capabilities. Only after that assessment would the team decide which members should focus on analyzing which subset of the available data, and then devise its overall performance strategy.
In a second experimental condition, teams were well composed (that is, they had both a verbal memory expert and a face recognition expert) but they did not receive the social intervention. A third set of teams was the opposite of the second: they had members with only average abilities but they did receive the social intervention. And a fourth set of teams neither had exceptionally qualified members nor received the social intervention. Each team’s measured performance was simply the degree to which its analysis was objectively correct—the right suspects, the right target, and an accurate rendering of the plan of attack.
Figure 7-1 shows how teams in each condition performed. Teams that had members with the right capabilities and that also received the social intervention exhibited far and away the best performance. They did much better than teams that had no intervention at all, and also better than teams that received only the social intervention—apparently good team norms cannot by themselves compensate for mediocre member capabilities.
Here is the surprise. The poorest performance of all, again by a wide margin, was turned in by teams whose members had exactly the right task capabilities but that did not receive an intervention to help them use those capabilities well. Even smart teams, it appears, do poorly when they are not explicitly encouraged to develop task-appropriate strategies for coordinating and integrating members’ contributions.
The same phenomenon occurs in other domains as well, as is illustrated by the surprising outcome of a freestyle chess tournament in which chess masters and amateurs competed with the assistance of chess-playing computers. The winner was neither a state-of-the-art computer nor a grandmaster assisted by such a computer. It was, instead, a pair of amateur players who developed a strategy for obtaining the maximum benefit from simultaneous use of three personal computers. In the words of former world chess champion Garry Kasparov, “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.”3 Expertise alone, clearly, is not sufficient to guarantee success.
FIGURE 7-1. Team performance by condition. Adapted from Woolley, Gerbasi, Chabris, Kosslyn, & Hackman (2008).
It is noteworthy that in our simulation the condition for which performance was poorest closely mirrors the way intelligence teams often are composed in practice—that is, individuals who have the special expertise the task requires are put onto teams, and sometimes even assigned to the specific roles for which their abilities are most germane.4 But teams are left to their own devices to determine how best to use that expertise. Such groups risk relying so extensively on the inputs of their expert members that they overlook the potential contributions of other teammates.5 The best way to avoid this problem, we have found, is to establish team norms that explicitly foster collaborative planning. The sections that follow explain why this is so and show how such norms can be created and sustained.
Behavior in a group always has to be managed. Otherwise, members will head off in their own preferred directions and the team as a whole may not accomplish much of anything. One way to manage team behavior, of course, is through continuous multilateral discussion and negotiation. But that is terribly inefficient—members can wind up spending nearly as much time debating what they will do as they spend actually doing it.
The more efficient, more powerful, and more common way of managing team behavior is through the creation and enforcement of group norms. As noted earlier, norms are shared agreements among members about what behaviors are valued in the group, and what behaviors are not. They refer only to behavior, including things members say, not to unexpressed private thoughts and feelings. Although most norms develop gradually, team members can shortcut that process by “importing” them from similar teams they have worked on in the past, or even by just declaring that a particular norm now exists. Someone might say, for example, “Let’s hold off submitting RFIs [requests for information] until we all agree about what we most need to find out.” If other members accept that proposal, one should see fewer RFIs submitted spontaneously by individual team members.
Because most people care a great deal about what their teammates think of them, members generally comply with agreed-upon group norms. When they do not, their teammates bring them back into line, initially gently but then more forcefully if the out-of-line behavior persists or becomes egregious. How vigorously a team enforces a norm depends both on how strongly members feel about compliance with it (the intensity of the norm) and on their level of agreement about what specific behaviors are desirable (the crystallization of the norm). Highly intense and well-crystallized norms have what sociologist Jay Jackson calls normative power.6 Deviations from such norms occur rarely and when they do they are swiftly corrected.
But what if a norm is intense but poorly crystallized—that is, members care a great deal about some matter but disagree about what behaviors are desirable? Conflict is likely to develop in such circumstances and to persist until members achieve consensus about what they expect of one another. Finally, what about a norm that is highly crystallized but low in intensity—that is, members agree about something that nobody thinks is very important, a state that Jackson calls vacuous consensus? Behavior may be reasonably orderly because everyone knows what is expected. But since nobody cares much about that behavior, deviations from the norm are likely to be corrected gently if at all.
Reflect for a moment on the norms of some team on which you presently serve. Which norms powerfully control behavior in the team, which spawn conflict about how members should behave, and which reflect a state of vacuous consensus? In general, the most common and powerful group norms are those that bring order, predictability, and comfort to the interactions among members. Less frequently seen are norms that explicitly guide how member expertise is used or how the team’s performance strategy is invented and implemented. Among the most constructive initiatives a team leader can take, therefore, is to help a team develop norms that help members competently address such matters. Indeed, our research on leadership teams found clear norms to be more strongly associated with team effectiveness than any other factor we measured.7
Consider a Joint Terrorism Task Force (JTTF) composed of representatives from several disciplines and agencies. Because members’ home organizations have different cultures and operating procedures, there is some risk that the team will become incapacitated by intergroup rivalries or that it will fragment, with subgroups of like-minded members proceeding in whatever directions they prefer. An alert JTTF leader might encourage the team to develop a norm that minimizes the team’s exposure to these risks—for example, a norm of hearing out the ideas of others without interruption or contradiction in hopes of capturing and using all potentially valuable ideas and perspectives. Such a norm would be especially valuable if the JTTF were analyzing an immediate and serious threat, since when the stakes are high members tend to rely on already well-known and well-practiced operating procedures (the habitual routines discussed in Chapter 3).
Two particular types of norms can be especially helpful to intelligence teams. Norms of the first type help a team identify and use well members’ knowledge, skill, and experience. Norms of the second type help a team develop and implement a task performance strategy that is fully appropriate for its particular task and situation.
The normative intervention used in our experimental simulation of a counterterrorism team could not have been simpler. We merely instructed members to take a few minutes to review task demands and member capabilities and then, based on that review, to decide how they should proceed with the work. The research participants were cooperative and did what we asked. As will be seen, things are not so simple for intelligence teams operating in organizational contexts that can be complex, fluid, and politically charged.
There are innumerable ways a team can get off track, some of them truly amazing—such as the team charged with setting a new strategic direction for its unit whose members spent the entirety of their initial meeting deciding where to take out-of-towners for dinner. More common and consequential are the three problems addressed below: over-reliance on information that all members share, failure to get beyond members’ stereotypes of one another, and dealing with members’ anxieties about their differences. Well-chosen norms can help a team deal constructively with each of these issues.
Shared information. Perhaps the greatest advantage of teamwork is that team members have diverse information and expertise that, if properly integrated, can produce something that no one member could possibly have come up with. It is ironic, therefore, that teams typically rely mainly, and sometimes exclusively, on information that is shared by everyone in the group. Information uniquely held by individual members may not even make it to the group table for review and discussion.8 For decision-making and analytic tasks, that can significantly compromise team performance.
The new kinds of groups that are emerging throughout the intelligence community—larger groups with a greater diversity of membership, groups whose composition shifts continuously over time, and groups whose geographically dispersed members rely mainly on electronic technologies for communication—are likely to find managing member information and expertise especially challenging. The larger the group, for example, the greater the chances that worthy individual ideas and insights will be overlooked. The greater the diversity of group membership, the more likely that intergroup tensions will limit the utilization of member resources. The more frequently group composition changes, the harder it becomes to keep track of which members have what task-relevant information or expertise. And the more a group relies on electronic technologies for communication, the more challenging it is for members to coordinate their activities.9 Nontraditional team norms are needed to deal with these new realities.
Group stereotypes. The credence a team gives to a member’s contribution can depend as much on who made it as on its actual worth. When a team consists of people who have different personal identities (e.g., race or gender), different professional orientations (e.g., engineering or the law), or different organizational homes (e.g., military, criminal justice, or intelligence), members may be disposed to give special weight to contributions made by other members of their own groups and to discount those made by people from other groups.
An unpublished study conducted as part of our Group Brain project showed how powerful intergroup forces can be. Shortly before a national election, we invited people who strongly identified themselves as either Democrats or Republicans to answer a set of factual questions on topics for which the two parties had contrasting positions (for example, gun control and abortion). Participants had the opportunity to seek advice from another group member before giving their answers. Some of the people who could be consulted were of the same party as the participants, but others were not; and some of them were subject matter experts for particular questions, but others were not. Participants tended to rely more on people from their own party for help, even when someone from the other party had greater subject matter expertise. It turns out that such intergroup dynamics not only shape who has influence on a team but also can result in gross misperceptions of member expertise. In one case, female members of a decision-making team who had high expertise actually were perceived as less expert than their non-expert female colleagues.10
What has been found in the experimental laboratory also is seen in intelligence teams such as the JTTF briefly described earlier. Indeed, such teams can become arenas in which larger intergroup dynamics are played out, with members interpreting others’ ideas and opinions as primarily representing the interests of their home disciplines or organizations. It can take quite strong norms to alter patterns of within-team behavior that are driven by intergroup forces.
Personal anxieties. Intelligence team members sometimes are reluctant to appear uncertain about something their team is addressing, to publicly acknowledge that they do not know how to do some aspect of the work, or to ask another member to help out by providing some needed expertise. In all of these cases, the risk of personal or professional embarrassment is sufficiently high that members may make do with what they already know or know how to do, even at some cost to the quality of the team’s performance.
One root of these difficulties is the absence of a psychologically safe climate within the team. Psychological safety is a shared belief that the team is a place where one can take personal and interpersonal risks. Members of psychologically safe teams are better able to admit mistakes, more likely to ask for help from teammates, more open about what they do and do not know, and more likely to learn from the expertise of others.11 Norms that actively support learning and experimentation are one way to create a climate of psychological safety within a team.
We have seen that good use of team members’ information and expertise does not happen automatically. Although a simple instructional intervention did the trick in the terrorism simulation described at the beginning of this chapter, those teams were not much impaired by the three vulnerabilities just described. It is much harder to get members to draw on the expertise of people from other groups when even modest intergroup forces are at play, as we found in a follow-up to the Democrat-Republican study. Having established over-reliance on similar others in the initial study, we decided to demonstrate, if we could, that a simple cognitive manipulation would erase or even reverse that phenomenon. Specifically, we thought that framing the use of others’ expertise as something that could help participants avoid mistakes would significantly increase their willingness to seek input from those people who, regardless of political affiliation, actually had the greatest expertise. All we managed to accomplish, however, was to replicate the biases we had documented in the original study.
What is required, it appears, are team norms that actively promote helping and sharing among members. The aspiration would be to have what Stephen Kosslyn calls a social prosthetic system (SPS). When an SPS is operating, team members draw upon the capabilities of others in carrying out the work, in effect borrowing parts of others’ brains to assemble the full set of mental resources that they need.12
Kosslyn suggests that such systems are shaped both by differences among individuals (some people are more willing than others to turn to teammates to compensate for their own limitations) and by situational demands (some situations require more than any one brain can supply, whereas others do not). The key feature of the model, however, is that a system develops in which it is routine for members to rely on others for things they do not know or know how to do. SPS norms are nearly the opposite of what happens when the three vulnerabilities discussed above are operating—that is, when members discuss only that information that everyone shares, over-rely on contributions from same-group members, and behave in ways that keep anxieties as low as possible. Instead, the norms that develop in an SPS actively encourage team members to draw upon their peers’ unique capabilities and perspectives.
The question is how to create an SPS. One strategy is to turn typical team start-up processes on their head, focusing members’ attention on what others have to bring to the work rather than on their own capabilities and experiences. In a blue (analytic) team in a PLG simulation, for example, members typically begin their work quite casually. They go around the room introducing themselves, invariably mentioning the discipline in which they have been trained, the organization they represent, and their role in that organization. By the end of the first half hour, each member has a picture of the team and its members that makes teammates’ coming-in group affiliations highly salient. As was seen in Chapter 1, that kind of start-up can set the stage for subsequent disagreements and conflicts that are driven by the groups from which members come. It is a shaky platform from which to launch work on a team task.
Would it be possible to do better, to launch a team in a way that focuses members’ attention on their common purpose rather than on their intergroup differences? To find out, we obtained permission from PLG organizers to try out an alternative launch at one of the simulations. Here is what we did. After making a few opening comments, we gave members a short form on which they were to jot down their own understanding of the main purpose of their team. The form also asked for their views about the consequences of accomplishing (or failing to accomplish) that purpose. Then we asked members to pair up with someone on the team they did not already know.
Each pair’s first task was to make sure that the overall purpose was clear to both members, which they did by comparing their independently written descriptions of the team’s main purpose and resolving any differences. Once that was done, Member A interviewed Member B for about 15 minutes to learn what knowledge, skills, and experience Member B brought to the team that would be especially valuable in accomplishing the team’s purposes. Then they switched roles and repeated the process. We planned to reconvene the group when half an hour had elapsed and ask each member to introduce his or her partner, emphasizing the unique resources the partner brought to the team’s work. No one would be asked to talk about his or her own background or expertise.
We did not get to hear from all the pairs. Members became so deeply involved in their interviews that they paid little attention to the time and, just as the team was getting into the pairs’ reports, a PLG organizer arrived to take everyone to lunch and everyone went: lunch trumped launch. Even so, we detected in the subsequent life of that particular team signs that the exercise may have made a difference. Relative to other blue teams we had observed, this one had developed a norm that encouraged members to actively seek out the expertise or experience of their teammates. And, refreshingly, the team heard fewer confident assertions about how something “is supposed to be done” from individuals who, in fact, knew only the policies and practices of their own organizations or disciplines.
Although orienting the attention of blue team members away from their own groups did help the team get off to a good start, organizational psychologist David Berg suggests that it often is unrealistic to ask people to transcend their group memberships. It may be better, he proposes, to “let people have their groups” so long as norms support the active discussion of individuals’ memberships and representational concerns.13 In Berg’s view, a norm that advocates speaking and acting as if one is considering only the big picture can have perverse effects because other team members are still likely to assume that one is promoting the interests of his or her constituency. By making explicit the interests of one’s own group, those matters become something that the team can discuss. Although Berg’s proposal assuredly is worthy of further exploration, it may be realistic only for groups that can stay together long enough for members to fully work through their intergroup dynamics.
The little launch experiment we did was so informal as to be no experiment at all. But what happened was sufficiently different from what we usually observe in blue team interactions that we came away more convinced than ever that nontraditional norms really can be created in intelligence teams—and that they can be of considerable help in overcoming the vulnerabilities with which such teams commonly have to contend.
The enemy of a good task performance strategy is mindless reliance on a team’s habitual performance routines (see Chapter 3). The strategy that is most familiar or obvious to an intelligence team is not necessarily optimal, and a that’s-the-way-it’s-always-been-done strategy virtually never is. Only by taking thoughtful account of both the team’s resources and the opportunities and constraints in its context is a team likely to come up with the best way of going about its work. A norm that promotes active strategy planning, therefore, can be especially valuable in helping a team figure out how best to proceed.14
Analytic teams have many carefully developed structured techniques from which to choose, as well as alternatives such as crowdsourcing and prediction markets that can be adapted for the team’s work (see Chapter 4). Several of these techniques undoubtedly would work better than a lowest-common-denominator approach in which all data that anyone would like to see are scooped up, generating a large, undifferentiated, and potentially overwhelming pile of information. The same logic applies to other kinds of intelligence teams. For example, a counterintelligence manager described to me how attempting to identify and plug all possible leaks actually can lessen the likelihood of achieving overall counterintelligence objectives.
As an alternative to dealing with everything that can be gathered up and examined, a team might consider inventing an entirely new strategy, one uniquely suited to its particular objectives and circumstances. Or it might adapt to its own purposes a strategy previously developed for use in other contexts. Consider, for example, the two analytic strategies described next—constrained brainstorming and cognitive reframing. These strategies emerged from analyses of the difficulties encountered by the blue teams in PLG simulations, but they appear to have applicability far beyond that particular setting.
Constrained brainstorming. This strategy can be useful for teams that must make sense of very large quantities of information, much of which is likely to be irrelevant to team purposes. The challenge for the team, which begins its work essentially in the dark, is to come up with a way of reducing the number of possibilities it must consider. Constrained brainstorming involves generating and evaluating hypotheses about the courses of action that are most likely being contemplated by adversaries, and then seeking further information mainly about those particular possibilities.
Specifically, the lessons learned from PLG suggest that a team might begin its work by concentrating on two distinct kinds of data. The first is biographical data about known or suspected adversaries, with special attention to each person’s academic training, professional expertise, and employment experience. The second is the network of relationships that adversaries have established with others who have related expertise or who have access to resources that would be needed to achieve the adversaries’ objectives.
With these data in hand, the team could then focus its brainstorming on the most probable courses of action, given the adversaries’ expertise, networks, and available resources. In many cases, only one or two possibilities will turn out to merit serious attention, a development that can both guide and greatly increase the efficiency of a team’s subsequent information-gathering activities.
Cognitive reframing. The natural way to frame the work of many intelligence teams, especially counterterrorism teams, is as a defensive activity—that is, to figure out and head off whatever the adversaries might be up to. But, as noted in Chapter 1, defense is almost always harder to play than offense. Therefore, a counterterrorism team might be well advised to shift its perspective from “How can we determine specifically what they are planning?” to “What would we do if we had their configuration of capabilities and resources?” In effect, the team would reframe its work from a defensive to an offensive activity.
Like constrained brainstorming, cognitive reframing is considerably aided by knowing something about potential adversaries’ biographies, networks, and resources. Having such data reduces the likelihood that team members’ existing stereotypes will result in either over- or under-estimation of what the adversaries might be able to do. Moreover, members are likely to discover that they need certain specialized knowledge or expertise if they are to view the situation from the perspective of their adversaries. And that, in turn, can motivate them to draw more fully on their teammates’ knowledge and skills, as well as to seek information and expertise from external sources, as they work through their reframed task.
The particular norms that will be most helpful for any specific intelligence team will of course vary depending on the team’s task and circumstances.15 What is critical for any team is to have some shared norms that bring order and focus to members’ actions and interactions, thereby avoiding the aimless wandering about that characterizes teams whose behavioral norms are weak, irrelevant to the work, or nonexistent.
We have seen that constructive, task-appropriate norms rarely appear spontaneously. One of a team leader’s highest-leverage activities, therefore, is to establish (or help the team establish) the handful of must-always-do and must-never-do norms of conduct that are most consequential for team performance. An especially good time to do this is very early in the life of a new team, or at the beginning of a new task for an existing team. But that is not the end of it. The best team leaders follow up by watching for opportunities to reinforce and sustain those norms—perhaps by noticing and commenting favorably when a team is using member talents well, or when a team has come up with a creative way of proceeding that appears especially well suited to its task and situation.
Establishing constructive team norms can be difficult when the broader organization is constantly throwing up obstacles to teamwork rather than smoothing the team’s path. The next chapter focuses on the context of intelligence teams, with special attention to those organizational policies and practices that can spell the difference between teams that work and those that get stuck through no fault of their own.