NOTES

Introduction. The Challenge and Potential of Teams

1. For reviews of research on the problems and the potential of teamwork, see Hackman & Katz (2010), Heuer (2008), Kozlowski & Ilgen (2006), Larson (2010), Salas, Goodwin, & Burke (2009), and Straus, Parker, & Bruce (in press).

2. Reported on MSNBC.com on 13 July 2009 (see http://www.msnbc.msn.com/id/31800954/ns/business-careers/).

3. IC Annual Employee Climate Survey, Office of the Director of National Intelligence, March 2007.

4. The budget figure for 2010 is as reported by CNN (see http://www.cnn.com/2010/US/10/28/us.spy.spending). The number of intelligence professionals and intelligence organizations is as reported in Top Secret America, a two-year investigation by the Washington Post published in July 2010.

5. See, for example, Treverton (2008).

6. Cooper (2008, p. 3).

7. For details, see Medina (2008).

8. For details about the scientific findings from which the six enabling conditions are derived, see Hackman (2002), Hackman, Kosslyn, & Woolley (2008), and Hackman & Wageman (2005b).

Chapter 1. Teams That Work and Those That Don’t

1. See, for example, Barnes (2007) and Culpepper (2004).

2. The analyses that follow rely heavily on observational data collected during PLG simulations by Beth Ahern, Rob Johnston, Anita Woolley, and a superb cadre of observers provided by the MITRE Corporation.

3. The cognitive, affective, and behavioral dynamics of an offensive vs. defensive orientation are empirically explored by Förster, Higgins, & Bianco (2003) and by Woolley (in press). Förster and his colleagues show that tradeoffs between speed and accuracy are differently managed by people who have a “promotion” orientation (i.e., a focus on aspirations and accomplishments) than by those with a “prevention” orientation (i.e., a focus on safety and responsibilities). Woolley shows that teams with a defensive orientation emphasize details and information gathering from external sources, whereas those with an offensive orientation focus on higher-level outcomes and on information held by team members themselves.

4. Weinberg (2010).

5. For a review of this research, see Hackman & Katz (2010, pp. 1228–1229).

6. This issue is explored in depth by van Ginkel & van Knippenberg (2009).

7. See, for example, Coll & Glasser (2005).

8. For a review of the research literature on this point, see Caruso & Woolley (2008).

9. For a review of the research literature on task and relationship conflict, see De Dreu & Weingart (2003).

10. See, for example, Banaji & Greenwald (in press), Brown (1986, Chap. 17), and Slavin & Cooper (1999).

11. See, for example, Richards Heuer’s classic book on the psychology of intelligence analysis (Heuer, 1999), his more recent work on small group processes in intelligence analysis (Heuer, 2008), and the considerable research literature on the tendency of teams to over-rely on information that all members share (Stasser & Titus, 2003; van Ginkel & van Knippenberg, 2009).

12. See, for example, Heuer and Pherson’s (2010) review of structured analytic techniques, as well as recent work on strategies for improving individual decision making that also can be used by decision-making teams (e.g., Gigerenzer, 1999; Milkman, Chugh, & Bazerman, 2009).

13. Empirical research on the effects of process interventions is mixed: Sometimes the interventions help a team but often they do not. For an overview of this research, see Hackman (2002, Chap. 6); for an extensive literature review on the relationship between group interaction and analytic team outcomes, see Straus, Parker, & Bruce (in press).

Chapter 2. When Teams, When Not?

1. Leavitt (1975); Locke, Tirnauer, Roberson, Goldman, Latham, & Weldon (2001).

2. Hot Groups: Lipman-Blumen & Leavitt (1999); Wisdom of Teams: Katzenbach & Smith (1993); Group Genius: Sawyer (2007); use of teams in knowledge production: Wuchty, Jones, & Uzzi (2007).

3. For groupthink, see Janis (1982). For free riding/social loafing, see Karau & Williams (1993) and Mas & Moretti (2009). For group brainstorming, see Dugosh & Paulus (2005) and Nijstad & Stroebe (2006).

4. Scholars vary in how they use the terms team and group, sometimes making definitional distinctions between them. I use the terms interchangeably in this book.

5. For an analysis of international collaborative online networks, see Sanderson, Gordon, & Ben-Ari (2008); for a review of the implications of social networks for national security, see Drapeau & Wells (2009); for a discussion of the relationship between network size and viability, see Shirky (2008, Chap. 2).

6. Portions of this and the following section draw on material developed by Hackman & Wageman (2005b) and Woolley & Hackman (2006).

7. For details, including a discussion of the principles of good work design, see Hackman & Oldham (1980) and Chapter 5 of this book.

8. From an entry in the blog “Kent’s Imperative” (http://kentsimperative.blogspot.com/2007_05_01_archive.html). The anonymous blogger goes on to note that many analysts who have experienced such moments of creativity struggle mightily, and usually unsuccessfully, to find words that would convey the nature of the process to outsiders.

9. Author Ann Brashares, quoted by Mead (2009, p. 70).

10. For details, see Goncalo & Staw (2006).

11. Bennis & Biederman (1997, pp. 6–7).

12. For details about crowdsourcing and many examples of its uses and successes, see Howe (2008); for a discussion of how human-computer networks increasingly are being used to find solutions to hard scientific problems, see Hand (2010).

13. For a discussion of the conditions under which the presence of coworkers facilitates and impedes individual performance, see Feinberg & Aiello (2006) and the classic contribution on this topic by Zajonc (1965).

14. Latané, Williams, & Harkins (1979).

15. For an overview of this research, see Hackman & Katz (2010).

16. See, for example, Kirkman, Rosen, Tesluk, & Gibson (2004) and Townsend, DeMarie, & Hendrickson (1998).

17. For ways in which computer-mediated communication can help a group deal with issues of size and diversity, see Lowry, Roberts, Romano, Cheney, & Hightower (2006) on size and Krebs, Hobman, & Bordia (2006) on diversity. For a review of research that compares computer-mediated to face-to-face communication, see Baltes, Dickson, Sherman, Bauer, & LaGanke (2002).

18. For discussions of the dynamics of distributed and virtual teams, see Cummings (2007), Gibson & Cohen (2003), Hertel, Geister, & Konradt (2005), and O’Leary & Cummings (2007).

19. For details, see Woolley & Hackman (2006).

20. For a description of this unit’s work, see Davis-Sacks (1990a, 1990b).

Chapter 3. You Can’t Make a Team Be Great

1. These examples come from an empirical study that Michael O’Connor and I conducted of analytic teams in several U.S. intelligence community organizations, which is reported in detail by Hackman & O’Connor (2004). Some details about the teams described here have been altered or omitted to disguise their identities. For further discussion of the three criterion dimensions, see Hackman (2002, Chap. 2).

2. See Janis & Mann (1977) for a discussion of how this measurement strategy can be used to assess the quality of decisions whose eventual consequences cannot be known until considerable time has passed.

3. Business Executives for National Security (2007, p. 3). Similarly, a directive from the DNI regarding analytic standards (Directive 203, June 21, 2007) includes a mix of outcomes and methods: (1) objectivity, (2) independence from political considerations, (3) timeliness, (4) uses all available sources of intelligence, and (5) adheres to proper standards of analytic tradecraft (further defined by eight specific and detailed attributes of good tradecraft).

4. For discussions of learning in groups, see Argote, Gruenfeld, & Naquin (2001) and Edmondson (2002).

5. For details about process losses and team synergy, see Hackman & Wageman (2005b), Larson (2010), and Steiner (1972).

6. This account is adapted from Gersick & Hackman (1990). For a full analysis of this accident, see the report of the National Transportation Safety Board (1982).

7. Thomas-Hunt & Phillips (2003).

8. Reviews and descriptions of specific studies are provided by Hackman & Wageman (2005a), Kaplan (1979b), Woodman & Sherwood (1980), and Woolley (1998).

9. For details, see Staw (1975).

10. For further discussion of the leader attribution error, see Hackman & Wageman (2005b). This error does appear to be stronger in Western cultures than in more group-oriented Asian cultures (Zemba, Young, & Morris, 2006).

11. Specifically, structural conditions controlled 42 percent of the variation in self-managing behavior, compared with less than 10 percent for leaders’ coaching activities, and they accounted for 37 percent of the variation in team performance, compared with less than 1 percent for leaders’ coaching activities. For details, see Wageman (2001).

12. For details, see Wageman, Nunes, Burruss, & Hackman (2008).

13. Note, however, that there were some differences in the specific conditions assessed across the several studies cited above. Also, because none of these studies experimentally manipulated the enabling conditions, it is not possible to make unambiguous attributions about causality from the findings. It is at least possible that some unknown and unmeasured third variable affected both the presence of the conditions and team performance, although this is unlikely since the studies were conducted in several different organizations using somewhat different measures and methodologies.

14. Heuer (2008). Also see Straus, Parker, & Bruce (in press) for a detailed review of research findings about the relationship between group interaction and team performance outcomes.

Chapter 4. Create a Real Team

1. For details about the Nominal Group Technique, see Delbecq, Van de Ven, & Gustafson (1975); for the Delphi method, see Linstone & Turoff (1975) or Rowe & Wright (1999); for an overview of structured techniques useful in group analytic work, see Heuer (2008) and Heuer & Pherson (2010).

2. Alderfer (1980, p. 269).

3. Janis (1982). For a skeptical summary assessment of evidence bearing on the validity of the groupthink hypothesis, see Baron (2005). For reviews and analyses of group cohesiveness more generally, see Beal, Cohen, Burke, & McLendon (2003), Casey-Campbell & Martens (2009), Hackman (1992), and Mullen & Copper (1994).

4. For a discussion of the separate and joint effects of reward and task interdependence, see Wageman & Baker (1997).

5. For details, see Hackman & O’Connor (2004).

6. The correlation between peer coaching and the composite effectiveness measure was .84, which approaches the reliability of the criterion measure and therefore is about as large as can be obtained.

7. Wageman (1995).

8. For details about the NTSB study, see National Transportation Safety Board (1994); for the experimental simulation involving fatigued crews, see Foushee, Lauber, Baetge, & Acomb (1986); and for a review of other research evidence on team stability, see Hackman & Katz (2010).

9. See, for example, Lim & Klein (2006).

10. For details, see Katz (1982).

11. For an analysis of the benefits and liabilities of group habitual routines, see Gersick & Hackman (1990).

12. A comparative review of all these tools is beyond the scope of this book. Moreover, by the time the book appears some currently popular tools no doubt will have been eclipsed by a fresh crop of techniques that have different names and procedural details but that share the same general aspiration.

13. See, for example, Milius (2009) on “how bees, ants, and other animals avoid dumb collective decisions.”

14. Surowiecki (2004).

15. For an overview of how prediction markets operate, see Wolfers & Zitzewitz (2004); for a comparison of prediction markets and the Delphi technique for eliciting forecasts, see Green, Armstrong, & Graefe (2007).

16. For details about how compensatory tasks work, including technical requirements regarding the distribution of errors, see Steiner (1966). To put compensatory tasks in broader perspective, here are the other task types that Steiner identifies: disjunctive tasks, for which the group operates at the level of its best-performing member (e.g., a team of mathematicians that succeeds when any member comes up with a proof that works); conjunctive tasks, for which the group operates at the level of its least competent member (e.g., a roped-together team of mountain climbers); additive tasks, for which group performance is the sum of members’ contributions (e.g., a tug-of-war in which the group’s “pull” is the sum of the pulls of all its members); and complementary tasks, which can be divided into subtasks that are assigned to different members (e.g., a research project that requires different activities for which members are differentially skilled). Although some intelligence community tasks are of the compensatory type, many are not. Simply averaging members’ inputs, as is appropriately done for compensatory tasks, would generate gross errors for some other types of tasks.
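
To make Steiner's typology concrete, here is a minimal sketch (the member scores are illustrative, not drawn from Steiner's work) of how each task type aggregates individual performance levels into a group-level result:

```python
# Hypothetical individual performance scores for a three-member team.
scores = [3.0, 7.0, 5.0]

compensatory = sum(scores) / len(scores)  # members' errors average out
disjunctive = max(scores)                 # group succeeds if its best member does
conjunctive = min(scores)                 # group is limited by its weakest member
additive = sum(scores)                    # group output is the sum of contributions

print(compensatory, disjunctive, conjunctive, additive)
```

Note how averaging, which is appropriate for the compensatory case, badly misestimates the group-level outcome for the disjunctive and conjunctive cases.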

17. For a discussion of how teams naturally emerge in open source software development, see Hahn, Moon, & Zhang (2008); for an analysis of how collaboration develops among Wikipedia editors, see Gorbatai & Mikolaj (2010); and for a description of how groups develop in massive multiplayer online games, see the talk by John Seely Brown on the emergence of “guilds” in World of Warcraft (http://ecorner.stanford.edu/authorMaterialInfo.html?mid=2432).

18. The tension between top-down and bottom-up design gained wide attention with the publication of Eric Raymond’s book The Cathedral and the Bazaar in 1999. For an engaging and informative account of how the two models fared in a competition among bakers to create the ultimate cookie, see Gladwell (2005b).

Chapter 5. Specify a Compelling Team Purpose

1. For a practitioner-oriented discussion of these attributes, see Hackman (2002); for a conceptual analysis, see Hackman & Wageman (2005b); for application of the material specifically to leadership teams (i.e., those whose members are themselves significant organizational leaders), see Wageman, Nunes, Burruss, & Hackman (2008).

2. For informative discussions of the dynamics of sense-making in organizational life, see Maitlis (2005) and Weick (1993).

3. See Atkinson (1958) for a summary of this research and a discussion of the psychological processes involved.

4. For details about the firefighting simulation, see Clancy, Elliott, Ley, Omodei, Wearing, McLennan, & Thorsteinsson (2003). For details of the study of creative teams, see Woolley (2008), who also provides an extensive review of the research literature on team purposes that focus on outcomes vs. processes.

5. For details, including comparison of the Afghanistan campaign both with Vietnam-era decision making (with which it had many commonalities) and with the conduct of Desert Storm (with which it did not), see Arkin (2002).

6. For an analysis of how authority is distributed in organizational teams, see Hackman (1986, pp. 90–93). Manager-led teams have authority only for actually executing the task; managers monitor and manage team processes in real time. Self-managing teams manage work processes as well as execute them. Self-designing teams have the additional right to modify team composition, task structure, or aspects of the organizational context if needed. And self-governing teams, in addition to all of the above, have the authority to alter the team’s main purposes. For a discussion of the roadblocks that self-governing teams often encounter and the strategies they can use to overcome them, see Wageman, Nunes, Burruss, & Hackman (2008).

7. Precisely because so many senior professional teams perform so poorly, there is a widespread movement to significantly constrain their decisional authority. This movement is affecting not just intelligence professionals but also the decision-making latitude of physicians, pilots, judges, accountants, educators, and professionals in many other fields. For further discussion of this issue, see Hackman (1998) and Raelin (1989).

8. For a comprehensive and highly informative guide to structured analytic techniques, see Heuer & Pherson (2010).

9. Reported by C. P. Cavas in Defense News, 13 May 2010 (http://www.defensenews.com/story.php?i=4625011).

10. For a history of thought and practice about the design of work, see Hackman & Oldham (1980, Chap. 3).

11. Details about this model, including guidelines for using it in practice, are provided by Hackman & Oldham (1980). For a comprehensive overview of how work design research and practice have evolved in the years since the Hackman-Oldham work was published, see the special issue of the Journal of Organizational Behavior on the topic (2010, Vol. 31, No. 2–3).

12. The Team Diagnostic Survey is described in detail by Wageman, Hackman, & Lehman (2005). It is freely available for use in assessing teams in the intelligence community. Government, educational, and research users can access the Team Diagnostic Survey at the following website: https://research.wjh.harvard.edu/TDS. A commercial version of the instrument also is available, and can be accessed at http://www.team-diagnostics.com.

13. For an informative published debate on trade-offs between professionalism and responsiveness to clients, see Medina (2002) and Ward (2002).

14. Heath & Staudenmayer (2000) call this pervasive tendency “coordination neglect,” and show how it significantly compromises the ability of organizations to integrate the work their members perform.

15. Adapted from Hackman (2002, pp. 64–65).

Chapter 6. Put the Right People on the Team

1. See, for example, Bell (2007), Kozlowski, Gully, Nason, & Smith (1999), Larson (2010, Chap. 9), Moreland, Levine, & Wingert (1996), and Moynihan & Peterson (2001).

2. For an overview of research and practice on these topics, see the text by Spector (2008) and the analysis of organizational selection practices by Kehoe (2000). For the current state of knowledge about personality testing in employee selection, see Hough & Oswald (2008) and the commentaries that follow that article.

3. For a summary of the findings from this research program, see Hackman, Kosslyn, & Woolley (2008). Most studies of the effects of member abilities on team performance focus on either the average or the range of those capabilities (for a review, see Devine & Philips, 2001). By contrast, we assessed the complementarity of what members bring to the team. For alternative ways of construing the parallels between groups and brains, see Goldstone, Roberts, & Gureckis (2008) and Larson & Christensen (1993).

4. For details about the research, including description of the specific brain regions involved, see Woolley, Hackman, Jerde, Chabris, Bennett, & Kosslyn (2007).

5. When these conditions are met, a group can be said to have “collective intelligence,” which has been shown to predict performance on a wide range of team tasks (Woolley, Chabris, Pentland, Hashmi, & Malone, 2010).

6. See, for example, the findings of Devine & Philips (2001) and Ree, Earles, & Teachout (1994).

7. On the other hand, the bureaucratic and structural features of intelligence organizations often constrain the full utilization of individuals’ expertise in carrying out the work (Marrin, 2003).

8. This is especially consequential for “disjunctive” tasks, described in Chapter 4, for which team performance is a direct function of the performance of its single best member. For other kinds of tasks, having a “star” performer is less consequential than is the mix of members’ capabilities, as Robinson (2004) shows for athletic teams.

9. See Felps, Mitchell, & Byington (2006) for a description of how this happens.

10. For details and discussion, see Sutton (2007).

11. For other examples and further discussion of strategies for dealing with members who derail their teams, see Wageman, Nunes, Burruss, & Hackman (2008, Chap. 4).

12. The MBTI is described in detail by Myers & McCaulley (1985). For discussions of emotional and social intelligence intended for general and managerial readers, see Albrecht (2006), Goleman (1998), and Goleman (2006); for a more scholarly treatment of emotional intelligence by the psychologists who developed the concept, see Mayer, Salovey, & Caruso (2008).

13. For an assessment of the utility of the MBTI, see Pittinger (1993). For an analysis of the predictive validity of emotional intelligence measures, see Bastian, Burns, & Nettelbeck (2005) and Newsome, Day, & Catano (2000).

14. For details about this study, which also documents the importance of the launch of a team once members have been selected, see Ginnett (1993).

15. For an informative discussion of how and why this happens, see Smith & Berg (1987).

16. For details about this study and its findings, see the Group Brain Research Project (2010).

17. For details about this study, see Caruso & Woolley (2008).

18. See LePine (2003), who finds member attributes to be especially consequential for the performance of teams that operate in dynamic contexts.

19. See Dunbar (1992).

20. The formula for the number of links (l) among members in a group of size n is l = n(n − 1)/2.
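
Illustratively (this sketch is not part of the original notes), the link count l = n(n − 1)/2 grows quadratically with team size, which is why coordination demands escalate so quickly in large groups:

```python
def links(n: int) -> int:
    """Pairwise links among n group members: n(n - 1) / 2."""
    return n * (n - 1) // 2

# A 4-person team has 6 links; a 20-person group has 190.
print(links(4), links(20))
```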

21. For further discussion of the dynamics and dysfunctions of large teams, see Hackman (2002, Chap. 4). These findings are reinforced by an analysis of government decision-making groups, which shows that groups larger than 20 become highly inefficient (Klimek, Hanel, & Thurner, 2008).

22. Brooks (1995, p. 25).

23. For careful and informative reviews of the research literature on compositional diversity, see Horowitz & Horowitz (2007), Larson (2010, Chap. 9), Mannix & Neale (2005), Phillips (2008), and van Knippenberg & Schippers (2007).

24. For an account of this study, see Shaw (2009); for a more general analysis of group polarization processes, see Brown (1986) and Isenberg (1986).

25. This has been demonstrated in both survey and experimental studies; for details, see van Knippenberg, Haslam, & Platow (2007).

Chapter 7. Establish Clear Norms of Conduct

1. For a full report of this study, including details about the simulation, see Woolley, Gerbasi, Chabris, Kosslyn, & Hackman (2008).

2. A pretest affirmed that these two cognitive abilities did predict performance on the corresponding subtasks.

3. Kasparov (2010, p. 18).

4. To see if the effect of the social intervention was due merely to the fact that it resulted in the right members being assigned to the right subtasks, we appended one additional condition to the study design. In the appended condition, team members who had the key capabilities were explicitly assigned to those subtasks that optimized the match between their abilities and their task responsibilities. That generated only a marginal improvement in team performance. Receiving optimal role assignments from the experimenter apparently eliminated any felt need for members to discuss their relevant expertise and experience, which led them to plunge immediately into actual task work without first reflecting on the best way to go about it.

5. For further discussion of the challenges analytic teams face in using member expertise well, see Johnston (2005, Chap. 5). For an analysis of what it takes for “virtuoso teams” (i.e., teams whose members all are highly expert) to succeed, see Fischer & Boynton (2005).

6. Jackson (1966) has provided a formal model that can be used to generate quantitative measures of these and other properties of group norms. For further discussion of how norms shape group behavior, see Feldman (1984), Hackman (1992), and Jackson (1975).

7. See Wageman, Nunes, Burruss, & Hackman (2008, Chap. 5).

8. For additional information about this phenomenon and the reasons it occurs, see Larson (2010, Chap. 6), and van Ginkel & van Knippenberg (2009).

9. For pointers to the research findings on which each of these assertions is based, see Hackman & Katz (2010).

10. For details, see Thomas-Hunt & Phillips (2004).

11. For further discussion of the dynamics and effects of psychological safety in teams, including its effects on individual and team learning, see Caruso & Woolley (2008) and Edmondson (1999, 2003).

12. For a detailed discussion of how social prosthetic systems work, see Kosslyn (2006).

13. Berg (2005).

14. For research evidence, see Hackman, Brousseau, & Weiss (1976) and Woolley (1998).

15. The norms discussed in this chapter have mainly to do with the work of various kinds of analytic teams, for the simple reason that those are the teams about which I have the most data. But my experience with operational, administrative, and science and technology teams, although more limited, suggests that the issues discussed here may be just as salient for them.

Chapter 8. Provide Organizational Supports for Teamwork

1. This account draws on my own observations, pilot and air traffic controller reports submitted to the National Aeronautics and Space Administration’s Aviation Safety Reporting Service, and National Transportation Safety Board accident investigation reports. This particular account is an amalgam of materials from these sources.

2. Subsequent investigation showed that corrosion in the right main gear retract assembly had allowed the gear to fall freely rather than gradually, resulting in the thump and yaw. Although the gear was locked in place, the free fall was so forceful that it had disabled the microswitch that normally changes the indicator light from yellow to green.

3. For further discussion of these four contextual features, see Hackman (2002, Chap. 5).

4. For research on how groups draw on external informational resources, see Haas (2006, 2010) and Haas & Hansen (2007).

5. Because there is a constant stream of tools being developed and deployed throughout the community, any review of them would be out of date almost instantly. For examples of the kinds of tools that presently are available, see Gorman (2009) or Yang (2008).

6. For details, see Hackman (2002, Chap. 5).

7. For a review of the efficacy of different team training technologies, see Salas, Nichols, & Driskell (2007).

8. For a comprehensive overview of the theory and practice of Crew Resource Management, see Wiener, Kanki, & Helmreich (1993); for a description of how the same principles have been applied in hospital operating rooms, see Gaba, Howard, Fish, Smith, & Sowb (2001).

9. Vohs, Mead, & Goode (2006).

10. This is not an uncontroversial position. See, for example, the article titled “Goals gone wild: The systematic side effects of overprescribing goal setting” by Ordonez, Schweitzer, Galinsky, & Bazerman (2009) and the rebuttal, “Has goal setting gone wild, or have its attackers abandoned good scholarship?” by Locke & Latham (2009).

11. For details, see Dunnigan & Nofi (1999).

12. For informative analyses of the dynamics of groups’ interactions with their contexts, see Ancona & Caldwell (1992), Haas (2010), and Wageman (1999).

Chapter 9. Provide Well-Timed Coaching

1. This account is a disguised amalgam and elaboration of two actual cases.

2. After thinking it over, Rhonda decided against taking any of the five actions she generated in the lunchroom. Instead, the team put its collective head down and, with considerable effort and some additional pain, generated an assessment that its customer viewed as adequate although not exemplary.

3. For a review of research and practice on team coaching, see Hackman & Wageman (2005a) and Kozlowski, Watola, Nowakowski, Kim, & Botero (2009). For more information on executive coaching, see Peltier (2010) or Underhill, McAnally, & Koriath (2007).

4. As noted in Chapter 6, group members often deal with ambivalence about how things are going by “splitting” their conflicting feelings, viewing one member as the “problem” and another as the “hero” (Smith & Berg, 1987). Splitting occurs without conscious awareness, as do a number of other seemingly mysterious aspects of group dynamics. For an informative and provocative discussion of group phenomena that are driven by nonconscious forces, see Bion (1961).

5. For a summary of findings from the orchestra study, see Allmendinger, Hackman, & Lehman (1996).

6. Matthew Dine’s comments are from the PBS documentary “Orpheus in the Real World,” produced and directed by Allan Miller (Four Oaks Foundation, 1997).

7. For research evidence on this point, see Homan, van Knippenberg, van Kleef, & De Dreu (2007) and Nemeth & Owens (1996).

8. For details, see Wood (1990).

9. Staw (1975); see also Guzzo, Wagner, Maguire, Herr, & Hawley (1986).

10. Kaplan (1979a); see also Woodman & Sherwood (1980).

11. See Kernaghan & Cooke (1990), Salas, Rozell, Mullen, & Driskell (1999), and Woolley (1998).

12. For details about the specific process losses and gains that are characteristic of effort, strategy, and knowledge and skill, see Chapter 3. For general reviews of research findings about process losses and gains, see Hackman (2002, Chap. 6) and Straus, Parker, & Bruce (in press).

13. For details, see Fisher (2007, 2010).

14. For details about Gersick’s findings and their implications, see Gersick (1988, 1991). For a review of the traditional models of group development that her findings call into question, see Tuckman (1965).

15. The power of guided reflection in improving a team’s performance strategies is demonstrated for a simulated military air-surveillance task by Gurtner, Tschan, Semmer, & Nagele (2007).

16. For details, see Woolley (1998). For further discussion of the midpoint as a time when simple interventions can prompt team members to consider ways of improving their work processes, see Okhuysen & Waller (2002).

17. For details about operations at this plant, see Abramis (1990).

18. See Okhuysen & Eisenhardt (2002) for a discussion of the ways that simple interruptions can create occasions for knowledge integration among team members.

Chapter 10. Leading Intelligence Teams

1. It can be instructive to invite team members and leaders to complete this checklist and then to compare and discuss their ratings. A more systematic assessment of these conditions (as well as other aspects of team functioning) is available online at no charge to government users: http://www.team-diagnostics.com/.

2. For details about the study of intelligence analysis teams, see Hackman & O’Connor (2004); for the study of senior leadership teams, see Wageman, Nunes, Burruss, & Hackman (2008).

3. Heslin, Vandewalle, & Latham (2006) show that leaders who view subordinates’ attributes as innate and unalterable are less likely to coach them than those who believe that personal attributes are open to change. The same may be true for team coaching. Leaders who view the attributes of teams as malleable may be more likely to engage in team-focused coaching than those who view them as fixed. One of the aims of this book has been to show that features of teams that often are taken as given actually can be altered and improved.

4. This account is adapted from Wageman, Fisher, & Hackman (2009).

5. This figure is adapted from Wageman (2001), with permission from the Institute of Operations Research and Management Science.

6. For discussions of the functional approach to leadership, including how it differs from trait- and style-based approaches, see Hackman (2002, Chap. 7), Morgeson, DeRue, & Karam (2010), and Nye (2008).

7. For discussion of the competences that are most critical for team leadership (as well as for educational strategies that help leaders develop them), see Hackman & Walton (1986) and Wageman, Nunes, Burruss, & Hackman (2008, Chap. 8).

8. For further discussion of the liabilities of co-leadership, see Hackman & Wageman (2005b). For details about how co-leadership worked at Los Alamos, see Rhodes (1986).

9. The one condition that did not much differ between real teams and coacting groups was the supportiveness of the organizational context. That was not a surprise, since all units in an intelligence organization, whether real teams or coacting groups, generally have similar organizational contexts. For details, see Hackman & O’Connor (2004).

Chapter 11. Intelligence Teams in Context

1. For details, see Hirschman (1989).

2. For details, see Heuer (1999) and the informative collection of papers on the roots and manifestations of cognitive biases edited by Gilovich, Griffin, & Kahneman (2002).

3. See IARPA BAA-10-05(pd) and RFI-10-01, both published in 2010.

4. Sanderson, Gordon, & Ben-Ari (2008); see also the Newsweek article on the “revenge of the expert” cited in the CSIS study (Dokoupil, 2008).

5. Gigerenzer & Brighton (2009).

6. For a full discussion of this point of view, see Marrin (2007).

7. As Rieber & Thomason (2005) note, however, merely knowing and using standard techniques provides no guarantee of success. Indeed, certain commonly used procedures, such as appointing someone as the “devil’s advocate” to prevent groupthink, have unintended and sometimes dysfunctional consequences. Rieber and Thomason make a strong case for scientific research on intelligence methods and tools to correct such misconceptions.

8. Gladwell (2005a) provides numerous examples of this phenomenon, as well as some informed speculations about how it happens, in his book Blink: The Power of Thinking Without Thinking.

9. It is well established that competition heightens participants’ psychological and physiological arousal. When people are aroused, they do better on what are called “performance” tasks—those for which one’s dominant response is what is needed. But people who are aroused perform more poorly on “learning” tasks—those for which a new or unfamiliar response is required (see, for example, Zajonc, 1965). For a review of research on cooperation and competition more generally, see Johnson, Maruyama, Johnson, Nelson, & Skon (1981).

10. For details, see DeVries & Slavin (1978) and Slavin & Cooper (1999).

11. After the tournament, groups were disbanded and differently composed groups were created to reduce the likelihood of persisting intergroup rivalries.

12. For contrasting assessments of Goldwater-Nichols, see Bourne (1998) and Locher (2002).

13. Rob Johnston, personal communication.

14. Johnston (2005, pp. 11–13) points out that the trade-off between secrecy and efficiency varies across intelligence community activities: Efficiency in analytic work requires low secrecy, whereas efficiency in counterintelligence work requires high secrecy.

15. For an analysis of how this way of thinking has shaped what is expected of chief executives, see Khurana (2002).

16. For details about different types of leadership teams and the conditions that are most critical to their success, see Wageman, Nunes, Burruss, & Hackman (2008).

17. Also see the related concept of the “constellation” team described by Stephen Lisio in a paper written for the CIA’s Galileo Award competition in 2004. The constellation consists of multiple teams from multiple organizations that are brought together to tackle an intelligence issue of great importance, drawing on the existing expertise and resources of the intelligence organizations from which the component teams are drawn.

18. For additional details, see Milius (1999).