When intelligence teams are well structured, supported, and led, they can do very well—they accomplish their mission, they grow in capability over time, and they contribute to the personal learning and professional growth of their members. These benefits do not automatically appear, however. As has been seen throughout this book, mere exhortation to collaborate and team-building exercises intended to promote harmony and trust are insufficient to produce results. Teams have to be thoughtfully designed and supported if they are to be an effective means of engaging individuals’ resources in pursuit of collective purposes.
The bad news is that the institutional contexts within which intelligence teams operate often place serious obstacles in the paths of those who seek to properly design and support them, obstacles so daunting that more than a few leaders have decided that intelligence teams are more trouble than they are worth. The good news is that within each obstacle to teamwork also lies an opportunity for constructive change. This concluding chapter is organized as a series of assertions, things I have heard in the course of my travels around the community, each of which points simultaneously to an obstacle and an opportunity.
When something unfortunate happens, the most natural thing in the world is to do whatever one can to keep it from happening again. It is a powerful impulse. Somebody tries something to take down an airplane, a security hole is revealed, and the hole is plugged. We feel safer because shoes are being inspected and water bottles banned—and now, as I write this, one hears that blankets may be kept off laps. It is an unending series of small fixes.
The doors of barns where horses once lived are being closed, one after another, whenever another horse disappears—or almost does. The same thing happens in operational work, in information security, and in any other area where we want to make sure that something bad does not recur. The urge to fix what went wrong is especially strong if what happened is publicly known and politically interesting, as is the case for many security failures and more than a few intelligence missteps.
The experience of the aviation community with this kind of thing is instructive and worrisome. Consider what happens following an aircraft accident or serious incident. The National Transportation Safety Board invariably identifies one or more proximal causes of the event and recommends changes to keep it from happening again. The changes typically involve introducing a technological safeguard (such as a warning signal or a guard on a switch), a new component of initial or recurrent training, or an additional procedure that crews subsequently are required to follow. Each of these actions, by itself, is a good idea. But no one ever examines their collective impact on crews and their work.
There is at least a possibility that all the well-intentioned additions to procedure manuals, together with all the automated devices that have been introduced into cockpits and all the management directives intended to promote efficiency and safety, have significantly eroded flight-deck crews’ latitude to do what is needed to achieve those same aspirations. This phenomenon has much in common with what scholars of public policy call perverse effects. Perverse effects can emerge when one tries too hard to accomplish something that is, on its own merits, entirely worthy. Public programs to reduce poverty, for example, sometimes spawn policies and practices that unintentionally encapsulate poor people in a state of poverty rather than help them work their way out of it.1
There is much to be said, therefore, for a fundamental change of mindset, from plugging holes to envisioning and heading off that which has not yet happened. Recall from the first chapter of this book the futility of playing defense against an inventive, adaptable adversary. It is true that any reasonably competent official can issue a directive that would decrease the chances that some specific bad thing will happen again. But no one individual, no matter how smart and experienced, is likely to come up with what a group of adversaries is likely to invent.
To head off what is coming next, it appears, takes a team. Not just any team, but a team that mirrors what we are up against—one that has members with the same mix of training, technical skills, network links, educational backgrounds, and work experiences. Then the question can be posed: “How would we do the maximum amount of damage, given what we know, whom we know, and what we know how to do?” What such a team comes up with has at least a chance of suggesting how we might head off something that we do not know is coming—but might be. It is a strategy that does not look much like fighting the last war or keeping last year’s flu or hurricane from doing as much damage as it did before. The use of simulated offensive teams actually can give operational meaning to the old saw that the best defense is a good offense.
Expertise, a traditional and deeply held value in the intelligence community, has been getting a little tarnished lately. It is not merely the “how-come-they-didn’t-see-that-coming?” complaint that inevitably follows some unexpected and unfortunate event. It is that the value of expertise itself is coming into question. Research has shown that even well-trained intelligence professionals are vulnerable to systematic biases in evaluating evidence, estimating probabilities, and identifying the probable causes of observed events.2
Putting people in teams does not solve the problem. As has been seen in this book, teams are just as prone as individuals to underuse or misuse expertise, if not more so. Recall the experimental simulation described in Chapter 7: Unless teams composed of highly expert members received an intervention to help them use that expertise well, they actually performed more poorly on an intelligence task than did average-ability teams. One cannot simply toss a problem to a team of experts and rest assured that members will properly assess the situation and take the most appropriate action.
In the analytic community, there has been a noticeable increase in reliance on structured methods of various kinds for helping teams avoid group process problems—especially in the ways teams identify, elicit, and weight their members’ contributions. Tools such as crowdsourcing, prediction markets, and collective estimation are increasingly popular. And, as this is being written, the Intelligence Advanced Research Projects Activity (IARPA) is requesting proposals for additional research in the same direction—specifically, for studies of “aggregative contingent estimation” (the use of mathematical techniques for weighting and combining a large number of individual judgments) and for the development of online forecasting methods that will be “more accurate than individual expert judgment and group deliberation by experts” while not requiring that users have high-level statistical expertise.3
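The flavor of such aggregation methods can be illustrated with a small sketch. Everything below is an illustrative assumption rather than a description of the actual techniques IARPA solicited: the forecast values and weights are invented, and log-odds pooling is just one of many possible combination rules. The point is only that mathematical aggregation can weight some judges more heavily than others, without demanding statistical expertise of the people whose judgments are being combined.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def simple_average(forecasts):
    """Unweighted mean of individual probability judgments."""
    return sum(forecasts) / len(forecasts)

def weighted_logodds_pool(forecasts, weights):
    """Weighted pooling in log-odds space: forecasters with better
    track records (higher weights) pull the aggregate toward their view."""
    total = sum(weights)
    pooled = sum(w * logit(p) for p, w in zip(forecasts, weights)) / total
    return inv_logit(pooled)

# Five analysts' probability estimates for the same event, with
# hypothetical weights reflecting each analyst's past accuracy.
forecasts = [0.60, 0.70, 0.55, 0.80, 0.65]
weights = [1.0, 2.0, 0.5, 3.0, 1.0]

print(round(simple_average(forecasts), 3))                  # plain mean: 0.66
print(round(weighted_logodds_pool(forecasts, weights), 3))  # weighted pool
```

Because the two most accurate (most heavily weighted) analysts here gave the highest estimates, the weighted pool lands above the plain mean; with different weights it could just as easily land below it.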
One would have to conclude that it is not a great time to be an expert. The pendulum has swung so far away from reliance on expertise that more than a few highly trained intelligence professionals are wondering what the future will hold for people like themselves. Yet there are some signs that the pendulum may have reached the end of its arc and begun its swing back in the other direction.
A report from the Center for Strategic and International Studies (CSIS), for example, finds increasing disillusionment with content generated by amateur users and a shift back toward expert opinion, as seen in the increasing numbers of users who are forgoing open access websites in favor of those whose content has been created or edited by professionals.4 Even heuristics-driven biases, whose potency and pervasiveness have fueled concerns about expert judgments, have a sunny side, as is shown by psychologists Gerd Gigerenzer and Harry Brighton in a provocative essay titled “Homo Heuristicus: Why Biased Minds Make Better Inferences.”5 Maybe we really do need what smart, knowledgeable, experienced professionals have to offer in generating analytic conclusions; in deciding how best to execute a field operation; or in choosing the most productive avenues for scientific, technical, and educational activities. What we also need, however, are structures and supports that enable experts to collaborate efficiently and productively—which, of course, is what this book has mainly been about.
Discussions about the role of expertise in intelligence work sometimes devolve into debates between those who favor a structured, deliberative approach and those who would rely heavily on the intuition of seasoned professionals. It is a false dichotomy. Intelligence is both an art and a science that almost always requires a blend of intuition and structured methods.6 The proper weighting of the two approaches, however, also depends on whether those who are performing the work are novices, experts, or what I will call masters.
Novices have to be shown what to do and how to do it. As learners, they have not yet logged enough experience to correct the lay theories that guide their intuitions, and they are especially vulnerable to heuristics and biases that can lead them astray. Form novices into a team and they risk falling victim to the kinds of group process problems that have been documented throughout this book—perhaps overweighting the contributions of high-status members, or those who talk the most (or the loudest or the most persuasively), or, worst of all, those who are most similar to themselves. Novices need to spend some time as apprentices to more senior professionals to gradually build their competence and confidence.
Experts are those who know just what to do—the right way to organize the work, the methods and tools that can leverage their own talents and efforts, and ways of avoiding biases that can distort their judgments.7 They can tell you exactly what they are doing and why, they execute their work with steady competence, and they are invaluable in helping apprentices learn the trade. When experts make an intuitive leap, as they occasionally do, they subsequently inspect any data they can find to make sure they are not fooling themselves. The best teams of experts are those whose members have logged some time together, who are aware of both the process losses and synergistic gains that can affect team performance, and who have jointly developed strategies for minimizing the former and exploiting the latter.
And then there are the masters. These are people who somehow come up with exactly the right idea or insight, often ignoring generally accepted principles and procedures as they do so. Masters can look briefly at just a few features of a situation and then make remarkably accurate assessments of what is going on or what needs to be done—better, in many cases, than the results of careful deliberative or scientific analyses.8 But they typically are entirely at a loss when asked to explain how they did it. “I don’t really know,” they say. “It just seemed right at the time.” Masters are a rare breed, but when we have one in our midst we should be just as attentive to his or her intuitions as we are skeptical of the novice who studies a situation and then ventures, “Well, it seems to me that….” Although the word seems serves masters well, its use by a novice generally serves as a warning that the person is having difficulty making a defensible connection between data and conclusion.
Is there such a thing as a “master” team—one whose members work together so naturally and well that they need not concern themselves with plans or processes? Although it probably is rare for masters to team up, it does sometimes happen. A few chamber music groups, for example, occasionally transcend the notes in the composer’s score and deliver an interpretation and performance that is literally awesome. It would be wonderful to know more than we currently do about what might help teams of intelligence professionals begin to approach that level of collective mastery.
It’s true. We not only kick into high gear when we personally are in a competition, we also love to watch others compete. Who will be the American Idol, the next top model, the poker player who clears the table, the iron chef who dispatches the challenger? I’ve always known the thrill of a hard-fought athletic contest, but it is only recently that I have come to realize that cooking a meal also can be a competitive sport. Competition really does spur motivation.
But motivation to do what? The answer is obvious: it is to win, to beat the other player, to get the psychological kick and the tangible reward that only one of us can have. But what if the objective were to learn rather than to win? Then competition might not be such a good idea.9 For one thing, competitors assuredly do not share with each other what they know or know how to do. Moreover, they keep their competitive strategies private. When an opportunity presents itself, they bluff and then surprise their competitor with an unexpected move. Over time, doing work that involves an unending series of small competitions (think, for example, of a trader in a financial services organization) alters participants’ personal preferences—perhaps increasing their interest in new competitive opportunities or altering how they interpret ambiguous performance situations. Indeed, life at work for experienced competitors can come to be defined in terms of winning and losing, a state of affairs not uncommon in political organizations.
And motivation to beat whom? The answer again is obvious: the opponent, the one who must lose so I can win. But if that person and I are on the same team, things can get grim. Reflect on the relationships among players on professional basketball teams, for example. The team is supposed to defeat its opponents but sometimes the most energizing competition is for prominence or dominance within the team itself. Or consider office politics in some businesses. The business is supposed to outperform its competitors but sometimes the most salient competition is among staff members as they jockey for position and promotion. That is why it is such a bad idea for managers to provide rewards and recognition to individual members of a team on the basis of their personal contributions to team outcomes (see Chapter 8). If a leader wants to foster sharing and learning among members, it is important to keep the motivational focus on the collective task, on the real competition, not on the internal pecking order.
To illustrate, consider a motivational program called Teams-Games-Tournament (TGT), developed by psychologists David DeVries and Robert Slavin for use in elementary school classrooms.10 It is not uncommon for schoolchildren to informally sort themselves by gender, race, and academic performance, generating homogeneous groups that do not have much to do with one another. Children in one group often stereotype those in another, and there is little or no cross-group learning. How would you reverse that state of affairs and get children sharing with and learning from one another?
DeVries and Slavin formed students into temporary groups that were mixed in race, gender, and achievement in some subject area such as spelling. These groups then prepared for an upcoming competition with the other groups. Each group’s best speller would compete against another group’s best speller, the second-best spellers would do the same, and so on. Competition did its motivational magic, but it was aimed right—that is, at beating the other groups in the spelling bees. Within each group, students exhorted one another to study hard, the better spellers actively coached their less accomplished teammates, and the teacher offered encouragement and support to all. Students learned more, friendships developed among students who previously had ignored one another, and intergroup stereotypes diminished markedly.11
TGT raises the optimistic possibility that one can harvest the very real motivational benefits of competition and, at the same time, foster sharing and learning among team members. The lessons learned from TGT are just as relevant for teams of adults who have hard problems to solve. But two things are required. One, the competition must be between the group and an external adversary, not among team members themselves. And two, the group must be well designed and supported. Those conditions were present for the TGT teams, and surely they can be put in place for intelligence teams as well.
One of the main messages of this book is that one can have motivation and learning at the same time. When the six enabling conditions are in place, member motivation is focused squarely on achieving the team’s overall objective—an aspiration that requires not just that members work hard but also that they share with one another and learn from one another.
If something is really important, we don’t put all our eggs in any one basket. That is one of the first principles of system design when high reliability is needed—for example, in designing an aircraft’s hydraulic systems or a mission-critical computer system. It applies as well to social systems, including teams. The analysis of intelligence data assuredly can benefit from having two separate teams examine the data and then come together to explore and learn what each team has come up with. That can protect against the possibility that the one team on which everything depends will overlook a critical piece of information, use flawed logic in drawing its conclusions, or give insufficient attention to data that are inconsistent with members’ emerging hypotheses.
It is one thing, and a very good thing, for teams to make independent assessments and then juxtapose and discuss them to learn from the differences. But it is a quite different thing, and a dangerous thing, for teams to be placed in direct competition for the ear of a policymaker or war-fighter whose actions will be informed by their findings. One sees at the intergroup level the same dynamics that develop when individuals compete with one another for something that only one of them can win. Competition does indeed provide a significant motivational boost. But it also fosters performance strategies that keep information private and focus members’ attention more on winning than on learning. These dynamics are stronger for intergroup competitions than for those between single individuals because group processes amplify “us versus them” thinking. The boundaries of both groups become more salient and less permeable, pressures on individuals to conform increase, and members become more willing to set their individual judgments aside in favor of group-defined realities.
Here is the kind of talk one sometimes hears within competing groups: “They don’t need to know that,” or “Let them go ahead and pursue that strategy, they’ll discover soon enough that it’s a dead end,” or “We can get that from the deputy’s assistant but nobody has to know,” or “We have to be completely together on this, no individual agendas.” And sometimes competitive dynamics escalate from a disinclination to collaborate and share to strategies that actively undermine the groups with which one’s own is competing. As someone who grew up during the Cold War era, I have always been concerned about the proliferation of nuclear weapons. For the reasons just summarized, I now occasionally find myself nearly as worried about the proliferation of competing counterterrorism centers throughout our own government.
It is depressingly easy to set intergroup competition in motion. Only two things are needed: (1) that the groups be readily distinguishable so everyone can tell who is a member of which group, and (2) that the groups be parties to a zero-sum game and/or that there be a strongly imbalanced power relationship between them. That’s it. And once some trigger (such as a hostile act or a betrayal) gets the conflict going, it escalates and becomes self-sustaining.
What can be done to keep from falling victim to the social dysfunctions of intergroup competition? We know that mere exhortations by leaders for groups to cooperate do not help. The forces of intergroup competition, even among groups that are on the same side of the real competition, overwhelm the rhetoric of collaboration. Fostering interdependence among the competing entities does offer one intriguing possibility, however.
We saw in the previous chapter that interdependence among team members engenders peer coaching which, in turn, fosters team performance effectiveness (see Figure 10-5). Might the same thing happen at the organizational level? That is, if intact teams were made explicitly interdependent for accomplishing the overall mission of an organization, might sharing and coaching come to replace competition as the dominant character of intergroup relationships? That possibility, depicted in Figure 11-1, probably is more idealistic than realistic given the history and political dynamics of the intelligence community. Still, it may be worth a moment’s consideration because it offers at least the possibility of focusing groups’ competitive energies on our true adversaries, free from the distraction of trying simultaneously to beat out other groups that are on our own side.
FIGURE 11-1 At the Organizational Level
When the kiddies are not playing well together, our instinct is to provide some adult supervision. That is also what we do in government when things are not going well. Important work is falling into the cracks between different agencies. Turf battles are making it nearly impossible to get anything done. Several different organizations are doing their own things with no coordination whatever among them. So we appoint a czar or create an umbrella organization to manage it all, to provide the focus, coordination, and efficiency that we need but do not have.
As is seen in the experiences of the Department of Homeland Security and the Office of the Director of National Intelligence, umbrella organizations do not provide an automatic fix for the problems that prompted their creation. Indeed, umbrella organizations sometimes actually increase squabbling among the organizations they are supposed to coordinate. They add a layer of bureaucracy that can slow down the very activities that they were supposed to have speeded up. And they can further diminish the autonomy of those on the front lines to do what needs to be done to achieve their parts of the overall mission.
By contrast, consider the Goldwater-Nichols Act of 1986, which streamlined the military chain of command and strengthened the authority of the Chairman of the Joint Chiefs of Staff, thereby reducing inter-service rivalries that previously had compromised both the military’s efficiency (for example, in procurement) and its ability to conduct well-coordinated operations. Although analysts differ in their assessments of the impact of Goldwater-Nichols, the Act surely provides lessons that could be helpful in identifying the circumstances under which umbrella organizations are appropriate, as well as the conditions that must be in place for them to achieve their intended objectives.12
Conventional wisdom specifies that, when things are not going well or when an unfolding crisis must be managed in real time, control should be centralized. All information is fed to, and orders for action flow from, the person in control. That, it is believed, increases both efficiency and coordination. In fact, central control may be the opposite of what is needed. Recall from Chapter 5 that the best statements of team purpose are clear and insistent about the ends to be achieved, but leave to the team decision making about the means by which those ends are pursued.
When quick and well-informed responses to developing situations are required, autonomy and accountability should be pushed downward, not gathered up. Combat commanders in the military know that, as do those who run tactical law enforcement and clandestine intelligence operations. It may be time to apply the same lessons to the rest of the intelligence community: Set overall direction centrally, but then provide each unit within the larger organization the resources and the latitude to do whatever needs to be done, within broad limits, to achieve those aspirations. It is, indeed, hard to dance well under an umbrella that someone else is holding.
One does not have to spend much time in the intelligence community to hear comments such as these: “You don’t get in trouble by overclassifying, you get in trouble by underclassifying.” “You don’t get in trouble for sharing too little, you get in trouble for sharing too much.” And, perhaps most bothersome of all: “If it’s not secret, it’s probably not very important.” It is the ultimate paradox of intelligence that teams charged with collecting, analyzing, and using information often cannot themselves get the information they need to do their work (see Chapter 8).
Long-standing community policies, practices, and norms not only get in the way of the work itself, they also impede the diffusion of lessons learned from intelligence successes and failures. There exist in the intelligence community a number of units whose explicit purpose is to harvest lessons from on-the-ground experience. Historians and anthropologists at the Center for the Study of Intelligence (CSI), for example, prepare highly informative reports about what went right, and what went wrong, in a wide diversity of intelligence activities. According to one professional at CSI, many of the lessons learned relate directly to the operation of intelligence teams of various kinds.13
The challenge—and it is a big one—is how to make what is learned from such studies available to other teams for use in planning and executing their own work. Feedback usually is provided mainly to those who were involved in the activity studied, with further diffusion occurring mostly through informal networks. So those who already know much of what happened learn a little more than they knew before, but that’s about it. As this is being written, CSI is experimenting with strategies for aggregating findings from multiple studies of the same general topic, the first topic being factors that enable or hinder collaboration among community members. That approach has considerable promise for making the lessons learned from experience more widely available and, perhaps, for the eventual inclusion of those lessons in intelligence community training programs.
Is it time for some fresh thinking about classification and compartments? For starters, how about doing away entirely with the term open source and maybe even with organizational units whose work is defined by the requirement that they look exclusively at data from publicly available sources? The culture of secrecy that pervades the intelligence community is not going to change in the short term. But policies and practices can be changed, and in ways that recognize that it is the value of a piece of information that is important, not whether that information came from a public or secret source.
How about taking full advantage of already-existing technologies for sharing information across groups and organizations? The technical capability for what sometimes is called trusted information sharing already exists and allows content to be shared without compromising information about methods and sources that must be protected. Moreover, the community has only begun to tap the full potential of Intellipedia, A-space, social networking, and other technology-enabled means of radically expanding the sharing of information and expertise.
How about explicitly encouraging community members to come up with new ideas for fostering information sharing across unit and organizational boundaries—and then providing public recognition and reinforcement to those who do? The CIA’s Galileo program has generated many provocative ideas for improving how things are done in the intelligence community. Could there be a similar program that specifically seeks ideas for increasing the degree to which intelligence professionals can obtain, and can share, mission-relevant information?
And, finally, how about breaking through many of the compartments that now exist, moving to the low side much of what is now routinely classified—and, at the same time, significantly strengthening the level of protection for the relatively small number of things that absolutely must be kept secret?14 This surely sounds like heresy, but it is something that one hears from veteran intelligence officers who have spent too much time in their careers trying, often with limited success, to get the information they and their teams most need to do their work.
Some years ago when I was doing research on flight-deck crews, I was in the room when an airline’s senior managers were debating precisely this issue. “We cannot possibly tell our people about such sensitive matters,” one executive declaimed. “Everybody around here has a neighbor who has a brother-in-law who works for [a competitive airline]. We tell our employees, and it will be in their executive offices within a day.” After a moment’s reflection, another manager responded, “That may be true, but which would be worse, for [the competitor] to know or for our own people not to know?” Which, indeed?
Like managers in other organizations, intelligence community leaders sometimes find it hard to resist the onset of cynicism as they wait for the January program of the month to be supplanted by the February program of the month. If you just wait, you won’t have to deal with what they are asking you to do. She really will be gone soon, and she will take her programs and preferences with her or, perhaps, leave them behind to be filed away and forgotten.
Community veteran Mike Mears tells of one intelligence agency in which half of the unit leaders changed within a six-month period. That rate of flow-through may be unusual, but the frequent movement of community leaders from position to position is unlikely to change in the foreseeable future. The political appointee departs when the administration changes. Or she has done well and gets promoted. Or her two years are up and she is rotated to a different organization. Or she chooses to move to a different agency to get her ticket punched in hopes of a future promotion. Or things did not go well in her organization and she is scapegoated and moved to the sidelines. There are lots of possibilities but they all have the same outcome.
How can one foster sustained development of a team and organization under those circumstances? It is a significant issue because many intelligence objectives cannot be accomplished with a quick in-and-out hit; they require instead sustained effort over a considerable period of time. A scientific or technology development program can continue for years. The cultivation and support of an agent can take just as long, as can the accumulation of all the knowledge that is needed to perform analytic work at the highest level. And so can the design, development, and implementation of a first-class training program for intelligence professionals.
One strategy for dealing with the tension between short tenure and long tasks is to create a multiyear plan that will survive personnel changes. Although such plans are a common feature of the government landscape, I have yet to hear anyone describe how helpful it is to have one. A better strategy, perhaps, would be to focus human resource management somewhat less on the roles and careers of individuals and more on the development of teams that have greater continuity than could reasonably be expected of any one member. Two special kinds of teams that might be worth considering are described briefly below, one for organizational leaders and the other for front-line professionals.
As has been noted previously in this book, when we think of leadership we often have in mind one individual who sets the direction for an organizational unit and coordinates unit members’ work. In this “heroic” model of leadership, the leader gets the credit if the unit does well and takes the blame if it does not.15 As the pace and complexity of organizational work have escalated in recent years, however, more and more organizations are moving from the heroic model to the establishment of leadership teams whose members share responsibility for collective directions and outcomes.
Leadership teams are headed by the chief executive of an organizational unit, and are composed of people who lead subunits within that organization. Because such teams have a rich diversity of knowledge and experience, they provide opportunities for all members to learn from their colleagues as they work together on consequential organization-wide issues. And, of course, they provide continuity of leadership even when one or more members depart. Although leadership teams would seem to have considerable value in the fast-changing world of intelligence organizations, they are less commonly seen there than in the private sector.16
Let us now turn from teams of organizational leaders to those that carry out front-line work in intelligence organizations. One type of team that can provide continuity as individuals move through their intelligence careers is what was described in Chapter 2 as a sand dune team. A sand dune team has fluid rather than fixed composition, with members coming together in various configurations as task demands change—just as sand dunes do when winds change. Such teams typically operate in a moderate-size organizational unit (perhaps two dozen members), and their overall missions and norms of conduct are established at the unit level. Like leadership teams, sand dune teams can maintain their momentum even as individual members come and go. They can make it possible to efficiently manage limited resources in rapidly changing environments. And they provide a level of flexibility and adaptability that is highly advantageous in intelligence work but that is not feasible in a traditional one-person, one-job work structure.17
For all their advantages, these two types of teams—leadership teams and sand dune teams—require just as much attention to how they are purposed, structured, and supported as do more traditionally designed teams. As by now should be clear from all that has been discussed in this book, there is nothing automatic about teamwork, no kind of team that can be formed up on the fly and then left alone to carry out its good work. Indeed, because of their special features and their fluidity, it may be even more critical to have the enabling conditions in place for leadership and sand dune teams than it is for single-purpose teams whose membership is relatively stable.
As we have seen throughout this book, teams are a popular means for accomplishing intelligence work, perhaps more than ever before. It is not just that they bring more resources and more diverse perspectives to the work than could any single individual. It also is that more and more tasks these days pretty much have to be performed by a team, tasks that are simply too large for any one leader or officer to handle alone. Indeed, if some task can be performed satisfactorily by one person working entirely on his or her own, it may not be of the greatest consequence. Teams also provide a means for dealing with, or at least circumventing, some of the problems that have been addressed in this chapter—such as insufficient sharing of information across disciplines, functions, and organizations; instability and uncertainty spawned by the flow-through of team leaders and members; and over-reliance on competition to sustain motivation.
Intelligence community managers sometimes turn to teams too quickly, however, mindlessly creating them for work that actually would be better performed by an individual. Consider, for example, the difference between framing a problem and solving it. Framing a problem is a creative act that is more appropriate for a single talented individual than for an interacting group (see Chapter 2). Leaders who overlook the distinction between designing and executing a task risk using teams more often and less appropriately than those who think carefully about whether using a team is actually the best way to carry out a particular piece of work. The same considerations apply to the use of techniques such as crowdsourcing and prediction markets. Such techniques can indeed generate valid estimates. But how well they work depends heavily on how well the question to be addressed has been structured—and structuring the question is better done by an individual than by a group.
Wise intelligence managers also give careful thought to the type of team they create. Should it be a face-to-face interacting group, or a distributed group that operates asynchronously, or some other structure that would be especially appropriate for the particular task to be performed? Finally, as has been seen throughout this book, a team’s effectiveness is powerfully shaped by its design, by the organizational supports available to it, and by the quality of the coaching it receives. Unless a team can be structured and supported well, it almost always is better to find an alternative way to get the work accomplished than to push the work off to a team that has little chance of success. A team gone bad is far worse for everyone—team members as well as those the team serves—than no team at all.
A key responsibility of the intelligence community is to keep us all from becoming sitting ducks for those who would do us harm. There may be a lesson to be learned from actual ducks about what it takes to accomplish that. Ducks have a problem when it comes time to settle in for the night. Because they sleep in the open, they have to be alert to possible attacks from predators. But they also have to get some rest.
They accomplish these conflicting objectives by exploiting a special feature of the duck brain—the ability of its two hemispheres to operate independently. In ducks situated on the periphery of a group of sleepers, one hemisphere is asleep while the other remains fully alert. If there is a sign of trouble, a duck on the periphery will catch it and sound the alarm, and the whole group will take to the air. Then, when the flock returns to ground level, different ducks take over the peripheral positions, allowing those who previously were in the warning position now to go fully to sleep.18
Groups of ducks draw on their members’ special capabilities to keep the flock safe. Although the two hemispheres of human brains do not have the ability to operate independently, we have something even better—a wonderfully evolved prefrontal cortex that opens possibilities for collaboration about which ducks could only dream. Surely that capability should allow us to work together to achieve our collective aspirations at least as well as groups of ducks coordinate to accomplish theirs. This book has sought to identify what it takes to do that, to increase the chances that teams whose responsibilities include the collection, analysis, and use of intelligence data are fully alert and ready to do what needs to be done to keep the rest of us safe and secure.