You’re only as good as the people you hire.
—Ray Kroc
When we think about recruiting, we instinctively focus on who might have the expertise to submit a response to solve our problem. But when we take this approach, we miss an important opportunity to let valuable talent participate in smaller but still useful ways, like rating a solution or offering feedback. Like supporting roles in a movie, these contributions are easy to overlook, but movies would not work without them.
We have identified three crowdstorm patterns that reflect different roles, so we can better understand whom we might recruit and what we will expect them to do. For the moment we are focused just on recruiting, but we will revisit these patterns throughout, as they affect many aspects of the crowdstorm process: from measuring contributions to enable fair rewards, to managing much larger numbers of contributors when we add more supporting roles.
We will refer back to the patterns in Figure 6.1 throughout the rest of the book.
Figure 6.1 Crowdstorming Interactive Patterns
The patterns reflect roles and interactions among people both inside and outside an organization. As more roles are added, the number of participants and interactions rises sharply. The key attributes of these patterns are as follows:
Let’s look at the implications of each of the patterns a little more closely to see how they impact recruiting.
Recruiting for domain knowledge is the sole focus of the search pattern.
When we put together traditional teams, we are usually seeking a core group to generate something—a concept, a prototype, a plan, or new business. Therefore, uncovering a domain expert is usually the primary focus.
As organizations have called upon larger groups of participants to solve problems using crowdstorming, they have made some interesting discoveries with respect to domain expertise. While expertise of course remains critical, people from outside the task domain have increasingly solved many problems.
Recruiting key contributors thus becomes not only a hunt for sameness (more experts or specialists in the same domain, including from outside the organization), but also a call for likeness: analogous expertise that can be applied in new ways. Let's look at an example from the P&G Connect and Develop project to illustrate this concept.
Over the past few years, the San Diego Zoo has been developing a specialty in biomimicry, a discipline that tries to solve problems by imitating the ingenious and sustainable answers that nature provides. Writers Dan Heath and Chip Heath describe how P&G leveraged talent from a different but like discipline to solve a problem:
Let’s say you’re looking to create a detergent that works superbly in cold temperatures. This would seem to be a chemical engineering problem. But, as the zoo’s scientists tell us, it’s also an Antarctic icefish problem. When the icefish eats other fish, it has to digest [its prey’s] oils—[a] process [that’s] remarkably similar to what happens in the wash with the oily taco stains on your T-shirt. Furthermore, the icefish typically dines in water as cold as minus 2 degrees Celsius. (Try that, All-Temperature Cheer!) So, thanks to this cold fish, you have a working model for an ultra-low-temperature detergent—and it’s a solution that would have never occurred to an expert.1
P&G found that the goal was to uncover “that elusive person who’d find your problem trivial.” Their search wasn’t random or arbitrary; at the same time they wanted to cast a net wide enough to find resources that could deliver the unexpected solution. A crowdstorming environment, in fact, lends itself to helping companies find analogous solutions and resources by the mere fact of being open to participants who see the opportunity of applying solutions they have worked on in new ways.
Recruiting for domain knowledge then entails finding participants with expert knowledge, as well as those who have the advantage of not being burdened with assumptions—individuals who can bring new perspectives and heuristics to solve a given problem. We will discuss where and how to find these people later in the chapter.
But before we go there, let’s look at the other interactive patterns to see what other potentially new roles organizations need to consider when recruiting.
Beyond the core contributors there are also supporting roles. These participants have an interest in the outcome and spend some time injecting comments, critiques, and inspiration in response to a submitted idea, in addition to offering evaluation. They differ from core participants primarily based on the amount of time they are willing or able to commit to the challenge. Since they’re commenting, editing, and evaluating rather than creating, their time commitment is often lower—despite their passion or knowledge. Yet, depending on how you structure the challenge, their contribution is critical.
These roles take shape based on the amount of time invested. For instance, you might expect traditional core team members to contribute 10 hours per week. Crowdstorming environments make it feasible to recruit team members who contribute, say, 1 hour per week on much smaller tasks like voting, sharing, or commenting.
In fact, Dennis Wilkinson, of HP's Social Computing Lab,2 looked at various participation opportunities across a number of different online communities. He found a recurring pattern, independent of the domain of expertise: a long tail in which a few participants do the core work (idea submitters) while a much larger number support the work (providing feedback and filtering). The HP research showed that participation on simple tasks like voting was much higher than participation on tasks like commenting, which require more time. And participation on commenting tasks was in turn much higher than on idea creation, the most intensive task.
This leads us to a discussion about where talent can be found. There are three basic places to tap: employees within an organization; existing communities based on skills, domain knowledge, professional interests/affiliation, and other criteria; and the public.
Once we know where we might find prospective participants, we need to set about convincing them to join our crowdstorming initiatives.
We focus so much of our marketing efforts today on selling—on moving people through an experience to arrive at a point where they want to purchase something. Since crowdstorming requires that we sell people on an experience, we can learn from these successful marketing approaches. Our aim is not to rewrite the book on marketing tactics, but simply to emphasize a few critical steps in the participant journey.
You may recall that we introduced this in Chapter 2 as a way to understand how trust and information exchange can grow, as participants move from awareness to making the decision to participate in a challenge. The participant journey also highlights critical decisions for participation during the recruiting process.
Figure 6.2 highlights the critical points on the participant journey:
Figure 6.2 The Participant Journey—Recruiting
We’ve looked at the participant journey in general, but it is worth highlighting a few aspects of the journey (and therefore tactics) that might help with recruiting for specific crowds.
Getting participants to consider whether to participate is initially a function of how well those running the challenge communicate the brief and what the incentives to participate are. Even more interesting is how participants themselves contribute to recruiting during the process. Platforms that allow participants to share work and engage their own personal networks during a crowdstorm help fuel the recruiting process by providing positive feedback.
Organizations need to build the appropriate tactics into their recruiting strategy in order to find participants for the roles they need, based on the crowdstorming interactive patterns they elect to use.
Let’s look at a few examples of organizations that have done this successfully.
Our first example is a company that recruited its own employees, and that faced challenges in all three recruiting steps. Employees had to become aware of a fundamental new way of doing innovation; they had to be convinced that it was valuable to participate, and a process and platform had to be implemented that could make participation credible for people who did not know each other.
It is unlikely you have heard of CEMEX if you work outside the construction business. However, you have very likely benefitted from their products. CEMEX is one of the leading global building materials companies. They trade in more than 100 countries and have close to 47,000 employees located around the world.
In 2010, CEMEX set out to deploy SHIFT, a new platform to encourage idea sharing across the company. Their aim was to encourage collaboration to drive innovation—specifically, to develop new products and reduce time to market. To do this, they had to move from relatively independent global operations to becoming a more tightly integrated global company, and they needed to recruit platform participants from across their employee pool.
CEMEX had to figure out how to encourage adoption of crowdstorming in order to share ideas in an interactive way across multiple cultures, languages, and modes of problem solving. They answered this challenge, and went from a core group of 2,000 people in April 2010 to over 13,000 active users by February 2012.3 They did this by following some of the approaches we have discussed thus far.
For CEMEX, the transition to internal crowdstorming was a success story based, in part, on how they handled the recruiting process. They shifted their innovation culture from a hierarchical model to one that engages talent from far and wide, using tactics to build awareness and consideration and then communicating the value of the participant's experience.
In Chapter 1, we introduced examples of how LEGO puts crowdstorm processes to work. Their most recent efforts are focused on getting help to identify the most promising new LEGO designs using the LEGO Cuusoo platform. While they have an active fan base, their fans are distributed across multiple interest groups. LEGO needed a way for them to share and respond to ideas in one place to better organize interactions.
Here is how they did it:
If you invented the Internet, how would you celebrate its fortieth birthday? For the United States Defense Advanced Research Projects Agency (DARPA), the answer was simple—let's see what this baby can do. In Chapter 3, we talked about DARPA's experience in using open calls and contest structures to get new results and learning. But on the fortieth birthday of the Internet, DARPA outdid itself and in the process provided us all with a very useful head-to-head comparison of different recruiting strategies for online collaboration.
The agency wanted to understand how they could use the latest generation of online technologies and behaviors to solve broad scope, time-critical problems. While this book is focused on a specific type of problem related to quickly finding the best ideas and talent, DARPA took a broad view on what they were seeking—and, as a stand-in for their research agenda, they turned to weather balloons.
On a Saturday in December 2009, the agency released 10 red weather balloons across the continental United States and then waited to see whether, and how, teams would be able to locate them. In the end, over 4,000 teams participated.
Here is how DARPA posed the challenge:
To mark the fortieth anniversary of the Internet, DARPA has announced the DARPA Network Challenge, a competition that will explore the roles the Internet and social networking play in the timely communication, wide-area team-building, and urgent mobilization required to solve broad-scope, time-critical problems. The challenge is to be the first to submit the locations of 10 moored, eight-foot, red, weather balloons at 10 fixed locations in the continental United States. The balloons will be in readily accessible locations and visible from nearby roads.5
The prize for the winning team was $40,000.
The DARPA team already sensed that Internet tools and connections were enabling new ways to find and organize talent. They wanted to create a challenge that would demonstrate how people could do this on a grand scale. To date, this remains one of the largest-scale head-to-head tests of different online crowd recruiting strategies that we have been able to find.
DARPA discovered, via the interviews they conducted following the challenge, that the most successful responses employed a number of similar approaches. The winning team, a group of MIT students, employed a recursive strategy for recruiting; that is, people who joined the team were rewarded not just for finding balloons but also for finding people who found the balloons. This team devised incentives that motivated people to forward to others its message about the challenge and the need for help. They promised $2,000 to the first person who submitted the correct coordinates for a single balloon, and $1,000 to the person who invited that person to the challenge. Another $500 would go to the person who invited the inviter, and so on. The system quickly took root, spawning geographically broad, dense branches of connections.
After 8 hours and 52 minutes, the MIT team identified the correct coordinates for all 10 balloons, doling out prize money to people in the winning recruiting chains and donating the rest to charity.
This outcome is worth reflecting on for two reasons. First, as we discussed at the beginning of this chapter, our instinct is to focus recruiting on those people who are likely to submit responses to our brief. But it is easy to forget that there are other roles that might be useful to us. For example, what is the value of the people who might introduce us to the people with ideas or IP?
The second important aspect is related to the media. There were many ways for people to discover the DARPA challenge. Stories about it were featured by organizations able to reach large numbers of readers or viewers, like the New York Times and CNN. And, if you searched online for the challenge, you would also find multiple teams trying to recruit you to their cause.
The winning team, like many of the other successful teams, was smart enough to realize that while many people might not be able to find a balloon themselves, they might still be able to respond to the media coverage and play a useful role. If, for instance, you found your way to the MIT site, you could participate not only by finding a balloon but also by finding more people who could find the balloons—and you would still stand a chance of benefitting.
A final comment on the use of media in this DARPA challenge: many of the teams employed strategies that would look very familiar to any senior marketer today. They made use of broadcast media; however, they did not pay for media but earned it by offering interesting stories and incentives—all familiar, efficient awareness strategies.
DARPA tracked contest site traffic to understand rapidly increasing interest, and then tracked teams as they launched their various recruiting websites. Many of the teams used their recruiting sites in conjunction with search advertising or search engine optimization to get people to join their teams. Still others made use of their social networks, as conversations on Facebook and Twitter trended toward the Network Challenge in the days running up to the launch. The broadcast phase proved useful to many teams in helping to build awareness and consideration.
Recruiting the desired participants for your crowdstorm is a function of various elements: finding talent with the domain knowledge needed to solve your task; accessing talent with like knowledge from another domain that could solve your problem; and optimizing participation from talent who may not be able to devote the time to create ideas, but who can provide feedback on and help you filter ideas. Understanding the type of roles you want for your challenge—as well as the type of interaction that works best for your community—is crucial when choosing your recruiting strategy. In addition, it is critical to develop tactics to deal with the key touch points along the participant journey—awareness, consideration, participation, and experience.
Managing and monitoring the participant's experience during crowdstorming is not only important for recruiting. It can also improve the outcome of challenges, the participant experience, and the advocacy that results.
We now turn our attention to how you manage and monitor participation to deliver the best results.
Notes
1. Dan Heath and Chip Heath, “A Problem-Solver’s Guide to Copycatting,” Fast Company (November 1, 2009), www.fastcompany.com/1400929/problem-solvers-guide-copycatting.
2. Dennis W. Wilkinson, “Strong Regularities in Online Peer Production,” Social Computing Lab, HP Labs, www.hpl.hp.com/research/scl/papers/regularities/regularities.pdf.
3. For details on CEMEX see Jesus Gilberto Garcia, “Shift Changes the Way Cemex Works,” Management Innovation Exchange (September 2, 2011), www.managementexchange.com/story/shift-changes-way-cemex-works.
4. Ibid.
5. DARPA Network Challenge archives, archive.darpa.mil/networkchallenge.