Any sufficiently advanced technology is indistinguishable from magic.
—Arthur C. Clarke
A rocking chair that charges smartphones. A new kind of cardboard packaging that includes instructions for turning it into something else when you’re done with it. An innovative mapping service that lets cyclists in Berlin pick the best bike routes. These are some of the ideas that are regularly sent out to Springwise subscribers.
Springwise collects promising entrepreneurial ideas from over 15,000 spotters, its community of innovators spread around the world. Each day, spotters send in their ideas via e-mail or online forms. Then a core team at Springwise’s London headquarters compiles the ideas, which an editorial board assesses to determine which ones will make it into their e-mail newsletters.
Springwise uses its website and e-mail list of 155,000 subscribers to generate awareness—the first step in our Participant Decision Journey for engaging with potential participants. Each newsletter contains a call to action to join the network. But this organization has other ways to generate awareness—including their more than 50,000 Twitter followers. Additionally, Springwise.com generates a good amount of search traffic thanks to their publicly accessible content (which accounts for about 15 percent of their traffic). And because Springwise.com provides a constant source of unique entrepreneurial ideas, it is often linked to by other media sites—thereby spanning a range of interested communities from general news to entrepreneurship and design.
Once participants decide to join, they will discover the Springwise crowdstorming space that manages the later parts of the Participant Decision Journey. Because Springwise is focused on receiving ideas (consistent with the search pattern we described), the crowdstorming space is simple—an online form (and an optional e-mail) to submit ideas. As Springwise decides what ideas they will use, participants can see which of their submissions have been accepted—and ultimately how many points they have accumulated to use toward redeeming prizes.
Springwise provides a good way to understand how different spaces are combined to achieve recruiting and crowdstorming objectives. Figure 10.1 shows some of the spaces used by Springwise.
Figure 10.1 The Participant Journey—Springwise recruiting and crowdstorm spaces
Looking back on the cases throughout the book, we can see similar uses of spaces. Recruiting takes place across any number of online spaces, while crowdstorming spaces differ based on the pattern they need to support.
American Idol takes place on TV in our living rooms, but also via auditions in stadiums, through online voting and text messaging, and in front of a live studio audience. People discuss the show wherever they can connect, from coffee shops to Twitter.
Most DARPA Network Challenge participants discovered the opportunity via CNN or social networks. They created proprietary software and used social media channels, mobile phones, and search engines; some even called local businesses and asked them to peer out of their windows to confirm balloon locations.
Giffgaff interacts with and recruits their community members through multiple channels—Twitter, Facebook, e-mail, and even in person at Meetups. However, most people who want to participate and be rewarded for participation will interact on giffgaff.com, via a platform created by Lithium.
Many of the recruiting spaces are now very familiar to us. In fact, you likely have personal experience with many of them, from social networks like LinkedIn, Facebook, or Twitter to more traditional media spaces like online news sites or blogs. Marketing teams continue to test and refine the best approaches to emerging channels while optimizing what works via traditional media channels.
However, much less is known about crowdstorming spaces. Springwise has created their own tools to measure and reward participants. American Idol pioneered voting via SMS to make their process work, and DARPA created a process to collect and verify balloon locations. Giffgaff relies on a relatively new enterprise platform to enable their comprehensive monitoring of community contributions. It is unlikely that many of us will encounter all of these spaces, so we will allocate more space to their discussion.
But let’s first take a look at recruiting spaces.
The DARPA Network Challenge research results support what leading advertisers will likely confirm: traditional media channels are far from dead. So finding ways to get your story told in these spaces is essential. But searches and social media are critical too, particularly as people move through the consideration phase of the Participant Decision Journey. As we discussed, the winning team in the DARPA Network Challenge was covered on CNN Headline News, with the runner-up receiving coverage on National Public Radio. It also helped that the winners’ brands—MIT and Georgia Tech—differentiated them from others who were not really connected in any way to existing brands. Importantly, however, a number of teams that placed in the top 10 drew on existing communities. These communities were not formed for the purpose of crowdstorming, but they proved very helpful as a way to drive awareness, consideration, and ultimately participation. For instance, George Hotz (who untethered the iPhone) engaged his then more than fifty thousand followers on Twitter to place third. And a community of geocachers engaged their community of several hundred thousand members (geocaching is an activity in which the participants use a Global Positioning System—GPS—to play hide and seek).
Beyond digital and traditional media channels, we can also look to packaging and retail environments for recruiting. Not many of us are likely to know who designed the products and services we use. But, as we noted earlier, both Quirky and LEGO Cuusoo use their product packaging to highlight their designers. And when Threadless teamed up with GAP, they were able to make use of in-store signage to generate awareness.
The right kinds of questions, incentives, and coalitions pave the way to find people in spaces that already share your interests and passion. The background material for the brief offers interesting conversation starters and content opportunities for publishers. In fact, one of the most effective awareness strategies for the Betacup Challenge was a series of articles on Core77.com exploring the challenge of designing coffee cups for recycling or reuse. Not only did this content generate awareness; we observed many new participants signing up for the Betacup after reading content on Core77.
Our ability to understand how people moved from awareness to participation on Core77 is not unique. Many touch points provide increasingly detailed analytics that enable you to understand how people respond to your content and messages—Facebook, Twitter, LinkedIn, and YouTube all provide details similar to what was once only available on your own website. Additionally, coalition partners can share detailed understanding of how people are responding to content, from on-site analytics to off-site monitoring. Beyond these methods, social media monitoring tools like Salesforce Radian6 provide ways to track responses to content—from tweets to blog posts. No matter the channel, you should expect a response in the form of some conversation; and analyzing, understanding, and joining this conversation is nearly impossible without monitoring tools.
As prospective participants move from awareness to consideration, we need to anticipate what is important to them. We can view those who are going to be submitting proposals in the way we might think of job candidates: these people will want to know what it might be like to work with this organization. They wonder: Does the organization sponsoring the crowdstorm deliver on promises? Can I find examples of people who have worked with this organization and who have gone on to receive compensation—from financial rewards to attention?
It is no surprise that you will find success stories featured prominently on sites like P&G’s “Connect and Develop,” or LEGO Cuusoo, or Threadless. It is critical to ensure that success stories can be found—whether on your own site or via others. These stories might take the form of featuring community members whose creations have gone on to some acclaim, or a new product that has recently been launched via the site. A site that features good stories effectively connects with participants and demonstrates that the promised outcomes are achievable. It goes beyond “we want you” to “this could be you.”
Your approach to using recruiting spaces requires you to be aware that people are going to be searching for additional information about your crowdstorming project. Recruiting spaces need to make background information about the brief, the role of a particular coalition partner, or clarity on the process and terms easily accessible. For example, they should clarify how much information people need to share or whether or not a prototype is required for evaluation.
It becomes easier for prospective participants to understand what is happening as organizations move from search to collaboration and integration patterns. Online forums and support sites are where participants reveal what is and is not working, and they are important tools for communicating with participants. Participants discuss openly what they feel is unfair or even better in other communities. They will cite the reasons they enjoy working with a particular community—everything from the tone of interaction to compensation.
Customers and employees of many organizations must actively seek out online spaces to share their experiences. But for many communities engaged in crowdstorming, the space is already there: the support and discussion forums. Why do organizations allow this? As many will tell you, these conversations are going to happen anyway. So they would rather that they take place in an environment that they can monitor and where they can engage with participants.
Almost all the cases we have discussed so far have one thing in common. They have created an online space and a set of tools where people can learn more about how, why, and where they might participate in a given challenge. Up until now, the participant journey has taken place in other people’s spaces using existing technology. But now we are in a position to create our own spaces and we are free to optimize this space around our specific needs.
Usually prospective participants can see some or all of the brief and review the details of the terms and conditions associated with the available incentives. The Threadless site has a lighthearted tone, while P&G’s is more serious. LifeEdited and Betacup were built on social good and sustainability themes, while XPrize invoked science fiction to challenge prospective participants. But something that is not as immediately apparent about these sites is that they were optimized to ensure that people who were interested in participating took the important first step of deciding to do so.
This should not be a surprise. Our online experiences have been constantly optimized. Take online shopping, for example. After more than a decade, Amazon.com continues to adjust product pages, search results, and checkout flow, carefully testing each element of each page to influence our purchasing behavior. The same is true of the ads we see from search engines to social media—our responses are constantly evaluated to determine what content is best to elicit the desired response—clicking, purchasing, or, in our case, choosing to participate. From barely perceptible changes in product headlines to the use of different images or video, the site uses each interaction we have with it to learn a little more about what will cause us to purchase more.
To get a feel for the range of possibilities in optimizing online experiences, you can visit www.abtests.com—a site that collects a seemingly endless stream of before and after results of optimized web pages. These sites are trying to encourage a range of behaviors—from signing up, to test driving a new car, to encouraging registration for online games. The tests yield improvements that can sometimes double conversion rates.
Once we have convinced people to join our crowdstorm, it’s time to turn our attention to the space that will enable us to work together.
Many of our examples have spaces that you can join, where everything is already up and running—the infrastructure, people, and community management processes. Organizations like the City of Boston, GAP, Starbucks, and Victorinox have engaged outside talent by working with an existing online space and their communities using InnoCentive, Threadless, and jovoto.
Other organizations like LEGO, GE, CEMEX, and Giffgaff have determined that they want to take on the role of organizing communities. And so they need their own infrastructure and tools to enable the patterns we have discussed, in particular for collaboration and integration. The infrastructure players we referenced are in the business of building software tools. Lithium is focused on a wide range of social business interactions that impact almost all conceivable touch points across the inside-outside boundary we cited. The Cuusoo Social Creation Platform and Brightidea are specifically focused on how groups of people work together to create.
Finally, there is proprietary crowdstorming infrastructure—the platforms that organizations like Springwise, Goldcorp, and American Idol are developing and using. While these particular examples may not offer ways to work together, we can still learn from what they have done.
Table 10.1 provides an overview of the landscape of crowdstorming spaces we have discussed in our examples.
Table 10.1 Online Work Environments (Exemplary Only Based on Cases Discussed)
Whether you plan to partner with a crowdstorm platform or a crowdstorm community provider, or to build your own infrastructure, the remainder of the chapter will help you review and evaluate elements of spaces that enable the various crowdstorming patterns we have discussed.
As we have discussed, the main focus of the search pattern is recruiting. Since we introduced recruiting spaces and tools above, let’s focus on what technology is needed once people decide to participate.
As we look back at the case studies that have used search patterns, it is clear that online spaces need to primarily enable the ability for participants to submit a proposal or idea and learn the results of the challenge. Sequoia Capital, for instance, asks people to connect with them via e-mail or LinkedIn. P&G participants need to complete an online form. Tricorder XPrize has a voluntary “intent to compete” option that enables entrants to signal their interest, but also to receive ongoing communications from XPrize. The first round of qualification requires the submission of proof that the team has created designs that will be able to compete in a final prototype evaluation; to this end, teams can submit data and media to make their case.
Since search patterns focus on receiving proposals or testing prototypes, the primary objective of these spaces is to enable participants to send content. The content might take any form and usually includes some indication that participants have agreed to certain terms and conditions—like contest rules or non-disclosure agreements. The evaluation processes drive the exact form of the submission. In other words, it asks: What do expert judges need to see in order to best evaluate submissions, and how are they going to receive this content?
While we have seen that it is simple to accept large numbers of ideas, it is often more complex to evaluate them. Recall that Netflix and the city of Boston needed a way to test software to understand the resulting performance against benchmark data, whereas DARPA required the most qualified participants to show up in a desert. It is hard to anticipate all the ways in which testing might occur. However, frameworks for testing need to be designed and built ahead of the contest, so that the challenge organizer can include specific relevant terms in the submission guidelines.
The main consideration for expert evaluation is how to receive and organize ideas. In some formats, submissions are distributed to individuals who then meet to discuss and deliberate in much the same way a jury does in a legal trial. But before that can happen, submissions are usually reviewed to create a shortlist. It is simply not viable for juries to meet and review hundreds or thousands of proposals. The groups responsible for creating these shortlists receive criteria and are entrusted with providing the first level of filtering. In the American Idol example, these are the groups of judges who help with all of the in-person auditions for about ten thousand people in the United States. Goldcorp is on the opposite end of this scale; they received 50 submissions, which they reduced to 25 semi-finalists. But these submissions still contained over 600 pages of detailed information. And it ultimately took four months to decide on the prizewinners.
Participant support is the final consideration when evaluating the right space for the search pattern. Participants who are considering committing a significant portion of time to create and share a submission are going to have questions—and will certainly have even more questions once they begin working on proposals. The simplest way to manage this is to enable questions via one or more channels from e-mail to phone. You should be monitoring social channels anyway as part of your recruiting toolset—so this is also a simple starting place to understand and respond to potential questions. You should expect questions wherever people think you might be listening, from Facebook pages to Twitter.
While understanding and responding to questions is important, you can do even more. Providing a simple forum enables participants to help one another. It also ensures that they can easily and quickly reference any questions you’ve already answered—thereby reducing the amount of work you have to do. For instance, during the Netflix Prize, people used forums for everything from understanding the contest rules and data to getting specific help on algorithms they were working on for solutions.
Great support environments make it easy for people to find and avoid re-asking a question that has already been answered. But it is equally important to quickly establish who needs help by highlighting recent questions or those that have not yet received responses. The best support environments result in quick, helpful replies. In the cases we have discussed, organizations used a wide range of tools, from enterprise-hosted solutions like Lithium or Get Satisfaction to smaller scale applications like Tender, and even open source applications like PunBB.
Moving from search to collaboration patterns marks an increase in complexity because we expect a dramatic, order-of-magnitude increase in participants. As such, we need online spaces that will let participants easily contribute feedback. And we need to expand our support capabilities. We also want to make it easy for people to discover ways to participate—for example, by providing notifications when there are new ideas to view. And participants need to know as soon as possible when someone responds to their feedback or comments, or when their idea is doing well in a challenge.
The spaces enabling the collaboration pattern are also very different from search spaces where only ideas are submitted. Since collaboration participants will receive ongoing feedback, they now need a way to organize it.
The core of good collaboration environments, therefore, has some similarities to social feeds like Yammer or Salesforce Chatter in the enterprise environment, or Facebook and Twitter for our personal communication. Ideally, the feed will include actions that are of particular interest to us, such as when there is a new opportunity to participate—for example when a new brief has been created or when someone has posted a new idea and is looking for feedback or a vote.
While the social feed is useful, participants will also want some way to control what they—and others—notice. They will need a private layer in order to interact with people in a way that nobody can see—like e-mail. And they will also want to be selective about the individuals with whom they share their actions. So it is no wonder that many of the collaboration spaces look a lot like our social networks. Platforms like jovoto, Quirky, Giffgaff, LEGO Cuusoo, and CEMEX Shift all have some variation of an activity feed to keep their participants informed about what is going on around them. And because we have so many personal and business feeds, there is often an additional level of messaging that allows us to determine what part of our feeds will show up in our e-mail. Sometimes these come in the form of a weekly summary; other times, every action can find its way into our inbox.
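The routing logic behind such notifications can be sketched simply: each event either triggers an immediate message or accumulates into a periodic digest, according to each participant’s preference. The Python snippet below is a minimal illustration; the event and preference schema is a hypothetical one chosen for the example, not a description of any platform discussed here.

```python
from collections import defaultdict

def route_notifications(feed, prefs):
    """Split feed events into immediate e-mails vs. a weekly digest.

    `feed` is a list of (recipient, event) pairs; `prefs` maps each
    recipient to "immediate" or "weekly" (the assumed default).
    """
    immediate, digest = [], defaultdict(list)
    for recipient, event in feed:
        if prefs.get(recipient, "weekly") == "immediate":
            immediate.append((recipient, event))
        else:
            digest[recipient].append(event)
    return immediate, dict(digest)

feed = [("ana", "new brief posted"), ("ben", "reply to your comment"),
        ("ana", "your idea received a vote")]
prefs = {"ana": "weekly", "ben": "immediate"}
immediate, digest = route_notifications(feed, prefs)
print(immediate)  # [('ben', 'reply to your comment')]
```

In practice, platforms layer many more options on top of this (per-project settings, mute lists, batching windows), but the core split between push and summary is the same.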
The feed is fleeting. So participants also need some way to keep track of what they have been doing as a means to establish their reputation. In some cases, it might simply be for bragging rights. In others, it might be so that we can identify the people who are making the most important contributions to the community. And sometimes this is going to also be used to determine compensation. As we will see below, keeping track of what people have contributed is the cornerstone of every collaboration platform.
In its simplest form, monitoring can give us a basic idea of how people are contributing. We can count basic actions, like how many times people have submitted an idea or how many times they have commented. While these actions don’t tell us anything about quality, per se, they do provide a useful starting point to separate those who are very active from those who are not.
However, if we want to use monitoring to do more—to determine compensation, for example—we need to do much better than approximate measures. And we must also be prepared for the inevitable attempts to manipulate the measurement system. If our participants don’t see the system as robust, then the best, most honest contributors will quickly stop showing up. Because so much value comes from smaller contributions in the collaboration pattern, monitoring is a core enabler of value exchange. So, our monitoring approaches must allow us to assess and reward high quality—and penalize poor-quality or manipulative actions.
At a minimum, a monitoring system must have ways for community managers to compare individual behaviors against norms. For example, how does a participant’s voting behavior compare to others? Do they tend to disagree with consensus opinions? Do the votes tend to favor certain people and not others? Can we establish any relationships between these people and those that they favor? It’s somewhat of an arms race—like catching people cheating on taxes or school tests. One side finds an approach that the other hasn’t detected until the other works around it. So while it’s desirable to have a current process in place, it is just as important to have a roadmap to show how the monitoring process has evolved—and what we anticipate happening next.
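One simple norm to check is how often a participant’s votes run against the group consensus. The following Python sketch illustrates the idea under an assumed up/down voting schema; real platforms track far richer signals, but the comparison against the majority is the same in spirit.

```python
def consensus_disagreement(votes, participant):
    """Fraction of a participant's votes that oppose the majority.

    `votes` maps idea -> {voter: +1 (up) or -1 (down)}; this schema
    is hypothetical, chosen only to illustrate the comparison.
    """
    disagreements = total = 0
    for idea_votes in votes.values():
        if participant not in idea_votes:
            continue
        consensus = 1 if sum(idea_votes.values()) > 0 else -1
        total += 1
        if idea_votes[participant] != consensus:
            disagreements += 1
    return disagreements / total if total else 0.0

votes = {
    "idea_a": {"p": 1, "q": 1, "r": 1},
    "idea_b": {"p": -1, "q": 1, "r": 1},
}
print(consensus_disagreement(votes, "p"))  # 0.5
```

A community manager would not act on this number alone; a consistently contrarian voter may simply have good taste. But outliers on several such measures at once are worth a closer look.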
The other part of monitoring has less to do with individual contributions and more to do with issues that might be endangering the community environment—anything from inappropriate content to potential intellectual property infringements. As with voting and ratings, online spaces need a way to signal when a problem is identified. There also must be a way to prioritize. For example, if someone new to the community identifies an issue (which nobody else has), it is very different from ten established members who have pointed out a problem. This again goes back to the importance of measuring contributions, and how it enables us to use reputations to sort through the issues that require the most urgent attention.
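One way to prioritize flags is to weight each report by the reputation of its reporter, so that ten established members outrank a single newcomer. A minimal Python sketch, assuming a hypothetical per-member reputation score:

```python
def flag_priority(reporters, reputation):
    """Score a flagged issue by the combined reputation of its reporters.

    `reputation` maps member -> score (e.g., derived from accepted
    contributions); unknown members get a baseline of 1. The scoring
    scheme is illustrative only.
    """
    return sum(reputation.get(r, 1) for r in reporters)

reputation = {"veteran_1": 10, "veteran_2": 8}
newcomer_flag = flag_priority(["new_user"], reputation)            # 1
veteran_flags = flag_priority(["veteran_1", "veteran_2"], reputation)  # 18
print(veteran_flags > newcomer_flag)  # True
```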
Think about your e-mail inbox—all the messages that require your attention. Many messages are from people but many others are triggered by actions such as purchases, or social media activity. Now imagine a community manager trying to respond to messages generated by community members and their actions—trying to decide, among an increasing number of inbound signals, what requires immediate attention. Community managers need some way to prioritize.
Finally, there is the link between picking ideas and understanding feedback. We know that a peer filter that is working well can do a lot to reduce the number of submissions that expert judges must review. A reasonable goal seems to be eliminating 80 to 90 percent of the submissions from consideration—thereby cutting the work of experts by a factor of 5 to 10. Because evaluation criteria can have multiple dimensions, rating systems should allow scoring along each of these dimensions. Voting systems should then provide the means to interpret results. For example, if an individual is found to display unacceptable behavior, their votes should not be counted.
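Putting these pieces together, a peer filter might average multi-dimensional ratings per idea, discard votes from participants flagged for unacceptable behavior, and pass only the top slice on to expert judges. The Python sketch below illustrates this under assumed data shapes; the dimension names and the 20 percent cutoff are examples, not prescriptions.

```python
def shortlist(ratings, banned, keep_fraction=0.2):
    """Rank ideas by averaged multi-dimension ratings and keep the top slice.

    `ratings` maps idea -> list of (voter, {dimension: score}); votes
    from `banned` voters are ignored. Schema is hypothetical.
    """
    scores = {}
    for idea, entries in ratings.items():
        valid = [s for voter, s in entries if voter not in banned]
        if not valid:
            continue
        # Average each dimension across voters, then average the dimensions.
        dims = valid[0].keys()
        per_dim = [sum(s[d] for s in valid) / len(valid) for d in dims]
        scores[idea] = sum(per_dim) / len(per_dim)
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = max(1, round(len(ranked) * keep_fraction))
    return ranked[:keep]

ratings = {
    "i1": [("ann", {"novelty": 4, "feasibility": 4})],
    "i2": [("ann", {"novelty": 3, "feasibility": 4}),
           ("mallory", {"novelty": 5, "feasibility": 5})],
    "i3": [("ann", {"novelty": 2, "feasibility": 2})],
    "i4": [("ann", {"novelty": 1, "feasibility": 1})],
    "i5": [("ann", {"novelty": 2, "feasibility": 3})],
}
print(shortlist(ratings, banned={"mallory"}))  # ['i1']
print(shortlist(ratings, banned=set()))        # ['i2']
```

Note how removing a single bad actor’s inflated votes changes which idea reaches the experts—exactly why vote exclusion matters.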
There should also be a way to select who will be allowed to vote. We have seen in multiple cases that it is often desirable to see opinions of both peers and specific stakeholders—like current and prospective customers or specific experts. Choosing from whom we want to receive feedback leads us to the more general question of enabling people to participate in specific workflows in the crowdstorm process.
We want to be able to control who gets access to more detailed information. We may also want to limit who can submit ideas. Perhaps we have a group of customers with a specific user experience—for example, professional downhill skateboarders or doctors who use a particular medical device. We might also want to limit participation to employees only for confidentiality reasons. Or, we might want to reward our community’s most active members with unique projects—say the top 5 percent of participants based on their overall contributions (revealed by our monitoring system).
Whatever the reason for doing so, we need a way to designate who can participate to enable us to move from public crowdstorm processes (everyone can do everything) to very selective participation (for example, only C-level executives can vote). As we get to the level of managing access for individuals and groups, we begin to approach the complexity of core enterprise software. And this is particularly true of the spaces used in the integration pattern.
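A simple way to picture this access control is as a permission table mapping each crowdstorm action to the roles allowed to perform it. The roles and actions in this Python sketch are illustrative assumptions, not a description of any platform discussed here.

```python
# Hypothetical permission table: action -> roles allowed to perform it.
PERMISSIONS = {
    "view_brief":    {"public", "customer", "employee", "executive"},
    "submit_idea":   {"customer", "employee", "executive"},
    "give_feedback": {"customer", "employee", "executive"},
    "vote":          {"executive"},  # e.g., a C-level-only selection round
}

def can(role, action):
    """Return True if the given role may perform the given action."""
    return role in PERMISSIONS.get(action, set())

print(can("customer", "submit_idea"))  # True
print(can("customer", "vote"))         # False
```

Real platforms add per-project scoping and individual grants on top of such tables, which is where the complexity starts to resemble enterprise identity management.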
The primary difference between the collaboration and integration patterns has to do with the extent of interactions between insiders and outsiders. It is not simply that they interact more frequently; they interact across more processes. For example, where Victorinox or the GAP might work with a community on only a few projects over a year, Giffgaff and Quirky are constantly collaborating with their communities across multiple projects. And these projects span processes at the fuzzy front-end of the product development and communications lifecycle all the way to sales and support.
In our discussions on community management we introduced the idea of nesting. We can break down crowdstorming into specific projects or steps in a design process—which has implications for all of the stakeholders. It also introduces more complexity for participants by giving them more places to participate. For instance—there might be two calls for new idea submissions while two other products have reached a point where prototype feedback is needed at the same time.
Nesting creates new challenges for community managers. Will community managers specialize by project or by particular type of contribution? Either way, the online space needs to keep track of more information and share more feedback among community managers. Taking a simple example: if a participant is misbehaving in one project, how do you signal to other community managers that this person has been sanctioned (and why)? Similarly, as community managers specialize in the area of support, they will need to be able to direct support requests to specialists—as interactions between participants and the organization become more complex, it becomes harder to direct support requests to the right people.
This probably sounds familiar, since it is similar to issues that customer relationship management systems encounter as they scale to address interactions with large groups of customers. In fact, we should not be surprised to find some of the platforms that were used in this book’s case studies—such as Lithium—in the emerging area of social customer relationship management. However, there are many others we haven’t discussed—Yammer, Get Satisfaction, Salesforce Radian6, to name a few—that are worth looking into.
As the integration pattern evolves, we expect to see more internal systems add the ability to measure outside contributors’ performance. At the same time, we also expect scoring and measurement from outside systems to be tied into internal employee evaluations. As the integration pattern implies, it is often hard to tell who is inside versus who is outside the organization. We might therefore expect the platforms designed for inside to reach out more—and those that are designed for outside to play a greater role for those inside.
There are many online spaces to choose from. We anticipate this space will grow as broader enterprise systems evolve.
For search patterns, the good news is that you can learn from what your marketing and communications teams are doing. Since much of the process is focused on recruiting, the considerations for a crowdstorming space are simple—with one exception: the evaluation process should enable a team to review and create a shortlist of the best ideas. The most complex aspect of the collaboration pattern—whether you use a technology platform or partner with an existing online community—is ensuring that you have mechanisms to understand contribution. If you are considering spaces for integration patterns, crowdstorming is going to be at the core of your business, so it is worth weighing the limits of social enterprise tools like Lithium or Yammer against the benefits of more focused crowdstorming tools like Brightidea or Cuusoo. As with the collaboration pattern, assessing and understanding contributions in detail across a large group of participants remains one of the big challenges, along with systems to support idea selection.