We know nothing about motivation. All we can do is write books about it.
—Peter F. Drucker
They filled a stadium, but they were not there to watch. They were there to audition, joining the roughly 10,000 people around the United States doing the same.
Since 2002, these massive tryouts had become the norm because of a simple promise: a fast track from obscurity to fame and fortune. For over a decade, American Idol demonstrated that it could discover talent from a seemingly endless stream of aspiring performers and set them on a track to stardom. American Idol winners have succeeded in almost every way that matters for musicians: they have gone on to win Grammys, made it to the top of Billboard charts, and even garnered Academy Awards—attaining success seemingly overnight.
American Idol has tapped into an age-old promise: all things are possible if you have the talent. While the motivation for some may have been sex, drugs, and rock and roll, for others it was the drive to build massive personal brands that could lead far beyond music to everything from fashion lines to sports franchises.
But first, you have to be a star.
In the past, talent discovery in the music world was largely hidden from public view. It took place in small clubs and behind the closed office doors of a select few king- and queenmakers. Music industry insiders were constantly seeking out and promoting new talent.
But American Idol turned this whole process inside out.
The program is unlike any other talent-discovery process in music to date because those who buy the music (the fans) also get to vote for those who will ultimately win the prize. Experts publicly share feedback and advice—sometimes offering inspiration but often judging with brutal honesty.
Each year, as the contest moves through the final rounds, fans can phone in or text their votes for their favorites. Most people are familiar with this process: those with the fewest votes are eliminated, while those with the most go on to the next round. Eventually, a winner is crowned during the final showdown. The process ensures that those who don’t win the ultimate prize will still net enough TV airtime and media exposure from the show to receive a variety of benefits. In the past, participants who failed to win the grand prize still landed tours and signed record deals. American Idol has made sure this is well understood, so while contestants may not like the odds of winning it all, they still have a shot at major fame.
How well does the American Idol model work? It is one of the highest-rated US television shows of all time.1 The winners receive recording deals and management contracts, along with other contracts. The top 10 participants go on tour and earn six-figure sums for their participation. And many of the winners are now household names.
American Idol achieves a range of outcomes that we discussed in the previous chapter. The show finds talent, gets feedback about the music products, and creates a great deal of conversation (and entertainment). And it gets much more than a potential star: by the end of the process, it has developed a brand name—a style, a persona, fans—and it can set about selling tickets and records with little additional development risk or effort.
What is at the heart of American Idol’s success? It seems that the show’s producers understand the key elements of motivating a wide range of participants. It is easy to understand why participants endure the various knockout stages of the competition: the process is a fast track to fame and fortune. And the system works: each year, the show creates superstars in full view of the public.
But the participants are just the start. The audience is doing more than simply watching; they are taking part in a process from which they had always been excluded. They had never before had a say in which musical acts were picked, nor an easy way to show or mobilize support for their favorite performers. American Idol tapped into a desire to participate, to own part of the process, to have a say in what happens. Long before social media enabled anyone to share their thoughts and opinions, this show’s producers understood this important need.
By providing a mechanism for audience members to participate, American Idol succeeds in some important ways. The show gets feedback about what the audience likes while encouraging viewer commitment. This is more than market research; it is an expression of support and a signal of intent to buy music, merchandise, or concert tickets.
This is how you beta-test superstar potential.
InnoCentive was one of the first firms to provide organizations with a platform for posting their most challenging problems for experts to solve. Far from the television studio environment of American Idol, InnoCentive quietly brings together a different type of superstar: scientists, economists, biologists, and engineers. But, like American Idol, InnoCentive has been addressing challenges with crowds for more than a decade. On any given day you might find problems on the InnoCentive platform that range from materials science to economic policy. In most cases, the format is simple: a question is posed, and solvers submit responses. Unlike American Idol, there is usually no audience or voting. The responses are reviewed by the organizations that ask the questions—the seekers, as InnoCentive calls them.
It may be easy to understand the allure of superstardom in the world of American Idol—but why do problem solvers solve problems at InnoCentive?
In 2006, Harvard Business School professor Karim Lakhani2 first surveyed solvers who participated in InnoCentive challenges. He found three main motivations:
1. The chance to win money.
2. The opportunity to work on socially worthy problems.
3. The recognition that comes with solving a challenge.
This mix of incentives makes sense if we think about traditional employment. For solvers—those who submit ideas in open challenges—the effort and time commitment are often significant, so it is no surprise that the incentive mix is close to what we have come to expect from full-time employment. At work, we might have one colleague who is focused on financial rewards and another who just wants to work on the most meaningful projects. And depending on their mission and culture, different companies might emphasize one facet over another.
But there is still much we do not understand, even about compensation for full-time employment.
In the late 1990s, Daniel Pink began to explore an interesting trend in the United States: more and more people were choosing to work for themselves. The notion of free agents comes from sports in the United States—players who are not under contract to a specific team and are therefore free to sign with new teams. Pink noticed that an increasing number of workers were electing to be free agents. His work toward understanding the Free Agent Nation (the eventual title of his 2001 book) would lead him to delve into what motivates this new breed. After all, why take the risks of striking out on your own if there are good-paying jobs to be had?
In his 2009 work, Drive: The Surprising Truth About What Motivates Us,3 Pink reviewed decades of scientific research on human motivation. While Lakhani had addressed the importance of incentives such as money, working on socially worthy problems, and receiving recognition, Pink identified three additional motivating forces:
1. Autonomy: the desire to direct our own lives.
2. Mastery: the desire to get better and better at something that matters.
3. Purpose: the yearning to do what we do in the service of something larger than ourselves.
Autonomy fits perfectly with what we see as a driving force for involvement in crowdstorming: in general, the crowdstorm process is opt-in. Unlike many full-time jobs, participants can choose whether to participate and at what level to join in, from creating to offering feedback. The simple option of choosing to participate is very powerful; in fact, it is part of what makes open challenges so attractive in the first place.
Purpose certainly overlaps with Lakhani’s findings, because it is often associated with doing something that will contribute value to society. Great examples of meaningful work have been gathered by Adam Grant, a Wharton School professor who has shown that people are likely to perform much better when they understand who they are working for.4 For example, Grant conducted a series of tests with groups of school fundraisers. He was able to double the number of weekly calls—and ultimately increase weekly revenue—for one group by introducing the fundraisers to a single scholarship student who had previously benefited from their efforts. In another study, he showed radiologists pictures of the patients whose X-rays they were reviewing. Radiologists usually view X-rays in disconnected circumstances; they might see a name, but never a picture, so they feel little connection to the patient. When they saw a picture of the person they were treating, their diagnostic results improved. Meaning and purpose are powerful motivators. Recall our discussion about context and storytelling in the briefing process? Professor Grant’s research confirms why this is so important.
Mastery involves opportunities to get better and, as such, focuses more on the process of participation than on the end result. Mastery is a function not only of the opportunities a task provides but of working with other people to improve skills. In a simple contest structure, people work independently, then submit their responses. Winners are announced, but usually with little feedback. The American Idol process turns this structure on its head by exposing participants to ongoing feedback and offering an opportunity to learn from the judges and the audience’s responses.
Opportunities for mastery depend on lots of interactions—receiving a great deal of feedback to learn and improve. As we will discuss later, some crowdstorm formats allow participants to gather feedback from multiple stakeholders—celebrities, domain experts, their peers, or potential customers. As feedback increases, complex work environments evolve; in fact, many online environments begin to mimic familiar office environments with a mix of work and social, formal, and informal interactions.
Beyond the work that Dan Pink covered, there is another critical factor: connectedness. This is perhaps one of the trickiest factors to pin down because it is a function of how others respond to you; that is, it has most to do with the other people who are also participating. Fortunately, we now have many online, large-scale examples to help us understand connectedness. We can look to one of the most successful of these to provide a useful example of what happens when we adjust critical connectedness variables.
In 2009 a number of sources began to report on a problem at Wikipedia, the online “encyclopedia that anyone can edit.” Despite continued growth, editors were leaving. More specifically, new editors were electing not to stay longer than a year. It had been typical for 40 percent of new editors to continue past their first year, but researchers found that this number was dropping at an alarming rate: only 10 percent of editors were choosing to continue participating after one year, a fourfold decrease.
Wikipedia found that it had a problem with connectedness between participants. When editor retention was high, activity on the Wikipedia platform included lots of feedback between participants of two main types: teaching and expressions of appreciation. When the retention rate faltered, researchers found that positive feedback had largely evaporated, replaced by criticism and warnings. This created a clear line between insiders and outsiders—between the core editors and those who might become editors. The core editors had made it difficult for new editors to connect, and in doing so they threatened the growth of the Wikipedia community. Wikipedia has taken the lesson to heart and now fosters connectedness as a core part of its ongoing strategy. We will discuss this further in Chapter 8.
With the above research in mind, we reviewed successful crowdstorming activities. We found that they generally provide some mix of the following elements: Good, Attention, Money, and Experience:
1. Good: the chance to contribute to something socially worthwhile.
2. Attention: recognition from peers, experts, or a wider audience.
3. Money: direct financial rewards such as prizes or revenue shares.
4. Experience: the opportunity to learn, improve, and build skills.
Figure 4.1 Understanding Commercial Expectations
The GAME framework highlights the different ways people get value from their participation. We can find more than one of the elements in most of the cases we explore in this book.
Our discussions so far have focused on a specific type of crowd: external talent, defined largely by the fact that they are not employees of the organization initiating the crowdstorm process. However, larger organizations are often able to tap into internal crowds of hundreds or even thousands of employees.
We know that motivating external audiences is a balancing act between intrinsic and extrinsic incentives. Inspiring participation across internal groups has some additional complexity. Extrinsic rewards play a somewhat less pronounced direct role where financial rewards are certain (for example, salary structures, commissions, and so on). However, participants still want to understand how participation and the associated recognition will work alongside their traditional compensation and evaluation process—and this in turn links directly to future opportunities and compensation.
Intrinsic motivation is essential for internal crowds. While focusing on good, attention, and experience yields the best results, organizations need to recognize contributions alongside traditional job performance so that the established hierarchies’ needs do not crowd out the incentives to participate across a network (in other words, outside the traditional hierarchy).
Apple and Google each worked with two groups of outside participants: they relied on software from the open source community for the core of their respective mobile operating systems and, at the same time, established different relationships with app developers. While their compensation structures are very different, both Apple and Google have had to check constantly that they reward each group fairly.
Put another way, anyone working with outside talent has to show an absence of bias and a healthy dose of balance between the needs of the organization and external talent. There are a few critical elements needed to ensure that the value exchange is fair:
1. A track record of delivering on promises.
2. Clear terms that spell out how incentives will be awarded.
3. Transparency: enough visibility into the process for participants to judge it fair.
Without these elements, there can be no trust that promised incentives will be delivered. And even if they are delivered, the process will likely remain in question if the terms have been unclear—or if there was insufficient visibility into the process to deem it fair.
Let’s look at each of these in a bit more detail:
A track record of delivering on promises is a critical characteristic of a brand. When organizations deliver on their promises, we happily commit and remain loyal. The same is true for brands that work with outside talent. No matter what the incentives, an organization’s ability to deliver on the promises it makes ensures ongoing access to talent and ideas, while a failure to deliver can cut it off from that talent.
The many elements of incentive schemes remind us of chemistry: many combinations are possible, and each attracts a different mix of participants.
It is not quite a science, but hopefully Figure 4.2 serves as a helpful reminder of what elements to consider in your incentive scheme.
Figure 4.2 The Table of Incentives
Let’s look at a few more examples of how organizations have motivated external talent through the chemistry of motivation.
One of the most successful brands leveraging crowdstorming for innovative ideas is a United States government agency. It has demonstrated time and again that it can find and motivate the best talent and ideas in response to some of the most challenging problems.
On March 13, 2004, racers competing for the first time in a new race lined up in the Mojave Desert. The race organizers, DARPA (the United States Defense Advanced Research Projects Agency), were interested in understanding the current state of robotics or, more specifically, the possibilities offered by driverless cars. As the race got under way, some of the contestants did not get off the start line; ultimately, none of the competitors completed the race.
But this was DARPA—the agency that gave birth to the Internet. It has established a reputation for delivering on its promises, and doing good work for DARPA can pave the way to many great things. It was therefore no surprise that almost 200 participants registered when the race was run again in 2005. After a number of qualification events, 23 robot cars lined up to take up the challenge on the morning of October 8, 2005. The winner emerged almost seven hours later when Stanley, the robot car, successfully crossed the finish line. Developed by a team from Stanford, Stanley earned the team a $2 million prize.
Now this is where the story might end for most challenges; but for Stanley, this was just the beginning. What followed was an explosion of media interest, with Stanley’s team leader, Sebastian Thrun, at the center.
The winning robot car and the Stanford team were featured in Wired and Scientific American—among many others.5 Thrun was named to Popular Science magazine’s Brilliant 10 as one of the 10 best and brightest minds in all of science. He was interviewed by Charlie Rose, and the team was covered on CNN and in a NOVA documentary. When asked how the victory changed his life, he responded, “Oh, big time! It is even changing Stanford as a university.” Thrun cited the various collaboration projects that came out of the race, including new links with the automotive industry and plans for a new, on-campus, 8,000-square-foot research facility. After the win, Wired magazine declared Stanley the number one robot of all time, beating out 50 other real and fictional robots ranging from Spirit, NASA’s Mars rover, to Transformers’ Optimus Prime. Stanley is now at the Smithsonian’s National Museum of American History.
What Thrun did not anticipate was that winning the challenge would lead to a job at Google. A few years after the DARPA challenge—and after Thrun began at Google—the company revealed to the New York Times that its fleet of robot cars had driven 1,000 miles without drivers, based on Thrun’s work.6 Thrun appeared at the annual TED conference (TED is a global set of conferences formed to disseminate ideas worth spreading) in early 2011 and described how the driverless cars had by then driven over 140,000 miles. In 2012, the Google team obtained the first license for a driverless car in Nevada—less than a decade after DARPA issued the first challenge.
Of course, a great deal of development has happened since the DARPA challenge. Even so, it’s hard to overstate the role that attention played in the winning team’s fortunes. DARPA delivered on its promise: participants can depend on them to generate attention through media and attract prospective funders for the challenge winners.
DARPA runs a particular type of crowdstorming project: they usually focus on one type of contribution that they can evaluate against a specific goal—for example, being first to complete a designated task.
In contrast, we have seen other cases—like LEGO Cuusoo, GE Ecomagination, and American Idol—that are based on more complex evaluations of ideas and people by other types of participants.
Incentives need to focus not just on the big, time-consuming roles like idea submission, robot building, or singing, but also on the supporting roles. The challenge in motivating supporting roles is scale: it is not uncommon to have a great number of participants fulfilling them. DARPA had 200 teams who wished to compete in the robot challenge. LifeEdited and Betacup received ideas numbering in the hundreds, but the participants offering feedback and rating ideas numbered in the tens of thousands. At LEGO Cuusoo, ten thousand people vote on a single idea.
We will discuss participation roles in detail later on; however, it is important to think about how to encourage these other contributions. And establishing a reward structure for them means measuring participation.
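To make this concrete, here is a minimal sketch of what measuring participation could look like. The roles and point values are invented for illustration; a real platform would weight contributions very differently.

```python
from collections import defaultdict

# Hypothetical point values for each role; not taken from any
# platform described in this chapter.
POINTS = {"idea": 100, "comment": 5, "vote": 1}

def tally(events):
    """Sum points per participant from (participant, role) events."""
    scores = defaultdict(int)
    for participant, role in events:
        scores[participant] += POINTS[role]
    return dict(scores)

events = [
    ("ana", "idea"),
    ("ben", "comment"),
    ("ben", "vote"),
    ("cho", "vote"),
]
print(tally(events))  # {'ana': 100, 'ben': 6, 'cho': 1}
```

Even a ledger this simple makes supporting roles visible and rewardable; the hard part is choosing weights so that high-volume, low-effort contributions do not swamp the scarce, high-effort ones.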
Quirky is a consumer products company that uses crowdstorming to find, evaluate, and refine ideas for new products.7
Consider an everyday example: you excitedly remove a new gadget you’ve purchased from the box, then get ready to charge it. You have one more space on your power adapter, but there is no way to reorganize the current adapters and cords to let you access that free spot. If you are thinking, “but someone must have solved this by now,” you would be right. Or, you would almost be right. In fact, thousands of people submit ideas to solve problems like this every day at Quirky, and other people—including Quirky employees—vote on these ideas to influence which are produced. The idea that solved this particular problem was called the Pivot Power adapter, and it was developed via Quirky’s crowdstorming process in 2011. In 2012, it won the Red Dot design award, a prestigious international product design competition based in Germany.
But where did the Pivot Power come from?
It was invented by Jake Zien with the help of Quirky and 855 other people. How does Quirky know that this precise number of people helped to bring this idea to life?
Quirky has built a platform to measure how people contribute to the product development process, awarding points for each contribution. It then uses these points to calculate how much to pay each contributor every time a unit of that product—such as the Pivot Power—is sold. By June 2012, almost 250,000 units of the Pivot Power had shipped, and almost $300,000 had been paid out as a share of revenue to the 855 people who had contributed.
Unlike at XPrize or GE Ecomagination, people grant all the rights to their ideas to Quirky in exchange for a minimum revenue share associated with being the idea’s inventor. Submitters have some ways to reacquire the rights if Quirky does not use the idea. And, unlike XPrize or GE, anyone can join in and try to make the product more successful in a variety of ways, from coming up with a better design to helping find a market.
So while some inventors might try to bring their own products to market and own it all, others will prefer Quirky’s approach. They’ll reason that owning 15 percent of the revenue is incentive enough—especially when coupled with the help they will get with design and the opportunity to achieve much more success producing and marketing their idea. In fact, Quirky goes to great pains to enable people to calculate their likely income by publishing some parameters, including how much they influenced an idea and how many units are sold at the retail and wholesale levels.
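A rough sketch of that arithmetic follows. The 15 percent community share comes from the text; the unit count, price, and influence fraction below are invented for illustration, and Quirky’s actual formula is certainly more involved (for example, retail and wholesale sales carry different terms).

```python
def estimated_income(units_sold, unit_price, influence, community_share=0.15):
    """Estimate one contributor's payout from a product's sales.

    units_sold      -- units sold in the period (hypothetical)
    unit_price      -- revenue per unit; retail and wholesale could be
                       modeled as separate calls with different prices
    influence       -- this contributor's fraction of the community pool
    community_share -- fraction of revenue shared with all contributors
    """
    revenue = units_sold * unit_price
    community_pool = revenue * community_share
    return community_pool * influence

# A contributor holding 2 percent of the pool, on 10,000 hypothetical
# units at $25 each: 10,000 * 25 * 0.15 * 0.02
print(estimated_income(10_000, 25.0, 0.02))  # 750.0
```

Publishing parameters like these lets would-be contributors run the numbers themselves before deciding whether the trade of rights for a revenue share is worth it.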
There is also the matter of being credited as the inventor, and having not just your name but also your face on the packaging. So even if you don’t find the revenue share idea that compelling, there is the added benefit that you may become known in retail outlets and homes around the world.
Quirky’s model for motivating talent has succeeded—and is now even influencing mainstream retail. When Quirky was starting out, items were sold online in modest volumes with modest revenue. Today, it counts large traditional retailers such as Target, Amazon, and Toys “R” Us among its partners. And through this coalition, it can deliver larger rewards to Quirky contributors.
Understanding what motivates your desired participants is the key to establishing the right incentives and setting up successful crowdstorms. The American Idol example shows how innovation is always possible—even in a well-established industry—if you can come up with an effective incentive formula for finding new talent and ideas. The GAME framework helps us imagine the mix of possible incentives.
But incentives alone cannot do the work. Participants need to believe that an organization will deliver on its promises; clear terms and transparency are critical in these kinds of interactions. Finally, it is easy to overlook the supporting contributions—comments, voting, and recruiting, to name just a few. Rewarding these contributions requires measuring them; as you add layers of contribution, ensure that you can measure each one.
The complex mix of incentives offers a first glimpse into the mix of skills and resources necessary to make crowdstorming work. Incentives often drive the need for coalitions to help plan, organize, and operate crowdstorm processes.
Let’s see specifically how this works.
Notes
1. For more details see Wikipedia, en.wikipedia.org/wiki/American_Idol.
2. Karim R. Lakhani, Lars Bo Jeppesen, Peter A. Lohse, and Jill A. Panetta, “The Value of Openness in Scientific Problem Solving,” Working Paper, October 2006, www.hbs.edu/research/pdf/07-050.pdf.
3. Daniel Pink, Drive: The Surprising Truth About What Motivates Us (New York, NY: Riverhead Books, 2009); and Daniel Pink, Free Agent Nation: The Future of Working for Yourself (New York, NY: Warner Business Books, 2001).
4. Adam Grant, “How Customers Can Rally Your Troops,” Harvard Business Review (June, 2011), http://hbr.org/2011/06/how-customers-can-rally-your-troops/ar/1.
5. Joshua Davis, “Say Hello to Stanley,” Wired (January 2006), www.wired.com/wired/archive/14.01/stanley.html; and W. Wayt Gibbs, “Innovations from a Robot Rally,” Scientific American (December 26, 2005), www.scientificamerican.com/article.cfm?id=innovations-from-a-robot-2006-01&sc=I100322.
6. John Markoff, “Google Cars Drive Themselves, in Traffic,” New York Times, October 9, 2010, www.nytimes.com/2010/10/10/science/10google.html?_r=1.
7. For more details see www.quirky.com.