I’m the decider, and I decide what is best.
—G. W. Bush
Ninety percent of [science fiction] is crud, but then, ninety percent of everything is crud.
—Theodore Sturgeon
There is one problem with everything we have told you so far. If you succeed, you will find yourself with many, many ideas, proposals, or even prototypes. We have taken to calling this glut the tyranny of ideas.
Organizational decision-making has not lacked approaches to selecting the best ideas; methods like SWOT (strengths, weaknesses, opportunities, and threats), cost–benefit analysis, or decision tree analysis might all sound familiar. But the output from crowdstorming is at a scale that these approaches never anticipated. Our concern, then, is not to review all decision-making frameworks, but simply to highlight what appears to be working best when dealing with large numbers of ideas.
One of the things we emphasized when asking for ideas was to be clear about how complete you want the answers to be. In other words, are you looking for a sketch on the back of a napkin, or a fully functioning business (or any number of stages in between)? When submissions take the form of working prototypes, you can directly measure each proposal's merit to find the best, whatever "best" means in your case (for example, the fastest time, the least error, or the lowest cost).
A good example of direct testing is Netflix, which asked data scientists to help them improve the way they made movie recommendations. Though the company had developed its own recommendation system, Cinematch, they thought there was an opportunity for improvement. They believed that by sharing their data and asking external talent for help, they would be able to come up with a better algorithm—thereby improving their ability to help customers find the movies they would love. They knew the $1,000,000 prize they were offering would garner a lot of responses—so they needed a robust approach to fairly and accurately evaluate them.
Their criteria were straightforward:
We provide you with a lot of anonymous rating data, and a prediction accuracy bar that is 10 percent better than what Cinematch can do on the same training data set. (Accuracy is a measurement of how closely predicted ratings of movies match subsequent actual ratings.) If you develop a system that we judge most beats that bar on the qualifying test set we provide, you get serious money and the bragging rights. But (and you knew there would be a catch, right?) only if you share your method with us and describe to the world how you did it and why it works.1
Core to the success of the Netflix challenge were two things: asking for ideas that could be tested directly and having a system that could evaluate lots of them.
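To make the mechanics concrete, here is a minimal sketch of that kind of automated scoring. The Netflix Prize measured prediction accuracy as root mean squared error (RMSE), and a submission beat the bar if its error was at least 10 percent lower than Cinematch's; the ratings below are illustrative placeholders, not contest data.

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def beats_bar(candidate_error, baseline_error, required_improvement=0.10):
    """True if the candidate's error is at least 10 percent lower than the baseline's."""
    return candidate_error <= baseline_error * (1 - required_improvement)

# Illustrative placeholder data, not actual contest figures.
actual_ratings     = [4, 3, 5, 2, 4]
submission         = [3.9, 3.1, 4.7, 2.3, 4.1]
baseline_cinematch = [3.5, 3.5, 3.5, 3.5, 3.5]

print(beats_bar(rmse(submission, actual_ratings), rmse(baseline_cinematch, actual_ratings)))
```

Because the scoring is automated, evaluating thousands of submissions costs little more than evaluating a handful.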
Another example of direct testing comes from the Defense Advanced Research Projects Agency (DARPA), whose Red Balloon Challenge we discussed in Chapter 6. To recap: DARPA wanted to explore the roles that the Internet and social networking play in real-time communications. During the challenge, teams had to locate 10 red balloons placed around the United States and then report their findings to DARPA. The difficulty lay in the distributed nature of the contest, which required teams to be smart about using online resources. The genius of DARPA's approach was that the agency did not need to consume resources identifying the best ideas. They structured the challenge in a way that allowed them to easily evaluate responses and determine which teams arrived at the right answers first.
How did they do it? Though DARPA had more than 4,000 entrants, they managed to create a system that let them easily identify the best submissions. Participants had to submit answers in the form of coordinates for the locations of the distributed balloons over the course of the day. DARPA could quickly check these coordinates against the known answers; a guess was either right or it was not. So, even if each team submitted multiple guesses for the 10 locations, rocketing the number of submissions to 400,000, the evaluation remained simple. DARPA was able to identify the top 10 winners out of 4,000 in short order. They then conducted debriefs with the top teams, offering an opportunity to further understand whether there had been any rule breaking.
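The chapter does not describe DARPA's checker in detail, but the underlying comparison is easy to sketch: test each submitted coordinate against the known balloon locations and count the hits within some tolerance. The one-kilometer tolerance and the coordinates below are invented for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def score_team(guesses, balloons, tolerance_km=1.0):
    """Count how many true balloon locations have at least one guess within tolerance."""
    return sum(
        any(distance_km(guess, balloon) <= tolerance_km for guess in guesses)
        for balloon in balloons
    )

balloons = [(37.7749, -122.4194), (40.7128, -74.0060)]   # made-up ground truth
guesses  = [(37.7750, -122.4200), (41.0000, -75.0000)]   # one hit, one miss
print(score_team(guesses, balloons))                      # -> 1
```

Ranking teams then reduces to counting hits and comparing timestamps.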
The advantages of direct testing are clear. You do not need to have people evaluate each idea. Once you have a test process, you can easily scale it to test many ideas. The downside of direct testing is that it does not work for lots of crowdstorming projects, since it requires that a "functioning" or working version of an idea be built before any evaluation can happen. And, as we have seen, the incentives required to motivate participants to make this investment are generally high, and offering them is not always practical.
So, in a significant number of crowdstorm projects, we are dealing with submissions that cannot be directly tested. In these situations, people are required to decide. Those people include experts and crowds, and the processes they use to decide are many.
Many of our search pattern examples demonstrate the use of experts who assess the value of submissions. We have seen how organizations like InnoCentive and P&G solicit ideas, then work with domain experts to pick the best responses. In simple terms, the group that was concerned with ideas generated within the company is now responsible for reviewing additional ideas from outside. We know this approach works, not just for science and engineering, but also across a range of other domains—from advertising to investing. The main challenge—how do you make this approach work for many more ideas?
Victors and Spoils (V&S) is an advertising agency inspired by a simple idea: great advertising talent can be found all over the world. V&S believes that while traditional agencies might be able to assemble some talented teams, most of the world's talent will never work inside any single agency. The V&S model brings together their expert creative directors and strategists with thousands of talented advertising copywriters, graphic designers, art directors, interaction designers, account managers, and strategists.
The first proof of what might be possible came in 2009 when world-renowned advertising agency Crispin Porter + Bogusky convinced a client to crowdsource the design of a logo. The experiment was met with a mix of responses. On one end, some designers argued that the process exploited participants. Participants had to propose a logo design without compensation. Although requesting proposals is not unusual, it is unusual that the proposal was also the work product, often called spec work. On the other end of the spectrum, there was excited speculation about what this might mean for future advertising creative work. The experience confirmed for John Winsor, who cofounded V&S, that there may well be a very different way to do advertising work—that it could be possible to create an agency where much of the talent existed outside the core organization. And so V&S was born.
While V&S has experimented with a number of ways to work with the crowds of creative talent, they have relied on a decision-making and evaluation model that depends on the expert: their organization’s creative director. In fact, V&S cofounder Evan Fry had already won awards and was recognized as one of the most talented creative directors in the business. V&S designed their model around people like Evan, with deep expertise in working with different creative talent to produce great advertising. While V&S gets the best ideas from anywhere, the creative director is the decider.
While a head-to-head comparison with existing processes is difficult, there are positive signs for V&S’s pioneering process. A number of global brands signed on as clients, and within three years of building V&S, this new agency was acquired by one of the leading advertising agency holding companies—Havas.
While creative directors are concerned with evaluating creative ideas in the service of advertising, venture capitalists are concerned with the future prospects of businesses. Venture capitalists face a similar issue of trying to make complex judgments about an idea's future potential. In most cases, their primary concern is deciding whether to make substantial investments of time and money to develop promising ideas.
One way to simplify the evaluation process is to let people know in advance what questions you will ask. As we noted in Chapter 3, Sequoia is one of the world’s leading venture capital investment funds. They are clear about what they expect to learn from entrepreneurs; their call to action frames the questions and establishes their evaluation criteria. To get a sense for the complexity of this evaluation problem, let’s explore some of their criteria.
One criterion that is often cited is market size, or addressable market size. In other words, assuming all goes well, how big might a company turn out to be? The addressable market sets the upper limit for a company's size, and therefore the upper bound on returns for venture firms. But this is no simple question, because any number will rest on a variety of assumptions. For example: how would you forecast Google's advertising revenue years into the future? How would you estimate the potential market size for companies creating entirely new markets, as Atari and Apple did? Or what about the adoption of a new payment platform like Square? Sequoia invested in all of these firms before most of us had ever heard of them.
The same issue of assumptions plagues most criteria at the early stages. In many cases, just to evaluate some of the claims, venture firms will turn to a range of experts—from professors to industry leaders—to help understand specific assumptions. Through a series of conversations and analysis, the investment team is able to evaluate both the assumptions and, ultimately, the business potential.
Sounds like a lot of time-consuming, complex work? Yes indeed.
The number of hours required from even seasoned investors to evaluate business plans can be crushing. Consider the time and resources that would have been required if DARPA had asked for 4,000 business plans for the Red Balloon Challenge, rather than simply asking everyone to solve a problem whose results could be measured directly.
As Andrew McAfee of the MIT Center for Digital Business has pointed out, many “established organizations still practice ‘decision making by HIPPO (Highest Paid Person’s Opinion).’”2 Google marketing evangelist Avinash Kaushik coined the term HIPPO. Since 2006, Avinash has helped organizations understand how they can use data to make decisions rather than relying exclusively on HIPPOs. Building on his ideas about the limitation of HIPPOs and the promise of data from large groups of people, McAfee notes that we can look to crowds during the decision phase in the same way that we look to them for ideas. So we can see a continuum emerging. On the one end is the traditional approach of leveraging experts to select the best; the other end employs the wisdom of crowds, which relies on analysis of data and feedback from multiple stakeholders about the proposed idea’s fitness.
Strong cases have been made for both the wisdom and the madness of crowds. The catch is that crowds are wise only for certain kinds of tasks. So which types of decisions are best suited to crowds? As David Leonhardt puts it,3 crowds or "markets are at their best when they can synthesize large amounts of information. . . . Experts are most useful when a system exists to identify the most truly knowledgeable. . . ."
If we think about Victors and Spoils or Sequoia, we have good reason to believe that the people reviewing the ideas are some of the most knowledgeable in their fields. Crowds tend not to select the best results or the most feasible ideas when they have insufficient information for making a valid choice, or an inadequate ability to synthesize it. Adding many people who know nothing about advertising or investing is not going to increase the chances of picking the best ideas.
But it's even more complicated than that. It's useful to ask a group of potential users to evaluate a new product or service because they have experience with the problem and will be the ones living with the solution. But asking this same group to weigh in on something like which new energy technology might win out in ten years is an entirely different matter. There is some level of expertise required just to understand the basis for competition among energy technologies, from the physics involved to the economics of moving from test-scale to production-scale energy production.
Beyond the issues of expertise and diversity, there is the question of how to best capture feedback from larger groups.
Most of us are familiar with simple up-and-down voting or, increasingly, liking or supporting. This type of signal provides a way to build a filter. At a basic level, gathering more likes means more people support an idea. But while the act of voting is simple, it can lead to biases.
One simple issue is that some people vote once, while others vote much more often. The frequent voters slant the outcome, and the system can quickly become biased. A fix for this problem is simply to limit the number of votes per person so that everyone is equally represented. But does the simple majority vote alone ensure that the voting is fair?
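As a hedged sketch of that fix, assume each vote arrives as a (voter, idea) pair; keeping only each person's first vote before tallying prevents frequent voters from slanting the count. The names and idea labels are invented.

```python
from collections import Counter

def tally_one_vote_per_person(votes):
    """
    votes: list of (voter_id, idea_id) pairs in the order they were cast.
    Keeps only each voter's first vote, then counts support per idea.
    """
    seen = set()
    counts = Counter()
    for voter, idea in votes:
        if voter in seen:
            continue          # ignore repeat votes from the same person
        seen.add(voter)
        counts[idea] += 1
    return counts

votes = [("ann", "A"), ("bob", "B"), ("ann", "B"), ("ann", "B"), ("cem", "A")]
print(tally_one_vote_per_person(votes))   # Counter({'A': 2, 'B': 1})
```

The same idea generalizes to capping each person at some fixed number of votes rather than one.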
Let’s look at a couple of examples that demonstrate different approaches in order to answer this question.
In the United States presidential election, the so-called popular vote is the majority vote; it is the sum of votes cast across the United States for each candidate. However, this is not the math that matters for deciding the future president. Votes in each state count toward the Electoral College, and each state has a certain number of electoral votes that it can cast. Once the majority vote determines each state's winner, that state's electoral votes are assigned to the winning candidate (with a few exceptions). The candidate who receives the majority of these Electoral College votes wins.
The world of sports offers a different set of voting approaches. Participants in a knockout or elimination model compete to advance through a series of rounds. Losers are eliminated while winners go on to the next round. In a league model, teams are assigned points for different outcomes—win, lose, or draw. The team with the most points at the end of season is the winner.
Researchers at Xerox were curious about how different online voting approaches compare, and specifically how the efficacy of simple majority voting stacks up against tournament or elimination models. To do this, they ran a series of tests with known answers, which let them see under which voting structures the crowd would reach the right conclusion. They found that where majority voting failed, tournament and elimination models worked.4
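The cited study's setup is richer than we can reproduce here; the toy sketch below only illustrates the knockout mechanic itself, with a simulated noisy crowd. The ideas, hidden quality scores, and voter accuracy are all invented.

```python
import random

def pairwise_winner(a, b, ask_crowd, voters=25):
    """Ask the crowd to compare two ideas; the one preferred by the majority advances."""
    votes_for_a = sum(ask_crowd(a, b) for _ in range(voters))
    return a if votes_for_a > voters // 2 else b

def knockout(ideas, ask_crowd):
    """Single-elimination tournament (assumes the number of ideas is a power of two)."""
    remaining = list(ideas)
    while len(remaining) > 1:
        remaining = [
            pairwise_winner(remaining[i], remaining[i + 1], ask_crowd)
            for i in range(0, len(remaining), 2)
        ]
    return remaining[0]

# Invented example: each idea has a hidden quality; a voter prefers the truly
# better idea 70 percent of the time (a "noisy" crowd).
quality = {"A": 0.9, "B": 0.6, "C": 0.5, "D": 0.3}

def noisy_voter(a, b):
    prefers_better = random.random() < 0.7
    return prefers_better == (quality[a] > quality[b])

print(knockout(["A", "B", "C", "D"], noisy_voter))  # usually prints "A"
```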
We can start to see that while the idea of majority voting is attractive in its simplicity, there is still much to consider to get the best from a crowd. Another limitation comes to light when we want to know how much someone likes something. In this situation, we want more than simple binary feedback; here we are looking for some detail. A very popular alternative to the simple like is a rating, which has now become a mainstay of consumer reviews on sites from Amazon.com to Yelp. In the context of crowdstorming, the reviews are simply responses to ideas, rather than existing products or services. Our case studies have shown that as reviews become more complex, we get a deeper understanding of how evaluators see an idea’s strengths or weaknesses. But the downside again is that this requires more work from participants. And, as we’ve learned, with increasing complexity and time commitment, participation drops off quickly, following a power law.
We find ourselves with an interesting trade-off. We can gather simpler feedback from more people in a majority vote mode, or we can collect more detailed—and potentially knowledgeable—feedback from smaller groups. We can also ask for multiple rounds of feedback using knockout or league models to improve the process; however, this creates the risk of having fewer participants since we have increased the work again. We can offer to compensate people for their feedback. As we noted in the last chapter, once we are tracking feedback, we are in a position to reward people for it. But there is still a limit to how much additional participation we can expect if we more than double the amount of work per person.
We know that current or prospective customers’ opinions are important. We also have an opportunity to find and listen to another group of potential decision-makers whom we have met in previous chapters—participants playing supporting roles like giving feedback or voting. Let’s look at this group first and then discuss how customers help us select the best ideas.
The Starbucks Betacup case study offers a window into how a number of voting options can be combined to provide different perspectives and lenses for evaluating results. The participating group was designed to include members with domain knowledge as well as those with an interest in the challenge (prospective end users). All participants could submit ideas, collaborate by commenting on them, or team up and vote on ideas.
This approach provided three mechanisms to select the best ideas. First, it allowed for majority voting by the community (or peers): idea submitters, collaborators, and participants who only cast votes. Second, the Betacup team invited an expert jury composed of domain specialists from the product design, marketing, and environmental sustainability worlds.5 And finally, representatives from Starbucks planned to take these selections into consideration, but also had the option to select their own winner for possible licensing.
What was most interesting about the process is that it provided a way to contrast the expert process with the peer review process. If you recall, the contest received 430 ideas that were updated over 1,500 times in response to feedback based on over 5,000 comments and 13,000 ratings. The ratings were used to produce running rankings for the ideas. This in turn helped participants understand how they were doing, so they could come back and refine their submissions.
In the final two weeks of the Betacup challenge, the contest went into a rating period. At this stage, no further idea updates were permitted and the expert jury and peers could focus on ratings. Because the contest was open to anyone, people could also invite their friends to rate and offer feedback. This is where we saw the issue of bias we discussed earlier in this chapter. The ratings process moved from providing knowledgeable feedback and peer review into a popularity contest—something that the data made very easy to see.
In the final days of the contest, the rankings began to move around quite visibly. Submissions that had done quite well throughout the contest moved down, while new submissions were appearing at the top. The issue became clear as we looked at the data: some of the friends and supporters that people had recruited were abusing the ratings system. This was not a surprise, really; in fact the Betacup terms and conditions contained a number of stipulations about the voting function, defining what they meant by fair.6 For example, people agreed not to create fake identities for the purpose of voting, and users agreed to not deliberately give bad ratings to improve their own ideas’ standing (what we have called bashing).
The platform used for the Betacup had a number of tools to evaluate individual rater performance. We were able to look at how far a participant’s behavior deviated from the mean. For example, if a person gave a relatively high vote for one submission and then relatively low votes to others, they would be designated a poor voter. Using this and other signals, jovoto was able to identify bashers and exclude these votes from the final count. This move helped ensure that rankings could be derived from peer ratings.
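We do not know jovoto's exact detection rules, but the kind of signal described above can be sketched: compare each rater's scores with the average score each idea receives, and flag raters whose average deviation is strongly negative. The ratings below are invented, with "p1" playing the basher.

```python
from collections import defaultdict

def rater_bias(ratings):
    """
    ratings: list of (rater, idea, score) tuples.
    Returns each rater's average deviation from the per-idea mean score;
    a strongly negative value suggests systematic down-rating ("bashing").
    """
    by_idea = defaultdict(list)
    for _, idea, score in ratings:
        by_idea[idea].append(score)
    idea_mean = {idea: sum(scores) / len(scores) for idea, scores in by_idea.items()}

    deviations = defaultdict(list)
    for rater, idea, score in ratings:
        deviations[rater].append(score - idea_mean[idea])
    return {rater: sum(d) / len(d) for rater, d in deviations.items()}

ratings = [("p1", "A", 9), ("p1", "B", 1), ("p1", "C", 1),   # high for one idea, low for the rest
           ("p2", "A", 7), ("p2", "B", 8), ("p2", "C", 6),
           ("p3", "A", 6), ("p3", "B", 7), ("p3", "C", 7)]
print(rater_bias(ratings))   # "p1" shows a strongly negative average deviation
```

In practice, a single signal like this would be combined with others before any votes were excluded.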
After accounting for biases, jovoto published the final ranking. The experts' winner was ranked seventeenth out of 430 ideas in the popular vote. This seems like a substantial disagreement between the expert and peer review processes, but consider the expectation: seventeenth out of 430 still places the experts' pick in the top 4 percent of the peer ranking. Do we expect peer reviews to pick the winner outright? Or is there a benefit if peer review can filter out 90 percent of the ideas, so that experts can focus on the top 10 percent?
If you were a participant in the Betacup, you likely experienced a range of emotions in those final days. You might have woken up in the middle of the voting period to find that your ranking had dropped or improved with no easy explanation. You would likely be angry, and rightly so. How could this process be fair when the ranking system appeared to be open to manipulation? To address this, jovoto held open discussions with participants.7 People made a number of interesting suggestions, one of which was easy to implement for future contests.
A system was proposed to identify people who had contributed above a certain level using jovoto's karma points system; their votes would count only if they reached a certain threshold of points (demonstrating that they were valuable participants). In other words, jovoto's monitoring system was being used to sort peers from friends, a first step toward peer review rather than a popularity contest.
We reran the voting using the contest data and included only the votes for people who had participated above a certain threshold—that is, those who had earned a certain amount of karma, identifying them as peers. We found that this subgroup did better at predicting the expert selection.
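A minimal sketch of that re-run, assuming ratings arrive as (voter, idea, rating) tuples and karma is a simple per-voter point total; the 50-point threshold and all names are arbitrary stand-ins, not jovoto's actual values.

```python
def rank_ideas(votes, karma, threshold=50):
    """
    votes: list of (voter, idea, rating); karma: dict mapping voter -> karma points.
    Keeps only votes from voters at or above the karma threshold, then ranks
    ideas by their average rating among those "peer" voters.
    """
    totals, counts = {}, {}
    for voter, idea, rating in votes:
        if karma.get(voter, 0) < threshold:
            continue                      # drop votes from low-karma accounts
        totals[idea] = totals.get(idea, 0) + rating
        counts[idea] = counts.get(idea, 0) + 1
    averages = {idea: totals[idea] / counts[idea] for idea in totals}
    return sorted(averages, key=averages.get, reverse=True)

karma = {"peer1": 120, "peer2": 80, "friend1": 0}
votes = [("peer1", "cup-A", 8), ("peer2", "cup-A", 9),
         ("peer1", "cup-B", 6), ("friend1", "cup-B", 10)]
print(rank_ideas(votes, karma))   # ['cup-A', 'cup-B']; friend1's vote is ignored
```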
As discussed previously, it is important for a community to be clear about who belongs to it, and therefore who can take part in processes like collective choice or deciding on the best submissions. Measuring and monitoring contributions helps identify peer groups to include in the selection process. This gives us a way to select, from a large pool of participants, a group that can filter ideas well. Such a group helps ensure that the best ideas are not lost in a sea of submissions and makes it easier and more efficient for experts to focus on the most likely candidates.
Peer review has its place; but it also has its limits.
In September 2012, Quirky founder Ben Kaufman described how Quirky's community feedback had changed over the past few years. When Quirky started, the community was very focused on particular types of products, including accessories for electronics such as iPhones and iPads. But as the Quirky community has grown, so has its focus. As Ben put it, ideas "for a new type of baby bottle shouldn't be shat on by a 43-year-old hardcore gamer who lives in his parents' basement. Similarly, his Monster Energy coozie shouldn't necessarily be reviewed by kindergarten teachers."8
This is a real challenge for communities. On the one hand, members of a community want to participate in decisions about what the community does. It is part of the collective choice and membership institutions we discussed in Chapter 7. At the same time, it is necessary to qualify feedback on the basis of whether or not someone is a potential customer.
LEGO Cuusoo has a similar issue, but a different approach to the problem. Anyone can submit ideas to be reviewed by Cuusoo fans, and submitters are free to invite people to vote for their ideas. But this simple majority vote model is less about feedback and more about filtering. If an idea receives 10,000 votes, LEGO's internal team will review it. Beyond this initial filtering, there are many more opportunities for feedback and testing. For example, LEGO's internal team will work with the fans who proposed and supported an idea to come up with a final design to take to production. Those fans can help refine and test everything from price point to packaging.
The LEGO Cuusoo program is relatively new, so only a few ideas are in production. While LEGO has not shared how these products are doing from a sales perspective, the process appears to be working along a few dimensions. First, one user has already had two ideas nominated; he is now on LEGO's radar as someone they might want to work with more closely. In this way, the approach has helped LEGO discover new talent. It has also been very effective at generating conversations and ongoing awareness for LEGO, through the steady stream of content produced to describe new LEGO products.
LEGO’s process is similar to crowdfunding processes, but with one very important difference. The most popular forms of crowdfunding ask people to prepurchase—not just to signal interest, but to commit to paying for something. Of course, the idea of agreeing to pay for something before it is delivered is not new. What is new is using it as a signal to understand the value of an idea before it is even produced.
The decision-making process using crowdfunding works like this: companies present ideas as if they are ready to be purchased, and people commit to purchase before production. We do not need the expertise of Sequoia Capital to try to judge the business’s future potential or help from Victors and Spoils to evaluate a design; we can bypass expertise, peer review, and voting if enough people are willing to buy an idea. Or can we?
The challenge with this approach to decision-making is that ideas need to be realistic—which means they need to be close to what the finished product might be like. And this can be tricky. In September 2012, the leading crowdfunding site, Kickstarter, posted guidelines to this effect: “projects cannot simulate events to demonstrate what a product might do in the future. Products can only be shown performing actions that they’re able to perform in their current state of development.” Also, “product renderings are prohibited. Product images must be photos of the prototype as it currently exists.”9
Asking prospective deciders who are also customers to prepurchase is a very powerful way to gauge which ideas will work. However, this approach is being refined to avoid the risks of offerings that have no hope of being realized. As Kickstarter's new rules illustrate, presenting a vision of the future is likely too risky; presenting what one can achieve now, however, takes a lot of work in the form of functioning prototypes. That leaves a relatively small group of participants who are capable of showing how their ideas will work once they are realized.
Thomas Gegenhuber, a researcher in the crowdsourcing space, has been exploring how decision models might be combined and has offered the following input: “InnoCentive has a nominal setting—solvers create their ideas individually and cannot access the production of other solvers. The organization that seeks a solution assigns a group of experts with domain knowledge, presumably the researchers of the R&D lab, to select the best idea.”10 This describes our search pattern—where there is no interaction between participants—very well. Gegenhuber contrasts InnoCentive’s approach with consulting company Deloitte’s method based on its “Innovation Quest” process. In the Deloitte case, after “individuals created the ideas individually, a group of domain experts [from] the organization selects ideas for the next round. In the second round, the employees might team up with others to win the challenge. In the final phase, all employees are invited to vote and comment on the ideas, which is one criterion for selecting the winners.”11 Deloitte’s contest is run within the organization’s “four walls,” but the approach is hybrid: the process involves a mix of experts, employees (the “invited” crowd), and, finally, a broader “public” group of “all employees.”
We know that Quirky is doing something similar. They are asking for feedback from the community and then from their retail partners—who select what products are best for their channel—at different points in their process. We’ve seen Threadless doing the same in their work with GAP, where they add another level of selection beyond their community voting. And if we think back to Bobby Oppenheim, from Triple Eight, we know he took the peer review results and then discussed them again with his professional skateboarding team as well as his longtime retail partners.
The hybrid decision model is a little like a Swiss Army knife. So it is appropriate that our next example comes from the company that makes the original Swiss Army knife, Victorinox.
In 2012, Victorinox CMO Andreas Michaelis was intrigued by the opportunities that the crowd seemed to present. However, his close observations had also revealed a number of critical risks, not least in the decision-making process. As users of an iconic product like the Swiss Army knife, Victorinox fans and lead customers are constantly suggesting ideas; and like many of the organizations we have discussed, Victorinox is subject to the tyranny of ideas. For a number of practical reasons, the company could only consider a few of these suggestions.
Michaelis wanted to ensure that a crowdstorming pilot would lead to ideas that his company could actually implement. So the team focused its challenge on a question that already had an established path to product: developing the graphic designs for the surfaces of the knives included in the limited edition Victorinox Collections. The production process was well understood, so the winning designs had a good chance of being implemented.
With the production risks out of the way, Michaelis and his team took a very deliberate approach to how they were going to select the best designs. They ultimately decided that they needed input from four groups: the participants, an expert team, Victorinox employees, and Victorinox fans.
The first evaluation layer made use of the jovoto peer review process. We've discussed how this process requires that participants have earned a certain amount of karma; that is, they have participated and demonstrated enough value to qualify them to vote. The second evaluation layer was organized internally. It consisted of Michaelis' expert team, who reviewed the results of the peer review process to ensure that the designs could technically be produced and were aligned with Victorinox's values. (Nine of the 10 produced designs came from the top 10 percent of designs as ranked by the community, underscoring the value of peer review in effectively and efficiently filtering a large number of submissions, which in this case exceeded 1,000.)
With the expert review complete, all Victorinox employees were invited to select their favorite design from the internally reviewed selection. The company saw the employee vote as a crucial step in creating employee ownership. As we discussed in Chapter 7, membership and collective choice are essential parts of the power of creative collaboration (in this case, within a community that now comprised both Victorinox and jovoto designers).
The final evaluation layer leveraged thousands of Victorinox's Facebook fans, who were asked to choose 10 designs from the 30 selected via the peer, expert, and employee decision processes. Each fan could pick a favorite using a Facebook application that organized the voting. The process was advertised on the Victorinox fan page and, beyond settling the final selection, generated high fan engagement, broad discussion, and significantly increased reach for Victorinox.
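Read as a whole, the Victorinox process is a funnel of successive filters. The sketch below illustrates only that structure; the stage functions and cut-offs are invented stand-ins, not Victorinox's or jovoto's actual tooling.

```python
def hybrid_funnel(submissions, stages):
    """
    submissions: list of idea identifiers.
    stages: ordered list of (name, filter_fn) pairs; each filter receives the
    ideas that survived the previous stage and returns the subset that advances.
    """
    surviving = list(submissions)
    for name, filter_fn in stages:
        surviving = filter_fn(surviving)
        print(f"after {name}: {len(surviving)} ideas remain")
    return surviving

# Invented stand-ins for each decision layer.
producible = lambda idea: hash(idea) % 10 != 0               # pretend roughly 1 in 10 fails a technical check
stages = [
    ("peer review",   lambda ideas: ideas[: max(1, len(ideas) // 10)]),  # community-ranked top 10%
    ("expert screen", lambda ideas: [i for i in ideas if producible(i)]),
    ("employee vote", lambda ideas: ideas[:30]),
    ("fan vote",      lambda ideas: ideas[:10]),
]

ideas = [f"design-{n:04d}" for n in range(1000)]             # assume already sorted by peer rating
finalists = hybrid_funnel(ideas, stages)
```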
Michaelis made this comment on the process: “We finally produced designs selected by our Facebook fans which, in other case, we would not have chosen ourselves. . . . These decisions have been proved right in terms of sales numbers.”
How effective was this hybrid decision model according to Michaelis? “The evaluation process resulted in a unique limited edition selling 20 percent better than all limited editions we created before through a standard design process.”12
We have looked at a number of mechanisms for dealing with the tyranny of ideas. There is a great deal of academic research and practical experimentation taking place in this area. Since it is not always practical, from a time perspective, to make a small number of experts responsible for reviewing hundreds of ideas, it is helpful to have some way to filter submissions down to the most promising ideas and create shortlists. Then there is the question of expertise and diversity. If people lack the know-how to evaluate the risks associated with proposed approaches, such as new technologies or processes, then no amount of diversity will help. However, expert communities are prone to biases that adding diversity can help to overcome.
When we look at the choice of process, we notice that as one increases the number of people who can participate, new issues related to manipulation of voting or rating systems arise. Oversight is therefore required to counter biases and gaming.
To help decide the best way to select ideas, we have summarized the various approaches we have described (Table 9.1). While each has its advantages, hybrid models appear to produce the best results—albeit with the highest resource requirements in terms of time, infrastructure, and rewards.
Table 9.1 Decision-Making Approach Summary
Notes
1. For more details see “The Netflix Prize Rules,” www.netflixprize.com//rules.
2. Andrew McAfee, “Everything You Need to Know About Social Business and Enterprise 2.0 in Three Short Reports,” Andrew McAfee Blog, February 2, 2012; and Deb Gallagher, “The Decline of the HPPO (Highest Paid Person’s Opinion),” MIT Sloan Management Review (April 1, 2012).
3. David Leonhardt, “When the Crowd Isn’t Wise,” New York Times, July 7, 2012, docs.google.com/a/jovoto.com/file/d/0B-hrph5bhwUlRmEtQ3VIT09jdTQ/edit.
4. Yu-An Sun and Christopher Dance, “When Majority Voting Fails: Comparing Quality Assurance Methods for Noisy Human Computation Environment,” Proceedings CI2012, arxiv.org/pdf/1204.3516.pdf.
5. See details for expert jurors at jovoto challenge website, www.jovoto.com/projects/drink-sustainably.
6. See details for terms at jovoto challenge website, www.jovoto.com/information/terms.
7. See details at jovoto challenge website, support.jovoto.com/discussions/feedbacks/33-open-rating-in-public-contests.
8. Ben Kaufman, The Quirky Blog, www.quirky.com/blog/post/2012/09/what-raising-money-means-to-me.
9. For details see the Kickstarter website, www.kickstarter.com/help/guidelines.
10. Thomas Gegenhuber and Marko Hrelja, “Broadcast Search in Innovation Contests: Case for Hybrid Models,” Proceedings CI2012, arxiv.org/pdf/1204.3343.pdf.
11. Ibid.
12. Based on conversations between Bastian Unterberger and Andreas Michaelis, September 2012.