Now that you have a potential solution for your problem, remember that it is only the starting point, or the 1% inspiration. What comes next is the core of innovation – the 99% perspiration that turns a theoretical concept into cold, hard results through experimentation and learning. This book builds upon the process for continuous innovation introduced by Eric Ries in The Lean Startup,1 which has helped companies “create long‐term growth and results.”
Lean Startup itself builds on the precepts of lean manufacturing pioneered by Taiichi Ohno and Shigeo Shingo at Toyota, which eliminates waste by empowering workers, accelerating cycle times, and delivering inventory just in time. Eric’s work was also heavily influenced by his mentor Steve Blank, whose Customer Development methodology is described in The Four Steps to the Epiphany.2
Before we dive, in Part II of this book, into how we can employ these same techniques to deliver results in the form of social impact rather than profits, let's review the five basic building blocks of Lean Startup. We'll explore each in a mission‐driven context and refer to them throughout the rest of the book.
In essence, Lean Startup applies the rigor of the scientific method to systematically test the biggest risk factors that might cause a product or service to fail. By doing so efficiently, you can deploy precious resources to their greatest effect. Don’t think of this as a linear process, but rather a set of techniques to employ, driven by the knowledge gained through experimentation. What’s important is to stay focused on your goal, be honest about what you do and do not know at any stage, and constantly find ways to learn and improve.
I know several mission‐driven organizations whose teams have read The Lean Startup and immediately embraced it. They have tended to be technology‐centric enterprises, social startups, for‐profit businesses, or a combination of all three. I’ve heard from far more who have been inspired by the concept, but who were unsure of how to proceed given the nature of their funding, culture, or area of focus. They all have one thing in common: a desire to make a bigger difference by moving the needle on a social cause.
In fact, a grassroots community called Lean Impact sprang up several years ago as an offshoot of the Lean Startup movement, bringing together hundreds of practitioners. What has been missing is a framework that answers the common questions that arise when theory meets reality. How do I experiment when my funding is based on activities and deliverables that are predefined? How can I create a feedback loop when it takes years for true impact to become evident? Is it responsible for us to experiment on people who are already vulnerable? Where do I find the resources to test and iterate when I can barely make payroll?
Lean Impact begins where The Lean Startup ends, introducing new tools, reframing the methodology for this new context, and addressing barriers unique to the complexities of social innovation.
To see Lean Startup in action, let's take a look at Harambee Youth Employment Accelerator, an impressive social enterprise in Johannesburg, South Africa, that I visited for a couple of weeks last year. Youth unemployment has hit crisis proportions in South Africa, with almost 40% of young people (officially defined as ages 14–35) not in employment, education, or training.3 This poses a dire threat to social cohesion, political stability, and an entire generation's ability to lead productive and meaningful lives. To bridge the gap, Harambee seeks to match disadvantaged youth who have never held a formal job with employers seeking qualified talent.
Its CEO, Maryana Iskander, is a small woman who packs a punch. She lifts the office with her peppy enthusiasm and huge heart, yet can quickly zero in on the incisive question that needs to be asked. This rare combination, honed through prior stints as a consultant at McKinsey and as COO of Planned Parenthood, makes her a compelling nonprofit leader.
Maryana embraced a philosophy of experimentation from the start, with a requirement that every idea that is tested has a measurable target. Walking down the hallways, you can’t help but notice the walls are plastered with names, scores, and rankings. Everything is measured here from the moment a young person steps through the door. The data helps Harambee constantly improve its algorithms to select the best candidates for the jobs on offer.
Harambee began by talking to both youth and employers to understand their pain points. It turned out that many youth had neither the social networks to connect with job opportunities nor the soft skills – such as punctuality, teamwork, and self‐motivation – that were needed to succeed on the job. On the other hand, employers tended to hire the first person through the door, not the best person for the job, only to then suffer from high attrition. A clear opportunity. Harambee decided that its value proposition to employers could be to provide more work‐ready candidates, with higher rates of retention. It started with the obvious and did what employers do – administer basic assessments for math and English – and found that very few of its candidates could meet these requirements. A sad legacy of poor schooling.
So, it pivoted. Rather than continuing to test for school‐based knowledge, it instead focused on aptitude and sought providers who could assess learning potential. Armed with this new data, the team could select youth who had the personality traits and underlying ability to learn the necessary skills and then train them to fill any knowledge gaps. Employers were happy to receive more qualified candidates and hired many of them.
Harambee has continued to bring this rigorous focus on its goals, deep understanding of its customers, and appetite for constant experimentation and iteration to its work. To date, it has helped over 50,000 youth find their first jobs. We’ll return to Harambee’s story throughout this chapter to see how it has applied the Lean Startup to maximize its impact.
Most mission‐driven organizations work under conditions of extreme uncertainty. The existence of gross suffering, injustice, or basic unmet needs in a community usually means that both markets and government have failed. This is where the social sector and philanthropy come in.
We may have lots of ideas about what might work, but how do we determine what will work? In the face of complex problems and untested solutions in dynamic contexts, we can maximize our chance of success by systematically addressing risks. This begins with an inquiry, adapted from the well‐established scientific method, to tease out the key assumptions that are likely to make or break our solution. By testing these potential points of failure up front, we can increase our confidence before considering a greater investment.
For example, Harambee operates in a two‐sided market that matches unemployed youth with potential employers. One of its ideas was to offer a training program, called a "bridge," to fill gaps in both soft skills and hard skills for job seekers. Among its key assumptions was that employers would be pleased with the resulting candidate pool and thus hire the candidates. Its first test included 43 job seekers, a single job family (financial services), and three clients willing to be early adopters. To limit its risk and upfront investment, Harambee outsourced the entire process, identifying candidates through a labor recruitment agency and hiring contractors to perform the training. It worked! Thirty‐nine of the youth were placed into jobs, and the companies were delighted. With this basic hypothesis validated, Harambee was then able to proceed to testing additional assumptions related to job retention, lowering the cost per candidate, and working with other types of employers.
When we land on a promising solution, it’s natural to become emotionally attached. After all, we could have a way to alleviate enormous suffering or open the door to tremendous opportunity. On the other hand, naysayers may shoot you down, seeing your nontraditional approach as impractical or even crazy. The trick is to find the balance between faith and dismissal. Let’s call it cautious optimism.
One of the classic failures in global development was the PlayPump. The initial launch received big donations and numerous awards based on this creative idea to replace hand pumps in Africa by harnessing the energy of children playing on a merry‐go‐round to pump water into a storage tank. Too good to be true? It was. After 1,000 PlayPumps had been installed, new deployments were stopped in the face of withering criticism. It turned out that fulfilling the extravagant claim of providing clean drinking water to 10 million people with 4,000 PlayPumps by 2010 would have required a full 27 hours of "play" per day. And kids lost interest quickly, given the force required to turn the merry‐go‐round, leaving the humiliating task to women in the village. Tens of millions of dollars were spent on this scheme.4
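As a rough sanity check on that figure, using commonly cited numbers that are assumptions here rather than details from the original reporting (roughly 1,400 liters pumped per hour of play, and a minimum of 15 liters of water per person per day), the arithmetic works out to about:

\[
\frac{10{,}000{,}000 \text{ people}}{4{,}000 \text{ pumps}} = 2{,}500 \text{ people per pump},
\qquad
\frac{2{,}500 \times 15 \text{ L/day}}{1{,}400 \text{ L/hour}} \approx 27 \text{ hours of play per day}
\]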
Before making a big investment in even the most exciting idea, it’s prudent to identify our underlying assumptions first. You might articulate these in the form of sentences that begin with “I believe.” Or, answer tough questions, such as:
This is the time to play devil’s advocate. The same team members and stakeholders who helped to brainstorm new ideas can also help to identify key assumptions. Bring on the skeptics and break out the sticky notes! They will see important angles that you might miss. Don’t see this as shooting down your idea, but rather making it stronger.
But don’t go overboard. It’s not necessary to delineate every last possibility that might cause a hitch. Most of those can be accommodated as they arise. What we are looking for are the biggest risk factors, or killer assumptions, that will determine success or failure. Start with the most obvious, continue to ask hard questions along the way, and allow others to emerge as you learn.
The Lean Startup points to the value hypothesis and the growth hypothesis as the two most important assumptions entrepreneurs need to validate. When it comes to social innovation, we also need to consider a third: the impact hypothesis. In other words, does it work? For a solution to produce a substantial social benefit, we have to deliver on all three. This means explicitly identifying the key assumptions for each, validating whether they are true or false, and continuously improving on our results.
Value, growth, and impact are the three essential pillars for a successful social innovation. A mission‐oriented solution must deliver value to its customers or beneficiaries so they will try it, embrace it, and recommend it to friends and family. Otherwise, trying to convince people to use a product or service they fundamentally don’t want will leave you swimming upstream. Additionally, without an engine that will accelerate growth over time, an intervention will likely fall far short of the need and remain expensive in the absence of economies of scale. Finally, our ultimate responsibility is to deliver social impact. Are we not only improving lives, but doing so to the maximum degree possible?
For example, Harambee started out by testing its dual value hypotheses of helping youth find job opportunities and employers identify work‐ready candidates. Its growth hypotheses might include one about the willingness of employers to pay for job placements. And, a first step for its impact hypothesis might involve its ability to place and retain youth in jobs, as a step towards reducing youth unemployment. Part II includes a full chapter on each of these three hypotheses and explores them in detail, along with a wide range of examples.
Once you have articulated your core assumptions around value, growth, and impact, it’s time to validate and learn. Of course, there are probably dozens of assumptions you could test, so start with the riskiest. This will help you to identify potential points of failure and improve your approach before you make a bigger investment to build infrastructure, hire a team, or manufacture a product.
So, how do we go about validating assumptions? That's where a minimum viable product (MVP) comes in.
With a set of prioritized assumptions in hand, the next step is to formulate one or more hypotheses that will validate or invalidate them. While an assumption represents a general belief about what will happen, such as “I believe people will buy my product,” a testable hypothesis precisely articulates a provisional theory that can be proven or disproven, such as “If 20 people are offered my product for $10, then 60% will agree to buy it.” This if–then structure is one way to make your test (the “if” piece) and the expected result (the “then” piece) explicit. A hypothesis should be objectively measurable so that success or failure is not a matter of debate.
Each hypothesis can be tested through controlled experiments using one or more MVPs. Think of an MVP as the cheapest and quickest prototype or proxy that can enable learning. The faster we can learn, the less time and money we are likely to waste pursuing a fruitless path. An MVP consists of running a test and comparing the results to the original hypothesis to prove it right or wrong. If the test fails, the experiment can be tweaked using different messaging or targeting based on what was learned, then rerun; in some cases, the solution may require a more significant redesign or need to be scrapped altogether.
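As a minimal sketch of what an objectively measurable hypothesis can look like in practice (using the hypothetical $10 product example above, with invented results rather than real data), the if–then statement can be captured as an explicit threshold and checked against the outcome of an MVP experiment:

```python
# A minimal sketch: encode an if-then hypothesis as an explicit, measurable
# threshold and compare it against the results of an MVP experiment.
# The figures are the hypothetical ones from the example above, not real data.

from dataclasses import dataclass


@dataclass
class Hypothesis:
    description: str    # the "if-then" statement in plain language
    sample_size: int    # the "if" piece: how many people are offered the product
    target_rate: float  # the "then" piece: the conversion rate we expect

    def evaluate(self, successes: int) -> bool:
        """Return True if the observed rate meets or beats the target."""
        observed = successes / self.sample_size
        print(f"{self.description}: observed {observed:.0%} vs. target {self.target_rate:.0%}")
        return observed >= self.target_rate


# "If 20 people are offered my product for $10, then 60% will agree to buy it."
hypothesis = Hypothesis(
    description="20 people offered the product at $10",
    sample_size=20,
    target_rate=0.60,
)

# Suppose the MVP run ends with 9 buyers (45%): the hypothesis is invalidated,
# so we tweak the messaging, targeting, or offer based on what we learned and rerun.
validated = hypothesis.evaluate(successes=9)
```

The point is not the code itself but the discipline it represents: the target is written down before the test, so success or failure is never a matter of debate.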
Alas, in the social sector, many powerful forces are aligned against starting small. What happens instead? Sadly, it means millions of laptops, food packages, cookstoves, or other goods are distributed to people who don't like them, don't know how to use them, or don't maintain them. It means training people on skills that are irrelevant or quickly forgotten. It means running costly and time‐consuming evaluations only to discover a fundamental flaw. It means deploying a solution that may work, but which is so expensive that it never reaches more than a few. Or, it means designing and deploying an expensive, multidimensional intervention that results in little impact, or even negative unintended consequences.
The goal of deploying an MVP is to reduce risk. If we can uncover a big or small issue with 10 people before rolling out to thousands, we'll save money, make faster progress, and avoid alienating potential customers. With a small audience, we can also build a bespoke MVP that is handcrafted to elicit the learning we need, before investing in costly design, production, and infrastructure.
When testing a hypothesis, observing actual behavior is far more accurate and revealing than asking hypothetical questions. This is where the MVP comes in. The aim is to create a simple prototype, mockup, or simulated experience to see how people react. Are they confused? Uninterested? Or do they beg you for the product itself?
In fact, Nexleaf Analytics, a nonprofit that builds and deploys wireless sensors, has found significant differences between actual and self‐reported behavior. Early in its work, it sought to encourage the usage of cleaner biomass cookstoves in order to reduce the harmful emissions that contribute to the deaths of over four million people a year. To do so, Nexleaf installed rugged sensors to measure the duration and frequency of cookstove use in poor areas of India. It discovered that when women were asked to report on their own usage, their answers did not match the sensor data: women both underreported and overreported how much they cooked.
Ultimately, Nexleaf found the most accurate responses were elicited by combining actual usage data with interviews. In one rural Indian village in which some households were not using cleaner cookstoves, this technique uncovered problems that made the stove design unappealing. Women complained that wood had to be chopped into small pieces and that the batches of fuel sometimes didn’t last long enough to complete their cooking.
Obtaining fully honest feedback can be particularly difficult for mission‐driven organizations, as we often work with disadvantaged populations in which a cultural barrier or power differential may exist. Thus, people may be too polite or afraid to offend you, and instead say what they believe you want to hear. The social enterprise d.light found that showing potential customers a single solar lamp model would rarely elicit any negative feedback. But when it offered three different versions to compare, people were more comfortable sharing what they preferred about each.
We don’t need to go so far as to install sensors for every MVP, but by finding ways to observe people’s behavior we can gain a far more accurate picture.
There’s no hard‐and‐fast rule for what makes a good MVP, other than that it enables you to learn as quickly as possible. So, how do you design an MVP? Ask yourself, What could you do to learn more about your biggest risk if you had to do something tomorrow? How about next week or next month? If you’re spending more than a few hours, you’re probably overanalyzing. MVPs are intended to be imperfect. If you’re not sure, try it. Getting out of the building and seeing how customers behave will open your eyes, even with an imperfect experiment.
What is most important is to shift your mindset from one of building to one of learning. This is not as easy as it sounds, as we’ve been trained our whole lives to build things. I’ve often watched teams begin with a simple prototype that quickly grows in complexity based on customer excitement. Soon they’re focused more on running the product or service than validating the remaining high‐risk assumptions.
The most common approach to building an MVP of a product is to create a rough prototype with minimal functionality. In the case of software, a paper mockup or interactive storyboard can be used to see how users respond to the features, interface, and flow. In the case of hardware, a prototype might consist of cardboard, string, duct tape, or whatever is on hand that can convey the experience of the product.
At the other extreme, an MVP could take a more expensive existing product to test questions around feature set, willingness to pay, or acceptance for a different use or demographic. While testing with a pricey proxy may seem counter to lean, remember that the aim is to avoid wasting time and money. Once we learn how users respond, we can invest more heavily to reduce costs for a design that will be more likely to work.
For an MVP of a service, you might create a flyer or Web page to describe the service, make a special offer for early signups, and see who bites. To test the service itself, you could invite a small cohort to participate in a bespoke experience. Or you might outsource provision, as Harambee did, and avoid building up costly staff and infrastructure. In the case of distribution, you could showcase a catalog and fulfill orders manually to learn about product mix before investing in a streamlined supply chain.
You can also design an MVP to test paths to scale. For example, if you hope to grow your organization quickly, you might put out a job ad and confirm the availability of the talent you need at the price you can pay. If you hope to scale through replication, you might recruit a typical organization and see how well they are able to preserve the fidelity of your offering with minimal training. Or, if you hope to scale through government, you might engage with a current or past official to understand timeframes and criteria for changing policies and winning procurements, then test those.
These are only a few examples of the vast possibilities. A single MVP can often shed light on multiple hypotheses. Have fun, and be creative. Just remember to start small. You’ll find many more detailed examples of MVPs in Part II, “Validate.”
The process of validated learning looks at hard data on what works to confirm a hypothesis. It cuts through the emotion and politics that can often drive decision‐making and instead focuses on empirically demonstrated evidence.
Harambee knew that placing youth into jobs was not enough. To provide real value to employers, it had to address the pain point of poor retention. If it could do so, companies would be willing to pay Harambee a placement fee and would keep coming back. This required it to beat the typical retention rates for each sector. So, it tracked candidates after they were hired to determine whether they stayed in their jobs, and followed up to understand the reasons behind any departures.
One sector in which it saw high attrition was retail and hospitality. After interviewing the youth, Harambee discovered that they simply weren’t physically prepared to stand all day as part of their jobs. As a result, Harambee modified its bridge program so that candidates would regularly stand during their five days of training. This gave youth a chance to get comfortable with that job requirement before showing up to work. Those who couldn’t adjust self‐selected out, so employers only received candidates who were truly prepared.
Before deploying an MVP, document the key hypotheses you are testing, what data you will collect, and – most crucially – the measurable success metrics. In Harambee’s case, improving retention was key. Establishing your criteria in advance will keep you honest and prevent confirmation bias, in which attachment to an idea can lead teams to find some evidence that it’s working. These same criteria can also form the basis of an agreement with executives or funders on the results needed to unlock further investment.
There’s a crucial distinction between what The Lean Startup calls vanity metrics versus actionable or innovation metrics (see Table 5.1 for an example). Vanity metrics tend to reference cumulative or gross numbers as a measure of reach. In the absence of any data on the costs entailed and ensuing impact achieved, they give no indication of whether an intervention is working or better than another alternative. With enough time or money, reach can be increased through brute force. Big numbers may simply mean someone is good at telling a story and raising money.
Table 5.1 Examples of vanity versus innovation metrics.
Vanity metrics | Innovation metrics
Number of people trained | Cost to train each person
Number of job placements made | Percentage of trainees hired, and retention in those jobs
On the other hand, innovation metrics measure the value, growth, or impact being delivered at the unit level. In the business world, this is analogous to the unit economics – the profit made on each sale – as opposed to the aggregate users or revenues. During the dot‐com bubble, startups fueled by plentiful venture capital built up big audiences at a financial loss. Of course, they soon came crashing down, as the fundamentals weren’t there. For a mission‐driven organization, the equivalent metrics are the unit costs along with the unit yields – such as the rate of adoption, engagement, and success that will bring about the intended social impact. These data points drive feedback loops, can be tested and improved through experiments, and indicate whether a solution is on track. When the targets are achieved, scaling becomes far easier and more cost effective.
Vanity metrics have spread throughout the social sector like a communicable disease. If you go to the website of your favorite mission‐driven organization, I bet what you’ll see highlighted is the number of people it has served or reached. While at USAID, I constantly railed against the continual pressure to share the number of people we’d “touched.” It’s meaningless.
Most workforce development organizations will tell you how many people they’ve trained. Some may even share how many job placements they’ve made. But, with enough outreach you can find people in need, with enough funding you can run large numbers of trainings, and with enough participants a certain number will find jobs. What matters is how much you spend to train each person, what percentage of them get jobs, and whether they stay and grow in those jobs. If you can make the biggest difference in long‐term employment with the fewest dollars spent, you will have a meaningful competitive advantage and a way to magnify impact.
How metrics are used can also make them more or less meaningful. Many mission‐driven organizations collect reams of data – primarily for the purpose of accountability and reporting, as required by their funders. Don't mistake such compliance data for the innovation metrics that you need to learn and drive improvement. There's a difference between doing things right and doing the right things.
Beyond tracking learning for each MVP, consider replacing your traditional project dashboard that highlights vanity metrics with one that tracks progress against innovation metrics. This can rally stakeholders, ensure transparency, and build engagement.
To do so, it’s important to determine how you aspire to be better than the alternatives. If people are burning kerosene today, is your value proposition to offer a less polluting option at the same price? If another nonprofit has a program for reducing gang violence, do you propose to expand reach with a lighter‐weight model that can make a similar impact at a dramatically lower cost? If you aim to create a movement, do you have a message that engages more people to take more effective action than other groups? If you can’t clearly articulate how you are better than what’s out there, think hard about whether you should be competing for scarce funding and attention.
Innovation metrics can be used to track the success criteria that must be met for your unique model to work. These in turn can be broken down into testable hypotheses and MVPs that will move you closer and closer to your goal.
In Harambee’s case, an innovation metric dashboard might include the cost of training per job seeker, percentage of candidates hired into jobs, and average retention rate, each with an associated target. Experiments can then be run to test a hypothesis: for instance, whether modifying the training so that students stand all day does in fact lead to higher retention. Based on results, the success criteria may need to be adjusted but should always show a plausible path to the overall goal in aggregate. For example, if Harambee learns that its price point is too high, it might simultaneously reduce the fee it charges employers, its training expenses, and the expected success rate. The strategy remains the same: to demonstrate that it can deliver more value for money than the alternatives.
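To illustrate the unit‐level arithmetic behind such a dashboard (a sketch only; the counts, costs, and targets below are invented for the example and are not Harambee's actual figures), the same raw totals that feed vanity metrics can be converted into innovation metrics and compared against their targets:

```python
# A sketch of turning aggregate program data into unit-level innovation metrics
# and checking each against its target. All numbers are invented for illustration;
# they are not Harambee's actual figures or targets.

program = {
    "total_spend": 500_000.0,  # spend on training, in local currency
    "trained": 1_000,          # vanity metric: people trained
    "hired": 650,              # candidates placed into jobs
    "retained_12mo": 520,      # still employed after 12 months
}

innovation_metrics = {
    "cost_per_trainee": program["total_spend"] / program["trained"],
    "placement_rate": program["hired"] / program["trained"],
    "retention_rate": program["retained_12mo"] / program["hired"],
}

# Targets the model must hit for the unit economics to work (also invented).
targets = {
    "cost_per_trainee": (450.0, "at most"),
    "placement_rate": (0.60, "at least"),
    "retention_rate": (0.75, "at least"),
}

for name, value in innovation_metrics.items():
    target, direction = targets[name]
    on_track = value <= target if direction == "at most" else value >= target
    shown = f"{value:,.0f}" if name == "cost_per_trainee" else f"{value:.0%}"
    goal = f"{target:,.0f}" if name == "cost_per_trainee" else f"{target:.0%}"
    print(f"{name}: {shown} ({direction} {goal}) -> {'on track' if on_track else 'off track'}")
```

Note that the totals alone (1,000 trained, 650 hired) say nothing about whether the model works; it is the per‐unit costs and yield rates, tracked against explicit targets, that drive the feedback loop.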
Now that we've identified our hypotheses, built a minimum viable product, measured the results, and started to learn, it's time to do it all over again. After all, the build–measure–learn feedback loop (see Figure 5.1) is at the core of the Lean Startup model.
Engineers like me can get stuck building ever‐more‐elegant solutions, scientists can get stuck gathering and analyzing data, and academics can get stuck delving deeper and deeper into research. But it's time to put aside the perfectionist in all of us, as the most critical indicator of successful innovation is the speed of iteration. The faster we run through the entire cycle, the faster we learn, and the faster we improve.
Alas, traditional grants are not designed to support this model. Many global development programs run for three or five years, with midline and endline evaluations. This means that a five‐year program only starts to seriously validate the intervention after two and a half years. And, this is no MVP – the evaluation can take a whole year itself. The upshot is that it can be three and a half years before any meaningful insights emerge on what is working. By then, it’s a bit late to change much. In reality, such evaluations are more about ensuring compliance and preventing fraud than learning and improving.
When we think fast feedback loops, we tend to focus on the early startup stage in which we are running rapid experiments to home in on product–market fit for a new solution. This usually involves lots of lightweight MVPs and little scaffolding. But establishing the culture and infrastructure for ongoing build–measure–learn cycles can also help drive continuous innovation and improve performance for more mature products and services.
Accelerate Change, an incubator for social enterprises, ran through numerous iterations of the build–measure–learn cycle to develop a service for its client, the Fair Immigration Reform Movement. As lead entrepreneur, Veronika Geronimo set out to learn what low‐income immigrants to the United States wanted and needed. She discovered that the greatest demand was simply to learn English.
It turned out that existing classes were slow, expensive, and not always effective at improving language skills. Free or low‐cost options often had long waiting lists, and apps were hard to download and easily forgotten. Veronika and her team looked for a better way, quickly testing and failing many times with multiple ideas. They discovered class space was too expensive and that many students found it difficult to access online content and make digital payments.
Veronika’s team believed that the best way to learn a language wasn’t attending a few isolated classes each week, but rather using English in real‐world settings for multiple hours a day, every day. They realized that free content was widely available – on TV, radio, and the Web – and immigrants only needed encouragement and guidance to turn practicing English into a consistent habit. In order to test their hypotheses while limiting bureaucracy and brand risk, they launched the service under a new name, Revolution English.
Through text messages, Facebook Messenger, and email, students receive tips, reminders, and resources for do‐it‐yourself immersion strategies that can be used throughout their daily lives – while watching TV, listening to the radio, or talking with friends and family. The experience is constantly optimized and improved using experiments that are carefully tracked and analyzed. Although it is free for users, the service has become profitable, primarily through ad revenue, and thus scalable.
By listening to users, constantly experimenting, rigorously measuring results, and developing a business model, Revolution English created a solution that both works and scales. One study found that students who supplemented their ESL classes with Revolution English increased their practice time by over eight hours per week, and that greater engagement correlated with higher final‐exam scores. Revolution English now serves over 400,000 people with its free service and expects to reach a million by the end of 2018.
After taking a few spins through the build–measure–learn cycle, how will you know when to claim success, keep going, or acknowledge failure? On a regular basis we need to step back, consider what we’ve learned, and draw conclusions from our experiments. Knowing when we have hit the limits of our current path and need to pivot is essential to innovation.
Identifying success is easy. After running tests and making improvements, have you met your success criteria – the targets for conversion rate, referrals, changed behavior, cost structure, etc. – that indicate your model will work? If so, it’s time to take the next step. By starting small, you will have learned important lessons that make a bigger investment appropriate, though of course risks will remain. But don’t think for a minute you are done experimenting. Rather, you’re simply ready to move to the next stage of learning that incorporates a more realistic scenario, higher‐fidelity solution, or expanded audience.
On the other hand, if you haven't reached your targets but are making substantial headway, productively learning through experiments, and have additional ideas to test, buckle down and persevere. However, don't fall victim to wishful thinking. Think of how microwave popcorn is made: toward the end, the pops start to slow down, and if there is a gap of more than two seconds, you'd better hit stop or your popcorn will burn. Deciding whether to pivot or persevere is similar. When the pace of learning and progress towards your success criteria has stalled for a while, it's a good time to reevaluate. Is your learning resulting in new improvements that are just as or more promising than your previous ones?
If a key assumption has been invalidated or your model continues to fall far short of your success criteria, it's time to pivot. In The Lean Startup, Eric defines a pivot as "a change in strategy without a change in vision." Remember loving the problem rather than the solution? Here's a good opportunity to practice. Stay focused on the goal you defined in Chapter Two, but it may be time to consider a new path. Is there another promising solution that may use an alternative business model, positioning, technology, or delivery mechanism? If not, you may need to pivot and, as another strategy to reach your goal, tackle an altogether different underlying problem.
As Harambee continued its experiments to improve retention, it found that some young people would stop showing up for work during their first month, even after completing the training. It turned out that many youth were simply running out of savings before receiving their first paychecks and couldn’t pay for transport to and from work. To address this, Harambee educated employers and encouraged them to structure a payroll advance that could tide their new employees over.
With its recognition of the importance of transportation costs, Harambee looked further and discovered a strong correlation between poor retention and workers who had to take two or more minibuses to commute to work. These youth were spending too much time and money to make a low‐paying job worthwhile. As a result, Harambee tuned its matching algorithm to make transport and geography deciding factors for lower‐paying jobs.
But even this wasn't enough for those living in Orange Farm, an isolated and desperately poor township dating from the apartheid era, who were simply too far from any meaningful economic activity. For these youth, Harambee had hit a dead end. It was time to pivot and think outside the box. Then the idea hit the team – cruise ships. Where you live doesn't matter if you will be posted at sea for months at a time. International cruise liners turned out to be a great option for a first job as well as an opportunity for adventure.
TOMS Shoes is an early pioneer among the growing number of modern companies seeking a double bottom line – to do well while doing good. An essential element of its brand is the One for One model it popularized. For every pair of shoes purchased, another pair is donated to a woman or child in need. Customers can purchase stylish footwear and feel good about fighting poverty at the same time.
Despite giving away more than 60 million pairs of shoes to date, TOMS came under criticism that its charitable model wasn't necessarily making a meaningful difference. To TOMS's credit, it responded by investing in a rigorous study to evaluate its impact. The research showed that donated shoes were not reducing poverty, or even the number of shoeless children. Instead, the donations had created a mindset of dependency and risked displacing local manufacturers as local shoe purchases declined.5
With this stark realization, it pivoted. While TOMS continued to give away shoes, it sought to have them manufactured locally to build local industry and create local jobs. It also diversified its giving to provide more of what communities wanted – eyeglasses, clean water, and training and supplies for birth attendants.
Reaching a lot of people or making a profit does not equate to social impact. The evolution of the One for One model is a valuable lesson in the importance of validating our assumptions, and being willing to pivot based on what we learn. By taking its impact seriously, TOMS is doing more good today as a result.
Deciding whether to pivot or persevere often entails a tough discussion that can affect people very personally. It's difficult not to have an emotional attachment to a solution you've been pouring your heart and soul into for weeks or months. Understandably, the tendency is to kick the can down the road, holding out for better news even as its likelihood keeps diminishing. This is particularly true in the social sector, where almost anything we try is helping someone at least somewhat, making it even harder to discontinue a solution that is showing a benefit.
Scheduling regular meetings in advance about whether to pivot or persevere can ensure this important reflection happens, while relieving the day‐to‐day pressure of continuous scrutiny. A couple of failed experiments in a row may just be a natural part of the cycle. Instilling the discipline to step back regularly, perhaps once every month or quarter, will yield valuable perspective. Ask the question, What evidence do we have to indicate our presumed solution will solve the problem and achieve our goal?
One approach Reprieve, the London‐based human rights advocacy organization, has taken is to empower small teams to experiment with multiple approaches as they tackle big problems. As a counterbalance to this autonomy, the teams come together quarterly for a pivot‐or‐persevere session to review data and evaluate how well each effort is working. Having a regular mechanism to step back and assess helped prevent people from becoming too attached to their own programs. We’ll learn more about Reprieve’s groundbreaking work in Chapter Nine.
The experimental, learning‐oriented approach described in this chapter has become pervasive in Silicon Valley at both startups and more established companies. Beyond tech, businesses of all stripes are embracing lean methodologies. Eric Ries’ second book, The Startup Way, highlights stories from one of the biggest, GE, and shows how larger organizations can integrate a “system of entrepreneurial management.”6 There is a growing recognition of the need for new tools in the face of the accelerating change and uncertainty in our world today.
Through my interviews and direct experiences with over 200 organizations, I have seen this movement begin to take root among those working to achieve social good as well. Make no mistake, this still largely consists of early adopters, whether smaller social enterprises and philanthropists or innovation teams at larger organizations. Yet, they are starting to deliver better solutions and tangible results. People are taking notice.
The rest of this book delves into the unique challenges, contexts, and constraints that have, to date, limited the adoption of lean approaches to social good. My hope is that by learning from the paths these pioneers have blazed, we will unlock the latent potential to deliver dramatically greater social impact at scale.