[ CHAPTER 4 ]

Mapping the Ecosystem:

Identifying Pieces and Places

Imagine four partners at a conference table discussing their companies’ latest groundbreaking endeavor. Even when they share a vision of what they are trying to accomplish—what new value their joint initiative is trying to create—they will often have different visions of how their separate efforts come together. Who will move first? Who is dependent on whom? Who faces the customer and who is just an invisible cog? In a world of supply chains, the linear path of A hands off to B hands off to C hands off to D is relatively clear. A world of ecosystems, however, is a world of permutations—A, B, C, and D all need to work together simultaneously, and the number of possible role combinations is vastly greater. The challenge is usually not that partners openly disagree about what must happen first or who is responsible for what; it’s that these questions are not sufficiently explored. Instead, having agreed on the end vision, the partners assume that they also agree on the best path to get there. A dangerous assumption indeed.

When strategies explicitly call for collaboration, they make an implicit assumption about structure. In this chapter we will make this assumption explicit by exploring a systematic approach to clarifying not just who needs to come together to bring your value proposition to life, but also where they will be positioned and what risks lie within the plan. By making the structure of the ecosystem explicit, we will make our strategies more robust.

From Value Propositions to Value Blueprints

Your value proposition is a promise. It is a vision of the new value that your innovation efforts will create, as well as who this value will be created for. For effective, efficient innovation, you need a way to translate the value proposition into action. When the value proposition requires multiple elements to converge, you need an approach that will allow you to assess alternative configurations and generate shared understanding and agreement among the partners as to how these elements should come together.

To do this, we will use a mapping tool I call the Value Blueprint (see figure 4.1). The value blueprint is related to value chains and supply chains. The main difference is that while the latter tend to focus on the linear sequence of handoffs from suppliers to producers to distributors to end customers, the value blueprint is explicit about the specific location and links of complementors that lie off the direct path to market but are nonetheless critical for success. Indeed, it is the ease with which these off-path partners can be overlooked using traditional strategy tools that gives rise to the innovation blind spot. The value blueprint sits at the heart of all the tools we will develop from here onward.

The value blueprint is a map that makes your ecosystem and your dependencies explicit. It lays out the arrangement of the elements that are required to deliver the value proposition—how the activities are positioned, how they are linked, and which actor is responsible for what. We begin by identifying the full set of partners and specifying their positions: the suppliers your project relies on, the intermediaries that lie between you and your end customers, and the complementors whose offers are bundled at different points along the path. We then identify the changes in the activities and links that we are expecting from each participant. Finally, we assess how these changes affect the likelihood that the entire system will actually come together to deliver the value proposition.

We have already looked at value blueprints when describing many of the cases so far—Michelin’s PAX System, 3G telephony, digital cinema—to identify the actors and the links that make up the ecosystem.

The steps to construct a value blueprint are straightforward:

1. Identify your end customer. Ask: Who is the final target of the value proposition? Who ultimately needs to adopt our innovation for us to claim success?

2. Identify your own project. Ask: What is it that we need to deliver?

3. Identify your suppliers. Ask: What inputs will we need to construct our offer?

4. Identify your intermediaries. Ask: Who stands between us and the end customer? Who touches our innovation after us, and to whom do they pass it on the way to the end customer?

5. Identify your complementors. For each intermediary ask: Does anything else need to happen before this intermediary can adopt the offer and move it forward to the end customer?

6. Identify the risks in the ecosystem. For every element on the map ask:

a. What is the level of co-innovation risk this element presents—how able are they to undertake the required activity?

b. What is the level of adoption risk this element presents—how willing are they to undertake the required activity?

    It is often most productive to characterize the status of each element of your innovation effort along a green–yellow–red traffic light continuum. For co-innovation risk, green means that they are ready and in place; yellow means that they are not yet in place, but that there is a plan—they may be late, but they’ll get there; and red means that they are not in place, and there is no clear plan. For adoption risk, green means your partners are eager to participate and see clear surplus from their involvement; yellow means they are neutral but open to inducement; and red means they have clear reasons to prefer the status quo and not to participate in the proposition as it stands. In assessing the risk implied by new links, it is important to consider the incentives of each linked party to choose to interact in this new way.

7. For every partner whose status is not green, work to understand the cause of the problem and identify a viable solution.

Figure 4.1: A generic value blueprint maps the actors and the links that make up the ecosystem.

8. Update the blueprint on a regular basis. Your value blueprint is a live document, and as conditions change over time, it will need to be modified accordingly.
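The mapping and risk-assessment steps above can be sketched as a simple data structure. The following is a minimal illustration only, not part of the methodology itself; the element names and statuses are hypothetical stand-ins for whatever your own blueprint contains.

```python
# A minimal sketch of a value blueprint as a data structure.
# Element names, roles, and statuses here are hypothetical illustrations.

GREEN, YELLOW, RED = "green", "yellow", "red"

# Each element records its role in the ecosystem plus its
# co-innovation risk (are they able?) and adoption risk (are they willing?).
blueprint = {
    "Screen supplier":   {"role": "supplier",     "co_innovation": GREEN,  "adoption": GREEN},
    "Device retailers":  {"role": "intermediary", "co_innovation": GREEN,  "adoption": GREEN},
    "Publishers":        {"role": "complementor", "co_innovation": YELLOW, "adoption": RED},
    "Online bookstore":  {"role": "complementor", "co_innovation": YELLOW, "adoption": YELLOW},
}

def problem_elements(bp):
    """Step 7: every element whose status is not green on both dimensions."""
    return sorted(
        name for name, e in bp.items()
        if e["co_innovation"] != GREEN or e["adoption"] != GREEN
    )

def showstoppers(bp):
    """Red lights anywhere on the map must be addressed before launch."""
    return sorted(
        name for name, e in bp.items()
        if RED in (e["co_innovation"], e["adoption"])
    )

print(problem_elements(blueprint))  # elements that need a plan to turn green
print(showstoppers(blueprint))      # red lights: fix them or redesign the blueprint
```

Because the blueprint is a live document (step 8), re-running this kind of check as statuses change keeps the red and yellow lights visible rather than assumed away.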

By making these relationships clear, the value blueprint forces everyone involved in the conversation to confront the challenges that lie beyond their own immediate responsibilities; to consider how they want to organize and address the risks that are inherent in every collaborative endeavor; and to deal with these issues proactively. Note that what matters here are the elements, not their ownership. When different elements come from the same firm, they must still be assessed separately.

It is rare for a significant innovation to start life with an all-green-light blueprint. It is also not necessary. Some yellow lights are acceptable, as long as they are accompanied by a plan to turn them green. Yellow lights are signs of delays to come, but they need not be showstoppers. Red lights, however, are a major problem. Any red light that appears on your map—whether because of a partner’s inability to deliver or unwillingness to cooperate, or due to a problem on your part—must be addressed. This can mean any number of scenarios, from managing incentives to finding a way to eliminate the troublesome link in your blueprint. Often, identifying the most promising path is an iterative process. Only once you have made the necessary adjustments can you confidently start your engines.

This is not to say that seeing all green guarantees success; you will still face all the usual unknowns of the market and its vagaries. Execution is critical. But unless you have a plan to get to green across the board, expect delays and disappointment even if you deliver your own part flawlessly.

The Elusive E-Reader

Let’s apply the value blueprint methodology to examine why Amazon and Sony achieved radically different outcomes in developing the market for e-readers, and how these outcomes were rooted in the starkly different approaches they used to construct their ecosystems.

Even before the World Wide Web, technology companies had been trying to figure out how to make books digital. As early as 1990, Sony introduced its Data Discman Reader. But with limited content (a small number of reference titles, novels, and the Yellow Pages), which was only available on Sony-published CDs, few consumers found the $550 device attractive. The Rocket, developed by NuvoMedia in 1998, was the first product to allow e-books to be downloaded from a PC. That same year the SoftBook, developed by SoftBook Press, arrived with an internal modem that made the PC unnecessary, and in 2000, Gemstar released two models that boasted backlit screens and long battery life. Each of these innovations furthered e-reader technology but was beset by limitations: they were too expensive, too clunky, and their eyestrain-inducing screens made reading on them a pain (literally). Simply put: they didn’t offer a better experience than a good old-fashioned paperback, and so customers saw no incentive to purchase them.

Figure 4.2: An early attempt at an e-book reader—Sony’s Data Discman (1990). (Alan Levenson / TIME & LIFE Images / Getty Images.)

Then, in 2000, in what many industry insiders viewed as proof that the electronic book was ready for the mainstream, online retailers sold 500,000 downloaded copies of Stephen King’s novella Bag of Bones in a mere forty-eight hours. All the big publishers—Random House, HarperCollins, Simon & Schuster, TimeWarner, and Penguin—launched digital imprints, hoping to take advantage of this new way to attract readers, and the following years saw growth in the e-book sector. Random House’s e-book revenues doubled; Simon & Schuster saw double-digit growth in e-book sales; Microsoft and Adobe competed to distribute software to support the new e-books. Yet no electronic reading device gained traction in the market, and e-books remained an R&D curiosity. As Carolyn Reidy, the president of Simon & Schuster, pointed out: “The hardware was not consumer-friendly and it was difficult to find, buy and read e-books.” Mass adoption by consumers remained elusive.

It was into this environment that Japanese electronics giant Sony launched its PRS-500 Portable Reader in September 2006. This new effort came two years after the failure of its Librié e-book in the Japanese market. There, the Librié was undone by the inertia of the Japanese publishers. Paltry content and intense digital rights management (DRM) that deactivated e-books after only a few months in order to reduce piracy sealed its fate. Sony had high hopes that the U.S. market would be more receptive to the value offered by its device. CEO Sir Howard Stringer noted, “We’ve been very cautious in launching [the Reader] because, as you know, it failed in Japan two years ago. This is a totally different version with totally different economics and software.”

Retailing at $350, the Reader was almost 20 percent cheaper than the Librié. It also boasted a brighter screen, longer battery life, and more memory. Users could choose from approximately 10,000 titles available at Connect.com, the online bookstore that Sony launched alongside the Reader. E-books could be downloaded (in Sony’s proprietary BBeB format) onto a PC and then transferred from the PC to the Reader via USB cable.

Sony’s Reader was a Lamborghini to the Model Ts of earlier attempts. Slim and lightweight, with a highly praised “electronic ink” technology that was as easy on the eyes as paper, it was touted as the iPod of the book industry. It achieved what no other reader had managed: a reading experience that approximated traditional print, with all the advantages (storage, search, and portability) inherent to digital media. The launch met with much fanfare from the press, where the Reader was hailed as “the electronic gadget that could change the way we read.”

So why, having delivered this exceptional device, did Sony fail to deliver on its promise? The answer lies in its value blueprint.

Finding Clues in Blueprints

Sony’s target customer was clear: mainstream book readers.

Its project (the Reader), its suppliers, and its intermediaries (the retailers that would sell the hardware) were clear as well.

Sony brought massive technology resources to the Reader project. It developed both the Reader hardware as well as a new DRM standard (BBeB) for managing content. It successfully partnered with cutting-edge suppliers like E Ink, the company that developed the remarkable screen technology. It leveraged its brand, marketing prowess, and existing distribution relationships to get the Reader into customers’ hands. At launch time, Sony had green lights across the board for its project, suppliers, and intermediaries.

But a great e-reader is not enough to complete the value proposition for customers. They also need something to read. Enter Sony’s complementors.

Sony’s plan for getting e-books to readers depended on bringing on board authors, publishers, and its own e-book retailer, Connect.com.

Even though some authors could have been convinced to issue e-books (yellow light), it was the publishers who controlled the flow of content. And publishers were problematic on both the co-innovation and the adoption fronts.

As co-innovators, publishers looked like reasonable partners. They would need to innovate, modifying their internal processes and systems to manage and package e-books. This was a technical hurdle but a manageable one. Publisher yellow light.

As adopters, however, publishers were highly ambivalent about whether and how to approach e-books. First, the economic and legal aspects of this new offering had to be hashed out: What is an e-book worth? What will the royalty payouts to authors be? How should the contractual language read? What would profit margins look like? The publishers—conservative firms clinging to a traditional business model—would not commit to e-books until these concerns were settled. And Sony was in no position to settle them. Publisher red light number one.

Second was the question of standards. Various e-book file formats were vying to establish themselves with publishers and hardware firms. They ranged from proprietary formats from giants like Adobe and Microsoft, to efforts by focused start-ups, to open-source proposals. But the very idea of having their copyrighted content in the digital wilderness—a hacker’s dream—sent shudders down the publishers’ spines. Competing DRM systems were making diverse claims and pushing different methods to protect this precious content, but the cacophony of approaches added to the confusion, lowering publisher confidence in committing to any one approach. Sony’s proposed DRM solution, the BBeB format, was thus just one more unproven option in a crowded field. Publisher red light number two.

Turning any one of these lights green would not be enough. Sony would need a clear path to turn all of them green before publishers would come on board in a meaningful way.

E-book distribution was a separate problem. Sony launched its own online retail outlet—Connect.com—to establish a retail foothold on the content side. But establishing an online store, and attracting both suppliers and buyers to make it a worthwhile venue in which to transact, is a very different challenge than creating a great piece of hardware. So even though Sony had a plan, it was far from clear that it would work. Yellow light; maybe red.

Throw in uncertainty about demand and you have room for a lot of debate, a lot of confusion, and a lot of hype but not a lot of progress. The Reader was a great device, but customers weren’t flocking to buy it. Combine this with a decade-long history of false starts, and where was the incentive for publishers to commit to Sony’s specific e-book vision? The traditionally risk-averse industry preferred to debate standards while taking a gradual, wait-and-see approach to digitizing their books. They much preferred the status quo of selling hard-copy books through their established online and brick-and-mortar retailers.

And as long as publishers held back, end consumers would be held back too. A lack of adequate content meant Sony’s Reader wouldn’t make a dent in the marketplace; slow sales of the device dissuaded publishers from swiftly resolving the myriad issues that would result in more content. At the time of the Reader’s launch, Nick Bogaty, executive director of the International Digital Publishing Forum (IDPF), noted: “I’ve always said that four factors need to be in place for the market to take off. You need a device that makes reading pleasurable, content at the right price, a great selection of content, and e-books that are easy to use.”

Sony got the first element right. But even at 10,000 titles, the available content on its online store was a haphazard collection—by way of comparison, a well-stocked independent bookstore carries upward of 50,000 titles, while the average Barnes & Noble superstore carries as many as 200,000 titles. Moreover, the price points for the e-books that were available were not low enough to convince readers to invest the initial $350 for the device. Although backlist titles went for as low as $4, the difference between the price of a best seller on Connect.com and the same hardcover discounted by conventional booksellers was negligible.

Figure 4.3: The Sony Reader value blueprint at launch.

Further reducing the benefit to users was the multistep process required for them to acquire an e-book. With a book already in mind, you would still have to search for an online bookseller, find the book on its Web site, purchase the title from an unfamiliar vendor, download the file to your PC, and then hook up your Reader for the transfer. For the bulk of consumers, it was easier to head to the local bookstore or order a hard copy online than to deal with a spotty inventory, an arbitrary backlist, an inconvenient process for getting e-books onto the Reader, and high prices. Sony’s advances in e-reader devices, while significant, were meaningless if the printed book remained the better experience.

As this chaos played out for electronic books, Sony sat on its golden egg and waited. And waited, and waited. . . . In the excitement of launching the PRS-500, it was clear that Sony was focused on delivering a great product as the key to unlocking the potential of e-books. But while the product was certainly a cornerstone, it was not the whole structure. As the value blueprint shows so starkly, a plethora of dependencies had to be managed for e-books to gain traction.

Had Sony’s management attempted to create a value blueprint during the early stages of its Reader development, they would have been forced to confront the fact that they had no clarity about how content was going to make its way onto the device, no matter the excellence of the hardware. The exercise would have forced a change in their path: find a way to eliminate the publishers’ red lights, reduce expectations for the launch, or drop the Reader and pursue another project.

The fact that the Reader was a green light for Sony did not offset the red lights elsewhere. And, without a clear plan for how to turn red to green, the Reader was dead in the water. As an e-book reader, Sony’s device was commendable; as an e-book solution, it was a flailing effort.

Sony’s journey in e-readers is a story of a great product waiting for a market to arrive. Unfortunately for Sony, when the market did finally emerge, it did so on Amazon’s terms.

The Kindle Conquers

As the publishing industry haggled over how to make e-books a winning proposition, Amazon entered the fray. In 2007, the largest book retailer in the world launched the Kindle, the innovation that finally brought e-books into the mainstream. As a device, the Kindle was regarded as inferior to Sony’s Reader. Described by one analyst as “downright industrially ugly,” it was larger than the Reader, weighed more, and had an inferior screen. Moreover, it was a closed platform: it could load content only from Amazon, and it precluded users from transferring the books they purchased to or from any other device, sharing with friends, or even connecting to a printer.

Figure 4.4: Amazon’s Kindle value blueprint at launch.

How could Amazon engineer a triumph with a weaker product? The company did it by engineering a solution. Take a look at Amazon’s value blueprint above. What is the primary difference between the approaches taken by Sony and Amazon?

For readers, the Kindle provided a one-stop shop, a simple, inexpensive way to purchase and enjoy anything from Jane Eyre to the latest New York Times best seller. Presenting the Kindle, CEO Jeff Bezos announced, “This isn’t a device, it’s a service.” Unlike Sony’s Reader, the Kindle offered a complete experience for the customer: an expansive library of books, initially including more than 90,000 titles and growing to approximately 330,000 within two years; the right price (while a new hardcover usually costs around $25, most Kindle books, including new titles and best sellers, were $9.99 or less); and the ability to download the book instantly using Amazon’s wireless network. Bezos explained his vision for a streamlined user experience: “You shop right from the device. . . . One of the reasons people are so excited about this device is because it doesn’t involve the PC. They don’t have that dread of ‘how am I going to get this to interoperate with my PC?’ It just works as a stand-alone device.”

It is easy to praise the value proposition. But, as evidenced by the initial excitement around the Reader, Sony had a compelling vision too. The key difference was the way in which Amazon aligned the ecosystem to bring its value proposition to life. Often overlooked, but critical to its success, is what Amazon changed on the back end to create its offer. As its value blueprint makes clear, in order to create this seamless experience, Amazon changed the way critical elements of the ecosystem were configured by both extending its successful position in retailing and simplifying the value proposition for all the other parties involved. A few yellow lights, yes, but a clear plan for turning them all green.

As one of the Internet’s biggest success stories, Amazon’s powerful retail platform gave it enough leverage to approach publishers with several innovations that would encourage the creation of digital books for Kindle. After all, this “King of the Retail Jungle” was responsible for approximately 30 percent of books sold in the United States. Publishers had to pay attention. But Amazon did not simply bully publishers into supporting the Kindle. Amazon created conditions in the ecosystem that made joining the long-awaited e-book revolution a more attractive proposition for publishers than any previous attempt.

First, Amazon tackled the DRM issue. The Kindle was both closed and proprietary, meaning users could not print their e-books, read them on another device, or share them with other people. While this restriction was a turnoff for consumers, it was critical to reducing publishers’ perceptions of risk and total cost in making their adoption decision. In the language of chapter 3, shifting readers from +4 to +2 is well worth the effort if it will shift the publishers from –1 to +1. In looking at the total ecosystem, Amazon made the wise choice to reallocate value to its weakest link, the publisher. This strong DRM system gave publishers a much-needed sense of security at a time when the dangers of piracy—as exhibited by the popularity of file-sharing sites within the music and movie industries—topped the list of their digital concerns.

Amazon also increased the relative benefit for publishers by effectively subsidizing their participation through a counterintuitive retail model. Traditionally, bookstores pay publishers a certain percentage of a book’s list price to acquire a title and then sell it to their customers for a profit. (So, if the list price of a book is $25.00, and the publisher charges the bookstore 50 percent, the bookstore pays only $12.50. If the store then sells the book at 20 percent off the list price for $20.00, it makes $7.50.) For e-books, Amazon paid the publisher 50 percent of the list price of the print version but then sold the e-book for $9.99. So, if the price of a standard hardcover at the time was $25.00, and Amazon paid the publisher $12.50, the company actually lost $2.51 on each e-book sold. To jump-start the e-book ecosystem, Amazon sacrificed some e-book profits up-front, but it was able to make up much of the difference in its sale of the $399 Kindle device (which, by some estimates, earned margins of $200 per unit).*
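The subsidy arithmetic in this example can be laid out explicitly. The following sketch simply recomputes the chapter’s own figures ($25.00 list price, a 50 percent publisher share, a $9.99 e-book price, a 20 percent hardcover discount) to make the reallocation of value visible:

```python
# Amazon's early e-book economics, using the figures cited in the text.
list_price = 25.00        # hardcover list price
publisher_share = 0.50    # publishers receive 50% of the print list price
ebook_price = 9.99        # Amazon's retail price for the e-book

payment_to_publisher = list_price * publisher_share   # $12.50 per e-book
loss_per_ebook = payment_to_publisher - ebook_price   # $2.51 lost per e-book sold

# Traditional bookstore economics for comparison:
# buy at 50% of list, sell at 20% off list.
store_cost = list_price * publisher_share             # $12.50
store_price = list_price * 0.80                       # $20.00
store_profit = store_price - store_cost               # $7.50 per hardcover

print(f"Amazon loses ${loss_per_ebook:.2f} per e-book")
print(f"A traditional bookstore makes ${store_profit:.2f} per hardcover")
```

The per-unit loss on content is what the estimated $200 margin on each $399 Kindle device had to absorb while the ecosystem got moving.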

In the short term, everybody was a winner: the publisher received the same amount it would have earned from a print version and saw a boost in sales; the customer enjoyed a cheaper, and some would say better, reading experience without sacrificing breadth of book choice; and Amazon emerged as the leader in the electronic book revolution. It was a position worth fighting for. According to Forrester Research, by 2015, U.S. consumers are expected to spend $3 billion on e-books. This forecasted growth is especially impressive given that, according to the IDPF, sales of e-books in 2007 were only around $10 million. But the Kindle’s entrance into the market lit a fire: by the end of 2010, e-book sales were fast approaching $120 million. By the time Amazon launched the Kindle 3 in 2010, it held 80 percent market share of electronic books and, with estimated sales of the Kindle at 6 million for that year, 48 percent market share of e-readers.

Deconstructing E-Book Value Blueprints

Sony and Amazon built their value blueprints using identical pieces but placed them in very different positions. In contrast to Sony, Amazon followed a blueprint that put it firmly in the role of integrator, bringing together all the various elements required for value creation itself, and delivering a comprehensive, intuitive experience to its customers. It took on far more responsibility for organizing the system than did Sony. While Sony assumed its red lights would somehow work themselves out, Amazon turned red to green by taking the lead and blazing a trail for the entire industry.

Amazon’s and Sony’s efforts to conquer e-books were the inverse of one another: Sony enjoyed competence in its hardware but was a stranger to the ecosystem; Amazon was well positioned in the ecosystem but was less competent with its hardware. The e-book ecosystem—like so many of today’s innovative efforts—is ultimately a system of interdependencies. Success would not be determined on the basis of a winning effort at any single point; it required moving the entire cohort of partners in the same direction. We will further explore strategies for successful ecosystem construction and expansion in chapters 7 and 8.

Sony’s hyperfocus on the hardware element left an enormous blind spot that ultimately undid its efforts. Its pioneering Reader may have been first to market in 2006, but by 2010, it was fighting to hold the number five spot in the e-reader marketplace.

In contrast, Amazon’s willingness to enter the fray with a plan to drive the entire system forward meant that e-books could finally gain traction, and do so on Amazon’s own terms.

The e-book market continues to evolve with the entry of new platforms, like Apple’s iPad and Barnes & Noble’s Nook, as well as with the unbundling of reading platforms from reading hardware, like the Kindle app. The competitive dynamics are sure to change as the ecosystem matures.

The one certainty throughout will be that, in the race between competing blueprints, the winners are the ones who have a plan to get to green across the board. Drawing a value blueprint is an exercise in discipline that forces you to construct the entire picture around your project at the beginning. It shows you where you have a coherent strategy, where you have inconsistencies, and where you are just hand waving (“Oh, that will eventually fall into place”). And because it gives you a clear view of all the elements and their status, the value blueprint allows you to manage your red and yellow lights from the get-go, rather than as a series of tactical adjustments in the face of go-to-market surprises.

To see why up-front clarity is so important, consider one of the most disappointing flops in the history of the pharmaceutical industry: inhalable insulin. See if you can find the point of breakdown—the blind spot moment.

The Promise of Inhalable Insulin

At the turn of the twenty-first century, pulmonary insulin was a darling of the pharmaceutical industry’s biggest players, and for good reason. There are more than 347 million diabetics worldwide. Of the 25 million diabetics in the United States, 4.8 million have to administer their own insulin, the vast majority of whom do so by needle injection (a tiny fraction use insulin pumps). By allowing patients to use an inhaler (similar to those used by asthma patients), pulmonary insulin administers the correct dose of insulin noninvasively—without a dreaded needle. It was an innovation that could reduce pain, add convenience, and enhance the quality of life of millions of diabetics around the world.

The excitement around pulmonary insulin was enormous. “We’ve never had such a response to anything we’ve done,” stated Dr. Jay S. Skyler, associate director of the Diabetes Research Institute at the University of Miami Miller School of Medicine and a lead investigator on a patient study. The popular press embraced the idea, with headlines like USA Today’s “Insulin Without Injections Nearly a Reality” and the London Times’s “The Potential for Inhalable Drugs to Change Medicine Is Breathtaking.”

It’s easy to see why the world was excited. Besides patients’ understandable desire to escape the needle, inhaled insulin promised greater compliance. More than 90 percent of diabetes patients suffer from type 2 diabetes, the onset of which is often due to poor lifestyle choices. Diabetes has a high level of deniability in its initial stages. The stigma associated with the disease, and the discomfort of daily insulin injections, means that there is often a five- to eight-year window between the time patients need insulin and the time they actually begin treatment. This new, noninvasive option would help patients accept their illness earlier, saving lives and societal costs through the avoidance of later-stage complications. (The estimated total annual costs of diabetes in the United States alone is over $200 billion.) A 2003 report from Pharmaprojects articulated these expectations: “These products, if approved, could expand the market for insulin to several times its current value as a result of patients being more willing to take the therapy if it is offered via inhalation rather than by injection.”

In 1998, the race to market for pulmonary insulin began in earnest. Novo Nordisk started development of AERx. Later that year, Pfizer and Aventis entered a joint venture to create Exubera. Eli Lilly entered the development race with a device dubbed Air. And in 2001, MannKind Corporation, a California biotechnology start-up, joined in with its own pulmonary insulin effort.

As clinical trials and market research were painstakingly conducted, enthusiasm grew and everyone in the sector was predicting a blockbuster win. In 2001, Morgan Stanley Dean Witter forecasted Exubera annual sales of more than $1.5 billion by 2009; three years later, Credit Suisse First Boston predicted the device would garner $1 billion annually by 2007.

Expectations were high, but developing inhalable insulin was a monumental task. Peter Brandt, Pfizer’s U.S. head of pharmaceutical business, stated the challenge: “Pfizer had to create the means to manufacture inhalable insulin, a substance that had never existed before. . . . Exubera is as much a manufacturing innovation as it is a breakthrough medical advance.”

Pfizer led the pack through a combination of its own breakthroughs and its rivals’ stumbles. “With a two-and-a-half- to three-year lead time, Pfizer will have a blockbuster product on its hands,” predicted Robert Hazlett, an analyst for SunTrust Robinson Humphrey. And on January 12, 2006, the company announced a business success to match its R&D prowess. Taking advantage of a contractual clause triggered by Aventis’s merger with Sanofi, Pfizer successfully sued for the right to buy out Aventis’s share in Exubera for $1.3 billion, gaining full control of the drug. This looked like a masterstroke when, fifteen days later, Exubera received regulatory approval in Europe by the EMEA, followed just one day later by approval from the U.S. Food and Drug Administration (FDA).

In its approval, the FDA excluded patients who smoked and patients who suffered from lung degradation or heart disease. The FDA also imposed a requirement that all patients have a pulmonary function test prior to initiating therapy to make sure their lungs were able to absorb the insulin, and it recommended a follow-up test six months after initiation and every year thereafter. Neither constraint was regarded as a problem.

Figure 4.5: Pfizer’s Exubera inhaler. (AP Photo / Mark Lennihan.)

Pfizer had won the race, and it had overcome major technological hurdles to do so. The project’s risk had been extraordinary—surely an extraordinary win was in order.

Exubera was far from a perfect product, a fact that everyone acknowledged. Indeed, all the pharmaceutical firms developing inhalable insulin solutions, as well as the Wall Street analysts, the broad health-care community, and the press were clear-eyed about a host of limitations. The first-generation inhaler devices were bulky, the powdered insulin was more expensive than the injected alternative, and the novelty of the approach meant that doctors and nurses would need extensive training to be able to teach their patients how to properly use the inhaler.

Still, each of these challenges had been identified years before Exubera’s launch, when it was still in early clinical trials. And each of these factors was regarded by Pfizer, the analysts, and the medical community as manageable. Yes, the early devices were regarded as bulky by a subset of patients, but knowing this, Pfizer already had a more elegant second-generation device well under development and had accounted for bulkiness when it set its sales forecasts. Yes, the insulin was more expensive, but this is the case for almost every new drug. Pfizer, no stranger to the vagaries of insurers and formularies, planned for Exubera to graduate to an increasingly favorable copayment tier over time, just as other successful insulin drugs had done in the past. Yes, the device required training, but Pfizer’s strategy called for an initial rollout targeting experienced endocrinologists and diabetologists who had “a wealth of experience with not just the use of insulin but the oral agents as well and, most importantly, with this patient population.” Its explicit plan was to first get this critical opinion-leading segment on board and only then, four to six months later, start rolling the drug out to general practitioners (GPs) and other nonspecialists who were less expert, more distracted, and generally less open to adopting new insulin therapies.

Pfizer had all these limitations clearly in view and built a strategy that would allow it to overcome them and prosper. At the time of Exubera’s launch, Pfizer confidently predicted sales of $2 billion by 2010. Meanwhile, pharmaceutical rivals Eli Lilly, Novo Nordisk, and MannKind were running as hard as they could to get to market and capture some of the spoils. Wall Street analysts saw these limitations as well and incorporated them into their forecasts. These nominally objective critics pushed back against Pfizer’s estimates. Both Morgan Stanley and Bear Stearns believed that high manufacturing and education costs would be a problem and estimated $1.5 billion in Exubera sales by 2010. WestLB was even more guarded, projecting “only” $1.3 billion.

Figure 4.6: Value blueprint characterizing Pfizer’s expected path to market for pulmonary insulin in 2005 as the company awaited regulatory approval (excludes pharmacies).

The big debate was whether Exubera would be a super blockbuster or just a blockbuster.

After the FDA approved Exubera for sale in January 2006, Pfizer made big investments in preparing the market: it developed a rich array of educational materials, set up a twenty-four-hour call center for patient support, and trained 2,300 sales reps in the intricacies of teaching and pitching Exubera to doctors and nurses. By October, the company had reached out to more than 5,000 endocrinologists and diabetologists—its target launch group. And in January 2007, Pfizer launched its “full-court press” to GPs and nurses.

Pfizer had a clear view of the challenges—and a clear plan to overcome them. Not a red light in sight.

Or so they thought.

Dead on Arrival

By the end of 2006, Exubera sales were “negligible.” Blaming the sluggish initial performance on educational and marketing hurdles, Pfizer looked forward to its full rollout in 2007 with its head held high. The company maintained its projection that sales of Exubera would reach $2 billion, although perhaps not by 2010, as it had previously stated. But by July of 2007, Pfizer reported that sales “continued to be disappointing.”

In October of 2007, Pfizer pulled the plug. Exubera was dead. Total sales: $12 million.

Pause for a moment to consider the difference between expectations of at least $1.3 billion and actual sales of $12 million. That’s a miss of more than one hundred to one: achieved sales amounted to less than one one-hundredth of the plan. Staggering. Exubera went down as “one of the most stunning failures in the history of the pharmaceutical industry.”

Pfizer’s exit was initially regarded as an opportunity by its rivals, whose inhalable insulin offers would address many of Exubera’s shortcomings. “The problem is it was kind of like generation zero,” said Mads Krogsgaard Thomsen, Novo Nordisk’s chief science officer. “All those [unfavorable qualities], the size, the number of steps to administer . . . have been addressed in our product.” Lilly president John Lechleiter was similarly confident in his firm’s efforts. “[Pfizer’s exit] really doesn’t diminish our enthusiasm for our product. We believe there’s a place for more convenient administration of insulin. As we have said all along, the device, the technology behind the approach that we’re taking, we think is going to be more convenient for patients, easier to use. We’re not backing away an inch . . . [there is] a big opportunity there for the person that can come along or the company that can come along with the right product.”

But within five months, both Novo Nordisk and Lilly would terminate their own inhalable insulin efforts. In the end, Pfizer wrote off $2.8 billion on its Exubera effort. This astronomical figure made Novo Nordisk’s $260 million loss on AERx and Eli Lilly’s $145 million loss on Air look like relative bargains.

It’s tempting to explain away Exubera’s failure as rooted in an imperfect product (the bulky inhaler), a burdensome training requirement (every patient would need to be taught, usually by already overworked doctors and nurses), or misestimations of needle phobia and improvements in insulin pens (which undermined the relative benefit of a noninjection-based option). It’s also tempting to blame a management team that perhaps fell in love with its own ideas and ignored all these warning signs.

But while these were contributing factors, they cannot explain the failure. Pfizer is a great company, renowned for its marketing prowess. Having worked with thousands of patients and scores of doctors throughout the years of Exubera’s clinical trials, having conducted countless focus groups, they knew better than anyone that some patients would embrace the inhaler, while others would balk until a smaller one was available (which is why they were already on their second-generation device). Of course they considered this in setting their expectations. As the world’s leading insulin pen producers, Novo Nordisk and Eli Lilly knew more than anyone about improvements in pens and patient reactions to needles but still raced forward with their own inhalable insulin projects. Of course they accounted for the drawbacks of inhaled insulin in their expectations. And as for Pfizer management falling irrationally (or politically) in love with a losing darling, recall that it took a major lawsuit to force Sanofi-Aventis to cede control of Exubera, which they too saw as “the next big thing.”

Pfizer, Lilly, and Novo Nordisk were all rushing toward the same goal. This was not a Pfizer miss; this was an industry miss. A failure this big and broad doesn’t come from problems with execution or misreading customer preferences. It comes—just like the failures in run-flat tires, HDTV, and 3G telephony—from a blind spot.

The Inhalable Insulin Blind Spot

Every other firm racing to deliver its own inhalable insulin envied the way in which Pfizer overcame the development and manufacturing challenges of both the drug and the device, the way it cleared the regulatory challenge in achieving FDA approval, and the careful (and costly) path it was carving to educate the health-care community about inhalable insulin. What everyone overlooked, however, was a subtle but critical change in the ecosystem that came about because inhalable insulin . . . needed to be inhaled.

When the FDA approved Exubera, it added one crucial caveat by requiring patients to undergo an FEV1 lung function test, which was performed on a device called a spirometer. The good news was that the FEV1 was very common and easy to administer—no co-innovation risk here. The bad news was that this made it all too easy to overlook its true implication.

Pfizer’s reaction to the news was telling.

Summarizing the market rollout plan for Exubera for Wall Street analysts, the head of Pfizer’s U.S. pharmaceutical business explained, “We’re starting with the targeting of physicians who are basically large users of insulin now, and therefore have a wealth of experience . . . with this patient population. That therefore, by definition, means you are going to have an awful lot of the endocrinologists in that group. That is, if you will, our first wave of the rollout . . . is to target those highly experienced, primarily endocrinologists, with things like the early experience or starter kits . . .” The plan was to first target endocrinologists and use their buy-in to support the second-wave push to GPs.

At the very same meeting, when asked about how Pfizer planned to respond to the lung function test requirement and the availability of spirometers, Dr. Michael Berelowitz, Pfizer senior vice president in charge of worldwide medical and outcomes research, replied, “As far as the pulmonary function testing required for Exubera is concerned . . . in primary-care practice, there is a requirement that physicians [GPs] be able to do pulmonary function in their patients with asthma and so on. So they have availability of this kind of equipment, and they have comfort with this kind of equipment. That is what we have heard from physicians as we have spoken to them and they become used to the idea. So we don’t see that as an issue.”

Can you see the contradiction? It went unnoticed before, during, and after the analyst call—by everyone. But when the plan hit reality, the disconnect was stark.

“We don’t see that as an issue.” They should have used a wider lens.

Figure 4.7: Value blueprint of the actual path to market for the first phase rollout to endocrinologists after regulatory approval (excludes pharmacies).

The lung function test would not be an issue for general practitioners. But Pfizer’s plan (which followed the tried-and-tested industry norm) hinged on specialists adopting the new product first. There was a critical disconnect between the way the two executives conceived of the plan. A clear value blueprint would have forced the right question to the surface: what does it mean for an endocrinologist to perform a lung function test before prescribing a treatment? Answer: something very different than it would mean for a GP to do the same.

While spirometers are standard equipment in a GP’s office, used to test for asthma, they have no natural place in an endocrinology practice. Thus, the specialist would need to refer the patient to another doctor, nurse practitioner, or lab, and then set up a second patient visit before treatment could begin.

Now consider the fact that there is an acute shortage of endocrinologists in the United States. Waiting times for appointments can be three, even nine, months. Indeed, many endocrinologists were so overbooked that they were not taking new patients.

The lung function test requirement means that, beyond assessing the endocrinologist’s reaction to the training challenge (which was accounted for), we need to consider how enthusiastic the doctor will be to delay patient treatment for weeks or months—and the patient’s reaction to the inconvenience of both the delay as well as the multiple visits. Not to mention the insurer’s view on paying for a second specialist visit.

In his postmortem debrief on Exubera in October 2007, Ian Read, Pfizer’s then-president of worldwide pharmaceutical operations (and current CEO), commented, “We clearly underestimated the barrier to moving patients or the physician community earlier to Exubera. I think this is one of the major issues we underestimated—the resistance from physicians and patients to going onto Exubera—going onto insulin in any form earlier than they have been to date. So that is one major barrier. The second one is, per se, the burden that the Exubera technology represented to the practice, which went from the lung function testing, the training on the device, and while the size of the device may have been a component of that, I think you have to look at the totality of it.”

Had it used a wider lens to look at “the totality of it” from the beginning, Pfizer would have seen the one piece of the puzzle it didn’t account for—the blind spot that was so easy to miss unless you were actively looking for breakdowns. This was the straw that broke the camel’s back.

Like Sony with e-readers, Pfizer succeeded in creating a technology miracle. Lilly and Novo Nordisk were close behind with products that looked even more promising. But what needed improving was not the product; it was the path. Like the significance of the garages in the run-flat case, the lung function test was easy to overlook precisely because it was already available. But there is a crucial difference between being available and being accessible. The moment that the lung function test became a regulatory requirement, success required a plan to solve the endocrinologist’s “lung-test loop.” And in the absence of such a plan, failure was virtually guaranteed.

As of May 2011, the only major firm still working on pulmonary insulin is MannKind, which has spent nearly $1 billion of founder Alfred Mann’s own money, convinced that the quality of both its insulin and its device is materially superior to that of any of its rivals. (MannKind’s MedTone device is one-tenth the size of the bulky Exubera, and its Technosphere Insulin System closely mimics the release of insulin a healthy individual experiences at the beginning of a meal.) Although the FDA denied approval of MannKind’s offering in January 2011, according to president and COO Hakan Edstrom, the company is “certainly resolved to pursue” its approval. Beyond good luck in developing a great product, I wish MannKind a clear vision in recognizing its ecosystem challenges and a way to manage them in advance of launch.

The Value of Value Blueprints

Sony and Pfizer both failed to appreciate the ecosystem structure that their strategies implied. The moral of their stories is that huge allocations of resources and deep wells of talent on their own cannot make up for red lights on the path to success. If your value proposition requires multiple parties to collaborate, building a deep understanding of the structure of collaboration is critical. The fact is, today’s complexities require a new up-front conversation that hardworking managers may initially see as overkill. But gone are the days when “Sure, we’ll get to that down the road” will suffice.

Creating a value blueprint is an exercise in team discipline. It forces you and your team to be explicit about your value proposition and about all the steps you’ll take to make it a reality. It forces you to see the issues before they become problems. The explicit steps of the exercise require you to ask questions that may otherwise be happily put off or pushed aside, or that you may not even be aware you need to ask. Who exactly is your end customer? The retailer who displays your product? The person who purchases it? The person who uses it? In what order should your partners act? Who passes what to whom? Who is the visible brand and who is the invisible cog? Where do the co-innovators enter the critical path? Who comes first and who comes next? Who is ready and willing? Who is ready but unwilling? Who is neither?

But creating a value blueprint is also an exercise in communication. It forces a dialogue that will bring your assumptions—and those of your colleagues and partners—out of the shadows and into the light. These questions should be asked early. You may be surprised at how often teams working on the same overarching goal have radically different visions of the path. In the absence of a structured way to articulate and visualize the plan, it is easy to talk past each other and overlook inconsistencies, contradictions, and disconnections.

Left unarticulated, contradictory visions don’t conflict until after commitments are made and pieces are brought together. But when the strategy meets reality, details become disasters. At that late point, of course, the result is disagreement and waste: disagreement about who misunderstood, overstepped, or is to blame; waste of both effort and time as teams race to rework, rejigger, and repatch the system.

Even when using these tools, there is still no guarantee that the blueprint you draw will be right. But by following the methodology in a dedicated, team-based effort, you can guarantee your best shot. Using a wide lens to harness and direct the collective insights of your partners makes falling victim to a blind spot far less likely. Having a disciplined approach at the beginning of a project allows you to see the impediments you’ll eventually have to confront anyway. It shines a bright light on the path between you and your end customer. Modify the notion of “If we build it, will they come?” to instead ask, “If we build it, how will they get here?” You will want to know if the answer is, “We’re not sure,” before you commit your resources.