Having seen the impact of the Software Paradox across a wide variety of organizations, from consumer to enterprise, startup to industry bellwether, the obvious question is how to respond. There is no one path forward, as the appropriate organizational response will depend on a number of variables, including the resources on hand, the current business model, market permission and the accessibility of adjacent markets, and so on.
There are, however, three recommended strategic considerations for organizations subject to the Software Paradox moving forward.
The first step to solving any problem is to acknowledge the problem, which in this case means accepting that the upfront, realizable commercial value of software is in a period of decline. Even if you work for a Palantir or a Splunk and your business is currently an exception to this trend, it’s useful to model the impact if only as a thought exercise. Many businesses, including successful ones, have been caught unprepared by unanticipated downturns in their ability to monetize software. As in the Innovator’s Dilemma, few have predicted this in advance based on the mechanics of their existing businesses. More problematically, the more successful their history of generating software revenue, the more difficulty they have in envisioning challenges to it moving forward.
As a result, the most important recommendation for organizations of all shapes and sizes is to anticipate worst-case scenarios at a minimum. Even in cases where organizations cannot or will not make some of the operational changes recommended below, the exercise of focusing on nonsoftware areas of a given business can help identify underrealized or underappreciated assets within an organization. Particularly for organizations for whom the sale of software has historically been low effort, brainstorming about other potential revenue opportunities is unlikely to be time wasted.
One vendor in the business intelligence and analytics space has privately acknowledged doing just this; based on current research and projecting current trends forward, it is in the process of building out a 10-year plan over which it assumes that the upfront licensing model will gradually approach zero revenue. In its place, the vendor plans to build out subscription and data-based revenue streams. Even if the plan ultimately proves to be unnecessary, the exercise has been enormously useful internally for the insight gained into its business.
Whatever the outcome of the previous exercise, it is important for every business to be continually moving in the direction of value. In an industry that prides itself on speed of innovation and change, and whose history demonstrates as much, it’s counterintuitive that the conventional wisdom equates current market success with future market success. But this is, for better or for worse, typical. It is important to actively counteract this dangerous contentment with the status quo by continually evaluating the actual value of a market, whether by way of quantitative metrics like margin or more qualitative assessments of untapped opportunity or inbound risk.
Quantitative metrics will be of most use in existing businesses, where comprehension of a given market is high. They will, however, be less reliable in new markets. The best example of this is perhaps Apple’s experience in the tablet space. The company’s assessments of the opportunity for Newton-like devices were, in hindsight, clearly overoptimistic, to the extent that the company drove itself dangerously close to the breaking point. Almost the exact same product space would prove fantastically lucrative a decade later; the iPad revenue stream alone is a Fortune 500–type business. In a best case scenario, businesses will continually monitor, both quantitatively and qualitatively, their individual product lines for signs of a decline, and have plans in place to react when it arrives.
The case study for this behavior within the technology industry is IBM. Steve Mills, IBM’s senior software executive, regularly describes “moving toward value” as a key component of the company’s overall strategy—an assertion that is borne out by the company’s regular decommitments from underperforming markets like PCs or x86 servers, markets that may still be profitable but are unable to yield the types of margins the company prefers.
However value is assessed, ultimately, organizations need to be prepared to move toward it. If disruption has not come to your software market yet, it is on the way. And as the saying goes, if you don’t find yourself a seat at the table post-disruption, you will be the meal.
Wherever possible, it’s useful for businesses to hedge themselves against the potential for declines in their business. Importantly, this does not involve the abandonment or deprecation of existing software revenue lines. Quite the contrary: these should be maximized for as long as may be sustained. Diversification of revenue sources, however, is a long-proven strategy for weathering unexpected disruptions to one or more lines of business. Software-dependent organizations, therefore, should be actively working to identify adjacent or emerging revenue opportunities that could complement or even outperform their existing software businesses. The most common pattern of model expansion, in fact, will be organizations using their software margins to effectively subsidize the generation of the business models that will complement them in a best case scenario, or replace them in a worst.
The highest profile example of this in practice today may be Microsoft. Even as it was relentlessly generating revenue by way of its flagship software offerings, it was pouring money into its own cloud infrastructure. According to the company, it has spent $15 billion on its cloud infrastructure to date, with no signs of the investments slowing. This is an enormous expense, particularly relative to the costs of developing software, but it is the scale that’s necessary to be competitive in this market: Google spent $2.35 billion in the first quarter of 2014 alone according to its financials. But if Microsoft can efficiently generate revenue using the lower expense model of software, why would it feel compelled to spend so freely to compete in the services world? The only logical explanation for the level of commitment is that the company has projected or at least anticipates the possibility of disruptions to its core revenue streams, and is diversifying the business ahead of these challenges.
It should be noted that this is good practice even if one finds the evidence suggesting a broad-based decline in commercial software businesses unpersuasive. The fact is that the majority of software businesses today are leaving money on the table by focusing strictly on the production and delivery of software at the expense of other customer needs, whether that’s operational assistance (services), improved decision-making (telemetry analytics), or the ability to amortize capital outlay over longer periods of time (subscription models). Irrespective of what software organization leaders might think of the long-term forecast for software as a revenue-generating asset, it is irresponsible not to pursue additional avenues of growth for the business, or not to attempt to protect the organization by diversifying its revenue-generating abilities.
Beyond the above exercise, which requires detailed consideration of abstract principles and their precise relation to your business, what are some specific models to explore that can mitigate any decline in software-related revenues while opening up net new lines of business? There are too many to detail in these pages, so obvious candidates such as advertising-supported business models are omitted. The following are business models that every producer of software, be it an organization of hundreds of thousands of people or a two-person startup, should consider.
The simplest transition for many who would sell software, from a logistical standpoint if not a public relations one, is to transition customers to subscription models. For enterprises, this is already typical for support and maintenance, and in many cases licensing. At Red Hat, for example, 87% of the company’s revenue is subscription based. Even in the consumer world, the model is not without precedent.
As common as the model might be, however, as the Adobe case clearly demonstrates, users may resist the idea of a subscription. There’s no getting around the fact that there is, at least on a consumer level, some discomfort with the idea of renting rather than owning software. Some even compare the practice to sharecropping. Nor is this attitude entirely without justification. Most obviously, renters can have software taken away from them, while buyers are at least guaranteed access to the version they purchased. More importantly, customers will tend to pay more over the long term for subscription software versus that which is purchased up front. The delta varies by type and category, but in general, the reason businesses shift to annuity-style payments versus upfront windfalls is that the former have greater long-term value.
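The arithmetic behind that shift can be sketched with a few lines of code. All prices, upgrade counts, and durations below are hypothetical illustrations, not figures from any actual vendor; the point is only that recurring payments typically overtake a one-time license over a multiyear customer lifetime.

```python
# Illustrative comparison of upfront licensing versus subscription
# revenue over one customer lifetime. Every number here is a
# hypothetical assumption for demonstration purposes.

def upfront_revenue(license_price: float, upgrades: int, upgrade_price: float) -> float:
    """Total revenue from a one-time license plus paid upgrades."""
    return license_price + upgrades * upgrade_price

def subscription_revenue(monthly_fee: float, months: int) -> float:
    """Total revenue from a monthly subscription held for `months`."""
    return monthly_fee * months

# A customer who buys once and pays for two upgrades over five years...
one_time = upfront_revenue(license_price=300.0, upgrades=2, upgrade_price=120.0)

# ...versus the same customer paying $15/month for those five years.
recurring = subscription_revenue(monthly_fee=15.0, months=60)

print(f"Upfront model:      ${one_time:,.2f}")   # $540.00
print(f"Subscription model: ${recurring:,.2f}")  # $900.00
```

Under these assumed numbers, the subscription customer is worth roughly two-thirds more over five years, which is precisely why vendors accept a smaller initial payment in exchange for the annuity.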
But the fact is that a variety of markets are trending toward rentals. Today, millions of users all over the world forgo purchasing music in favor of monthly subscription fees to large catalogs such as Pandora, Rdio, or Spotify. Millions more have given up the purchase of DVD or Blu-ray discs in favor of online media streamed by Amazon or Netflix. Virtually every commercial SaaS application, consumer or enterprise, is purchased via subscription. Even in the mobile world, hundreds of apps have abandoned upfront pricing in favor of subscription-like in-app purchases. Even better, from a user’s perspective, is how the subscription model incents a different model of development. Traditionally, software manufacturers are forced to choose between improving a particular version and holding new features back to improve their chances of persuading consumers to upgrade to the next version. Under a subscription model, a customer’s desire for up-to-date software with the latest features is perfectly aligned with the vendor’s need to minimize churn by continually demonstrating value. Even in cases such as Microsoft Office, where additional features may be little or no incentive to subscribe, integrations with backend services can make up the difference.
The net for businesses that continue to charge an upfront, one-time licensing fee is that they should at least evaluate the possibility of transitioning customers to monthly subscription models. From enterprises increasingly paying for their infrastructure on a monthly basis to consumers increasingly subscribing to their media—or in some cases their software—the trajectory of payment models is clear. Software may be more difficult to sell, but it’s generally a better and more viable proposition when sold over time.
If a transition from upfront licensing to a subscription model involves the least amount of organizational effort, embracing a SaaS model may involve the most. Most pure-play software organizations today have some operational infrastructure competency, even if it’s just for build and test purposes. But very few—even among larger, well-resourced players—have the current ability to create a production-quality hosted version of their product. In a historically tight hiring environment, after all, it can be difficult enough to hire the engineers necessary to develop the software; finding those with the operational skills to host it, as well as to design the requisite billing, account management, etc., pieces that transform it into a SaaS offering is exponentially more so.
Unfortunately, in spite of these difficulties, hosting a given piece of software is becoming necessary in an increasing number of categories. More often than not today, availability and convenience will trump features and performance. Much as MySQL once enjoyed an adoption advantage over PostgreSQL simply by virtue of being the only one of the two available in the Linux repositories, so too today does software accessible as a service have an advantage over that which must be downloaded, installed, and configured—even if the latter is open source. In cases where it’s not practical or possible to host the software for production, offering a trial, sandboxed environment like Cloudera Live can be an enormously useful recruitment tool. Another avenue to monetize services is hosting a complementary service such as MongoDB’s Monitoring and Management service; such value-add services can be an excellent blend of software and service-based models.
It is also worth noting that sustaining a service-oriented software offering does enjoy some advantages over traditional distributed models. As Andreessen Horowitz’s Preethi Kasireddy and Scott Kupor describe, development and support costs can be substantially lower for SaaS businesses.
In a perpetual license business, the R&D (and support) teams are often maintaining multiple versions of the software, with multiple versions running in the wild. Even Microsoft had to finally—12 years later—deprecate its support for Windows XP, despite all sorts of customers from ATM operators to federal, local, and international governments mourning the loss.
This generally doesn’t happen in SaaS because all customers are running on the same hosted version of the software: one version to maintain, one version to upgrade, one version on which to fix bugs, and one physical environment (storage, networking, etc.) to support. Given that software companies at maturity often spend 12–15% of their revenue in R&D, this cost advantage is very significant and further enables SaaS companies to be even more profitable at scale—particularly if they use multi-tenant architectures. Not to mention that this simplified hosting and support model is the very linchpin for long-term SaaS customer success and retention, especially as compared to the buy-but-don’t-use “shelfware” behavior that characterizes perpetually licensed enterprise software.
— Preethi Kasireddy and Scott Kupor
The costs and challenges notwithstanding, the future is services. Perhaps the best example of the industry’s march in this direction is the public cloud. In almost every case, a physical server will outperform the virtual equivalent offered up by public clouds. And yet the adoption of public cloud has been sufficient to force Dell to go private, IBM to decommit from the x86 server market entirely, and HP to try to charge for firmware upgrades. This is the power of convenience. Much as the camera you have with you is better than the high-end SLR that’s too heavy to carry around, developers—the new kingmakers within the enterprise—are heavily weighting time to productivity when it comes to technology selection.
Which means that software providers need to adapt to a market that isn’t just evaluating the capabilities of their offering, but how quickly it can be spun up. Screaming performance and differentiated features are wonderful, but as technologies from MySQL to MongoDB have amply demonstrated, they are far from the be-all and end-all. The most dangerous belief for any software company today is that the solution to its adoption problem lies in better software engineering.
The solution to problems of adoption is not a better product, but a focus on barriers to adoption. Which in many cases means offering the software as a service, daunting as that task may seem. Organizations with the ability to both develop and host their software will be far more insulated from any software-related revenue declines than pure-play competitors, which is why it’s useful for every software organization to at least talk about the possibility of developing the capability internally or tightly partnering externally.
For many years, as Basecamp’s Jason Fried reminds us, lumber companies treated sawdust, the byproduct of their operations, as industrial waste. Worse, the waste was a hazard. Besides being highly flammable and thus a potential cause for fire or explosion, sawdust is a known carcinogen, bacterial vector, and can have detrimental effects on local water systems. Not surprisingly, then, lumber mills had little love for sawdust. At least until they learned that they could sell it.
In searching for a use for the scrap wood left over from one of his factories, Henry Ford decided to use it in the manufacture of charcoal briquettes, which the subsequent Ford Charcoal Briquettes company did from 1921 until it was sold to the Kingsford Chemical Company in 1951. But charcoal was just one use for sawdust.
The lumber industry sells what used to be waste—sawdust, chips, and shredded wood—for a pretty profit. Today you’ll find these by-products in synthetic fireplace logs, concrete, ice strengtheners, mulch, particle board, fuel, livestock and pet bedding, winter road traction, weed killing, and more.
— Jason Fried
The software equivalent of sawdust today is data. Every second a piece of software runs, every time it’s deployed, every time a user interacts with it, every time a transaction is completed, interesting and potentially valuable data is generated. Today, however, only a small number of companies are leveraging this data in any systematic, meaningful way outside of categories such as web analytics, where the practice is common.
The innate appetite for this information, however, is immense. One of the operating principles behind the success of wearable fitness platforms like the Fitbit or the Jawbone UP is the Hawthorne Effect. Named for a study of manufacturing worker productivity, it suggests simply that humans perform better when they know they are being observed. Today we can see the implications of this as companies give customers the ability to compare themselves against a baseline of other users, as in New Relic’s Application Speed Index, which allows a given customer to compare their performance against similar customers in an anonymized fashion.
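The mechanics of such a baseline comparison can be sketched simply. This is not New Relic’s actual methodology, which is not public in the text; it is a generic illustration of ranking one customer’s metric against an anonymized pool of peer measurements, with all values hypothetical.

```python
# A minimal sketch of anonymized baseline comparison: rank one
# customer's response time against a pool of peer measurements,
# without revealing which peer produced which value. Data and
# method are illustrative assumptions only.

def percentile_rank(value: float, peer_values: list[float]) -> float:
    """Fraction of peers this value beats (lower response time is better)."""
    if not peer_values:
        raise ValueError("need at least one peer measurement")
    beaten = sum(1 for v in peer_values if value < v)
    return beaten / len(peer_values)

# Anonymized peer response times in milliseconds (hypothetical sample).
peers = [120.0, 95.0, 210.0, 340.0, 180.0, 150.0, 400.0, 88.0]

ours = 100.0
rank = percentile_rank(ours, peers)
print(f"Faster than {rank:.0%} of comparable applications")  # Faster than 75% of comparable applications
```

The value to the customer comes entirely from the aggregate: no single peer’s data is exposed, yet every participant gains a benchmark that none could construct alone, which is exactly the Hawthorne-style feedback loop the text describes.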
Data-based revenue models are certainly not new; Acxiom, Bloomberg, Fair Isaac, LexisNexis, and others have built large revenue streams off of controlled, borderline monopoly–level access to data streams. Today, however, data is everywhere, which means that the opportunities to monetize it have multiplied exponentially.
Perhaps the most attractive feature of data-based business models, however, is the degree to which they can function as a moat or barrier to entry around a business. This is a lesson that Apple inadvertently taught the industry during the launch of its Maps application. Aesthetically, and arguably functionally, Apple’s Maps software eclipsed Google’s offering virtually overnight. Unfortunately for Apple, mapping applications are dependent on the corpus of data behind them, and Google’s was and is substantially superior. While it was possible, then, for Apple to make up ground in software very quickly, doing so in the world of data was substantially more challenging, even for a company of its resources. There are no shortcuts, as data simply cannot be generated overnight. There are only two means of producing it: collecting it over time, or acquiring an entity that has done so. Software, then, is a thin shield against would-be market entrants; organizations that amass a large body of data from which to extract value for themselves or their customers are far better protected against even the largest market incumbents.
Every software organization today should be aggregating data, because customers are demanding it. Consider, for example, online media services such as Netflix or Pandora. Their ability to improve their recommendations to customers for movies or music depends in turn on the data they’ve collected from other customers. This data, over time, becomes far more difficult to compete with than mere software. Which likely explains why Netflix is willing to open source the majority of its software portfolio but guards the API access to its user data closely. Over in the enterprise world, Cloudera is using its own Hadoop infrastructure to aggregate customer data to inform its own support approach, and in the consumer electronics space, Nest expects revenue from its data-oriented utility provider business to eventually eclipse the sales of its primary product, the Nest thermostat.
Even for businesses that lack a cohesive plan for using their data, the resources to really put it to work, or both, it is imperative to at least begin collecting that data as soon as possible. It is always possible to create a plan and the software to execute it later. Data not collected, however, cannot be conjured on a whim.
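The minimum viable version of “start collecting now” is little more than an append-only event log. The sketch below is a hypothetical illustration: the field names, event taxonomy, and file format are assumptions, not any specific product’s schema, but a loose, timestamped record like this is enough to preserve optionality until an analysis plan exists.

```python
# A minimal, hypothetical telemetry sketch: append product events to a
# JSON Lines log now, so the data exists when a plan materializes later.
# Schema and event names are illustrative assumptions only.

import json
import time
import uuid
from pathlib import Path

LOG_PATH = Path("events.jsonl")

def record_event(event_type: str, properties: dict) -> dict:
    """Append one timestamped event; the schema can stay loose for now."""
    event = {
        "id": str(uuid.uuid4()),       # unique event identifier
        "ts": time.time(),             # Unix timestamp of the event
        "type": event_type,            # e.g. "deploy", "transaction_completed"
        "properties": properties,      # arbitrary event-specific detail
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Capture the kinds of moments the text describes: deployments,
# user interactions, completed transactions.
record_event("deploy", {"version": "2.4.1"})
record_event("transaction_completed", {"amount_cents": 1299})
```

Because each line is independent JSON, the log can later be replayed into whatever warehouse or analytics pipeline the eventual plan calls for; the scheme commits the organization to nothing except having the data.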
The primary difficulty for many software producers, particularly those that have experienced a great deal of commercial success, is that they begin to lose the ability to differentiate between software and revenue. History, of course, demonstrates conclusively that this is a problematic approach. Software that 10 years ago would have had a seven-figure price attached to it is today available for free as open source. Certainly there remain areas where software commands a very high price, but the number of these opportunities is smaller by the year as the portfolio of open source solutions improves in both quality and volume.
In such a climate, the more appropriate way to think of software is as an organizational asset: nothing more and nothing less. Looking at software without assuming monetization can allow more strategic opportunities to emerge.
In spite of the acquisition cost of OTI, for example, and an additional $40 million invested in the platform, IBM made the decision to open source the Eclipse platform, at once making it more difficult to monetize and available to competitors. Why would it take such a risk? Because the perceived benefits, from a broader, more stable community to increased pricing pressure on a competitive product, Microsoft’s Visual Studio, outweighed the costs. This step was only possible, however, because the company considered the software an asset to be leveraged, as opposed to revenue incarnate.
Why would Google, for its part, openly publish details of its MapReduce apparatus and the Google File System, which Doug Cutting and Mike Cafarella would later use to create Hadoop? Because one of the biggest challenges for software organizations is hiring. Had Google kept the details of MapReduce and the Google File System private, it would have been impossible to assess those skills in external candidates. Worse, each new hire would have to be exposed and on-boarded to a very different programmatic approach. By thinking about software less as something to be protected, then, Google was able to publish details that had a chance to dramatically improve the efficiency of its recruitment and training, which collectively would easily offset the cost of giving its competitors insights into innovative internally developed technologies.
The key realization for any organization, then, is to not elevate software to an untouchable status. If every business has three types of customers—those that will pay, those that might pay, and those that will never pay—it’s possible to use software to extract real value even out of customers that will never in fact be paying customers. Software can be used for direct monetary gain, to be sure, but used strategically it can accomplish things money could never buy. When it comes to the value of software, then, remember to keep an open mind.
More relevant, generally, to smaller businesses than to larger entities, the full stack startup was mentioned previously in the context of the Nest case study. The idea is similar to classic vertical integration but narrower in scope. Whereas classic vertical integration stories such as automotive manufacturing extend deep into supplier territory, as with Ford manufacturing its own steel, full stack startups are those whose focus extends to each layer necessary to deliver the desired user experience. Their equivalent of manufacturing steel—owning and maintaining the underlying technical infrastructure—may have no bearing on their ability to target the opportunity, and as such, many full stack startups are content to effectively outsource their infrastructure to public cloud suppliers or other infrastructure specialists.
But realizing that the experience will be shaped by factors beyond just the software, full stack startups build or acquire competencies in all areas necessary to shape the user experience. The disadvantages of the process are primarily effort and cost centered. While the costs of developing software have plummeted in recent years thanks to a combination of open source software, public cloud infrastructure, and free or low-cost SaaS applications, the same is not true of nondigital startups. As James Park, CEO of Fitbit, told the Wall Street Journal:
If you are releasing software, you can do multiple deployments, and constantly tweak it. With hardware, you make your bet a year-and-a-half in advance, then you live with it. Mistakes can be expensive. Nowadays things are easier, because of Kickstarter and things like that. This is a capital-intensive business. I would tell others to maximize things like crowdfunding.
— James Park
These costs notwithstanding, depending on the area of opportunity, a full stack startup might be the only realistic approach to a given market. As Dixon said when he coined the term:
Prominent examples of this “full stack” approach include Tesla, Warby Parker, Uber, Harry’s, Nest, Buzzfeed, and Netflix. Most of these companies had “partial stack” antecedents that either failed or ended up being relatively small businesses.
— Chris Dixon
It’s difficult to conceive of how companies like Nest, Tesla, or Uber could have achieved what they have, had they taken a software-only approach to a given market. In some cases, such as Netflix, it’s difficult to imagine them existing at all absent this approach—imagine if the company had to wait for studios to license its technology to stream their media.
None of which is to say that the full stack approach is going to be the correct one in every setting, just that it’s an important question to ask as software strategies are shaped. It’s even possible for a full stack approach to graduate to true vertical integration, as in the case of Apple. Apple has long been an adherent of the full stack philosophy, delivering a tightly integrated experience that it controls top to bottom, even if it outsources the actual manufacturing. As it moved into designing its own chips late in the last decade, however, it extended that philosophy even further into true vertical integration territory.
The most important consideration, integration semantics aside, is to determine what a business needs to control in order to deliver value to a customer. It is from there that everything else, strategy included, follows. Software may be the most important single component, but if it’s one of many, the wider strategy needs to take that into account.