If you can't measure it, you can't manage it.
—Peter Drucker
It's all about the Benjamins, baby.
—Puff Daddy
Whether you take your leadership lessons from Drucker or Diddy, you know that a huge part of the CCO's job is to help your colleagues understand the “scoreboard” for measuring Customer Success.
Over the years, we, as leaders, have developed various methods to quantify most aspects of business such as:
And yet, with all of these statistics, we still can't measure what is usually the greatest “hidden” asset in our business—our customer base. How are we doing with clients? Are we delivering value for them? Are they likely to stay with us? Are they fans of ours?
If you've studied the field, the Net Promoter Score was created to partially address these questions. But with the trend toward digital transformation, companies are awash in data about their clients that they could be using to measure client health.
Customer Health Scoring is the concept that you can integrate various signals about your clients in order to quantify your customer base. Our goals for this chapter are to show you how to build a Customer Health Scoring framework and point out what mistakes to avoid.
But just to motivate you first, let's go back to Kevin Meeks from Splunk. We asked Kevin how he sleeps at night running such a huge Customer Success and Renewals Organization.
There's the fast way to do Customer Health Scoring, and the super-thorough way. We'll start with the fast way—which is a more-than-stellar starting point. But if you're an advanced CS leader and you want to be challenged to think more deeply, you can skip ahead to the thorough version.
The most important Customer Health Score you can create is one that signals the degree to which the customer is getting value from your product. In other words, are they getting that Outcome that we began to discuss in Chapter 6? You'll want the best measure of ROI that you can find.
You might have a measure of ROI that your product tracks automatically. For example, we spoke with an invoice management company whose clients wanted to reduce the amount of time it took to collect on invoices from their own customers. The company's ROI metric is “reduction in number of days to collect on the invoice.” The greater the reduction, the greater the Outcome for the client.
Not every business has a clear ROI metric, though. In that case, you'll want to pick a measure of product adoption that is a strong proxy for whether clients are getting value. This metric isn't likely to be the sheer number of logins or page views; rather, it should capture the kinds of behaviors that you expect successful clients to perform in your product. If the ultimate milestone of client activity in your product is to create a dashboard, then that dashboard creation can become your ROI proxy. If you don't actually have product telemetry (adoption data), then you can use a measure of client engagement that you believe is a strong signal of ROI. It could be the client's usage of the online community or portal, or their engagement with marketing emails. The point is to take a stand on a single metric that's a proxy for ROI.
Using that ROI metric or proxy, you can create a Customer Health Score that uses a traffic-light analogy: green when the metric is strong, yellow when mediocre, and red when problematic. If you want to get even more granular in your assessment of client health, include additional colors—light-green in between green and yellow, and orange between yellow and red. Then define the metric's range for each color.
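To make this concrete, here's a minimal sketch (in Python) of mapping a single ROI metric onto the traffic-light scale described above. The metric and the threshold values are hypothetical; substitute the ranges you define for your own business.

```python
# Minimal sketch: map an ROI metric (or proxy) onto a traffic-light scale.
# The example metric and threshold values are hypothetical; replace them
# with the ranges you define for your own business.

def health_color(roi_metric: float) -> str:
    """Return a traffic-light color for a single ROI metric.

    Example metric: percentage reduction in days-to-collect on invoices.
    """
    if roi_metric >= 30:     # strong outcome
        return "green"
    elif roi_metric >= 20:   # trending well
        return "light-green"
    elif roi_metric >= 10:   # mediocre
        return "yellow"
    elif roi_metric >= 5:    # concerning
        return "orange"
    else:                    # problematic
        return "red"

# Usage: a client that cut collection time by 22% scores "light-green".
print(health_color(22.0))
```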
The nice thing about creating a Health Score that's based on ROI is that it tends to correlate with other metrics that matter, even if not always (and if you prefer to achieve perfection, move on to the next section in this chapter). Clients with a green ROI metric are likely to renew at faster rates (because they can justify the renewal to their CFO or Procurement department), expand at faster rates (because they have proof of success with you), and advocate more frequently (because they have real success stories to share).
Let's say you've got the fast way under your belt and a little complexity doesn't scare you. Read on, friend, or skip to the next chapter.
As with other areas of measurement, we'd break the value of quantifying your customers into three conceptual buckets. We've drawn an analogy between CS and Sales in Figure 24.1, to give you the gist.
Did you pay close attention to the table? Did something weird jump out? Did you notice that we said “Customer Health Scores,” not “Customer Health Score?” One of the biggest mistakes companies make when implementing Customer Health Scoring is thinking everything can be distilled down to one number.
One way to think about it is to consider the old parable of the Blind Men and the Elephant, as shown in Figure 24.2.
Customer Health is the “elephant.” But there are many views into health and each is like one of the people grasping at the elephant. In our clients, we see seven common types of views into Customer Health:
| Value | Sales | Customers |
| --- | --- | --- |
| Report | Report on trend of overall bookings and pipeline to the company, board, and investors. | Report on trend of overall Customer Health Scores to the company, board, and investors. |
| Incent | Incent sales reps with commission based upon bookings. | Incent team members managing clients (e.g. Customer Success Managers, Account Managers) toward growth in Customer Health Scores. |
| Act | Take action on individual “deals” in the pipeline to convert them to bookings. | Take action on customers where you can improve Customer Health Scores in some way. |
Clients provide multiple areas of value to vendors, and you should measure these separately. As we discussed in Chapter 5 on growth, for a typical vendor the desired outcomes include:
So a Vendor Outcomes Scorecard could have the following top-level dimensions:
And here's the confounding thing that you know if you've managed clients for a long time: Clients can be guaranteed to stay with you near term (because they are stuck) and still be a negative advocate (a detractor). Clients can be engineering you out long term (not “sticky”) while planning to expand in the short term. Clients can be about to churn (Retention Risk) and be an advocate! It's important to separate out the various “outputs” of a client relationship into separate metrics.
As an example, see the sample Vendor Outcomes scorecard in Figure 24.3. We've created “groups” for Retention, Expansion, and Advocacy, with sample indicators for each:
Retention Indicators
Expansion Indicators
Advocacy Indicators
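As a rough illustration of keeping those groups separate, here is a sketch of a grouped scorecard structure. The indicator names and the sample scores are hypothetical, not the ones from Figure 24.3.

```python
# Hypothetical Vendor Outcomes scorecard grouped into Retention, Expansion,
# and Advocacy. Indicator names and sample scores are illustrative only.

VENDOR_OUTCOMES = {
    "Retention": ["renewal_likelihood", "support_escalations", "sponsor_change_risk"],
    "Expansion": ["whitespace_remaining", "open_upsell_opportunities"],
    "Advocacy":  ["nps_response", "referenceability", "case_study_participation"],
}

def group_scores(indicator_scores: dict) -> dict:
    """Average each group's indicator scores (0-100), keeping Retention,
    Expansion, and Advocacy as separate outputs rather than one blended number."""
    return {
        group: round(sum(indicator_scores.get(i, 0) for i in indicators) / len(indicators), 1)
        for group, indicators in VENDOR_OUTCOMES.items()
    }

sample = {"renewal_likelihood": 80, "support_escalations": 60, "sponsor_change_risk": 90,
          "whitespace_remaining": 40, "open_upsell_opportunities": 55,
          "nps_response": 70, "referenceability": 100, "case_study_participation": 30}
print(group_scores(sample))   # {'Retention': 76.7, 'Expansion': 47.5, 'Advocacy': 66.7}
```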
In general, vendor-centric scorecards are the types you'd want to share internally or with your board (or your CFO).
Pivoting a different way, some companies may want to measure client risk by functional owner—so you can define clear accountability by department (see Figure 24.4). Examples could include:
On the positive side, some companies want to easily expose opportunity—or whitespace—for their sales team. You can imagine taking each product/service area and using logic to define the unsold opportunity to sell that offering into the given customer. For example, imagine you have two product lines:
You could define rules for what you expect a client to purchase:
You could have further overrides based upon health. If a client has risk issues in a given product, the expansion score for that product could be set to “NA” until the issues are resolved.
You can then put a very sales-friendly view in front of your reps of the “selling opportunity” in their accounts, as shown in Figure 24.5.
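Here's a minimal sketch of that kind of whitespace logic for two hypothetical product lines, including the health-based override described above. The rules are purely illustrative.

```python
# Sketch of whitespace ("unsold opportunity") logic for two hypothetical
# product lines, with a health-based override. Rules are illustrative only.

def expansion_opportunity(client: dict) -> dict:
    """Return the expansion status per product line.

    Simple rule: anything the client doesn't own yet is whitespace, but a
    product with open risk issues is marked "NA" until those are resolved.
    """
    status = {}
    for product in ("product_a", "product_b"):
        if client["risk"].get(product, False):
            status[product] = "NA"          # fix the risk before selling more
        elif client["owned"].get(product, False):
            status[product] = "Owned"       # already sold
        else:
            status[product] = "Whitespace"  # unsold opportunity for the rep
    return status

client = {"owned": {"product_a": True}, "risk": {"product_a": True}}
print(expansion_opportunity(client))
# {'product_a': 'NA', 'product_b': 'Whitespace'}
```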
Conversely, you could imagine putting yourself in the shoes of your clients and asking how they measure the success of the relationship. As we discussed in the earlier section in this chapter on the Fast Way to start measuring customer health, we're looking to measure to what degree the client is achieving an ROI. To solve this problem more deeply, we're going to use four scorecards, using what we call the DEAR framework:
There are two reasons why we use four scorecards. First, unlike the invoice management company, not every vendor has an obvious ROI metric that they can track. In fact, most vendors don't. In that case, the ROI metric may be subjective (we'll come back to this), and you'll be referring to other hard metrics (D, E, and A) that can give you objective insight into the degree to which the client is achieving an Outcome. Moreover, not every company can track Adoption; for example, many on-premises software companies don't have telemetry, so they'll rely more heavily on the other scorecards to assess whether the client is achieving an ROI.
The second reason we use four scorecards is that these measures tend to come in order: A client can't achieve ROI without first adopting the product; they can't adopt the product without first engaging with your company in some way, at least during onboarding; and giving the client access to the product is obviously a foundational step that needs to happen. Tracking progress in these scorecards over the course of the client's journey can help you understand whether the partnership is heading in the right direction.
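A minimal sketch of keeping the four DEAR scorecards separate, and checking them in the order just described (access/deployment, then engagement, then adoption, then ROI), might look like this. The field names and thresholds are hypothetical.

```python
# Sketch: the four DEAR scorecards kept separate and checked in the order
# described above. Field names and thresholds are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DearScorecards:
    deployment: float       # e.g. share of purchased licenses provisioned (0-1)
    engagement: float       # e.g. share of key contacts actively engaging (0-1)
    adoption: float         # e.g. share of users performing the ROI-proxy behavior (0-1)
    roi: Optional[float]    # hard ROI metric, or None if only a Verified Outcome exists

    def next_focus(self) -> str:
        """The earliest weak scorecard shows where the journey is stalling,
        since each stage depends on the ones before it."""
        if self.deployment < 0.8:
            return "Fix deployment first"
        if self.engagement < 0.6:
            return "Drive engagement"
        if self.adoption < 0.5:
            return "Drive adoption"
        if self.roi is None or self.roi <= 0:
            return "Verify the outcome (ROI)"
        return "Healthy: ROI being realized"

print(DearScorecards(deployment=0.9, engagement=0.7, adoption=0.4, roi=None).next_focus())
# "Drive adoption"
```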
Let's come back to the ROI scorecard. If you don't have an ROI metric that you can track, consider tracking what we like to call “Verified Outcomes.” These are testimonials from the executive sponsor at the client that attest to the client's belief that they have achieved an ROI. For example: If that invoice management company weren't able to track the reduction in collection time for some reason, they could send a survey to their client and ask about their ROI attainment. The client could respond, “Vendor X was extremely helpful to us. Because of our use of their platform, we were able to reduce the time to collect on invoices from Y to Z.” That would count as a Verified Outcome, and the scorecard could reflect the existence of that Outcome within the past year.
But it's not just about the Outcomes. It's also about how you make a client feel—the Experience—as we discussed earlier in Chapter 6. To envision a Client Experience Scorecard, think about all of the bumps in a typical client experience:
A sample Client Experience Scorecard might look like Figure 24.6 and include the following:
For many organizations, the focus of Customer Health is on managing leading indicators. Often, the leading indicators for customer retention and expansion center on the level of engagement between the client and the vendor. A Client Engagement Scorecard, shown in Figure 24.7, might include:
Some businesses, particularly high-touch ones, want to drive clients toward increasing levels of “maturity” with their product or service. At the same time, they want to assess and staff clients differently based upon that maturity level. A Client Maturity Scorecard like the one in Figure 24.8 could include:
Continuing the theme, if you have large customers and/or multiple products, it gets even more complicated (see Figure 24.9). You may have a “sticky” relationship with one business unit and be about to churn another. You may have an advocate in one business unit and a detractor in another. Similarly, a client may be about to churn one product line with you and be in the process of expanding on another. Make sure you measure client health at a granular level—the same level at which your client is measuring you!
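To keep that granularity explicit, here's a sketch of storing health per business unit and product line rather than per account. The names, colors, and roll-up rule are illustrative.

```python
# Sketch: health tracked per (business unit, product line) rather than per
# account. Names, colors, and the roll-up rule are illustrative only.

COLOR_ORDER = ["green", "light-green", "yellow", "orange", "red"]

account_health = {
    ("EMEA business unit", "product_a"): {"retention": "green",  "advocacy": "red"},
    ("EMEA business unit", "product_b"): {"retention": "yellow", "advocacy": "green"},
    ("US business unit",   "product_a"): {"retention": "red",    "advocacy": "green"},
}

def worst_retention(health: dict) -> str:
    """If you must roll up, surface the worst cell so a churn risk in one
    business unit isn't hidden by a healthy unit elsewhere."""
    return max((cell["retention"] for cell in health.values()), key=COLOR_ORDER.index)

print(worst_retention(account_health))   # "red"
```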
Prediction: You're going to spend the most time and energy as a company on the easiest problem. You'll spend months and quarters of precious time talking about how your “data isn't clean” and waiting until you “figure out your data.” In fact, you're reading this right now thinking, “We're not like the typical company—our data is a mess.” Without knowing anything about your business, we can tell you that you have enough data to start. You likely have some combination of:
If you're lucky, you may also have product telemetry of some kind.
Now, if you have none of those, stop reading this and go get yourself some data! But if you're like most companies, with a bunch of the above data but with issues in quality, you're not alone. Just from the most readily available of the data sources we listed, you can make progress on Customer Health Scoring.
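As one illustration of starting with what you have, here's a sketch of a first-pass score blended from a few commonly available signals (survey, support, and renewal history). The field names, normalization, and weights are hypothetical placeholders to be replaced with whatever your data actually supports.

```python
# Sketch: a first-pass health score from readily available data (survey,
# support, and renewal history), before any product telemetry exists.
# Field names, normalization, and weights are hypothetical.

def starter_health_score(client: dict) -> float:
    """Blend a few common signals into a 0-100 starter score."""
    nps_component = (client["latest_nps"] + 100) / 2              # NPS -100..100 -> 0..100
    support_component = max(0, 100 - 10 * client["open_escalations"])
    renewal_component = 100 if client["renewed_last_term"] else 40
    return round(0.4 * nps_component + 0.3 * support_component + 0.3 * renewal_component, 1)

print(starter_health_score(
    {"latest_nps": 30, "open_escalations": 2, "renewed_last_term": True}
))  # 80.0
```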
This is the hard part. Now that you have the data and your objectives, you need to turn the former into the latter. Below are some principles to get started:
These are pretty much the inverse of the above, but just for completeness:
Your mind might be swimming right now with ideas (and maybe anxiety) about how to build and roll out scorecards at your company. You might be wondering, does it have to be this complex? Fortunately, it doesn't. We went way down the rabbit hole in discussing the multifaceted question of how to measure customer health. But we don't know a single company that has implemented every suggestion in this chapter. And making your clients successful doesn't require that. What it does require is thoughtfulness about what aspects of customer health matter most to your business.
So what do we do next? We'd recommend parallel processing:
One final thought: Let's circle back to the “elephant in the room.” The elephant in this chapter is technology—specifically software. We've intentionally tried to keep this chapter completely agnostic vis-à-vis software, but if you've read this far, you probably understand that measuring your customer base according to the framework we've laid out is completely impossible without some sort of software solution.
Some companies choose to do this through a patchwork of tools (and lots of spreadsheets). Others look to all-in-one platforms or home-built solutions. As the CEO and COO of a Customer Success software company, we may surprise you: we don't actually recommend Gainsight to every company—even though we are wholly convinced we're the most sophisticated and full-featured offering in this category. No matter what size or stage your company is in, the last step (and a crucial part of each prong of the parallel process we talked about above) is a software evaluation. We'll come back to choice of technology in the next chapter.
We think this question of correlation versus causation is relevant as the industry analyzes the impact of Customer Success. In fact, we've been asked that question as much as almost any other.
What the questioner (sometimes your CFO, sometimes your CEO) is asking is, “Why should I invest in Customer Success?” And more subtly, “How do you know there is a causal relationship?”
This dialog often comes up when CS teams present correlations:
Now, if your CFO is like many with whom we've worked, they might lean to the skeptical side:
In The Book of Why, the author, Judea Pearl, talks about the near-impossibility of the counterfactual—namely, analyzing what would have happened in the alternate decision path. What would have happened if you hadn't invested in CS, if you hadn't focused on those accounts, or if you hadn't driven adoption?
Pearl would argue that sophisticated CS teams (with a lot of volume) might design a controlled experiment to truly figure out the causality. As in the drug industry, CS teams could run an A/B test. They could perfectly separate out a set of accounts upon which to test an intervention (e.g. having a CSM) versus a control group where no CSM is assigned. They would need to be careful to make sure there is no “bias” in the group design. For example, you couldn't have the test group be the customers who wanted to engage with a CSM (since those might be more likely to stay anyway). And the good news is several companies have done this. In fact, athenahealth presented at our Pulse conference event on experimental testing in their client base.
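Here's a bare-bones sketch of that kind of design: accounts assigned at random to a CSM test group and a control group, with retention compared afterward. It's illustrative only; a real experiment also needs sufficient volume, matched segments, and significance testing.

```python
# Sketch: random assignment of accounts to a CSM test group vs. a control
# group, then a simple retention comparison. Illustrative only -- real
# experiments also need volume, matched segments, and significance tests.

import random

def assign_groups(account_ids: list, seed: int = 7) -> dict:
    """Randomize assignment so the test group isn't biased toward accounts
    that already wanted to engage with a CSM."""
    rng = random.Random(seed)
    shuffled = list(account_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"test": shuffled[:half], "control": shuffled[half:]}

def retention_rate(retained: set, group: list) -> float:
    return sum(1 for a in group if a in retained) / len(group)

groups = assign_groups([f"acct-{i}" for i in range(100)])
# After the measurement period, with `retained` as the set of accounts that
# renewed, the lift attributable to the intervention is roughly:
#   retention_rate(retained, groups["test"]) - retention_rate(retained, groups["control"])
```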
But the reality is that most companies don't have the volume, consistency, or processes to perform a real-life A/B experiment. And don't forget about the “harm” issue. Just like it's sometimes unethical to put a patient into a control group if the drug is lifesaving, it's often challenging to imagine ignoring a certain set of clients.
So what do we do—throw up our hands and say, “CS has no provable ROI”? No! We look to our colleagues in other functions to see how they analyze their own impact. From our vantage point, there are five models:
For some companies, CSM has become part of the product and part of the value proposition. The customer needs the CSM to get value and to work with the vendor. Over time, this cost looks a lot like a Customer Support cost. It's often categorized as a part of “Cost of Goods Sold.”
Given this, CSM as Support lowers Gross Margins. Hence, the pressure will be to automate and to reduce costs to the barest minimum that still works.
Many companies are trying to break the logjam of budgeting by self-funding CS out of their own pockets. Specifically, they are creating premium “Customer Success Plans,” which often include premium support, training, professional services “points,” and advanced Customer Success. So in a way, Customer Success becomes another product (though a service product). The great thing is that this approach is easier to justify at the CFO level. Indeed, Salesforce pioneered this model. At Pulse events, Splunk and PTC have presented about similar concepts.
One challenge to wrestle with is, “What about customers who don't sign up for the plans?” The answer from some of these companies is that they use profits from the plans to fund a “for-free” Customer Success service at scale.
Some companies have evolved their thinking to look at CSM as a cost of the sale (renewal). As such, they may have a renewal rep and a CSM working on a given client and they look at the combined cost as a cost of the overall transaction. These models look a great deal like the justification of a sales engineer (SE) or “pre-sales” resource to partner with an account executive (AE) for a new sale. Indeed, just as companies establish target ratios of “SE:AE,” they may have a ratio of “CSM:Renewal Rep.” In this approach, the CSM makes the overall renewal and/or expansion more effective just like a great SE helps an AE close.
The concept of a “lead” was brilliant. Marketers created a well-defined (though often debated) output metric that could be quantified. Sophisticated companies measure a “cost per lead” and a lead-to-sale conversion rate.
We've seen some more sophisticated teams look at their CSM team this way. Allison articulated the concept of three leading indicators for CS: Customer Success Qualified Save (CSQS) for clients that the CSM team helped bring back to health, Customer Success Qualified Lead (CSQL) for upsell generated by the CSMs, and Customer Success Qualified Advocate (CSQA) for references and case studies generated by the CS team. If you have the appropriate formulas for converting each of these to lagging indicators (e.g. new business, renewal, and expansion), you could come up with a funding model for CS.
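Purely as a sketch, a funding model built on those three leading indicators might look like the following, where every conversion rate and dollar value is a hypothetical placeholder.

```python
# Sketch of a CS funding model built on CSQS / CSQL / CSQA volumes.
# All conversion rates and dollar values are hypothetical placeholders.

def cs_attributed_value(csqs: int, csql: int, csqa: int) -> float:
    """Translate leading-indicator volumes into an estimated dollar impact."""
    save_value     = csqs * 0.70 * 50_000   # saves that convert to a renewal, at avg ARR
    upsell_value   = csql * 0.25 * 30_000   # CS-sourced leads that close, at avg deal size
    advocacy_value = csqa * 0.10 * 40_000   # references that influence new business
    return save_value + upsell_value + advocacy_value

annual_value = cs_attributed_value(csqs=40, csql=60, csqa=25)  # 1,950,000
cs_budget = 1_200_000
print(f"Estimated value returned per CS dollar: {annual_value / cs_budget:.2f}")
```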
We can get to the point where CSM is so fundamental that we don't even question it. Isn't that how sales works? Most companies have a “capacity model” spreadsheet. You input the number of account executives to hire along with assumptions around attainment, ramp time, quota, and other factors, and the model spits out a bookings number. Sales reps create sales—it's like magic!
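For comparison, a bare-bones version of that capacity model might look like this, with every assumption (quota, attainment, ramp) set to an arbitrary illustrative value.

```python
# Bare-bones sketch of a sales capacity model: reps in, bookings out.
# Quota, attainment, and ramp assumptions are arbitrary illustrative values.

def bookings_forecast(num_reps: int, quota: float = 800_000,
                      attainment: float = 0.85, ramp_factor: float = 0.6,
                      pct_ramping: float = 0.3) -> float:
    """Expected bookings given rep count, average quota attainment, and the
    share of reps still ramping (who produce at a reduced rate)."""
    effective_reps = num_reps * (1 - pct_ramping) + num_reps * pct_ramping * ramp_factor
    return quota * attainment * effective_reps

print(f"${bookings_forecast(20):,.0f}")   # $11,968,000 with the defaults above
```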
We think many of us have been in situations where it wasn't so easy. Sales reps can't succeed without leads—or happy clients—or products/services. Sales reps often drive deals and make them happen—but rarely on their own. Sometimes they get lucky and get a “bluebird.”
So maybe one day, far into the future, your CFO will ask, “How do we know those Sales reps caused the expansion versus just being there to facilitate? How do you know it wasn't the CSM that made it happen?”
Maybe we're dreaming? Or maybe we've seen the future!
In this chapter, we discussed a fast way to get started with a Customer Health Score that's based on an ROI metric. Then we dove deep into the different aspects of customer health that you may want to explore as you evolve your CS program over time. Finally, we discussed conceptual ways to think about the impact of Customer Success. Most businesses say “our customers are our greatest asset.” And yet, those same companies have no way to measure this asset. It's time to fix that.