24
Metrics: How Do I Measure Customer Success?

If you can't measure it, you can't manage it.

—Peter Drucker

It's all about the Benjamins, baby.

—Puff Daddy

Whether you take your leadership lessons from Drucker or Diddy, you know that a huge part of the CCO's job is to help your colleagues understand the “scoreboard” for measuring Customer Success.

Over the years, we, as leaders, have developed various methods to quantify most aspects of business such as:

  • Finance: GAAP (or IFRS) accounting rules
  • Sales: Bookings methodologies
  • Marketing: “Marketing qualified leads”
  • Personal ego: Twitter followers

And yet, with all of these statistics, we still can't measure what is usually the greatest “hidden” asset in our business—our customer base. How are we doing with clients? Are we delivering value for them? Are they likely to stay with us? Are they fans of ours?

If you've studied the field, the Net Promoter Score was created to partially address these questions. But with the trend toward digital transformation, companies are awash in data about their clients that they could be using to measure client health.

Customer Health Scoring is the concept that you can integrate various signals about your clients in order to quantify your customer base. Our goals for this chapter are to show you how to build a Customer Health Scoring framework and point out what mistakes to avoid.

But just to motivate you first, let's go back to Kevin Meeks from Splunk. We asked Kevin how he sleeps at night running such a huge Customer Success and Renewals Organization.

There's the fast way to do Customer Health Scoring, and the super-thorough way. We'll start with the fast way—which is more than a stellar starting point. But if you're an advanced CS leader and you want to be challenged to think more deeply, you can skip ahead to the thorough version.

The Fast Way to Measure Customer Health

The most important Customer Health Score you can create is one that signals the degree to which the customer is getting value from your product. In other words, are they getting that Outcome that we began to discuss in Chapter 6? You'll want the best measure of ROI that you can find.

You might have a measure of ROI that your product tracks automatically. For example, we spoke with an invoice management company whose clients wanted to reduce the amount of time it took to collect on invoices from their own customers. The company's ROI metric is “reduction in number of days to collect on the invoice.” The greater the reduction, the greater the Outcome for the client.

Not every business has a clear ROI metric, though. In that case, you'll want to pick a measure of product adoption that is a strong proxy for whether clients are getting value. This metric isn't likely to be the sheer number of logins or page views; rather, it should capture the kinds of behaviors that you expect successful clients to perform in your product. If the ultimate milestone of client activity in your product is to create a dashboard, then that dashboard creation can become your ROI proxy. If you don't actually have product telemetry (adoption data), then you can use a measure of client engagement that you believe is a strong signal of ROI. It could be the client's usage of the online community or portal, or their engagement with marketing emails. The point is to take a stand on a single metric that's a proxy for ROI.

Using that ROI metric or proxy, you can create a Customer Health Score that uses a traffic-light analogy: green when the metric is strong, yellow when mediocre, and red when problematic. If you want to get even more granular in your assessment of client health, include additional colors—light-green in between green and yellow, and orange between yellow and red. Then define the metric's range for each color.
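To make this concrete, here's a minimal sketch of an ROI-based traffic-light score in Python. The metric (the invoice company's "reduction in days to collect") and every threshold are our own illustrative assumptions; substitute your metric and tune the bands to your business.

```python
# A minimal sketch of an ROI-based traffic-light Health Score.
# The metric and all thresholds below are hypothetical assumptions.

def health_color(days_reduced: float) -> str:
    """Map 'reduction in days to collect on an invoice' to a color band."""
    if days_reduced >= 10:
        return "green"
    if days_reduced >= 7:
        return "light-green"
    if days_reduced >= 4:
        return "yellow"
    if days_reduced >= 1:
        return "orange"
    return "red"

print(health_color(12))  # strong ROI
print(health_color(5))   # mediocre
print(health_color(0))   # problematic
```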

The nice thing about creating a Health Score that's based on ROI is that it tends to correlate with other metrics that matter, even if not always (and if you prefer to achieve perfection, move on to the next section in this chapter). Clients with a green ROI metric are likely to renew at faster rates (because they can justify the renewal to their CFO or Procurement department), expand at faster rates (because they have proof of success with you), and advocate more frequently (because they have real success stories to share).

The Advanced Way to Measure Customer Health

Let's say you've got the fast way under your belt and a little complexity doesn't scare you. Read on, friend, or skip to the next chapter.

Why Measure Customers

As with other areas of measurement, we'd break the value of quantifying your customers into three conceptual buckets. We've drawn an analogy between CS and Sales in Figure 24.1, to give you the gist.

Did you pay close attention to the table? Did something weird jump out? Did you notice that we said “Customer Health Scores,” not “Customer Health Score”? One of the biggest mistakes companies make when implementing Customer Health Scoring is thinking everything can be distilled down to one number.

One way to think about it is to consider the old parable of the Blind Men and the Elephant, as shown in Figure 24.2.

Customer Health is the “elephant.” But there are many views into health and each is like one of the people grasping at the elephant. In our clients, we see seven common types of views into Customer Health:

  1. Vendor Outcomes: Track the overall ROI of Customer Success to your business.
  2. Vendor Risk: Get an early warning on risk in accounts.
  3. Vendor Expansion: View how the Sales team is expanding accounts.
  4. Client Outcomes: Track your impact on customers.
  5. Client Experience: Give the Voice of the Customer team visibility.
  6. Client Engagement: Help the Marketing team drive Customer Success.
  7. Client Maturity: Drive clients toward higher levels of sophistication.
  • Report
    • Sales: Report on trend of overall bookings and pipeline to the company, board, and investors.
    • Customers: Report on trend of overall Customer Health Scores to the company, board, and investors.
  • Incent
    • Sales: Incent sales reps with commission based upon bookings.
    • Customers: Incent team members managing clients (e.g. Customer Success Managers, Account Managers) toward growth in Customer Health Scores.
  • Act
    • Sales: Take action on individual “deals” in the pipeline to convert them to bookings.
    • Customers: Take action on customers where you can improve Customer Health Scores in some way.

Figure 24.1

Schematic illustration of the old parable of the Blind Men and the Elephant.

Figure 24.2

Vendor Outcomes Scorecards

Clients provide multiple areas of value to vendors, and you should measure these separately. As we discussed in Chapter 5 on growth, the desired outcomes for a typical vendor include:

  • Value if the client stays with them
  • Incremental value if the client expands with them
  • Incremental value if the client helps the vendor acquire new clients (e.g. as a reference)

So a Vendor Outcomes Scorecard could have the following top-level dimensions:

  • Retention: Are they likely to stay with us?
  • Expansion: Are they likely to expand in spend or consumption with us?
  • Advocacy: Are they likely to be an advocate for us?

And here's the confounding thing that you know if you've managed clients for a long time: Clients can be guaranteed to stay with you near term (because they are stuck) and be a negative advocate (a detractor). Clients can be engineering you out long term (not “sticky”) and short term be planning to expand. Clients can be about to churn (Retention Risk) and be an advocate! It's important to separate out the various “outputs” of a client relationship into separate metrics.

As an example, see the sample Vendor Outcomes scorecard in Figure 24.3. We've created “groups” for Retention, Expansion, and Advocacy, with sample indicators for each:

Retention Indicators

  • Adoption Sophistication score: Number of advanced capabilities used
  • Sponsor score: Relationship with exec sponsor
  • Support Health score: Presence or lack of recent poor support experiences

Expansion Indicators

  • Marketing Engagement score: Attendance at recent marketing events
  • Open Opportunities score: Presence of open sales opportunities in CRM
  • Utilization score: Percentage of contracted products or services used

Advocacy Indicators

  • Sentiment score: Recent survey feedback (e.g. Net Promoter Score)
  • Reference score: Recent reference activity
  • Community score: Activity in online community

An illustration of the sample Vendor Outcomes scorecard.

Figure 24.3

In general, vendor-centric scorecards are the types you'd want to share internally or with your board (or your CFO).
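As a sketch of how the groups in a Vendor Outcomes scorecard might be computed, here's some illustrative Python that rolls the sample indicators up into three separate group scores. The indicator names and the 0–100 scale are our assumptions; the key point is that Retention, Expansion, and Advocacy are never averaged into one number.

```python
# Hypothetical sketch: roll up indicator scores (0-100) into separate
# group scores for Retention, Expansion, and Advocacy. Deliberately
# NOT blended into a single number, per the elephant parable.

GROUPS = {
    "Retention": ["adoption_sophistication", "sponsor", "support_health"],
    "Expansion": ["marketing_engagement", "open_opportunities", "utilization"],
    "Advocacy":  ["sentiment", "reference", "community"],
}

def outcomes_scorecard(indicators: dict) -> dict:
    """Average indicators within each group; keep groups separate."""
    scorecard = {}
    for group, names in GROUPS.items():
        present = [indicators[n] for n in names if n in indicators]
        scorecard[group] = round(sum(present) / len(present)) if present else None
    return scorecard

client = {
    "adoption_sophistication": 80, "sponsor": 60, "support_health": 70,
    "marketing_engagement": 40, "open_opportunities": 90, "utilization": 50,
    "sentiment": 20, "reference": 0, "community": 10,
}
print(outcomes_scorecard(client))
# Note: this client scores high on Retention and low on Advocacy at the
# same time -- which is exactly why the groups stay separate.
```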

Vendor Risk Scorecards

Pivoting a different way, some companies may want to measure client risk by functional owner—so you can define clear accountability by department (see Figure 24.4). Examples could include:

  • Support health: Does the client have too many cases open? Repeated cases? Cases aging too long? This could be owned by the head of support.
  • Product health: Does the client have open bugs or critical enhancement requests? Similarly, the head of product would be responsible for this score.
  • Marketing engagement health: Is the client engaged in vendor marketing activities? The marketing leader would be accountable here.
  • Product/service adoption health: Is the client using the vendor's product/service actively and well? Often, the Customer Success team would directly drive this.
  • Services health: Have the client's services projects with the vendor gone well (on time, on budget, on quality, etc.)? A head of Professional Services might take this on.

An illustration of the Vendor Risk Scorecards.

Figure 24.4

Vendor Expansion Scorecard

On the positive side, some companies want to easily expose opportunity—or whitespace—for their sales team. You can imagine taking each product/service area and using logic to define the unsold opportunity to sell that offering into the given customer. For example, imagine you have two product lines:

  • Lightsabers
  • Tricorders

You could define rules for what you expect a client to purchase:

  • If industry = Star Wars
    • For Lightsabers
      • GREEN = 10+
      • YELLOW = 1–9
      • RED = 0
    • For Tricorders
      • GREEN = 2
      • YELLOW = 1
      • RED = 0
  • If industry = Star Trek
    • For Lightsabers
      • GREEN = 2
      • YELLOW = 1
      • RED = 0
    • For Tricorders
      • GREEN = 10+
      • YELLOW = 1–9
      • RED = 0

You could have further overrides based upon health. If a client has risk issues in a given product, the expansion score for that product could be set to “NA” until the issues are resolved.
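The rules above could be sketched in code roughly like this; the product lines, thresholds, and the risk override all come from the hypothetical example, not a prescription.

```python
# Sketch of rules-based whitespace scoring for the two hypothetical
# product lines; expected purchase levels vary by industry segment.

EXPECTED = {  # (industry, product) -> (green_at, yellow_at)
    ("Star Wars", "Lightsabers"): (10, 1),
    ("Star Wars", "Tricorders"):  (2, 1),
    ("Star Trek", "Lightsabers"): (2, 1),
    ("Star Trek", "Tricorders"):  (10, 1),
}

def expansion_score(industry, product, units_owned, has_open_risk=False):
    if has_open_risk:  # health override: suspend the score until resolved
        return "NA"
    green_at, yellow_at = EXPECTED[(industry, product)]
    if units_owned >= green_at:
        return "GREEN"
    if units_owned >= yellow_at:
        return "YELLOW"
    return "RED"

print(expansion_score("Star Wars", "Lightsabers", 3))        # YELLOW
print(expansion_score("Star Trek", "Tricorders", 12))        # GREEN
print(expansion_score("Star Wars", "Tricorders", 0))         # RED
print(expansion_score("Star Trek", "Lightsabers", 2, True))  # NA
```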

You can then put a very sales-friendly view in front of your reps of the “selling opportunity” in their accounts, as shown in Figure 24.5.

An illustration of the Vendor Expansion Scorecard.

Figure 24.5

Client Outcomes Scorecards

Conversely, you could imagine putting yourself in the shoes of your clients and asking how they measure the success of the relationship. As we discussed in the earlier section in this chapter on the Fast Way to start measuring customer health, we're looking to measure to what degree the client is achieving an ROI. To solve this problem more deeply, we're going to use four scorecards, using what we call the DEAR framework:

  • D = Deployment: Is the client activated? This scorecard could represent licenses activated as a percent of all licenses entitled, or a binary Yes or No on whether the onboarding has been completed.
  • E = Engagement: Is the client engaged? This could be a binary Yes or No on whether there has been a check-in, whether live or by email, with a decision-maker in the last three months. (By the way, vendors that are very product-oriented often underestimate the value of checking in with their executive sponsor at the client. It's usually extremely valuable.)
  • A = Adoption: Is the client using the product? This could be a measure of the quality of usage (e.g. the percentage of users who complete a high-value action), depth of usage (e.g. the percentage of users that are daily actives), or breadth of usage (e.g. the percentage of features that are regularly used).
  • R = ROI: Is the client seeing value? To illustrate this, remember the example of the invoice management company that we discussed earlier, with the ROI metric “reduction in time to collect on an invoice.”

There are two reasons why we use four scorecards. First, unlike the invoice management company, not every vendor has an obvious ROI metric that they can track. In fact, most vendors don't. In that case, the ROI metric may be subjective (we'll come back to this), and you'll be referring to other hard metrics (D, E, and A) that can give you objective insight into the degree to which the client is achieving an Outcome. Moreover, not every company can track Adoption; for example, many on-premises software companies don't have telemetry, so they'll rely more heavily on the other scorecards to assess whether the client is achieving an ROI.

The second reason we use four scorecards is that these measures tend to come in order: A client can't achieve ROI without first adopting the product; they can't adopt the product without first engaging with your company in some way, at least during onboarding; and giving the client access to the product is obviously a foundational step that needs to happen. Tracking progress in these scorecards over the course of the client's journey can help you understand whether the partnership is heading in the right direction.

Let's come back to the ROI scorecard. If you don't have an ROI metric that you can track, consider tracking what we like to call “Verified Outcomes.” These are testimonials from the executive sponsor at the client that attest to the client's belief that they have achieved an ROI. For example: If that invoice management company weren't able to track the reduction in collection time for some reason, they could send a survey to their client and ask about their ROI attainment. The client could respond, “Vendor X was extremely helpful to us. Because of our use of their platform, we were able to reduce the time to collect on invoices from Y to Z.” That would count as a Verified Outcome, and the scorecard could reflect the existence of that Outcome within the past year.
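As an illustrative sketch (not any vendor's actual implementation), the four DEAR scorecards might be computed along these lines; every field name and threshold is an assumption for illustration.

```python
# Hedged sketch of the DEAR framework as four SEPARATE scorecards.
# All field names and thresholds are invented for illustration.

def dear_scorecards(client: dict) -> dict:
    return {
        "Deployment": "green" if client["pct_licenses_activated"] >= 0.8 else
                      "yellow" if client["pct_licenses_activated"] >= 0.5 else "red",
        # Engaged = decision-maker check-in within the last three months
        "Engagement": "green" if client["days_since_exec_checkin"] <= 90 else "red",
        "Adoption":   "green" if client["pct_daily_active_users"] >= 0.4 else
                      "yellow" if client["pct_daily_active_users"] >= 0.2 else "red",
        # ROI may be subjective: a Verified Outcome in the past year counts.
        "ROI":        "green" if client["verified_outcome_past_year"] else "yellow",
    }

print(dear_scorecards({
    "pct_licenses_activated": 0.9,
    "days_since_exec_checkin": 30,
    "pct_daily_active_users": 0.25,
    "verified_outcome_past_year": False,
}))
# -> {'Deployment': 'green', 'Engagement': 'green', 'Adoption': 'yellow', 'ROI': 'yellow'}
```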

Client Experience Scorecards

But it's not just about the Outcomes. It's also about how you make a client feel—the Experience—as we discussed earlier in Chapter 6. To envision a Client Experience Scorecard, think about all of the bumps in a typical client experience:

  • Poor expectation setting in sales
  • Long onboarding
  • Rocky onboarding experience
  • Bad support experience
  • Repeated support experiences
  • Outages
  • Weak relationship with account team
  • Product/service quality issues

A sample Client Experience Scorecard might look like Figure 24.6 and include the following:

  • Sales experience: Survey client after sale to see how rep did in expectation setting.
  • Onboarding time: Measure actual onboarding time versus promised.
  • Onboarding experience: Survey client after onboarding.
  • Support experience: Survey client after cases.
  • Support frequency: Measure frequency of tickets.
  • Uptime: Measure service uptime for client.
  • Relationship: Regular Net Promoter Score survey.
  • Quality: Count of bugs affecting client.

Client Engagement Scorecards

For many organizations, the focus of Customer Health is managing leading indicators. Often, the leading indicators for customer retention and expansion tend to center on the level of engagement between the client and the vendor. A Client Engagement Scorecard, shown in Figure 24.7, might include:

An illustration of a sample Client Experience Scorecard.

Figure 24.6

An illustration of a Client Engagement Scorecard.

Figure 24.7

  • Product/service engagement: How sophisticated is the client's usage of the product/service in question?
  • Marketing engagement: How often does the customer attend webinars, events, etc.?
  • Community engagement: Is the client active in the vendor's online community?
  • Advocacy engagement: Is the client an active advocate for the vendor?

Client Maturity Scorecards

Some businesses, particularly high-touch ones, want to drive clients toward increasing levels of “maturity” with their product or service. At the same time, they want to assess and staff clients differently based upon that maturity level. A Client Maturity Scorecard like the one in Figure 24.8 could include:

  • Business processes: Does the client have business processes implemented around the vendor's product or service?
  • Sophistication: How sophisticated is the client's usage of the vendor's product or service?
  • Tenure: How long has the client been using the vendor's product or service?
  • Training: How many people at the client have been trained on the vendor's product or service?
  • Advocacy: Is the client an active advocate for the vendor?

An illustration of a Client Maturity Scorecard.

Figure 24.8

What (Parts of) Clients to Measure

Continuing the theme, if you have large customers and/or multiple products, it gets even more complicated (see Figure 24.9). You may have a “sticky” relationship with one business unit and be about to churn another. You may have an advocate in one business unit and a detractor in another. Similarly, a client may be about to churn one product line with you and be in the process of expanding on another. Make sure you measure client health at a granular level—the same level at which your client is measuring you!

An illustration of a table showing how large customers and/or multiple products make measurement more complicated.

Figure 24.9
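One way to honor that granularity in practice is to key health by account, business unit, and product line rather than by account alone. A tiny sketch, with hypothetical names:

```python
# Sketch: store health at the (account, business_unit, product_line)
# grain rather than one rollup per account. All names are hypothetical.

health = {}

def set_health(account, business_unit, product, color):
    health[(account, business_unit, product)] = color

set_health("Acme", "EMEA", "Analytics", "green")
set_health("Acme", "EMEA", "Automation", "red")   # same BU, churning a product
set_health("Acme", "APAC", "Analytics", "yellow")

# One account, three different answers -- which is the point.
print(sorted(health.items()))
```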

Where to Get the Data

Prediction: You're going to spend the most time and energy as a company on the easiest problem. You'll spend months and quarters of precious time talking about how your “data isn't clean” and waiting until you “figure out your data.” In fact, you're reading this right now thinking, “We're not like the typical company—our data is a mess.” Without knowing anything about your business, we can tell you that you have enough data to start. You likely have some combination of:

  • Sales data in a salesforce automation (sometimes called CRM) system
  • Financial data in an enterprise resource planning (ERP) system
  • Customer feedback data in a survey system
  • Services project data in a professional services automation system
  • Marketing engagement data in a marketing automation system
  • User engagement data in a community system
  • General customer data in a data warehouse
  • Website activity data in a marketing analytics system
  • Support data in a support ticketing system

If you're lucky, you may also have product telemetry of some kind.

Now, if you have none of those, stop reading this and go get yourself some data! But if you're like most companies, with a bunch of the above data but with issues in quality, you're not alone. Just from the most readily available of the data sources we listed, you can make progress on Customer Health Scoring.

How to Define Your Scores

This is the hard part. Now that you have the data and your objectives, you need to turn the former into the latter. Below are some principles to get started:

  1. Define Customer Health scorecards, not scorecard: Per the previous section, define multiple ways of measuring client health. The beauty is you don't have to choose! You can mix and match the same atomic data points (e.g. usage, Net Promoter Scores) into these separate views on client health.
  2. Define scorecards at the level your clients experience: Per the previous section, whether your clients buy at the business unit level or at the product line level (or both), make your scores equally granular.
  3. Distinguish leading indicators from lagging indicators: It's okay to have scorecards that track both leading indicators (e.g. how engaged is the client in marketing) and lagging indicators (e.g. renewal forecast), but don't average them together and expect meaning.
  4. For each scorecard, mix and match data: For example, your Marketing Engagement scorecard might include a measurement of their attendance at webinars (from your Webinar system), their open rate on emails (from your Marketing Automation system), and their registration for in-person events (from your Event Management system).
  5. Automate wherever possible: Manual inputs on Customer Health (e.g. a CSM's subjective perspective/scoring on customer sentiment) are sometimes necessary, but such inputs create more process for CSMs and are seldom objective. Pull in hard data from sources (see above) wherever possible to minimize subjectivity and the need for CSMs to make time-consuming inputs.
  6. Define an understandable grading scheme: While eventually a numeric system (0–100) may be appropriate for visualizing a trend, you may want “bands.” We recommend color bands for intuitive understanding (e.g. red/yellow/green), as we discussed earlier in this chapter.
  7. Leverage trends, but be careful: You may want to look at “changes” to measure health (e.g. a client dropping in usage could be a bad sign), but you need to watch out for “false positives” (e.g. a usage drop due to vacations).
  8. Look for “absence of data”: One of the most powerful signals you can look for is the negative. Which clients haven't attended an event or webinar recently? Which clients didn't open the roadmap release email or attend the roadmap webcast? Which decision maker didn't respond to your recent survey?
  9. Vary rules by customer segment or maturity: You can't treat all customers the same in terms of measurement. Make sure you are defining rules based upon the unique segments of your business.
  10. Leverage benchmarking where appropriate: If you have a common value (e.g., transactions/day) across clients, you can use benchmarking rules to compare a given client against the average or median of its peers—and then score based upon this benchmark.
  11. Use overrides (but sparingly): While you may normally take a combination of measures to determine a score (e.g. a combination of webinar attendance, event attendance, and open rate to define Marketing Engagement), you may need an override in cases where, no matter what the other measures say, a selected variable trumps all others. For example, if you get a Detractor Net Promoter Score response from an executive decision maker, that may trump all other objective data. That being said, don't have too many overrides or your scoring system will be for naught.
  12. Focus on actionability: A big part of driving Customer Success as a company is identifying early signs. But equally important is finding actionable early signs. A client that stopped using your service is interesting, but what do you do about it? Perhaps more interesting is a client who is actively using your service but not reading your release notes. Near term, they are healthy. Long term, they may not perceive your innovation and may leave you. And you can do something about it.
  13. Don't have too many measures: While I gave you many examples here, don't overwhelm your team with too many scoring measures overnight. Start with a few (I've seen 6–12 work).
  14. Keep old scores: While you may decide to change the rules in terms of how you measure an area, I encourage you to retain the old data. Hide it, for sure, to not confuse your team. But keep the old data, as it may come in handy down the road.
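A few of those principles can be sketched together in code. The toy function below blends three marketing-engagement sources (principle 4), treats total silence as its own red flag (principle 8), and honors a single sparing override (principle 11); every weight and threshold is invented for illustration.

```python
# Toy Marketing Engagement scorecard. All weights and thresholds are
# invented; tune to your own segments (principle 9).

def marketing_engagement(webinars_90d, email_open_rate, events_180d,
                         exec_detractor_nps=False):
    if exec_detractor_nps:  # principle 11: a sparing override trumps all
        return "red"
    # Principle 8: the absence of data is itself a strong (bad) signal.
    if webinars_90d == 0 and events_180d == 0 and email_open_rate == 0:
        return "red"
    # Principle 4: blend three sources into one 0-100 score.
    score = (min(webinars_90d, 3) / 3 * 40
             + email_open_rate * 30
             + min(events_180d, 2) / 2 * 30)
    return "green" if score >= 70 else "yellow" if score >= 40 else "red"

print(marketing_engagement(2, 0.5, 1))        # moderately engaged
print(marketing_engagement(0, 0.0, 0))        # total silence
print(marketing_engagement(3, 0.9, 2, True))  # override wins
```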

How Not to Do It

These are pretty much the inverse of the above, but just for completeness:

  • Don't average averages: Don't take all of your data about all aspects of customers, average them together, and expect meaning out of it. It's the same with averaging data across products and business units within a client. You wouldn't average your Balance Sheet and Income Statement together and expect useful information, would you?
  • Don't practice false precision: I like color coding because numbers sometimes lead you to a false sense of confidence about how much you know (87/100 health going to 86 is likely noise).
  • Don't overdo trending: If you want to see “red” every time a client drops 10% in some metric, you will see a lot of (false) red.
  • Don't wait for perfection: The beauty of having multiple scorecards is that you can start now and keep adding incrementally.

Next Steps

Your mind might be swimming right now with ideas (and maybe anxiety) about how to build and roll out scorecards at your company. You might be wondering, does it have to be this complex? Fortunately, it doesn't. We went way down the rabbit hole in discussing the multifaceted question of how to measure customer health. But we don't know a single company that has implemented every suggestion in this chapter. And making your clients successful doesn't require that. What it does require is thoughtfulness about what aspects of customer health matter most to your business.

So what do we do next? We'd recommend parallel processing:

  • Run an offsite with your team to define a roadmap for how to measure different parts of your client base.
  • In parallel, pick just one of the concepts in this chapter (such as the simple health score based on an ROI metric, discussed in the first section) and get going so you have a starting point.

One final thought: Let's circle back to the “elephant in the room.” The elephant in this chapter is technology—specifically software. We've intentionally tried to keep this chapter completely agnostic vis-à-vis software, but if you've read this far, you probably understand that measuring your customer base according to the framework we've laid out is completely impossible without some sort of software solution.

Some companies choose to do this through a patchwork of tools (and lots of spreadsheets). Others look to all-in-one platforms or home-built solutions. We're the CEO and COO of a Customer Success software company, so you might be surprised to find out that we don't actually recommend Gainsight to every company—even though we are wholly convinced we're the most sophisticated and full-featured offering in this category. No matter what size or stage your company is in, the last step (and a crucial part of each prong of the parallel process we talked about above) is a software evaluation. We'll come back to choice of technology in the next chapter.

Correlation versus Causation in Customer Success

The question of correlation versus causation is relevant as the industry analyzes the impact of Customer Success. In fact, we've been asked it as much as almost any other question:

  • “How do we quantify the impact of Customer Success?”
  • “How do we justify the need for Customer Success?”
  • “What's the ROI of Customer Success?”

What the questioner (sometimes your CFO, sometimes your CEO) is asking is, “Why should I invest in Customer Success?” And more subtly, “How do you know there is a causal relationship?”

This dialog often comes up when CS teams present correlations:

  • Time-based correlation: “We invested in Customer Success this year—look how much our retention rose!”
  • Account-based correlation: “We focused Customer Success on these accounts—look at the saves we made!”
  • Leading indicator correlation: “Check out how much adoption has gone up!”

Now, if your CFO is like many with whom we've worked, they might lean to the skeptical side:

  • “Sure, retention went up, but how do we know your team is the cause?”
  • “Sure, those accounts did well, but were they going to do well anyway?”
  • “Sure, adoption went up, but wouldn't that happen naturally?”

In The Book of Why, the author, Judea Pearl, talks about the near-impossibility of the counterfactual—namely, analyzing what would have happened in the alternate decision path. What would have happened if you hadn't invested in CS, if you hadn't focused on those accounts, or if you hadn't driven adoption?

Pearl would argue that sophisticated CS teams (with a lot of volume) might design a controlled experiment to truly figure out the causality. As in the drug industry, CS teams could run an A/B test. They could perfectly separate out a set of accounts upon which to test an intervention (e.g. having a CSM) versus a control group where no CSM is assigned. They would need to be careful to make sure there is no “bias” in the group design. For example, you couldn't have the test group be the customers who wanted to engage with a CSM (since those might be more likely to stay anyway). And the good news is several companies have done this. In fact, athenahealth presented at our Pulse conference event on experimental testing in their client base.
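A minimal sketch of such an unbiased design: randomize accounts into treatment and control before anyone can opt in, then compare retention rates. The account names and renewal numbers below are made up.

```python
# Sketch of an unbiased A/B design: assign accounts to CSM vs. no-CSM
# at random *before* anyone opts in, then compare retention rates.
import random

def assign_groups(accounts, seed=42):
    rng = random.Random(seed)  # deterministic seed, for auditability
    shuffled = accounts[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # (treatment: CSM, control)

accounts = [f"acct-{i}" for i in range(100)]
treatment, control = assign_groups(accounts)
assert not set(treatment) & set(control)  # clean separation, no self-selection
print(len(treatment), len(control))       # 50 50

def retention_rate(renewed, total):
    return renewed / total

# Made-up results: 45/50 treatment accounts renewed vs. 40/50 control.
lift = retention_rate(45, 50) - retention_rate(40, 50)
print(f"retention lift from CSM coverage: {lift:.0%}")  # 10%
```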

But the reality is that most companies don't have the volume, consistency, or processes to perform a real-life A/B experiment. And don't forget about the “harm” issue. Just like it's sometimes unethical to put a patient into a control group if the drug is lifesaving, it's often challenging to imagine ignoring a certain set of clients.

So what do we do—throw up our hands and say, “CS has no provable ROI”? No! We look to our colleagues in other functions to see how they analyze their own impact. From our vantage point, there are five models:

1. CSM Is Like Support: CSM as a Cost of Doing Business

For some companies, CSM has become part of the product and part of the value proposition. The customer needs the CSM to get value and to work with the vendor. Over time, this cost looks a lot like a Customer Support cost. It's often categorized as a part of “Cost of Goods Sold.”

Given this, CSM as Support lowers Gross Margins. Hence, the pressure will be about automating and reducing costs to the barest minimum that still works.

2. CSM Is Like Services: CSM as a Product

Many companies are trying to break the budgeting logjam by making Customer Success self-funding. Specifically, they are creating premium “Customer Success Plans,” which often include premium support, training, professional services “points,” and advanced Customer Success. So in a way, Customer Success becomes another product (though a service product). The great thing is that this approach is easier to justify at the CFO level. Indeed, Salesforce pioneered this model. At Pulse events, Splunk and PTC have presented about similar concepts.

One challenge to wrestle with is, “What about customers who don't sign up for the plans?” The answer from some of these companies is that they use profits from the plans to fund a “for-free” Customer Success service at scale.

3. CSM Is Like Pre-Sales: CSM as a Sales Enabler

Some companies have evolved their thinking to look at CSM as a cost of the sale (renewal). As such, they may have a renewal rep and a CSM working on a given client and they look at the combined cost as a cost of the overall transaction. These models look a great deal like the justification of a sales engineer (SE) or “pre-sales” resource to partner with an account executive (AE) for a new sale. Indeed, just as companies establish target ratios of “SE:AE,” they may have a ratio of “CSM:Renewal Rep.” In this approach, the CSM makes the overall renewal and/or expansion more effective just like a great SE helps an AE close.

4. CSM Is Like Marketing: CSM as a Sales Input

The concept of a “lead” was brilliant. Marketers created a well-defined (though often debated) output metric that could be quantified. Sophisticated companies measure a “cost per lead” and a lead-to-sale conversion rate.

We've seen some more sophisticated teams look at their CSM team this way. Allison articulated the concept of three leading indicators for CS: Customer Success Qualified Save (CSQS) for clients that the CSM team helped bring back to health, Customer Success Qualified Lead (CSQL) for upsell generated by the CSMs, and Customer Success Qualified Advocate (CSQA) for references and case studies generated by the CS team. If you have the appropriate formulas of conversion from each of these to lagging indicators (e.g. new business, renewal and expansion), you could come up with a funding model for CS.
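Such a funding model might be sketched as below; the conversion rates and deal values are purely hypothetical assumptions you'd calibrate from your own data.

```python
# Hypothetical funding model: convert CS "qualified" outputs into
# revenue using assumed conversion rates and deal values.

ASSUMPTIONS = {
    "CSQS": {"conversion": 0.80, "value": 50_000},  # saves -> retained ARR
    "CSQL": {"conversion": 0.25, "value": 30_000},  # leads -> expansion ARR
    "CSQA": {"conversion": 0.10, "value": 60_000},  # advocates -> new ARR
}

def cs_attributed_revenue(counts: dict) -> float:
    return sum(counts[k] * ASSUMPTIONS[k]["conversion"] * ASSUMPTIONS[k]["value"]
               for k in counts)

revenue = cs_attributed_revenue({"CSQS": 10, "CSQL": 20, "CSQA": 5})
print(f"${revenue:,.0f}")  # $580,000
```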

5. CSM Is Like Sales: CSM as Obvious

We can get to the point where CSM is so fundamental that we don't even question it. Isn't that how sales works? Most companies have a “capacity model” spreadsheet. You input the number of account executives to hire along with assumptions around attainment, ramp time, quota, and other factors, and the model spits out a bookings number. Sales reps create sales—it's like magic!
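That capacity-model spreadsheet boils down to a one-line formula. Here's a sketch with invented inputs; every parameter is an assumption you would tune.

```python
# Sketch of the sales "capacity model" spreadsheet as code.
# All inputs (attainment, ramp) are illustrative assumptions.

def bookings_capacity(num_reps, quota, attainment=0.8, ramp_factor=0.75):
    """Expected bookings: reps x quota x attainment x average ramp."""
    return num_reps * quota * attainment * ramp_factor

print(f"${bookings_capacity(10, 1_000_000):,.0f}")  # $6,000,000
```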

We think many of us have been in situations where it wasn't so easy. Sales reps can't succeed without leads—or happy clients—or products/services. Sales reps often drive deals and make them happen—but rarely on their own. Sometimes they get lucky and get a “bluebird.”

So maybe one day, far into the future, your CFO will ask, “How do we know those Sales reps caused the expansion versus just being there to facilitate? How do you know it wasn't the CSM that made it happen?”

Maybe we're dreaming? Or maybe we've seen the future!

Summary

In this chapter, we discussed a fast way to get started with a Customer Health Score that's based on an ROI metric. Then we dove deep into the different aspects of customer health that you may want to explore as you evolve your CS program over time. Finally, we discussed conceptual ways to think about the impact of Customer Success. Most businesses say “our customers are our greatest asset.” And yet, those same companies have no way to measure this asset. It's time to fix that.