CHAPTER 2

Introduction to Balanced Measures

As we saw in chapter 1, the TDRp framework divides measures into three distinct categories: efficiency, effectiveness, and outcomes. Using these categories ensures that all measures have a home; that is, any measure you use in your organization will fit into one of these three categories.

The question we face is: Do we need measures in all three categories for all programs? Or can we pick and choose which categories of measures are most appropriate and leave it at that?

In this chapter, we make the case for why it is important to have both efficiency and effectiveness measures for all programs and provide guidance on when you should add outcome measures to the mix. We also highlight the unintended consequences of using only one category of measures.

The Consequences of Focusing on a Single Type of Measure

In this section we explore a little history for each type of measure and the unintended consequences of selecting just one. We start with efficiency measures.

Focus Only on Efficiency Measures

There is no mystery why organizations have devoted, and still devote, significant energy to gathering efficiency measures. As we described in chapter 1 and will describe in detail in chapter 3, efficiency measures focus on how much and how long. These measures, even without technology, are easy to compile and compute. We can easily track how many employees attended our training programs, we can count how many courses we delivered, and we can compute the number of hours of training delivered. With a calculator we can then determine the hours delivered per employee and the average hours of delivery per class. If the raw counts are accurate, the ratios are simple.
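To make the arithmetic concrete, here is a minimal sketch in Python; the counts are hypothetical and not drawn from any client example:

```python
# Illustrative only: hypothetical counts for one year of training delivery
employees_trained = 1_200       # unique employees who attended at least one class
classes_delivered = 150         # number of class sessions delivered
total_hours_delivered = 9_000   # total learner hours across all classes

# The two simple ratios described above
hours_per_employee = total_hours_delivered / employees_trained    # 7.5
avg_hours_per_class = total_hours_delivered / classes_delivered   # 60.0

print(f"Hours per employee: {hours_per_employee:.1f}")
print(f"Average hours per class: {avg_hours_per_class:.1f}")
```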

With the introduction of the learning management system (LMS) in the late 1990s, organizations could now efficiently compute their efficiency measures. The LMS enabled calculations and reporting at scale, so learning functions used efficiency measures as their primary indicator of success. And when that data was augmented with automated financial systems, organizations could compute not only throughput but also the direct costs of training and the investment per learner.

While reporting of efficiency measures took a giant leap forward in the early 2000s, organizations still faced challenges. First, many organizations failed to develop standard definitions for measures. For example, one organization with which Peggy consulted had more than 20 different definitions of headcount that varied by line of business or geographical region. A senior HR leader reported that it took more than 100 touches of the headcount data to compute a valid headcount statistic for the CEO. Beyond headcount, the organization didn’t have common approaches for computing costs. So, creating comparable ratios of usage or investment was impossible.

Second, the data were typically not actionable. Few organizations we have encountered set throughput or financial goals beyond having an overall budget for learning. If the organization trained 1,000 employees in a new distribution process, no one knew if that was a good or bad number or what to do about it. The tendency was simply to report the data and take the “It is what it is” view of the result.

Finally, and most importantly, efficiency measures only tell part of the story. If your organization delivers 10,000 hours of training per year but has no indication of its effectiveness (aside from ad hoc and anecdotal feedback), it lacks any ability to determine if or where it should improve quality. And often, that lack of insight can lead to unintended consequences.

The most telling client example occurred in 2013 with a business unit within a Fortune 500 firm. Peggy’s role was to improve knowledge-sharing processes within the customer contact center of this unit. The head of learning for the business unit, new to the firm, had decided that the learning organization had too many resources that were not delivering value. He focused solely on the efficiency of the organization: delivering the most training at the lowest cost. He laid off or redeployed a number of employees and insisted that the remaining staff reduce their design cycle time to better respond to client needs. Many of his employees, not trained in instructional design, did the best they could within the tight constraints set by their new leader.

As part of her consultant role, Peggy interviewed line employees to understand their onboarding experience and, in particular, how they applied training on the new features of the product at each product launch. They told her that they attended the formal training developed by the department head’s staff but that it didn’t meet their needs.

Their training lasted from five to 10 days, and the materials consisted of a huge binder filled with paper copies of all the slides. When employees got back to their jobs, they were overwhelmed by the sheer volume of material covered and had no easy way to reference specific sections they didn’t understand or refresh their knowledge in areas they forgot. To address this problem, supervisors began to design and conduct “booster” sessions. When asked to describe the booster sessions, the employees reported that supervisors took the binders, separated out the materials into modules, added important tips and techniques, and then retaught the sessions to their employees. Peggy asked, “Are you the only group that is conducting booster sessions?” The response was “no.” Most departments were replicating this “best practice” across the business unit.

Stop and think about what happened here. In the push for efficiency, the head of learning cut staff and reduced cost. On the key measures that management cared about, the department head was exceeding expectations. But in the process of cutting cost, he also cut capability. His people churned out substandard training that didn’t meet the needs of the line employees, who then filled the gap by designing, developing, and conducting their own “booster” training. If we added up the costs associated with these “shadow” training organizations, it’s likely that the head of learning wouldn’t have looked so stellar on his primary metric: efficiency. Furthermore, if he had gathered at least some effectiveness data (however crude), he would have known that efficiency without effectiveness is no way to run a department. Here is how an L&D organization overly focused on efficiency without regard to effectiveness can create a downward cycle for itself (Figure 2-1).

Figure 2-1. Consequences of Being Overly Focused on Efficiency

Caution

Efficiency measures (activity, volume, reach, cost) are critical measures for learning but must be balanced with effectiveness measures to avoid unintended consequences.

Focus Only on Effectiveness Measures

As learning measurement became more common in the mid to late 1990s, many organizations began to realize that efficiency measures were necessary but not sufficient. Increasingly, organizations adopted the Kirkpatrick four levels of evaluation (more on that in chapter 4) to measure reaction, learning, and behavior and to discuss results. Some went further, measuring isolated impact (per the Phillips methodology) instead of results at Level 4 and adding ROI as a fifth level to the mix. Before learning evaluation systems became readily available, many organizations used paper surveys that were either manually tabulated or scanned and tabulated in Excel, often by third-party vendors.

The challenges with paper surveys were multifaceted. In some organizations, instructors were responsible for creating and reviewing their own “smile sheets” (as they were pejoratively called) and then taking action based on the class feedback. This approach prevented any data aggregation across the myriad instructors and classes taught. There was no way to look across all this data and find opportunities for improvement.

Other organizations took the scanning route, sending the completed “scantron” evaluations to a third party, which processed the paper forms and tabulated the results. By engaging a third party and mailing evaluations across the globe, the delay between delivery and publication of results became exceedingly long. At a minimum, weeks passed before the third party provided their findings. In some cases (as Peggy discovered in her own organization), evaluations took months to travel across the globe to the scanning center. The value of the data degraded as it aged, making it less likely that anyone would act on the findings.

About the time that the LMS was taking off and during the height of the dot-com boom, technology solutions were emerging to simplify the gathering of effectiveness data while also speeding the time to insight. SurveyMonkey formed in 1999 as an ad hoc online tool, Metrics That Matter launched in 2000 (originally a product of KnowledgeAdvisors, now part of a suite of integrated employee journey analytics offerings from Explorance), and other solutions followed. The LMS increasingly included basic survey capabilities to gather learner feedback.

Now effectiveness data could be gathered as easily as efficiency data. Online, self-serve systems reduced the administrative burden of collecting, scanning, and processing data and enabled learning organizations to provide results in a day or less rather than weeks or months.

However, effectiveness data, like efficiency data, has its challenges. Gathering it at scale requires valid and reliable surveys. In Peggy’s own organization at the time, senior learning practitioners questioned whether the feedback received online would be fundamentally different from what a learner would record on a paper survey. Also, because surveys were self-report, colleagues questioned whether they could rely on a personal assessment of how much an employee learned, applied, and improved. While self-report was deemed acceptable for reaction data, clients often pushed back on its reliability for higher orders of evaluation such as application of learning and business results. Nevertheless, with scalable data-collection options limited, organizations increasingly adopted these methods for gathering effectiveness data.

In addition, L&D was hiring more people with degrees in instructional systems design and leveraging their skills and capabilities to create higher-quality training. As a result, in many organizations effectiveness data began to be perceived as more important (and certainly sexier) than efficiency data. Unfortunately, when effectiveness trumps efficiency, the result is a more complex process, involving more people, such as designers, subject matter experts (SMEs), and business leaders, and considerably longer lead times. And the unintended consequence is that when L&D can’t get its products out the door in a timely manner, shadow organizations emerge to fill the gap. Sound familiar? An overdependence on effectiveness measures can have serious consequences for the L&D function over time (Figure 2-2).

Figure 2-2. Consequences of Being Overly Focused on Effectiveness

Focus Only on Outcome Measures

As L&D organizations continue to mature, they recognize that efficiency and effectiveness measures provide only a partial picture of their contribution to the enterprise. Measuring business results and demonstrating the impact of specific L&D programs on core business outcomes has been and continues to be the holy grail of the learning function.

The push for demonstrating L&D’s link to the business is a healthy one. As the L&D function works more closely with its business counterparts, it is being asked to account for the outcome: “We invested $250,000 in this solution. Did it have an impact? And if it did, we need to show the evidence.” Of the dozens of L&D professionals we meet every year, at some point nearly all of them tell us, “I have to show business impact.”

The demand for this data comes from multiple places. First, business leaders rightly expect their learning counterparts to “operate like a business,” meaning that they set clear performance goals, manage to those goals, and demonstrate that their investments have contributed to the overall health of the organization. Second, in a tight labor market, where securing the needed talent is becoming challenging, organizations must improve their performance from within. And how better to do that than train, develop, upskill, and reskill the existing workforce? L&D therefore needs to demonstrate that it is not only training employees but also improving their performance today and in the future.

Few would disagree that demonstrating L&D’s impact on business results is a good practice. However, getting that data is often challenging. First, the outcome data are not always available, at least not in the right form to demonstrate a tight link between a program and its business impact (outcome). Second, even if the data are available, the approach used to secure them for a specific program may not work for others; that is, gathering outcome data doesn’t scale. Third, not all programs are designed to have a clear business outcome. Programs to develop better communication skills will certainly be linked to specific behavioral indicators, but the business outcome may be fuzzy or too many links away in the chain of evidence. Finally, in our experience, few L&D organizations have the discipline to consistently gather outcome data. Unlike effectiveness and efficiency data, which can be standardized across programs, expected outcomes depend on the program objective: Is the intent to improve productivity, reduce cost, drive growth, or minimize risk? Each objective requires a different suite of data elements. Incorporating outcome data into the mix takes discipline: to identify the outcome before the program is designed, develop a measurement approach to isolate the impact of training, gather the right data at the right time, and finally, develop the analytics to demonstrate impact.

The challenge of using predominantly outcome data can be illustrated with a client story. This client housed its L&D function within the sales organization. Not surprisingly, business leaders wanted to know, “Has Program XYZ increased sales?” The L&D team had been doing a stellar job of linking its programs to improved business performance. The business, duly impressed, increased L&D’s budget and increasingly integrated it into the planning cycle. Then something unexpected happened: Sales declined after L&D trained the sales associates. Should L&D take “credit” for that, too? After all, if it takes credit for the upside, then doesn’t it bear some responsibility for the downside as well? The sales decline occurred in the spring of 2008. The recession had hit, and this company, a consumer products producer, was an early victim of the declining economy. The good news for L&D was that it also had a large volume of efficiency and effectiveness data. It could demonstrate that training had improved skills and individual performance. What it needed (and didn’t have) was a mechanism to show that, without the training, sales would have declined even further. (See chapter 4 for a discussion of methods to isolate the impact of training on business outcomes.)

When an L&D organization focuses its energies on demonstrating outcomes to the exclusion of efficiency and effectiveness, we believe the function is no longer managing its day-to-day business. L&D needs both efficiency and effectiveness data to demonstrate that it is allocating its funds to the right programs for the right audiences and delivering value to learners and the business. It can’t do that if it is only collecting outcome data that is not available until weeks or months after the conclusion of a program.

As we have shown in our discussion of efficiency and effectiveness data, focusing solely on outcome data has unintended consequences (Figure 2-3). While the example shown is extreme, L&D functions that eschew efficiency and effectiveness measures in favor of outcomes will miss out on important information and limit their ability to demonstrate to the organization that they can run their operation like a business.

Figure 2-3. Consequences of Being Overly Focused on Outcomes

The Solution: Be Purposeful About What You Measure

As we have shown throughout this chapter, the field of learning measurement has evolved and continues to evolve. Measurement fads (yes, we do have those) come and go, as do frameworks and approaches. But what never goes out of style is taking a deliberate and thoughtful approach to measurement. As we discuss in chapter 6, L&D professionals need to create program-level measurement strategies that contain the following elements (an illustrative sketch follows the list):

•  the reasons for measuring

•  the users, why they want the measures, how they will use them, and the desired frequency

•  the measures themselves, with specific measures identified for important programs or department initiatives

•  use of sampling and expected response rates

•  plans to collect and share the data

•  resources required.
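As a purely illustrative sketch, here is one way such a strategy could be captured in structured form. The format is a simple Python example of our own devising, not part of the TDRp framework, and the program name, users, measures, and figures are all hypothetical:

```python
# Hypothetical, illustrative program-level measurement strategy.
# Field names, measures, and figures are examples only.
measurement_strategy = {
    "program": "New Distribution Process Training",
    "reasons_for_measuring": ["monitor delivery", "improve quality", "demonstrate impact"],
    "users": [
        {"who": "CLO", "use": "manage the learning portfolio", "frequency": "monthly"},
        {"who": "VP of Operations", "use": "track contribution to cycle-time goal", "frequency": "quarterly"},
    ],
    "measures": {
        "efficiency": ["employees trained", "cost per learner"],
        "effectiveness": ["learner feedback", "application of learning (60-day follow-up)"],
        "outcome": ["contribution to reduced cycle time (strategic programs only)"],
    },
    "sampling": {"approach": "census for efficiency; sample for follow-up surveys",
                 "expected_response_rate": 0.40},
    "data_plan": {"collect": "LMS records plus post-event and follow-up surveys",
                  "share": "quarterly scorecard to stakeholders"},
    "resources": ["0.25 FTE analyst", "survey platform license"],
}

print(measurement_strategy["measures"]["efficiency"])
```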

While we don’t believe in a one-size-fits-all approach to measurement, we advise that three guidelines should inform the selection of measures: All programs should have a suite of efficiency and effectiveness measures; all strategic programs should have defined outcome measures; and non-strategic programs may not have an associated outcome measure.

First, every solution should have a suite of efficiency and effectiveness measures as part of the measurement strategy. As we have demonstrated, focusing only on efficiency or effectiveness measures produces unintended consequences that are avoided when both sets of measures are included in the plan. (See chapters 3 and 4 for a comprehensive set of efficiency and effectiveness measures.)

Second, for strategic solutions that advance the business strategy, every solution should have not only efficiency and effectiveness measures but also clearly defined outcome measures. For L&D, demonstrating the business impact of these types of training programs is very important. These programs should contribute, in a measurable way, to advancing the business strategy through improved financial outcomes (such as increased sales or profit), enhanced customer outcomes (such as new clients or increased loyalty of existing clients), operational outcomes (such as productivity or cycle time), or employee outcomes (such as engagement or retention of high performers).

For these programs, L&D should partner with the business outcome owner to determine how learning can best support the business goal. Also, the discussion with the business goal owner should inform not only what measures are in play but also how much the solution will contribute to those outcomes.

For example, let’s return to the sales example described earlier. In this case, sales declined. L&D had not had the explicit discussion with the vice president of sales about how much their sales enablement program would or should impact sales. Instead, imagine they had agreed with the vice president of sales that their program would contribute to a 2 percentage point increase in sales relative to an overall target of a 10 percent increase in sales. When sales declined by 5 percent, they could have demonstrated (with the right data to isolate their impact) that sales would have declined by 7 percent had they not delivered the enablement program.
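To make the counterfactual arithmetic concrete, here is a minimal sketch using the hypothetical figures above (isolating training’s impact from other factors is a separate step; see chapter 4):

```python
# Hypothetical figures from the sales example above
agreed_program_contribution = 2.0   # percentage points the program was expected to add
actual_sales_change = -5.0          # actual sales change, in percent

# Estimated change had the program not been delivered; assumes the agreed
# contribution was realized and can be isolated from other factors (chapter 4)
change_without_program = actual_sales_change - agreed_program_contribution

print(f"Estimated sales change without the program: {change_without_program:.0f}%")
# Prints: Estimated sales change without the program: -7%
```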

Third, recognize that not all solutions, particularly non-strategic programs, will yield a tangible business outcome. Solutions designed to “run the business” and focus on industry-standard skills are unlikely to have a clear business outcome. Compliance programs, for example, are intended to reduce organizational risk. But with a few exceptions, quantifying the direct link between a suite of compliance programs and an outcome can be challenging.

For example, one financial services firm ramped up efforts to ensure its frontline associates more consistently flagged customer money-laundering activities. The firm developed an extensive suite of anti-money-laundering training that was broadly rolled out across the organization. The L&D leaders felt that the best indicator of success was an increase in the volume of suspicious activity reports (SARs). However, training was not the only effort implemented to ensure that employees filed SARs. Moreover, the timeframe between the training and the expected increase in SARs was uncertain. The organization would have been better served to evaluate whether employees were applying the skills attained and whether their behaviors would lead to uncovering incidents of money laundering. These three guidelines are summarized in Table 2-1.

Table 2-1. Guidelines on Choosing Measures

Guideline: All programs should have a suite of efficiency and effectiveness measures
Why: Avoids the unintended consequences of focusing solely on efficiency or effectiveness
Benefit: Ensures L&D manages and monitors its operation

Guideline: All strategic programs should have defined outcome measures
Why: Ensures alignment of L&D with strategic priorities and that the organization is investing appropriately
Benefit: Creates a collaborative partnership with the business and joint ownership for outcomes

Guideline: Non-strategic programs may not have an associated outcome measure
Why: The link between the program and a business outcome is tenuous
Benefit: Minimizes non-value-added measurement activities

Conclusion

In the next three chapters, we discuss the most common efficiency, effectiveness, and outcome measures and provide details about the specific types of measures you should consider when developing your measurement strategy. We then provide guidance on creating a measurement strategy and selecting the most appropriate measures based on your strategy and reasons for measuring. Throughout, remember that there are common measures but no magic list that you should always use. The key is to keep your measures balanced and meaningful for your business, enabling leaders to manage L&D efficiently and effectively to meet needs and advance business outcomes.