CHAPTER 7
We are now ready for the hardest part of creating a measurement strategy—selecting the measures. As we noted in the introduction, we have identified more than 180 measures for L&D and more than 700 for all of HR. So, the profession does not lack measures! Instead, the challenge is to select the right measures for the coming year or two, understanding that the measurement strategy will evolve in future years, requiring different measures. Moreover, the goal is to select the smallest number of measures required to meet your needs. There is no extra credit for selecting hundreds of measures, most of which would go unused but require valuable time to generate.
The purpose of measuring provides some guidance in the selection of measures, but as noted earlier, the same measure can serve different purposes. Table 7-1 gives some general selection guidance based on the purpose of measuring.
Table 7-1. Guidance on Measurement Selection by Purpose for Measuring
| Measurement Purpose | Measures | Comments |
| --- | --- | --- |
| Manage | Same as evaluate | Same as evaluate |
| Analyze | Typically, the CLO will decide which measures to analyze; they may include efficiency, effectiveness, and outcome measures | |
| Evaluate | • Efficiency measures: participants, completion rate, on-time completion, cost • Effectiveness measures: Levels 1–3 (minimum); Level 5 for key programs • Outcome measures for key programs; isolated impact (Level 4) if possible | |
| Inform and monitor | • Efficiency measures (e.g., participants or courses) • May also include basic effectiveness measures (e.g., Levels 1, 2, and 3) | Determine what leaders want to see and provide it |
As you can see, the same measure may be employed for all purposes, especially the most common efficiency and effectiveness measures.
Our approach will be to first select measures for strategic programs, which are directly aligned to organizational goals. The same measures will be employed for most programs, but there will be some differences. The purpose of these measures will be to inform, monitor, evaluate or analyze, and manage; they will be used primarily by the program manager and CLO. Second, we will examine measures for non-strategic programs, which are not directly aligned to organizational goals. Like strategic programs, they typically contain multiple courses. The measures will be similar to those for aligned programs, except there usually will be no outcome measure. Third, we will examine measures for courses of general interest, which are typically taken at the discretion of the employee and are often referred to as general studies courses. Fourth, we will suggest measures to be used at the department level to inform, monitor, analyze, and manage across many programs, as well as measures for improvement initiatives in L&D processes and systems. Last, we will suggest some other common measures often used by the CLO. The plan is shown in Table 7-2.
Ensure that you have a robust measurement strategy. As explained in chapter 2, such a strategy should contain all three types of measures (efficiency, effectiveness, and outcome), not necessarily for one program or initiative but certainly across all programs and initiatives. A strategy with only efficiency or only effectiveness measures would be unbalanced, regardless of the reason for measuring (except perhaps to inform and monitor, if the focus is only on one type of measure). And a strategy with no outcome measures indicates an L&D department that is not well aligned to the organization, since learning should always be able to support at least one strategic or organizational goal.
Table 7-2. Plan for Measuring Programs, General Studies Courses, and Department Initiatives
The right combination of measures depends on the program itself, its purpose and goals, and the reason for measuring it. The result may be:
• efficiency and effectiveness measures, or
• efficiency and effectiveness measures plus an outcome measure.
We will refer to programs directly aligned to organizational goals as strategic programs. By organizational goals we mean those set by the CEO, business unit leader, or head of HR—such as a 10 percent increase in sales, a 5 percent increase in product quality, or a 2 percentage point increase in employee engagement. Directly aligned programs would be designed specifically to help achieve these goals. An example would be a sales training program to help sales representatives close more deals. And, if it is an organizational goal, there should be an outcome measure.
We will refer to programs not directly aligned to top-level goals as non-strategic programs. While not directly aligned, these programs nonetheless are very important to the success of the organization, and in some cases, may be more important than the strategic programs. Examples include basic skills training and compliance.
In both cases, we are talking about structured programs, which usually consist of multiple courses in a particular order covering multiple learning objectives. We will address measures for individual courses after discussing measures for both types of programs. Let’s start with the strategic programs.
Measures for consideration are shown by category in Table 7-3. Any given program may not employ all these measures, depending on the specific program, resources, priorities, and other considerations.
Table 7-3. Recommended Measures for Programs Directly Aligned to Organizational Goals
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique participants • Total participants* • Completion rate • Completion date† • Cost | • Level 1 participant reaction • Level 1 goal owner reaction • Level 2 learning • Level 3 intent to apply • Level 3 actual application • ROI | • Level 4 actual impact from control group, or • Level 4 actual impact from trend or regression, or • Level 4 initial estimate of impact and • Level 4 final estimate of impact |
Notes:
* If a program consists of just one course, the number of unique and total participants will be the same. If there are multiple courses for the same audience, then in theory Total participants = Unique participants × Number of courses. In practice, participants may not start or complete every course, so measured total participants may be less.
† Completion date or on-time completion is a measure of the program’s progress in meeting development and delivery deadlines, so there may be a measure for each.
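To make the note's arithmetic concrete, here is a minimal sketch in Python using hypothetical enrollment records (the record layout, names, and numbers are illustrative assumptions, not a prescribed format):

```python
# Hypothetical enrollment records for a three-course program:
# (participant_id, course_id, completed)
enrollments = [
    ("emp01", "C1", True), ("emp01", "C2", True), ("emp01", "C3", False),
    ("emp02", "C1", True), ("emp02", "C2", True), ("emp02", "C3", True),
    ("emp03", "C1", True),  # joined the program but accessed only one course
]

unique_participants = len({e[0] for e in enrollments})  # distinct people: 3
total_participants = len(enrollments)                   # person-course enrollments: 7
courses = len({e[1] for e in enrollments})              # courses in the program: 3

# In theory, total = unique x courses; in practice it is often lower (note *)
theoretical_total = unique_participants * courses       # 9
completion_rate = sum(e[2] for e in enrollments) / total_participants

print(f"Unique: {unique_participants}, Total: {total_participants} "
      f"(theoretical: {theoretical_total}), Completion rate: {completion_rate:.0%}")
```

Here measured total participants (7) falls short of the theoretical 9 because not everyone started every course, exactly as the note describes.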
While these are the common measures for formal learning, there may also be informal learning integrated into the program, and occasionally, performance support may eliminate the need for any formal learning. So, the measures in Table 7-4 may be employed in addition to those listed in Table 7-3.
Table 7-4. Measures of Informal Learning for Strategic Programs
| Category | Efficiency Measures | Effectiveness Measures |
| --- | --- | --- |
| Content | • Unique users • Percentage reach • Total number of documents available • Total number of documents accessed • Percentage of documents accessed • Average number of documents accessed • Average time spent on site | • Participant reaction to content |
| Communities of Practice | • Total number of community of practice members • Active community of practice members • Percentage of active community of practice members | • Participant reaction to community of practice resources and activities |
| Performance Support | • Performance support tools available • Performance support tools used • Unique performance support tool users • Total number of performance support tool users • Percentage of performance support tool users | • Participant reaction to performance support tools |
| Coaching or Mentoring | • Number of coaches/mentors • Number of coachees/mentees • Ratio of coaches/mentors to coachees/mentees • Average time in program • Average time in meetings • Frequency of meetings | • Participant reaction to coaching • Participant reaction to mentoring • Coach/mentor reaction |
Just as informal learning should always be considered when designing a program, so should the informal learning measures. While it does not make sense to repeat the list in Table 7-4 for each program, you should select the appropriate informal learning measures whenever informal learning is part of the program.
With this in mind, the following examples illustrate the selection of measures for formal learning in programs that are directly aligned to organizational goals.
Since sales training programs are typically directly aligned to an important organizational goal, all the key effectiveness and outcome measures are employed (Table 7-5). If resources do not permit using all the measures, the recommendation is to measure intent to apply and initial estimate of impact as part of the post-event survey and then skip the follow-up survey.
Table 7-5. Recommended Measures for Sales Training Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Completion rate • Completion date • Cost | • Level 1 participant reaction • Level 1 goal owner reaction • Level 2 average score on first attempt or number of attempts required to pass • Level 3 intent to apply • Level 3 actual application • ROI | Impact of learning on sales: • Level 4 actual impact from control group, or • Level 4 actual impact from trend or regression, or • Level 4 initial estimate of impact and • Level 4 final estimate of impact |
Since quality, efficiency, and other productivity training programs are typically directly aligned to an important organizational goal, all the key effectiveness and outcome measures are employed, just as with sales training (Table 7-6). If resources do not permit using all the measures, the recommendation is to measure intent to apply and initial estimate of impact as part of the post-event survey and then skip the follow-up survey.
Table 7-6. Recommended Measures for Quality, Efficiency, and Other Productivity Training Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Completion rate • Completion date • Cost | • Level 1 participant reaction • Level 1 goal owner reaction • Level 2 average score on first attempt or number of attempts required to pass • Level 3 intent to apply • Level 3 actual application • ROI | Impact of learning on goal: • Level 4 actual impact from control group, or • Level 4 actual impact from trend or regression, or • Level 4 initial estimate of impact and • Level 4 final estimate of impact |
Since safety training programs generally are directly aligned to an important organizational or business unit goal, all the key effectiveness and outcome measures are employed (Table 7-7). If resources do not permit using all the measures, the recommendation is to measure intent to apply and initial estimate of impact as part of the post-event survey and then skip the follow-up survey.
Table 7-7. Recommended Measures for Safety Training Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Completion rate • Completion date • Cost | • Level 1 participant reaction • Level 1 goal owner reaction • Level 2 average score on first attempt or number of attempts required to pass • Level 3 intent to apply • Level 3 actual application • ROI | Impact of learning on safety: • Level 4 actual impact from control group, or • Level 4 actual impact from trend or regression, or • Level 4 initial estimate of impact and • Level 4 final estimate of impact |
Leadership programs are a little different from most other L&D programs (Table 7-8). The completion rate may not be measured or reported because it is assumed that all leaders will complete the program, and knowledge tests usually are not given, so there is no Level 2. In some organizations, the leadership score on the employee engagement survey is used as the organizational goal. At Caterpillar, seven questions on leadership formed a leadership index, which served as the measure for leadership; with this, Level 4 could be measured. Most organizations, however, do not try to measure the isolated impact on a leadership score but can still measure the isolated impact of leadership training on revenue, cost reduction, and so forth. If resources are limited, the Level 4 and 5 measures can be skipped, especially if the program has excellent senior leadership support.
Table 7-8. Recommended Measures for Leadership Training Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Completion date • Cost | • Level 1 participant reaction • Level 1 goal owner reaction • Level 3 intent to apply • Level 3 actual application • ROI | Impact of learning on leadership score (used by some organizations): • Level 4 actual impact from control group, or • Level 4 actual impact from trend or regression, or • Level 4 initial estimate of impact and • Level 4 final estimate of impact |
The list of recommended measures for programs not directly aligned to organizational goals is the same, except there will be no outcome measures because the program is not designed to directly contribute to achieving an organizational goal (Table 7-9). Examples of non-strategic programs would be onboarding, basic skills training, team building, communication skills, data literacy, IT skills, and career exploration. These are all important programs and should increase workforce competency, which should in turn improve performance, which should help the organization achieve its goals, but the impact is indirect compared to a sales training program, which should directly lead to higher sales.
Table 7-9. Recommended Measures for Programs Not Directly Aligned to Organizational Goals
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique participants • Total participants • Completion rate • Completion date • Cost | • Level 1 participant reaction • Level 1 goal owner reaction • Level 2 learning • Level 3 intent to apply • Level 3 actual application • ROI | • May include a competency measure |
Just as with strategic programs, the measures for informal learning listed in Table 7-4 should be considered whenever informal learning is part of the program design.
Depending on the program, there may be a measure of competency, which in some cases could be a proxy for an outcome measure. As discussed in chapter 5, an increase in competency is usually a means to the end of improving performance, which is the outcome. However, there are circumstances where the program is very important, and the isolated impact of learning is difficult or impossible to measure. In these cases, a measured improvement in competency may serve as a proxy for the unmeasurable outcome.
With a caveat to also consider the informal learning measures shown earlier, the following examples illustrate the selection of measures for programs not directly aligned to organizational goals.
Typically, compliance programs are not directly aligned to an organizational goal. For compliance programs, the most important measures are unique participants and completion rate (to make sure the right employees completed the course), as well as the final test score for each employee to prove that they passed the course (Table 7-10). These measures are required for the organization's legal defense. Other measures will be helpful in evaluating and managing the program. Often, Level 3 is not measured for compliance, but it should be for programs that are important to the organization's culture, such as sexual harassment prevention. In these cases, actual application may also be worth measuring.
Table 7-10. Recommended Measures for Compliance Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Completion rate • Completion date • Cost | • Level 1 participant reaction • Level 1 goal owner reaction (perhaps) • Level 2 individual score on final test, average score on first attempt, or number of attempts required to pass • Level 3 intent to apply | • Generally none, but could be reduction in violations or percent of complaints |
Although process-improvement training programs are typically not directly aligned to an organizational goal and thus do not normally have an outcome measure, their highly quantitative nature and focus on the voice of the customer lend them to isolating impact and thus to ROI (Table 7-11). All the key effectiveness measures can still be employed, but there may not be an outcome measure, although the goal owner may believe the program will contribute to achieving a quality or productivity goal. If resources do not permit using all the measures, the recommendation is to measure intent to apply and initial estimate of impact as part of the post-event survey and then skip the follow-up survey.
Table 7-11. Recommended Measures for Process-Improvement Training Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Completion rate • Completion date • Cost | • Level 1 participant reaction • Level 1 goal owner reaction • Level 2 average score on first attempt or number of attempts required to pass • Level 3 intent to apply • Level 3 actual application • Level 4 isolated impact on various organizational metrics like sales or cost • ROI | • Impact of learning on process improvement • Productivity • Quality |
Basic skills training programs are not typically directly aligned to an organization or business unit goal. Basic skills training enables employees to meet the minimum requirements of their jobs. In some cases, this training is provided during the onboarding period to ensure they can perform the job for which they were hired. In other cases, basic skills training addresses changing job requirements or preparation for a new role. Some organizations, like those in the restaurant industry, complete basic training in just a few hours. Others, such as accounting firms, consultancies, and the military, send their employees to in-house or external training that may last weeks or months.
Because basic skills programs can vary significantly in length, you should include a duration measure or time to proficiency. For Level 1, the opinion of the trainee’s current or receiving supervisor is particularly important and may serve as a headline measure of success (Table 7-12).
Table 7-12. Recommended Measures for Basic Skills Training Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Duration of program • Completion rate • Completion date • Cost • Time to proficiency | • Level 1 participant reaction • Level 1 receiving supervisor reaction • Level 2 individual score on final test, average score on first attempt, or number of attempts required to pass • Level 3 intent to apply | • None, unless some measure of competency or a measure of satisfaction from the receiving unit is used |
Reskilling programs are not typically directly aligned to an organization or business unit goal. Like basic skills training programs, reskilling programs may last for weeks or months, so a duration measure is common. Likewise, some organizations measure time to proficiency, especially when the program is self-paced. For Level 1, the opinion of the supervisor receiving the reskilled employee is particularly important and may serve as a headline measure of success (Table 7-13).
Table 7-13. Recommended Measures for Reskilling Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Duration of program • Completion rate • Completion date • Cost • Time to proficiency | • Level 1 participant reaction • Level 1 receiving supervisor reaction • Level 2 individual score on final test, average score on first attempt, or number of attempts required to pass • Level 3 intent to apply | • None, unless some measure of competency or a measure of satisfaction from the receiving unit is used |
IT and other professional skills training programs are not typically aligned to an organization or business unit goal. This category includes professional skills such as IT, accounting, purchasing, logistics, finance, and HR. For Level 1, the opinion of the senior professional leader (like the head of IT), as well as supervisors in that area, is particularly important and may serve as a headline measure of success (Table 7-14). From time to time, these programs may be aligned to an organizational goal (like reducing the cost of purchased material), and in these cases there will be an outcome measure.
Table 7-14. Recommended Measures for Professional Skills Training Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Completion rate • Completion date • Cost | • Level 1 participant reaction • Level 1 senior professional leader (e.g., head of IT or accounting) and supervisor reaction • Level 2 individual score on final test, average score on first attempt, or number of attempts required to pass • Level 3 intent to apply • Level 3 actual application | • None, unless some measure of competency is used |
Onboarding programs are very similar to basic skills training programs; they are not typically directly aligned to an organization or business unit goal. They are longer than most courses and are often required before a new hire can perform the job for which they were hired, so a duration measure is common. Unlike basic skills training, there usually would not be a time to proficiency measure and there may not be any knowledge tests. For Level 1, the opinion of the supervisor receiving the new employees is particularly important and may serve as a headline measure of success (Table 7-15).
Table 7-15. Recommended Measures for Onboarding Programs
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants • Duration of program • Completion rate • Completion date • Cost | • Level 1 participant reaction • Level 1 receiving supervisor reaction • Level 2 individual score on final test, average score on first attempt, or number of attempts required to pass | • None, unless the satisfaction of the receiving supervisor is used |
This concludes the section on program measures. Generally speaking, there will always be at least one efficiency and one effectiveness measure, and it is hard to imagine programs for which there would not be two or three of each. Program measures should almost always include a mix (Table 7-16). If the purpose is to evaluate, these measures will be shared in a program evaluation report. If the purpose is to manage, they will be shared in a program report. Both reports are explored in chapter 9.
Table 7-16. Summary of Recommended Core Program Measures
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique and total participants (if multiple courses) • Completion rate • Completion date (on-time completion) • Cost | • Level 1 participant reaction • Level 1 goal owner reaction • Level 2 (learning) if appropriate • Level 3 (intent to apply) • Level 3 (actual application) for key programs | • Program specific (measure of impact if aligned to strategic program) |
Next, we examine measures for courses not directly aligned to organizational goals and not part of structured programs to address organizational needs. These are often referred to as general studies courses and include courses like team building, communications, and cultural awareness. They are important and will indirectly contribute to a better workforce, but they do not directly contribute to the organization's top goals or an important need like onboarding or basic skills training. Examples of unaligned programs in the previous section included the same courses, but the difference is how they are organized. Typically, a program will consist of multiple courses and be designed to meet some need, and often the courses will be mandated or strongly recommended. In contrast, general studies courses are offered individually and are completely at the discretion of the employee. Most general studies courses today are online and self-paced, but some, like communications and team building, are still instructor-led.
Given these differences, only basic efficiency measures and a Level 1 effectiveness measure are recommended for these courses. Since they are not directly aligned, there will be no Level 4 and no outcome measure. It is usually not worth it to measure Level 3, so the measurement strategy is very simple.
Because it is just one course, total participants will equal unique participants (Table 7-17). Since the course is not directly aligned to any organizational goal and is not being managed, there is usually no need to measure completion rate or date. Cost may be measured. Level 1 participant reaction is the only effectiveness measure that may be worth capturing, since it would be interesting to know whether learners thought the course was worthwhile. If they did not, the course should be updated or replaced. If Level 1 data can be easily obtained, then measure Level 1. If it is expensive and time consuming, your measurement effort is better spent on the aligned programs and department improvement initiatives.
Table 7-17. Recommended Measures for a General Studies Course
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique participants | • Level 1 reaction | • None |
Many organizations pay vendors to provide hundreds of e-learning courses for their employees to take at work, from home, or on the road. The organization may pay by course or by seat and often can swap courses in and out on a quarterly basis. This can be a very cost-effective way to provide quality learning to employees if the utilization rate is high.
Since we are now analyzing a suite of courses, the number of total participants is just as important as the number of unique participants, because many employees may take more than one course. The average number of courses used per employee is another way to measure the intensity of usage, and the percentage of employees who accessed at least one course is a measure of the breadth of usage. It is also important to manage course utilization, so the number and percentage of courses used are usually measured (Table 7-18). These measures provide guidance on swapping out less-used courses for those with potentially better demand.
Table 7-18. Recommended Measures for Suites of E-Learning Courses
| Efficiency Measures | Effectiveness Measures | Outcome Measures |
| --- | --- | --- |
| • Unique participants • Total participants • Number of courses used per employee • Percentage of employees who have accessed at least one course • Number of courses used • Percentage of courses used | • Level 1 participant reaction measured periodically for the entire suite of courses, not for each course • Level 1 manager reaction to the suite of courses | • None, unless the courses are deployed to increase employee engagement or retention, in which case these could be outcome measures |
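As an illustration, the usage measures in Table 7-18 might be computed from LMS access logs along these lines. This is a sketch only; the log format, names, and counts are assumptions:

```python
# Hypothetical access log: one (employee_id, course_id) pair per course access
accesses = [("e1", "c1"), ("e1", "c2"), ("e2", "c1"), ("e3", "c5"), ("e3", "c5")]
headcount = 10     # employees with access to the suite
suite_size = 100   # courses licensed in the suite

pairs = set(accesses)                                 # de-duplicate repeat visits
unique_participants = len({emp for emp, _ in pairs})  # 3
total_participants = len(pairs)                       # employee-course combinations: 4
courses_used = len({c for _, c in pairs})             # 3

print(f"Breadth: {unique_participants / headcount:.0%} of employees used a course")
print(f"Intensity: {total_participants / unique_participants:.1f} courses per active user")
print(f"Utilization: {courses_used / suite_size:.0%} of the suite was accessed")
```

Note that the intensity figure here is per active user; dividing by headcount instead gives the average number of courses used per employee.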
Level 1 participant reaction may be obtained if the question can be appended to the end of each course. If this is not possible, then conduct a periodic survey to obtain feedback on all the general studies courses in the suite taken in the last six or 12 months. Instead of asking about each course, ask about satisfaction with the suite of courses and whether the suite provides enough general learning opportunities. Additional questions might include satisfaction with the breadth of courses offered and an open-ended question on suggestions for other courses. It would also be good to get feedback from managers who have responsibility for employee development because these courses will be used heavily for general development in employee individual development plans.
We now turn our attention to initiatives that represent improvements in L&D processes and systems, or in aggregate measures of efficiency and effectiveness across all programs. The CLO decides on the initiatives for the year and sets a target or goal for improvement. In other words, these initiatives will be managed. (The last section of this chapter addresses other measures of interest to a CLO that may not be managed.)
If the initiative is to improve an efficiency or effectiveness measure, then the primary focus will be on that particular measure. If the initiative is to improve a process or system (like the LMS), then the primary focus will be on measures related to the particular process or system.
For department improvement initiatives, this general guidance will make selection of the appropriate measures easier:
• Choose the measure that is the target of the improvement initiative. This will be the primary measure.
• Consider adding a complementary measure to identify unintended consequences. For a primary effectiveness measure, this will be an efficiency measure. For a primary efficiency measure, this will be an effectiveness measure.
• Consider adding ancillary measures, which provide additional context.
An example of an unintended consequence would be a drop in satisfaction with e-learning (the complementary measure) as a result of introducing a lot of new but lower-quality e-learning content (the amount of new content being the primary measure).
The selection of the primary measure is always easy. It is simply whatever the CLO says. If they say the goal is to increase the percentage of e-learning, the percentage of e-learning is the primary measure. If they say the goal is to increase the average Level 1 reaction across all courses, the primary measure is Level 1 reaction. Selection of the complementary measure is trickier and often requires experience to know what the likely unintended consequences could be.
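One lightweight way to capture this guidance is to record each initiative's measures in a simple structure. The sketch below is purely illustrative; the field names and target value are assumptions:

```python
# Hypothetical record of a department improvement initiative's measures,
# following the primary/complementary/ancillary guidance above
initiative = {
    "name": "Increase the percentage of e-learning",
    "primary": {  # whatever the CLO says the goal is
        "measure": "Percentage of e-learning (by courses)",
        "type": "efficiency",
        "target": 0.40,
    },
    "complementary": [  # watch for unintended consequences
        {"measure": "Level 1 participant reaction with e-learning",
         "type": "effectiveness"},
    ],
    "ancillary": [],  # optional measures that add context
}

# Per the guidance, a complementary measure should be the opposite type
# of the primary measure (efficiency vs. effectiveness)
assert all(c["type"] != initiative["primary"]["type"]
           for c in initiative["complementary"])
```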
We start with improvement initiatives across all courses and then address improvements to processes and systems.
Following are some examples of initiatives across all courses.
The primary measure is the percentage of e-learning because that is what the CLO said it was. From experience, an emphasis on a quantity measure may lead to a deterioration in an effectiveness measure, so it would be wise to keep a close watch on learner satisfaction with e-learning (Table 7-19).
Table 7-19. Recommended Measures for Increasing the Percentage of E-Learning
| Efficiency Measure (Primary) | Effectiveness Measure (Complementary) |
| --- | --- |
| • Percentage of e-learning (by number of courses, hours, or participants) | • Level 1 participant reaction with e-learning |
In this case, the primary measure is satisfaction with e-learning because that is the goal of this initiative. From experience, an emphasis on a quality measure may be associated with a deterioration in an efficiency measure, so it would be wise to keep a close watch on e-learning usage (Table 7-20). For example, suppose the increase in e-learning satisfaction occurs because all the low-rated e-learning courses are removed. That would be one way to increase the average satisfaction but is probably not what the CLO intended.
Table 7-20. Recommended Measures for Increasing Satisfaction With E-Learning
| Efficiency Measures (Complementary) | Effectiveness Measure (Primary) |
| --- | --- |
| • Participants for e-learning • Percentage of e-learning • Number of e-learning courses | • Satisfaction with e-learning: Level 1 participant reaction with e-learning |
The primary measure is the percentage of on-time deliveries because that is the goal from the CLO. What could go wrong as this goal is pursued? For one, the staff may rush out new programs before they are ready, which is likely to show up as a drop in participant satisfaction and ultimately a drop in goal owner satisfaction. So, Level 1 becomes the complementary measure to watch (Table 7-21).
Table 7-21. Recommended Measures for Increasing On-Time Program Deliveries
| Efficiency Measures (Primary) | Effectiveness Measure (Complementary) |
| --- | --- |
| • Percentage of on-time program deliveries (track both promised and actual completion deadlines) | • Level 1 goal owner reaction • Level 1 participant reaction |
Context: Assume Level 1 participant reaction has declined over the past two years. Analysis indicates that participants are unhappy with the old-style courses and the quality of the instructors' in-class performance. Plans have been put in place to upskill the course designers and provide coaching to the instructors. At the same time, a new LMS is being installed.
This example shows a particular situation but illustrates how ancillary measures can be used to present a holistic picture. The primary measure is satisfaction with all learning because that is the goal of this initiative. The most important complementary measures are probably the percentages of learning by modality, since a change in the mix could produce an improvement in Level 1. For example, if e-learning has a lower Level 1 than ILT, simply reducing the percentage of e-learning could increase Level 1 even though no actual improvement occurred in either e-learning or ILT.
There are a number of ancillary measures that may be of interest in this particular example. One is the installation of a new LMS, which can easily impact Level 1 scores regardless of efforts to improve Level 1 for ILT. If the transition is messy and courses are not available, participants could reflect that in their Level 1 feedback. Since improvement is dependent on designers taking the new courses and instructors receiving coaching, both bear watching (Table 7-22).
Table 7-22. Recommended Measures for Increasing Participant Learning Satisfaction
| Efficiency Measures (Complementary) | Effectiveness Measure (Primary) |
| --- | --- |
| • Percentage of ILT and e-learning by participants, courses, and hours | • Satisfaction with learning: Level 1 participant reaction with all learning, ILT, and e-learning |

| Potential Measures (Ancillary) | |
| --- | --- |
| • Installation date for new LMS • Number of designers taking new design courses • Completion date for designers to complete new courses | • Number of instructors selected for coaching • Completion date for coaching targeted instructors |
Assume Level 3 actual application has been falling, and the CLO has directed staff to undertake an initiative to improve communication by the goal owner with the target audience and influence the goal owners to do a better job reinforcing the learning (Table 7-23).
Table 7-23. Recommended Measures for Increasing Overall Application Rate
| Efficiency Measures (Complementary) | Effectiveness Measure (Primary) |
| --- | --- |
| • None | • Level 3 intent to apply • Level 3 actual application |

| Potential Measures (Ancillary) | |
| --- | --- |
| • Percentage of key programs with communication plans by goal owner in place at launch • Percentage of key programs with reinforcement plans in place at launch • Percentage of key programs where reinforcement occurs | |
This example reflects another very specific situation but illustrates how ancillary measures can help present a holistic picture. The primary measure is application, which will be measured both ways. Intended application will serve as a leading indicator of actual application. No true complementary measures have been identified, but several important ancillary measures have been, and these are taken directly from the action plan for this initiative. The first measures whether the goal owner has communicated, and the last two measure how the goal owner is performing on reinforcement. These all will be leading indicators for application.
Last, we examine the selection of measures for process or system improvements. These will often involve very specific measures, which may not be as common and consequently were not discussed in chapter 3. The approach, however, will be the same as in the previous section. The primary measure will come directly from the CLO's goal, and it may be helpful to identify an ancillary measure of interest or a complementary measure to guard against unintended consequences.
As in our other examples, the primary measure comes directly from the CLO, who has established a goal to increase LMS uptime. No complementary measure has been identified that may deteriorate because of increasing LMS uptime, but if resources must be redirected to work on the uptime issue, it is possible that another measure could be adversely affected. An improvement in LMS uptime should result in higher user satisfaction, so that is included as an ancillary measure (Table 7-24).
Table 7-24. Recommended Measures for Increasing LMS Uptime
| Efficiency Measure (Primary) | Effectiveness Measures (Ancillary) |
| --- | --- |
| • LMS uptime (percent of time LMS is available) | • User satisfaction (a Level 1 type measure applied to the LMS) |
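For reference, the uptime percentage itself is simple arithmetic. A minimal sketch, assuming downtime is logged in minutes (the figures are hypothetical):

```python
# Uptime = available time / scheduled time, from logged outage minutes
scheduled_minutes = 30 * 24 * 60   # a 30-day month of scheduled availability
downtime_minutes = 180             # e.g., three hours of outages (hypothetical)

uptime_pct = (scheduled_minutes - downtime_minutes) / scheduled_minutes * 100
print(f"LMS uptime: {uptime_pct:.2f}%")  # LMS uptime: 99.58%
```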
In this case, the objective is to reduce the average time of help desk calls. The danger, of course, is that the reduction in average call time will reduce the quality of the support (providing brief, incomplete guidance or not asking if there are any other issues), so user satisfaction should be measured to ensure the quality does not suffer. Depending on the plan to reduce call time, there may be other ancillary measures to monitor or manage as well. For example, if the plan includes posting FAQs, then an ancillary measure may be the number of FAQs and their completion date. If the plan is to provide better scripts for the help desk staff, then an ancillary measure would be the completion date for the scripts and perhaps training on the new scripts (Table 7-25). Each of these ancillary measures will help leaders manage the action items to reduce the average call time.
Table 7-25. Recommended Measures for Reducing Call Time
| Efficiency Measures (Primary) | Effectiveness Measures (Complementary) |
| --- | --- |
| • Average help desk call time | • User satisfaction with help desk |

| Potential Measures (Ancillary) | |
| --- | --- |
| • Number of FAQs posted • Completion date for FAQ posting | • Completion date for new scripts • Completion date for training on new scripts |
Assume the CLO wants to increase the use of communities of practice, portal content, and performance support. Efficiency measures such as usage will provide insight into the adoption of these informal learning methods. However, the CLO also understands that usage without positive perceptions will not be sustainable. As a result, the CLO suggests that the organization also select one or two complementary effectiveness measures (Table 7-26).
Table 7-26. Recommended Measures for Increasing the Use of Informal Learning
| Category | Efficiency Measures (Primary) | Effectiveness Measures (Complementary) |
| --- | --- | --- |
| Communities of Practice | • Number of communities of practice • Active communities of practice • Unique users | • User reaction to communities of practice |
| Content | • Number of documents available • Number of documents accessed • Unique users • Total users | • User reaction to content |
| Performance Support | • Number of performance support tools available • Number of performance support tools used • Unique users • Total users | • User reaction to performance support |
This concludes our section on department initiative measures to be managed. There will always be a primary measure, which is the goal of the CLO. It may be either an efficiency or effectiveness measure. It would be wise to supplement the primary measure with a complementary one that will help identify a potential unintended consequence of the initiative. And consider whether any ancillary measures would be useful.
The primary measures will be shared in the operations report, which is discussed in chapter 9.
While we have already discussed many measures and applications, there are still other measures that a CLO may want to see on a monthly, quarterly, or annual basis. Unlike the measures in the last section on improvement initiatives, these may not always be managed. Sometimes they will simply be used to inform or monitor and, if the results are not acceptable, may be moved over to the “manage” category.
Here, then, are some examples of these other measures to round out our chapter on measurement selection.
Typically, the CLO will be interested in the mix of courses, hours, and participants by modality. By definition, these measures will be aggregated across all courses for which there are data:
• courses available
• percentage of courses used
• percentage of courses, hours, and participants by modality
• percentage of courses by area or discipline.
In addition, some learning organizations segment their learning portfolio based on the program purpose, such as increased sales or enhanced employee productivity. In this case, the CLO might be interested not only in efficiency measures for each portfolio, but also effectiveness measures to enable comparisons across programs with a similar purpose.
CLOs are also usually interested in some measures of course management such as:
• total courses developed
• percentage of courses developed meeting the deadline
• total courses delivered
• percentage of courses delivered meeting the deadline
• classes canceled
• percentage of classes canceled
• effort to create new courses
• effort to update existing courses.
Another set of common measures focuses on reach, or the number or percentage of employees touched by learning. As we discussed in chapter 3, reach is a measure of the breadth or dispersion of learning within the organization. Reach answers the question of whether learning is broad based (reaching most employees) or more narrowly focused (reaching only a few). These types of measures are often of interest to senior leaders (a brief calculation sketch follows the list):
• employees reached by L&D
• employees reached by formal learning
• percentage of employees reached by formal learning
• employees who have accessed content
• employees with a development plan
• percentage of employees with a development plan.
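The percentage variants are simple ratios against headcount. A minimal sketch with hypothetical counts that would normally come from the LMS and the HR system:

```python
# Hypothetical counts pulled from the LMS and the HR system
headcount = 4000
reached_by_formal = 3200   # employees who took at least one formal course
with_dev_plan = 2600       # employees with a development plan on file

print(f"Reached by formal learning: {reached_by_formal / headcount:.0%}")  # 80%
print(f"With a development plan: {with_dev_plan / headcount:.0%}")         # 65%
```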
The CLO and the CFO are always interested in cost measures (see the sketch after this list). The most common include:
• direct expenditure
• opportunity cost
• direct expenditure per employee
• direct expenditure per unique learner
• direct expenditure as a percentage of revenue, payroll, or profit
• percentage of direct expenditure for tuition reimbursement
• percentage of direct expenditure for external services.
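A minimal sketch of the ratio measures, using hypothetical annual figures:

```python
# Hypothetical annual figures for the cost ratios listed above
direct_expenditure = 2_500_000   # total L&D direct spend
employees = 4000
unique_learners = 3200
revenue = 500_000_000

print(f"Per employee: ${direct_expenditure / employees:,.0f}")              # $625
print(f"Per unique learner: ${direct_expenditure / unique_learners:,.0f}")  # $781
print(f"Percent of revenue: {direct_expenditure / revenue:.2%}")            # 0.50%
```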
The discussion in this section so far has focused primarily on measures for formal learning, but informal learning measures are becoming increasingly common. Measures for content, communities of practice, performance support, and coaching should be added to the list of measures that a CLO would like to see for the purposes of informing or monitoring. A complete list was included in Table 7-4, and a shorter list was provided in the discussion about increasing the use of informal learning in the previous section on department initiatives.
This concludes our discussion of other measures a CLO might want to see to inform or monitor. Note that measures to manage department improvement initiatives were discussed in the previous section. These measures should come up in the discussion with the CLO at the start of the measurement strategy process and could include any of the efficiency or effectiveness measures identified in chapters 3 and 4. These additional measures to inform or monitor would appear in scorecards or dashboards.
You should now be ready to create the measurement portion of your measurement and reporting strategy. Our advice is to start with a few measures and gradually grow your strategy over time, both in the number of measures and in complexity. It is much better to execute a lean strategy successfully than to fail to execute a grandiose one. A successfully executed lean strategy will produce results, give everyone valuable experience using the measures, and build internal expertise and credibility. In contrast, failure to execute a complex strategy may lead leaders to question the team's ability to produce results and make it more difficult to win approval for future strategies.
With our measures selected, we are now ready to explore how the measures are used in reports. Chapter 8 introduces the five types of reports and chapter 9 focuses on the highest use of measures for management reporting.