CHAPTER 12

Reporting Year-to-Date Results and Making Forecasts

Like creating plans, reporting year-to-date (YTD) results and making forecasts play a very important role in running learning like a business. Recall the two key questions a manager must answer each month if they are executing the plan with discipline: 1) Are we on plan year-to-date? and 2) Will we make plan for the year? A comparison of YTD results to plan answers the first question, and a comparison of forecast to plan answers the second.

Consequently, reporting the comparison to plan is just as important as reporting the YTD results or forecasts themselves, which is why the management reports in chapter 9 contain columns for all four: YTD results, YTD results compared to plan, forecast, and forecast compared to plan. Note that you can make the comparison numerically (for example, 5 percent below plan) or characterize it simply as above plan, on plan, or below plan.

Furthermore, while we recommend you make the comparison of YTD results directly against plan, some may prefer to make it against YTD plan. Most organizations do not create YTD plans (which require a plan for each month of the year), so the comparison is against where you think you should be at this time of the year. For example, suppose the plan is for a 4 percent increase in sales due to learning and June YTD results show a 2 percent increase due to learning. If you compare to the annual plan increase of 4 percent, you are 2 percentage points below plan. On the other hand, you might conclude that you are making good progress toward achieving the 4 percent. In fact, you are halfway to the planned increase of 4 percent by the end of June, so in that sense you may be “on plan.” Either type of YTD comparison is fine as long as it is clear to the reader (user) which method is being employed. We will make our comparisons to the annual plan but factor in the progress against annual plan when making the forecast. So, for this example, we would characterize YTD results as 2 points below plan but characterize the forecast as being on plan since we are making good progress YTD.

We include YTD results and forecasts in all three management reports for all three types of measures. You may also choose to include YTD results in scorecards and dashboards, although there will be no comparison to plan. Scorecards and dashboards seldom include forecasts.

We begin with a discussion of reporting YTD results.

Reporting YTD Results

Our discussion starts with reporting YTD results for outcome measures, which are the most complex of the three types of measures. Then we will explore reporting YTD results for efficiency and effectiveness measures.

Reporting YTD Results for Outcome Measures

We examine reporting YTD results separately for quantitative and qualitative measures. It will also be helpful to explore quantitative measures when outcome data are and are not available. The scenarios we examine are shown in Table 12-1.

Table 12-1. Plan for Exploring the Creation of Outcome Measures

                             Are Outcome Data Available?
Type of Outcome Measure      Yes            No
Quantitative                 X              X
Qualitative                                 X

When outcome data are available, we generally do not use qualitative outcome measures, as shown by the empty cell in Table 12-1.

Quantitative Outcome Measures

Unlike YTD results for efficiency and effectiveness measures, YTD results for outcome measures generally won’t be found in any corporate system unless the organization has automated the collection of survey results that include a question on isolated impact. Alternatively, you may use a manual system to generate the results, which will simply be saved in a spreadsheet. Otherwise, there will be no “hard data” on outcomes. Consequently, you may have to estimate the YTD values for outcome measures.

We start by examining the case where the organization collects outcome data and then explore the more frequent case where no outcome data exist.

Outcome Data Are Available

In chapter 4 we discussed how outcome data could be collected by survey in an automated process. This requires adding at least one question to the post-event survey or follow-up survey. If asked in the post-event survey, the question will be about the expected isolated impact of learning on the participant’s performance, referred to as the initial estimate for Level 4. (This is the percentage contribution learning is expected to make to the participant’s performance.) If asked in the follow-up survey, the question will be about the actual isolated impact of learning on the participant’s performance as a result of the course, referred to as the final estimate for Level 4. In either case, an 11-point decile scale should be used, with choices ranging from 0 to 100 percent in 10 percent increments. A standard confidence factor such as 50 percent could be applied to all answers to calculate the confidence-adjusted isolated impact of learning, or you could add a question about the respondent’s confidence in the isolated impact estimate, also on an 11-point decile scale from 0 to 100 percent. In that case, the system would have to multiply the two factors (percentage contribution from learning and the confidence in that estimate) for each respondent.
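The per-respondent math can be sketched as follows. This is our own illustrative Python, not from the text; the function name and sample responses are hypothetical.

```python
# Illustrative sketch: average confidence-adjusted isolated impact from
# survey responses. Each respondent supplies two decile-scale answers
# (0.0-1.0): the estimated isolated impact of learning and the
# confidence in that estimate.

def confidence_adjusted_impact(responses):
    """responses: list of (impact, confidence) pairs, each 0.0-1.0."""
    adjusted = [impact * confidence for impact, confidence in responses]
    return sum(adjusted) / len(adjusted)

# Three hypothetical respondents
responses = [(0.6, 0.5), (0.4, 0.7), (0.5, 0.6)]
print(f"{confidence_adjusted_impact(responses):.1%}")  # 29.3%
```

If the organization applies a standard 50 percent confidence factor instead of asking a confidence question, the second element of each pair would simply be 0.5 for every respondent.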

Modern survey tools make it very easy to obtain at least the estimate of isolated impact since it requires adding just one question. And most organizations today already administer a post-event survey, so simply add the question for your initial estimate. If the organization also has the ability to ask about confidence and multiply the two together, then add the confidence question. If not, simply apply a standard confidence factor of 50 percent to the average for the isolated impact. If the organization does follow-up surveys two to three months after the training, add at least the question on impact.

Even if the questions are not embedded in the post-event or follow-up surveys, they could be asked of a sample group of at least 30 participants.

The YTD data will be organized by program and used in both the program and summary reports. The YTD results for the outcome measure will simply be the average confidence-adjusted isolated impact from the surveys multiplied by the YTD results for the organizational goal (also called the organization outcome), for instance, sales. The organization outcome YTD results will be readily available from a corporate system or from the goal owner. By formula, the YTD results for the learning outcome measure are:

YTD Results for Learning Outcome: Average confidence-adjusted isolated impact × YTD results for organizational outcome (goal).

For example, if the average confidence-adjusted isolated impact from the survey is 30 percent and YTD sales are up 6 percent, the YTD results for learning outcome would be 0.3 × 6% = 1.8%. This means that learning, by itself, contributed about 1.8 percent higher sales so far this year. The 1.8 percent is entered in the YTD results column for the learning outcome measure.
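The arithmetic in this example can be sketched as follows (illustrative Python; the variable names are ours):

```python
# Illustrative sketch of the formula: YTD learning outcome =
# average confidence-adjusted isolated impact x YTD organizational outcome.

avg_adjusted_impact = 0.30   # 30% average confidence-adjusted isolated impact
ytd_org_outcome = 0.06       # YTD sales up 6 percent

ytd_learning_outcome = avg_adjusted_impact * ytd_org_outcome
print(f"{ytd_learning_outcome:.1%}")  # 1.8%
```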

As mentioned, it is also important to characterize the YTD results compared to plan for the user of the report. In this case, that can be done by adding a note below the YTD results or in the right-hand margin comparing the average confidence-adjusted isolated impact (such as 30 percent) to the planned isolated impact, which is provided after the colon in the outcome measure. If the plan called for a 20 percent isolated impact, then the YTD results of 30 percent are above plan, so note that the 30 percent is above plan. If the YTD results are close to plan (plus or minus about 5 points), characterize them as on plan. If the YTD contribution is less than plan by 5 points or more, characterize it as below plan in the note. Placement of the YTD results is shown in Table 12-2. The notes could also appear in a notes column on the right side of the report or at the bottom of the report, denoted with symbols.

Table 12-2. YTD Results for Organizational and Learning Outcome Measures When Learning Outcome Data Are Available

Outcome Data Are Not Available

Although it’s not difficult with current survey tools, most organizations are not yet collecting outcome data. Without outcome data, we cannot be as precise, but we can nonetheless at least characterize YTD results compared to plan, which is what leaders most want to know. And, in some cases, we can estimate YTD results for the outcome measure.

Let’s start with the former case, where we don’t have an estimate for YTD results. The best approach here is to follow a chain-of-evidence methodology. Ideally, L&D and the goal owner agreed on expectations for all the key efficiency and effectiveness measures before the program was launched, and you will have actual YTD results for these measures, all captured in the program report. If these measures, such as number of participants, completion dates, participant reaction, learning, and application rate, are on or near plan, and if the organizational goal (outcome) is on plan, then it would be reasonable to assume that the isolated impact of learning is also on plan. So, enter “on plan” under the YTD results compared to plan column, or, since we don’t have any actual YTD results for the outcome measure, center “on plan” between both columns (Table 12-3).

Table 12-3. Example 1: YTD Results for Organizational and Learning Outcome Measures When Learning Outcome Data Are Not Available

In this case, we are really comparing the learning outcome measure to our expectations for how it should be performing, rather than to any measured YTD results. For consistency and to avoid confusion, make the comparison of the organization outcome measure the same way. In other words, for each outcome measure, answer the question, “Are we on track to make plan for the year?” And add a note at the bottom to tell the reader this is what the YTD results compared to plan characterization means.

If YTD results for most or all efficiency and effectiveness measures are below plan, enter “below plan” for the outcome measure. If the key efficiency and effectiveness measures are above plan and if the organization outcome is above plan as well, you might enter “above plan” for the outcome measure (or you might be conservative and just enter “on plan”). Of course, it is possible that the situation is less clear cut, in which case you will have to use your judgment. Our advice is to be conservative and discuss it with the goal owner.

The second case involves estimating and showing YTD results for the learning outcome measure. The thought process is the same as the first case; namely, compare the YTD results for the efficiency and effectiveness measures to plan. In this case, though, we use the results of the comparison to estimate the YTD value for the outcome measure. If the efficiency and effectiveness measures are on plan, and if the organizational outcome is on plan, assume that the isolated impact is also on plan and simply multiply the YTD results for the organization outcome by the planned isolated impact. By formula:

YTD Results for Learning Outcome: Planned isolated impact × YTD results for organization goal.

For example, suppose the planned isolated impact is 20 percent and YTD sales are up 6 percent. If the other key learning measures are on plan, and if sales are on plan, multiply the 6 percent by 20 percent for a 1.2 percent YTD increase in sales due just to learning. In this case you would enter 1.2 percent in the YTD results column and “on plan” in the YTD results compared to plan column. If the other learning measures are below plan, then work with the goal owner to agree on a reasonable isolated impact to use that will be less than plan. Likewise, if the learning measures are above plan and the organizational outcome is above plan, you might agree on an isolated impact greater than plan. Add a note for each one providing the estimated contribution (see Table 12-4 for an example).

Table 12-4. Example 2: YTD Results for Organizational and Learning Outcome Measures When Learning Outcome Data Are Not Available

In both cases, when outcome data are not available, it is best not to show YTD results for the learning outcome measure if they exceed those for the organizational outcome measure. For example, if sales are on plan YTD, it would be best not to show the learning outcome YTD results as above plan. Likewise, if sales are below plan, be cautious in showing YTD results for the learning outcome measure as on or above plan. It may well be the case that learning is contributing as expected and the poor performance of the organizational outcome is due to other factors, but it would be best to make sure the goal owner is on board with this characterization before sharing widely with others.

Qualitative Outcome Measures and Outcome Data Not Available

Generally, qualitative outcome measures are employed when outcome data are not available, so we will assume outcome data are not available.

In this case, the goal owner and L&D leaders have decided to use a qualitative outcome measure like H/M/L. As there are no outcome data available to help create a YTD estimate, we rely on a chain-of-evidence methodology, which calls for looking at results for the organizational outcome measure and results for Levels 0–3. First, are the YTD results for the organizational outcome measure (such as sales or injuries) showing improvement? Does it appear to be on plan? If so, how do the YTD results for Levels 0–3 compare with plan? If the organizational outcome is on plan and the key learning efficiency and effectiveness measures are on plan, we may assume that learning is also on plan. Of course, this should be discussed with the goal owner, who may have a different opinion. Assuming the goal owner and L&D both agree that goal owner expectations are being met, enter the plan for the learning outcome measure (such as H/M/L) in the YTD results column and enter “on plan” in the YTD results compared to plan column.

If the organizational outcome is not meeting expectations or the goal owner is not happy with the role training is playing, or if Levels 0–3 are not on plan, the goal owner and L&D need to agree on how to characterize YTD results. Perhaps they will agree that learning is below plan in impact and that a plan for medium impact should be downgraded to low impact (or high to medium). If so, make the appropriate entries. Conversely, if both parties agree that learning is exceeding expectations, mark the column for YTD results compared to plan as “above plan” and raise the YTD results column from “low” to “medium” or “medium” to “high.”

Some of the possibilities from a summary report are shown in Table 12-5. For the first goal, learning is having the expected impact but other factors (perhaps the economy) are driving sales higher than planned. For the second goal, learning is contributing less than planned, resulting in the organizational outcome being below plan. For the third goal, learning is also having the planned impact, but other factors are preventing the organization from achieving its plan.

We are now ready to examine the relatively simpler cases of reporting YTD results for efficiency and effectiveness measures.

Table 12-5. YTD Results for Qualitative Outcome Measures

Reporting YTD Results for Efficiency Measures

Unlike outcome measures, the key efficiency measures should always have YTD data available, so we do not need to estimate them. Data on number of participants, courses, hours, and types (modalities) as well as performance support are usually available in an LMS, while data for other efficiency measures like percentage on-time completion, cycle times, and costs may be kept in spreadsheets or data warehouses. Content-related data (like time spent on task) may be found in an LMS or a system specially designed for the purpose. Likewise, data on communities of practice or performance support tools may also be on a dedicated platform or integrated into other platforms.

Bottom line: the data exist, so you will have actual YTD results. It may be a challenge to extract and manipulate the data at scale, but you don’t have to create or estimate it. Simply enter it into the YTD results column.

The numerical comparison to plan is also straightforward, but we suggest some standard practices to make the comparisons more user friendly:

•  If the measure is a number, express the comparison to plan as a percentage of plan. For example, if plan is 1,000 unique participants and there have been 650 YTD, enter 65 percent in the column for YTD compared to plan. This tells the reader that 65 percent of plan has been achieved. When the measure is a number, readers are used to this type of comparison.

•  If the measure is a percentage, express its comparison to plan in terms of the difference in percentage points. For example, if plan is for 80 percent utilization and YTD utilization is 76 percent, enter “4% below plan” or “4% below” in the YTD compared to plan column. This saves the reader the effort of doing the math. We recommend this approach rather than showing the percentage of plan (76% ÷ 80% = 95%) because most readers find percentages of percentages to be confusing. It is easy to lose track of whether the percentage is the measure itself or a comparison of the measure to something else.

Table 12-6 is an example of an operations report showing the two ways of expressing the comparison of YTD results to plan.

Table 12-6. Case 1: Two Ways to Express YTD Comparison for Efficiency Measures

While the numerical comparison to plan is straightforward, the characterization of being on, above, or below plan can be nuanced. Recall the discussion at the start of the Reporting YTD Results section about comparing to annual plan or YTD plan. All the comparisons in Table 12-6 are to the annual plan. Alternatively, you could try to answer the question: “Are we about where we expected to be at this point in the year?” Since there is no actual YTD plan for comparison, the answer is subjective. For example, suppose plan for the year is to have 1,000 total participants take courses. YTD through June, there have been 400 total participants, which is 40 percent of plan. Would you say this measure is on plan or not? Clearly, plan for the year has not been achieved, but that is to be expected. Usually there are no YTD plans to compare with the YTD results, so the analyst is left to determine if 40 percent is in line with plan.

For measures like participants that increase throughout the year, the best approach is to look up the YTD results compared to plan percentage for the last several years at the same point in time. If for the past several years, the measure has been close to 40 percent at the end of June, then we could say that we are where we expected to be or we are on plan. If the percentage is typically 50 percent by this point, then this year is running behind plan. This method will work for any efficiency measure that accumulates through the year; for example, number of participants, hours, courses, and costs. If you choose to take this approach to compare YTD results to plan, include a note telling the reader you are comparing progress against where you expected to be at this time of the year. See the example in Table 12-7.
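This historical comparison could be sketched as follows. The function name, the 5-point tolerance, and the numbers are our own illustrative assumptions, not from the text.

```python
# Illustrative sketch: characterize YTD progress against where the measure
# typically stood at this point in prior years. The 5-point tolerance for
# "on plan" is an assumption for illustration.

def characterize_progress(ytd, annual_plan, historical_pct, tolerance=0.05):
    """Compare YTD percent-of-plan with the typical percent at this time of year."""
    pct_of_plan = ytd / annual_plan
    if abs(pct_of_plan - historical_pct) <= tolerance:
        return "on plan"
    return "above plan" if pct_of_plan > historical_pct else "below plan"

# 400 of 1,000 planned participants by June; prior years averaged 40% by June
print(characterize_progress(400, 1000, 0.40))  # on plan
# If prior years typically reached 50% by June, this year is running behind
print(characterize_progress(400, 1000, 0.50))  # below plan
```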

Table 12-7. Case 2: Two Ways of Expressing YTD Comparison for Efficiency Measures

Reporting YTD Results for Effectiveness Measures

As with efficiency measures, YTD data should be available for effectiveness measures, so there is no need to estimate them. In most organizations the data will be stored in a data warehouse or in spreadsheets. If the organization uses a partner to collect and store the data, it will be in the partner’s database. So, the good news is that YTD results should be readily accessible.

The bad news is that effectiveness measures for Levels 1, 3, and 4 are typically gathered through surveys, which means that the data represent a sample rather than the entire population. Whenever a sample is involved, the user of the report will need to know the sample size or the level of confidence to decide whether to take action based on the sample or wait for more data. We discussed this issue at length in chapter 4 and will not repeat it here. However, we reiterate the importance of conveying the statistical significance of the sample data so the user can make an informed decision. Options include listing the sample size or level of confidence for each measure in the notes column, under the effectiveness measures, or at the bottom of the report.

Alternatively, any YTD results that are not statistically significant could be designated “NSS” for “not statistically significant” or color coded (perhaps in yellow) to indicate they should not yet be used for decision making. (Note that efficiency measures generally do not have this issue since they typically represent the entire population—participants, hours, dollars, and so forth.) An example is shown in Table 12-8 using the NSS approach. Typically, this is only required early in the year or early in program deployment for programs with large target audiences, but it may always pose an issue for programs with small target audiences.

In Table 12-8, the first course received only 13 responses, so the average is not statistically significant and marked “NSS.” The follow-up survey has not been administered yet, so there are no data and thus the “N/A” comment for actual application. Likewise, no impact data have been collected so “N/A” is entered in the YTD outcome measure column. The Level 2 measure for learning represents results from all 56 people who have completed the first course, so there is no issue of statistical significance.

Table 12-8. Insufficient Sample Size in a Program Report

We offer the same general guidance as we did for efficiency measures when expressing the comparison to plan for measures whose unit of measure is a percentage. For example, if plan calls for an 80 percent application rate and YTD results are 77 percent, enter “3% below” in the YTD compared to plan column. The guidance is similar if the unit of measure is a Likert scale (a number). In this case, express the comparison in terms of points above or below plan. In other words, do not calculate a percentage by dividing YTD results by plan. For example, if plan is 4.5 on a 5-point Likert scale and YTD results are 4.3, enter “0.2 points below plan” or “0.2 below” in the YTD compared to plan column. If the measure is in units (like cost), divide the YTD results by plan and express as a percentage. The next example from an operations report shows all three approaches (Table 12-9).
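Taken together, the conventions for efficiency and effectiveness measures could be collected in one small helper. This is an illustrative Python sketch; the function and unit labels are our assumptions, not the book's.

```python
# Illustrative sketch of the formatting conventions in the text:
#   number (e.g., participants)         -> percent of plan achieved
#   percentage (e.g., application rate) -> percentage-point difference
#   likert (e.g., 5-point scale)        -> point difference
#   units (e.g., cost)                  -> percent of plan

def compare_to_plan(ytd, plan, unit):
    if unit in ("number", "units"):
        return f"{ytd / plan:.0%} of plan"
    diff = ytd - plan
    direction = "above" if diff > 0 else "below"
    if unit == "percentage":
        return f"{abs(diff) * 100:.0f}% {direction}"
    if unit == "likert":
        return f"{abs(diff):.1f} {direction}"

print(compare_to_plan(650, 1000, "number"))       # 65% of plan
print(compare_to_plan(0.77, 0.80, "percentage"))  # 3% below
print(compare_to_plan(4.3, 4.5, "likert"))        # 0.2 below
```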

This concludes our examination of reporting YTD results for all three types of measures, leaving the creation of forecasts as the last section in this chapter.

Making Forecasts

Unlike YTD results for efficiency and effectiveness measures, forecasts cannot be pulled from a system; you will have to make them. While you can use programs with formulas to create forecasts automatically, a manager or analyst should always review them and make the final determination.

Table 12-9. Three Ways of Expressing the YTD Comparison for Effectiveness Measures

A forecast answers the second question a manager should always ask when presented with YTD results: “Will we make plan by the end of the year?” which really means, “Will we make plan without taking any unplanned or special actions?” In other words, if things continue to go as they have gone so far this year, and if we do just what we are already planning on doing for the remainder of the year, is that enough for us to end the year on plan? In many cases, the answer is no, meaning that the leadership team will have to come up with options, cost them out, and decide what to implement (if anything) during the remainder of the year.

The forecast is important because it answers this question. If forecast is the same as plan, no special actions are required and things can continue as planned. If forecast is better than plan, that means management can focus their attention elsewhere for the time being. If forecast is worse than plan, management may need to take action to address the performance gap.

The good news about forecasts is that L&D can influence how the year turns out. If the application rate is below plan through June, there is still time to take action to get back on plan by year end. Contrast that with economic forecasts where the forecaster has no influence over the measure. If interest rates are forecast to be below plan, there is nothing an economist can do about it. There is almost always something we can do in our profession to influence the measure.

Conceptually, a forecast is composed of two parts. It starts with YTD results and then factors in what is expected to happen in the remainder of the year. So:

Forecast for the Period: YTD results + Forecast for the remainder of the year.

Since YTD results are either already known or estimated, the hard part is the forecast for the rest of the year. The following three factors provide general guidance:

1. What have you learned from the YTD results?

2. What new information do you have about the rest of the year?

3. (For outcome measures) What are the forecasts for the key efficiency and effectiveness measures?

Typically, as the year goes on you learn more about how easy or hard it is to achieve the planned results. For example, you may learn that it is far more difficult than anticipated to deliver high impact. In essence, you have learned that the original plan was too optimistic. This challenge is likely to persist throughout the rest of the year, so the forecast for the remainder of the year should be below plan.

The second factor is new information that was not available when you set the plan. New information almost always becomes available, which is why plans can become dated just a few months after they are finished. For example, you might discover that facilitators planned for the second half will not be available or are not as qualified as you were led to believe. You may discover that some of the planned target audience should not receive the training, diminishing its overall impact. You might discover some managers can no longer dedicate the planned time to reinforcement. Any of these factors would lead you to conclude that the forecast for the rest of the year should be below plan. Alternatively, everything may be on plan YTD with no surprises and no new information about the rest of the year. In this case the forecast for the rest of the year should be on plan. And, yes, sometimes there are pleasant surprises where you learn that something is going to be easier to achieve than you thought. Or perhaps you learn that more participants are going to go through the program than planned, or that those not yet through it have much more supportive supervisors, a circumstance that should result in higher levels of application. Any of these should lead to an above plan forecast for the remainder of the year.

Third, pertaining only to forecasting outcome measures and following the chain-of-evidence philosophy, how do the forecasts for the key efficiency and effectiveness measures compare to plan for the remainder of the year? If they are forecast to be below plan, then the outcome may also be below plan. Likewise, if they are forecast to be above plan, perhaps the outcome will be as well. With experience, your professional judgment in weighing these three factors will improve, and they will become much easier to use.

With this as general introduction to forecasting, we now explore forecasting by type of measure. We begin with forecasts for outcome measures, which are more straightforward but less rigorous than forecasts for efficiency and effectiveness measures, where some additional forecasting techniques are available. We recommend creating forecasts for all three types of measures contained in the three types of TDRp management reports.

Making Forecasts for Outcome Measures

For outcome measures, we recommend two forecasts: organizational outcome and learning outcome. The forecast for an organizational outcome like sales will come from the head of sales. L&D may be able to access it through a corporate database or it may need to come directly from the sales organization, but it almost always exists. The forecast for the learning outcome, however, will have to be made, and ideally L&D will work with the goal owner to make it or at least get goal owner approval before it is shared.

As previously discussed, the starting point for the forecast is YTD results. If outcome data are available, there will be an actual YTD result. If outcome data are not available, L&D will have an estimated YTD result (a number) or simply characterize it as being above, below, or on plan. In any case, thought has already been given to the YTD result. So, our attention will now turn to what is likely to happen in the remainder of the year.

All three factors are relevant when forecasting. The program manager should be meeting with the goal owner regularly to discuss YTD results and prospects for the remainder of the year. Factors 1 and 2 are an important part of this discussion. As the year progresses, both parties should be learning lessons and receiving new information, so together they should be in a good position to make a forecast for the rest of the year, especially in the second half of the year (for a year-long deployment). They may also be able to employ factor 3 to help make the forecast.

There is no automated process for generating forecasts for outcome measures. You must use subjective, professional judgment, which all managers have to do as part of their job. Organizations always want to know how the year is going to end, especially for strategic programs in support of important goals.

Like the YTD results discussion, we examine three different scenarios:

1. Quantitative outcome measures when outcome data are available

2. Quantitative outcome measures when outcome data aren’t available

3. Qualitative outcome measures when outcome data aren’t available

We begin with quantitative measures, which are more complex than qualitative measures.

Quantitative Outcome Measures

First, we will examine making forecasts for outcome measures when YTD outcome data are available. Then we will explore our options when YTD outcome data are not available.

Outcome Data Are Available

If outcome data are available for YTD results, then begin with the YTD percentage contribution. How does it compare to plan? Do you understand why it is higher or lower? It may simply be a matter of timing, meaning that participants have not yet had time to apply what they learned.

Discuss with the goal owner to see if you can come to agreement on what to use for the year (or the remainder of the year). Then multiply it by the goal owner’s forecast for the organizational outcome measure. Suppose the planned percentage contribution was 20 percent, but YTD results are 15 percent. Decide whether to use 15 percent for the entire year or assume that it will rise for the remainder of the year. If you agree to use the 15 percent and the goal owner is now forecasting a 9 percent increase in sales, the forecast for the learning outcome measure is 15 percent × 9 percent = 1.35 percent, which is less than the planned 2 percent. Rounding, enter “1.4%” under the forecast column and “0.6% below plan” for the forecast compared to plan column, since the 15 percent is less than the planned 20 percent contribution (and since 1.4 percent is less than 2.0 percent). See the example in Table 12-10.
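The arithmetic in this example can be sketched as follows (illustrative Python with the numbers from the text; the variable names are ours):

```python
# Illustrative sketch: forecast for the learning outcome measure =
# agreed isolated impact x goal owner's forecast for the organizational outcome.

agreed_contribution = 0.15   # agreed isolated impact for the year (vs. 20% planned)
org_forecast = 0.09          # goal owner's forecast: sales up 9 percent
planned_outcome = 0.02       # planned learning outcome from the text

forecast = agreed_contribution * org_forecast
print(f"forecast: {forecast * 100:.2f}%")  # forecast: 1.35%
gap = (planned_outcome - forecast) * 100
print(f"{gap:.2f} points below plan")      # 0.65 points below plan
```

Rounded for the report, that is 1.4 percent in the forecast column and 0.6 points below plan in the forecast compared to plan column.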

Table 12-10. Quantitative Outcome Measure Forecasts When YTD Outcome Data Are Available

Outcome Data Are Not Available

If outcome data are not available, begin with the estimate for YTD results. This has already been characterized as on plan, below plan, or above plan, and perhaps a number has been estimated. Apply the three factors: consider what you have learned from the YTD results, weigh any new information about the remainder of the year, and, following a chain-of-evidence methodology, compare the forecasts for key efficiency and effectiveness measures to plan. If YTD results are on plan, no issues have arisen, no new information is available, and the key measures are forecast to be on plan for the rest of the year, then the forecast may be on plan for the remainder of the year and for the full year. In this case, assume the originally agreed-upon percentage contribution from learning still holds and multiply it by the goal owner’s forecast for the full year. Enter this number for the impact of learning under the forecast column and characterize it as “on plan” in the forecast compared to plan column.

If you expect the rest of the year to be under plan, the goal owner and L&D need to agree on a new, lower percentage contribution from learning for the year. This will be subjective (since no outcome data are available) but directionally correct. Perhaps the plan was for learning to contribute 70 percent toward achieving the goal and now it appears that 50 percent is a better forecast. Multiply the goal owner’s estimate for the full-year organizational outcome by the new percentage and enter under the forecast column. Change the forecast compared to plan column to “below plan.”

It is also possible that YTD results are better than plan or that the rest of the year looks considerably better than planned. In this case, the goal owner and L&D should agree on a higher (but still conservative) percentage contribution from learning to multiply by the goal owner’s estimate for the outcome. Enter the resulting value under the forecast column and “above plan” in the forecast compared to plan column. See examples of these three cases in Table 12-11.

Table 12-11. Example 1: Quantitative Outcome Measure Forecasts When Outcome Data Are Not Available

Another approach is to not enter the new learning outcome measure in the forecast column and simply characterize it versus plan, which was also an option explored for YTD results. You might take this approach if you have little confidence in creating a numeric outcome measure. See examples of these scenarios from a summary report in Table 12-12.

Table 12-12. Example 2: Quantitative Outcome Measure Forecasts When Outcome Data Are Not Available

Qualitative Outcome Measures (When No Outcome Data Are Available)

If outcome data are not available, then begin with the qualitative comparison of YTD results to plan. Consider the three factors and discuss with the goal owner. Given what both of you have learned so far this year and any new information you now have about the rest of the year, and given the forecasts for the underlying efficiency and effectiveness measures, what is your best thinking about the rest of the year? Is it likely to be better, the same, or worse than YTD or plan? If the remainder of the year is expected to be the same as the YTD results, the forecast is simply the same as YTD results (either below plan, on plan, or above plan). Forecasting the outcome for the entire year is more interesting when the remainder is expected to be different. Table 12-13 shows the thought framework for forecasting.

The forecast for the entire year depends on the weighting of the YTD results and the remainder of the year. If it is early in the year or deployment, you can assign more weight to the remainder of the year. For example, if the outcome measure is running below plan YTD but most of the deployment is yet to come, and the forecast for the remainder of the year is expected to be on plan, the forecast for entire year would also be on plan. If plan had been for medium impact, then enter “medium” in the forecast column and “on plan” in the forecast compared to plan column. If it is later in the year or deployment cycle, you may not have enough time left in the fiscal year to offset below-plan performance in the first part of the year. Table 12-14 shows an example from a summary report with various possibilities.

Table 12-13. Thought Framework for Forecasting When Outcome Data Are Not Available

YTD Results    Forecast for the Rest of the Year    Forecast for the Entire Year
Below plan     Below plan                           Below plan
Below plan     On plan                              Below or on plan
Below plan     Above plan                           Depends on time below and above
On plan        Below plan                           On or below plan
On plan        On plan                              On plan
On plan        Above plan                           On or above plan
Above plan     Below plan                           Depends on time above or below
Above plan     On plan                              Above or on plan
Above plan     Above plan                           Above plan

Table 12-14. Example Showing Qualitative Outcome Measure Forecasts (When Outcome Data Are Not Available)

Given all these variables, our advice is to be conservative and check a proposed forecast against the organizational outcome for reasonableness. If the goal owner is forecasting sales to double by year end, do you really want to forecast the outcome measure for learning to triple? Considering all the factors, what would a safe characterization be for the forecast?

This concludes the discussion of creating forecasts for outcome measures. Last, we tackle forecasts for efficiency and effectiveness measures.

Making Forecasts for Efficiency Measures

Forecasting efficiency measures is much simpler than forecasting outcome measures. The starting point, of course, is the YTD results, which will always be available. Use the first two factors presented earlier—What have you learned from the YTD results? What new information do you have about the rest of the year?—to think about how the remainder of the year will likely compare to plan or to the first part of the year. Unlike outcome measures, the forecast for some efficiency measures can be automated.

We start by recognizing there are two basic types of efficiency measures. As noted in the YTD results section, the value of some efficiency measures accumulates or increases throughout the year. Examples are number of participants, number of courses, number of hours, and costs. Each starts at 0 for the year and may continue to increase until the end of the year (or program). For this type of measure, we can employ a simple formula to annualize or forecast the value for the entire year (all 12 months) based on the YTD value. The formula for annualization of partial-year data is:

Annualized Value: YTD value × (12 ÷ Current month).

For example, if 500 employees took the training through May, which is the fifth month of the fiscal year, the formula forecasts 1,200 will participate by December 31. By formula:

Annualized Value: 500 × (12 ÷ 5) = 1,200.
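The annualization formula can be sketched as a one-line Python function (a minimal sketch; the function and variable names are ours):

```python
def annualize(ytd_value, current_month):
    """Project a full-year value for an accumulating measure, assuming
    the monthly rate stays constant for the rest of the year."""
    return ytd_value * (12 / current_month)

# 500 participants through May (month 5 of the fiscal year)
# projects to about 1,200 by year end
year_end_participants = annualize(500, 5)
```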

This simple formula, which can easily be automated to calculate the value for the forecast column, assumes that the rest of the year will be exactly like the YTD period; in other words, the monthly rate will not change. In our example, the monthly rate is 100 participants, and the formula projects that same rate for every month remaining in the year. This assumption is good for some measures but not for others, so before using it, you need to check for something called seasonality, which is simply a predictable pattern in the data based on month or season. For example, most retailers have a seasonal sales pattern in which sales in the last three months of the year vastly exceed sales in any other three-month period. Moreover, the same pattern repeats every year.

We can apply this same concept to learning (Table 12-15). It is not uncommon for people to put off some learning until the last quarter, when they rush to complete it. This could lead to a repeatable pattern in the data, in which case seasonality exists. Our recommendation is to calculate the average monthly values for the measure in question for at least the last five years. Then examine the data. Are the monthly values essentially the same? If so, there is no seasonality and the simple formula will work to create a forecast. If they are not the same, is there a pattern? Are some months consistently higher than others? If yes, then seasonality is present, and a different but still simple formula must be used.

It is clear that the amount of training done in the last four months in Table 12-15 is much higher than the other months, so seasonality is definitely present. The table also shows the percentage by month (Each month’s value ÷ Total) and the cumulative percentage (Current month’s percentage + Sum of all previous months). We need the cumulative percentage for the next formula, which should be used whenever seasonality is present:

Table 12-15. Example of Seasonality in the Number of Total Participants

Annualized Value: YTD value ÷ Cumulative percentage.

Suppose we have YTD results through August, which show 6,450 participants, and we want to use the formula to annualize that number. The row for August shows the cumulative percentage to be 47 percent, meaning that on average 47 percent of the total year’s participants have gone through by the end of August. By formula:

Annualized Value: 6,450 ÷ 0.47 = 13,723.

So, taking the typical seasonal pattern into account, our forecast is 13,723 participants, which is much higher than we would have gotten using the first formula (6,450 × [12 ÷ 8] = 9,675). This makes sense because we know many employees don’t take the training until the last four months.
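The seasonality-adjusted formula can be sketched the same way (a minimal sketch; in practice the cumulative percentage would come from historical data like Table 12-15):

```python
def annualize_seasonal(ytd_value, cumulative_pct):
    """Project a full-year value when a seasonal pattern exists, by
    dividing YTD by the share of the year historically complete by now."""
    return ytd_value / cumulative_pct

# 6,450 participants through August, when historically 47 percent of the
# year's participants have gone through -> about 13,723 for the full year
forecast = annualize_seasonal(6450, 0.47)
```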

Now you have a simple formula that is appropriate for efficiency measures that accumulate throughout the year. What about efficiency measures that do not accumulate, such as utilization rates, cycle times, percentage of learning by modality, and percentage of on-time completions? For these, we resort to our basic formula for the full-year forecast:

Full-Year Forecast: YTD results + Forecast for the remainder of the year.

Since YTD results are available, we need only make a forecast for the rest of the year and then combine the two. The forecast for the rest of the year starts with YTD results and considers our first two factors (What have you learned from the YTD results? What new information do you have about the rest of the year?).

Once the forecast is made for the rest of the year, we can use a weighted average to combine the two to get the forecast for the full year. The weights are generally just numbers of months: the weight for YTD results is the number of months for which you have data (m) divided by 12, and the weight for the remainder of the year is (12 − m) divided by 12. By formula:

Full-Year Forecast: (YTD results × [m ÷ 12]) + (Forecast for remainder of year × [12 − m] ÷ 12).

For example, suppose the utilization rate for a suite of online courses is 40 percent for the first four months. The forecast for the last eight months is 70 percent, given the marketing campaign that is planned and the new employee portal that is scheduled to go live next month. By formula:

Full-Year Forecast:

= (40% × [4 ÷ 12]) + (70% × [12 – 4] ÷ 12)

= (40% × [4 ÷ 12]) + (70% × [8 ÷ 12])

= 13.33% + 46.67%

= 60%

So, the full-year forecast for the utilization rate is 60 percent. This formula can be partially automated but does require the forecast for the remainder of the year.
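The month-weighted calculation above can be sketched as follows (a minimal sketch; the function name is ours):

```python
def full_year_forecast(ytd_results, rest_forecast, months_of_data):
    """Month-weighted average of YTD results and the forecast for the
    remaining months (for efficiency measures that do not accumulate)."""
    m = months_of_data
    return ytd_results * (m / 12) + rest_forecast * ((12 - m) / 12)

# 40 percent utilization for the first 4 months, 70 percent forecast
# for the last 8 months -> about 0.60, i.e., 60 percent for the year
rate = full_year_forecast(0.40, 0.70, 4)
```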

The inclusion of forecasts and comparisons to plan for efficiency measures is illustrated in Table 12-16.

Table 12-16. Forecasts for Efficiency Measures in an Operations Report

Making Forecasts for Effectiveness Measures

Forecasting effectiveness measures is much like forecasting efficiency measures. The starting point is the YTD results, which will always be available, although in this case it may not always be statistically significant. Use the first two factors to think about how the remainder of the year will likely compare to plan or to the first part of the year. Unlike some efficiency measures, the forecast for effectiveness measures cannot be fully automated, but we can use formulas for weighted average to calculate the full-year forecast.

Unlike some efficiency measures, most effectiveness measures do not accumulate through the year and should not display any seasonal pattern. (Net benefits are the exception, but don’t use an annualization formula for them.) Instead, a forecast must be made for the remainder of the year based on YTD results and the first two factors, and then combined with the YTD results. Here we can again use the formula for weighted averages, but the weights are numbers of participants rather than months: the weight for YTD results is the YTD number of participants divided by the total forecasted for the full year, and the weight for the remainder of the year is the number of participants planned for the remainder of the year divided by the total forecasted for the full year. By formula:

Full-Year Forecast:

[YTD results × (YTD number of participants ÷ Forecasted number of participants for the year)] +

[Forecast for remainder of year × (Planned remaining participants ÷ Forecasted number of participants for the year)].

For example, suppose the Level 3 application rate is 50 percent for the first seven months, but there are plans in place to increase reinforcement, so the forecast for the last five months is 80 percent. If 400 participants have completed the training so far and an additional 600 are scheduled for the last five months, the forecasted total for the year will be 1,000. By formula:

Full-Year Forecast for Application Rate:

= [50% × (400 ÷ 1,000)] + [80% × (600 ÷ 1,000)]

= [50% × 0.4] + [80% × 0.6]

= 20% + 48%

= 68%

So, the full-year forecast is for a 68 percent application rate.
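The participant-weighted calculation can be sketched the same way (a minimal sketch under the chapter’s assumptions; names are ours):

```python
def participant_weighted_forecast(ytd_rate, rest_rate, n_ytd, n_rest):
    """Weight the YTD rate and the remainder-of-year forecast by their
    shares of the total forecasted participants for the year."""
    total = n_ytd + n_rest
    return ytd_rate * (n_ytd / total) + rest_rate * (n_rest / total)

# 50 percent application rate for the first 400 participants; 80 percent
# forecast for the remaining 600 -> about 0.68, i.e., 68 percent
rate = participant_weighted_forecast(0.50, 0.80, 400, 600)
```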

This methodology will work for Levels 1–4. For ROI, we recommend a simple average of the ROI for programs completed so far and the forecasted ROI for programs to be completed by year end. Typically, ROI is calculated or forecasted for only a small number of programs, so it is easy to find the average, which becomes the forecast for the full year. For example, suppose we have ROIs of 15 percent, 30 percent, and 50 percent for three completed programs and are forecasting ROIs of 25 percent and 80 percent for two more. The simple average of the five programs is 40 percent. If weighting is desired because some programs are much larger than others, weight them by total cost. Table 12-17 shows how to include forecasts and comparisons to plan for effectiveness measures.

Table 12-17. Forecasts for Effectiveness Measures in an Operations Report
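The ROI averaging described above can be sketched as follows (the program costs in the weighted version are hypothetical, added only to illustrate cost weighting):

```python
# ROIs for three completed programs plus two forecasted programs
rois = [0.15, 0.30, 0.50, 0.25, 0.80]
simple_avg = sum(rois) / len(rois)  # about 0.40, i.e., 40 percent

# If some programs are much larger than others, weight by total cost
# (these costs are hypothetical, not from the text):
costs = [100_000, 50_000, 200_000, 80_000, 120_000]
weighted_avg = sum(r * c for r, c in zip(rois, costs)) / sum(costs)
```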

Conclusion

Congratulations! You have completed the two most difficult chapters in the book. We have shared a lot of technical guidance on how to create and use plans, report YTD results, and make forecasts. The good news is that you will not need any of this to begin your measurement and reporting journey. However, you will need it if you progress to creating and using management reports, so we wanted to provide it for you. No need to memorize anything—just remember that it’s here in chapters 11 and 12 waiting for you when the need arises.

Next, we turn to how you and your organization can implement the shift to what we call a management mindset, in chapter 13.