8

Evaluating Job Aids

There is a tendency to shortchange evaluation when it comes to job aids. In some cases, the client is in a hurry and doesn’t have the patience to come back and evaluate the results. In other cases, the consultant or designer believes that evaluation is too difficult or—for some soft-skill issues—impossible to measure. Both perspectives are wrong and shortsighted. Evaluation is important for many reasons.

Evaluating Results

For starters, you need to evaluate the job aid to determine the kinds of improvements required, especially if you have a client who resists evaluating. One approach for dealing with clients who push back when you speak of evaluation is instead to talk in terms of “next steps” or “refining the prototype.” Often a client who bristles at the idea of evaluation easily acquiesces to the concept of determining what comes next in the path of continuous improvement. And that is part of what evaluation is all about—determining what progress has been made, how to make it better, and what should happen next. Evaluation is necessary so you can generate a job aid that does what it is supposed to do. Even if the client sees little value in evaluating the job aid, evaluation is something that you need to find a way to make happen.

Basic Rule 16

Take the time to evaluate your job aids. You need to do this to improve the design, demonstrate effectiveness, and show that job aids can be a superior approach to many other options that management might push.

Summative Evaluation

At this point, it’s helpful to do a quick review of terms. In the previous chapter you learned about formative evaluation, which assesses how well designed a solution is. A formative evaluation of a job aid would seek ways to improve the design. In contrast, summative evaluation looks at the impact—especially in terms of performance and organizational results—of a solution. Once you’ve deployed your job aid, you’ll probably want to do some form of summative evaluation to assess how successful the job aid was in reducing performance problems.

You might hear people argue that it’s too difficult or impossible to conduct a summative evaluation that measures the impact of job aids, especially ones dealing with soft skills. That is nonsense! Evaluation of job aids, even ones that are bundled with other solutions or involve soft skills, is very doable. Assessing the job aid’s impact on worker performance goes to the very heart of why the job aid was designed. Why create the job aid if you aren’t interested in determining if worker performance is improved? Assessing the organizational impact is actually very easy in many instances. Furthermore, because job aids typically have high ROIs, it makes sense for you to evaluate the success of your job aid efforts for political purposes.

Noted

Job aids are usually much less expensive than many other initiatives, such as classes, team building, process redesigns, job restructuring, or coaching programs, yet they can often be just as effective. Therefore, job aids are capable of achieving astronomically high ROI ratios. It is worth your time to determine the ROI for some of your job aid projects.
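To make the arithmetic behind that claim concrete, here is a minimal sketch of the standard ROI calculation (net benefits divided by costs, times 100), in the spirit of Phillips-style ROI analysis. Every dollar figure below is a hypothetical assumption, chosen only to show how a low-cost job aid can dwarf the ROI of a costlier alternative:

```
# ROI (%) = (net benefits / costs) x 100
# All figures below are hypothetical, for illustration only.

job_aid_cost = 2_000        # design, printing, and rollout (assumed)
job_aid_benefit = 50_000    # annual value of the performance gain (assumed)
job_aid_roi = (job_aid_benefit - job_aid_cost) / job_aid_cost * 100
print(f"Job aid ROI: {job_aid_roi:.0f}%")        # 2400%

# A costlier alternative producing a somewhat larger benefit:
training_cost = 40_000      # classroom redesign and delivery (assumed)
training_benefit = 60_000
training_roi = (training_benefit - training_cost) / training_cost * 100
print(f"Training ROI: {training_roi:.0f}%")      # 50%
```

Even though the hypothetical training produces a bigger absolute benefit, the job aid’s tiny cost gives it a far higher ratio.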

Planning for Evaluation

The first thing to understand about evaluating job aids is that if your job aid initiative started out with a legitimate trigger (such as a front-end analysis, conversion of training content, or training support), then measuring the impact of the job aid is significantly easier. If there is no legitimate trigger and the initiative was started because a job aid “just seemed like a good idea” or “management insisted on it,” then evaluation becomes much more difficult.

The trigger provides a clear target to aim for. This matters because the evaluation process actually starts before any job aid has ever been designed or distributed. Effective evaluation demands that you determine the critical targets that the organization seeks to improve. These targets can be either organizational goals or performance results. In either case, you have something to provide direction and a baseline or goal to measure progress against.

For instance, suppose that you’ve been directed to develop a job aid as a means of shortening an existing training course. The training department’s task analysis has shown that some of the activities in the course can be replaced by job aids. The purpose for doing this is to shorten the technical training course from 10 days to seven. Because you’re clear on the trigger for this process (shortening the length of the course by replacing some of the practice exercises with job aids), you now have something to measure against. You could determine if using the job aids does indeed reduce the length of the course (without reducing performance), and you could calculate the savings produced by shortening the course (and producing effective workers sooner). This is possible because the trigger to the job aid process provided a goal for tracking progress and a result to measure the impact of the job aid against.
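Here is a rough sketch of that savings calculation. The number of trainees per year and the fully loaded cost per trainee day are hypothetical assumptions, not data from the scenario:

```
# Hypothetical savings from shortening a 10-day course to seven days.
trainees_per_year = 120         # assumed throughput
days_saved_per_trainee = 10 - 7
cost_per_trainee_day = 400      # assumed salary, facilities, instructor time

annual_savings = trainees_per_year * days_saved_per_trainee * cost_per_trainee_day
print(f"Annual savings: ${annual_savings:,}")   # $144,000
```

Any value from getting effective workers onto the job three days sooner would come on top of this figure.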

Noted

The rationale for beginning a job aid project is critical because it not only determines if the project is legitimate, but also shapes how easy evaluation will be for you. A job aid initiated on a whim or without a clear focus will be much more subjective, and it will be harder to measure the ultimate impact on the organization or performer.

Related to this point is the importance of a task analysis. If no task analysis is done as part of the job aid development process, or if the task analysis is shoddy, then it is extremely difficult to isolate the skills and steps you are evaluating, which unnecessarily complicates the evaluation process. A strong task analysis lets your evaluation target particular skills or steps to measure and gives you confidence that the job aid addresses that specific performance issue. Additionally, the task analysis may have identified the impact of particular steps (or the degree to which some steps are critical to the overall task result), which also helps the evaluation process.

In any case, it is important to recognize that actions you take, or don’t take, at the beginning of the process—the trigger that starts work on the job aid and the task analysis within the development process—can make your evaluation of the results much easier or harder.

The Five Levels of Evaluation

One typology for training evaluation was developed by Donald Kirkpatrick (1975). Kirkpatrick’s approach has four levels of evaluation:

1. reaction

2. learning

3. application on the job

4. business results.

Level 1, or reaction, deals with participant or user comments. Level 1 evaluation is very valuable for the formative evaluation process. Designers typically rely heavily on performer feedback with initial prototypes, and this is a form of Level 1 evaluation.

Level 1 data, however, are not likely to be your best source of data for overall results: You aren’t usually interested in whether performers like the job aid so much as in what impact it had on the organization or the performance gap. Resist the temptation to rely only on Level 1 evaluation in assessing job aids.

Level 2 measures the degree of learning or skills acquired. This is likely to be less relevant for you as a job aid designer. Remember, job aids are not a substitute for training; they do not teach new skills. A job aid can be used within a course to build confidence in a set of new skills, enhance memory, or reinforce what performers have learned. Consequently, there may be times when you as a designer will do a Level 2 evaluation of a job aid. However, in these instances it will be difficult (but not impossible) to separate the impact of the job aid from the impact of other solutions, such as the initial skills training.

Level 3 measures performance on the job. Level 3 involves determining whether performers have changed how they do the work and what results they get. Are employees doing better work? Has the performance gap been closed? Whatever other levels you choose, you will usually plan on evaluating at Level 3. It isn’t very helpful if workers like the job aid or remember key material presented by it but show no change in performance. The purpose of the job aid is to improve performance. Therefore, a Level 3 evaluation is the most common type you’ll need to conduct for most job aid assessments. You could choose to evaluate other elements of the job aid, but, from the beginning, you should give thought to how you will track the change in performance on the job.

Level 4 focuses on business results. In this case, you’re determining whether sales went up, customer retention improved, costs decreased, or rework was minimized. The higher in the organization your client is, the more likely it is that your client will want you to demonstrate an impact on the business. Although Level 4 evaluation is generally regarded as forbidding by many designers and trainers, the real key to a comparatively easy Level 4 evaluation process is to begin evaluation at the start of the design process. If you have a clear, legitimate trigger to start the entire job aid design process, such as a demonstrated performance gap or a critical business need, then you will have a target to measure against.

To Kirkpatrick’s four levels of evaluation, we can add what has been referred to by Jack Phillips (2003) and others as Level 5, or ROI. The distinction between Kirkpatrick’s Level 4 and Phillips’ Level 5 is that Level 4 looks at whether organizational goals were met, and Level 5 assesses whether those goals were worth achieving. Stated otherwise, sometimes achieving a goal costs an organization more than it gains. For instance, you could create a job aid to remind a fleet of truck drivers to periodically check their temperature gauges because the model of truck they are driving has a tendency to overheat. If using the job aid leads to an increase in accidents on the road, however, that cost more than outweighs the possible benefit of detecting an overheating engine early.

ROI is an especially valuable measure for job aids. You may often discover that a job aid doesn’t produce as good a business result as another possible solution, yet because of its significantly lower cost, it may end up with a much higher ROI.

Isolating the Impact

A common refrain from many designers and trainers—and one used to justify why evaluation beyond Level 1 is futile—is that it’s too difficult to measure the impact of a job aid given the range of other factors in play on the job. In fact, evaluation is much easier than most trainers recognize.

If the job aid design is triggered by a front-end analysis or root-cause assessment, then evaluating the resulting impact is comparatively straightforward. The data from the front-end analysis allow you to isolate the impact. If you’ve conducted a thorough task analysis and you’re confident in your data, then that task analysis usually allows you to determine how much impact a given step has in achieving a particular result.

A wide range of methods can be used to isolate the impact of a solution on the job site, as Phillips (2003) notes. One of the more obvious ones available to most trainers is to utilize a control population. You’re not likely to be able to disseminate the job aid to all workers simultaneously. Use this logistical constraint to your advantage; the phased dissemination of the job aid creates natural control groups. Provide the job aid to one shift or location first, and then compare results with groups that haven’t gotten the job aid yet.
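A minimal sketch of that comparison follows. The task times are hypothetical; in practice you would pull them from whatever performance records the organization already keeps:

```
from statistics import mean

# Hypothetical average task times, in minutes
pilot_shift = [7.1, 6.8, 7.4, 6.9, 7.2]       # shift that received the job aid
control_shift = [10.2, 9.8, 10.5, 10.1, 9.9]  # shift still waiting for it

saved = mean(control_shift) - mean(pilot_shift)
print(f"Average time saved per task: {saved:.1f} minutes")  # 3.0 minutes
```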

Deciding What to Evaluate

As mentioned earlier, your task analysis will have highlighted a particular task or set of steps that the performer needs assistance with. Here’s an example to help clarify this concept. Imagine that the managers in the records department have complained about how long it takes to complete a request to provide a customer purchase record, such as when the sales department wants to see a buyer’s history with the company or if the service department needs to verify a customer purchase and review history before authorizing a return. Since the implementation of the new database system, it takes significantly longer for the records staff to complete a request to provide a record on a specific customer or transaction. The managers want the record request cycle to be 30 percent shorter, from an average of 10 minutes down to seven minutes.

Your task analysis identifies the steps involved in this job—from the initial request received by the records clerk to the point that the customer record is provided to the employee who requested it. You discover that when too many computer screens are open at one time, the system tends to crash and all the entered data are lost, forcing the records clerk to start over. The clerks have been trained that the old computers can barely handle the new database system so they should avoid having many screens open or the system will crash. But they get caught up in the work and forget about the number of open screens.

You design a job aid with the help of the IT department. When either the system is close to crashing (because of the memory used) or when the number of screens open exceeds a safe amount, the software will flash a warning on the monitor to close some screens or risk a crash. The validation and troubleshooting phases demonstrate that this reminder inserted into the software works.

How could you evaluate the success of your job aid? You’ve got a number of options. For starters, you could observe several performers during the validation phase to determine how much faster the record request cycle is and then generalize that to the rest of the record clerks. Or, you could ask a sample of performers to record the number of crashes before and after the introduction of the job aid. If you identify what a typical computer crash costs in terms of time, you could extrapolate that to the entire department. In either case, you could document how much shorter the record request cycle is, and how close it is to seven minutes.

If the organization can demonstrate that a shorter record request cycle has an impact on such measures as customer satisfaction, sales volume, costs associated with record retrieval, and costs passed on to the company because of the lag in returns, then quantifying the ROI for the job aid would be straightforward. It shouldn’t be difficult to compare the ROI for the job aid with alternatives that were considered by managers, such as retraining all the records clerks or replacing all the computers. Slightly more challenging measures would involve asking managers or performers, after the job aid has been rolled out, to identify improvements and whether they’re due to the job aid.
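To illustrate, here is how the crash-count approach from the previous paragraphs might translate into dollars and an ROI figure. Every input (crash rates, minutes lost per crash, number of clerks, labor cost, and the cost of the IT fix) is a hypothetical assumption:

```
# Hypothetical extrapolation for the records department example.
crashes_before = 4            # per clerk per week, before the job aid (assumed)
crashes_after = 1             # per clerk per week, after (assumed)
minutes_per_crash = 8         # re-entering lost data (assumed)
clerks = 25
cost_per_minute = 0.50        # loaded labor cost (assumed)

weekly_minutes_saved = (crashes_before - crashes_after) * minutes_per_crash * clerks
annual_savings = weekly_minutes_saved * cost_per_minute * 50   # ~50 working weeks

job_aid_cost = 3_000          # IT time to build the on-screen warning (assumed)
roi = (annual_savings - job_aid_cost) / job_aid_cost * 100
print(f"Annual savings: ${annual_savings:,.0f}; ROI: {roi:.0f}%")  # $15,000; 400%
```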

Noted

Evaluating the impact of a job aid isn’t difficult, but the key to evaluation starts at the beginning of the process. You need to identify either organizational targets through the front-end analysis or key skills and tasks through the task analysis before or at the start of the development process. These not only offer targets to aim for, but also focus on something that matters to the organization and provide something specific to measure against. The trigger for the job aid design process can be critical in identifying what to focus on.

Evaluating Bundled Job Aids

Sometimes job aids are part of a larger package. Examples would include a combination of training, coaching, process improvement, and job aids as support. How can you evaluate the impact of a job aid when there are so many confounding variables?

Although the integrated nature of the job aid does make it harder to isolate its impact, it isn’t as difficult as it might first seem. It is rare that a multiple-solution project calls for implementing everything simultaneously. Because of resource constraints (not enough trainers, or you can’t take everyone out of the call center at the same time for the team-building session), the various solutions are more often phased in at different points.

Additionally, it usually isn’t feasible to provide everyone with the job aid at the same time. For various reasons, one location gets the job aid before the other regions. This is often due to the limited resources available to disseminate the job aid. You might be the only one available to orient employees and disseminate copies even though there are eight locations in the organization.

Staggered rollouts offer an important advantage for evaluation, however: You have an automatic control population available. You can compare one shift or location (with the job aid) against the results of a second shift or location (that hasn’t received the job aid yet).

Noted

Evaluating a job aid that has been bundled with other solutions (or even other job aids) is a little trickier but still very manageable. Under these circumstances, it’s important to identify at the outset a task or skill specific to the job aid. You should also look for ways to isolate the impact of the job aid. This shouldn’t be too difficult if implementation of the job aid is staggered across the organization.

Return on Expectations

One evaluation approach that can be especially useful for evaluating job aids is return on expectations, or ROE (Hodges 2001). ROE is another means of determining the impact of a particular solution or defining the ROI. To gauge the ROE, the designer asks the client to identify what improvement has occurred since the rollout of the job aid. The designer asks the client’s opinion on what percentage of that improvement is attributable to the job aid. That percentage of improvement can then be used to identify the organizational value or even ROI.
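A rough sketch of the ROE arithmetic, with hypothetical figures: the client’s attribution estimate converts the total improvement into a value for the job aid. (The confidence adjustment is an optional step borrowed from Phillips-style estimation, not a required part of ROE.)

```
# Hypothetical ROE calculation
total_improvement = 80_000    # value of all improvement since rollout (assumed)
attribution = 0.40            # client credits 40% of it to the job aid (assumed)
confidence = 0.75             # client is 75% confident in that estimate (assumed)

job_aid_value = total_improvement * attribution * confidence
print(f"Value attributed to the job aid: ${job_aid_value:,.0f}")  # $24,000
```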

It is important that the designer doesn’t ask leading questions. However, if this process is done in an evenhanded manner, the results tend to be very credible within the organization because the client is usually regarded as knowledgeable about these numbers.

Noted

If you are going to use ROE, you must take a very open and unbiased approach to gathering your data from the client. If you are perceived as asking leading questions or trying to provoke positive responses, then your ROE data will lack credibility within the organization.

Getting It Done

Now that you’ve completed this chapter, it’s time to do some thinking about how to apply what you’ve learned. Try your hand at Exercise 8-1.

In the next chapter, you’ll examine the most common mistakes designers make with job aids and think about ways you can use job aids to not only close performance gaps in your organization, but also help with your own work.

Exercise 8-1. Evaluating Job Aids

1. Think of an instance where it would make sense to disseminate a job aid to only part of the workforce. For instance, different locations and limited resources might make it impossible to give the job aid, with the required support, to everyone simultaneously, thereby creating a natural control group. How might you use this control group to help in your evaluation of a job aid?

2. Review one of the job aids you’ve developed in an earlier exercise or use one of the job aid examples from this book. Identify which of the five evaluation levels would be appropriate to use in evaluating that job aid. How would you go about conducting the evaluation?

3. What do you see as the most challenging organizational barriers to evaluation within your organization for any job aids you develop? How could you overcome those obstacles?

4. What evaluation knowledge and skills do you need to improve so you can effectively evaluate the job aids you produce? How will you acquire those competencies?