7

Learning from Success and Failure

The Importance of Effective Evaluations

You can’t continue to go blindly into the future without having some sense of whether your previous workshops were successful or not.

—Rachel Vacek, Web Services Coordinator, University of Houston Libraries

Documenting our successes (and our failures) by engaging in program evaluation can be like going to the dentist: almost all of us believe it is something we should do, and many of us keep it on our perpetual list of things to do—tomorrow. Talking with colleagues who are leaders in library and nonprofit workplace learning and performance programs shows that a few are quite advanced in how they approach the topic; those wonderful outliers who are far ahead of the rest of us routinely draw from some of the best resources, including Kirkpatrick’s four levels of evaluation, and still say they wish they were doing even more.1 Many others follow the more common path of having learners respond to a series of questions about whether they felt the instructor was effective, whether the training facilities or online coursework were conducive to learning, and whether they believe what they learned can be applied in their own workplace—without ever taking the time to see whether what was learned actually was used to the benefit of the learners and the organizations and customers they serve. A few others just keep thinking about tomorrow.

The consequences of ignoring our leadership roles in conducting evaluations, or of failing in any way to document successes, often come when budgets are tight and libraries, nonprofits, and corporations are looking to cut costs in any way they can; those who cannot prove the benefits that workplace learning programs provide soon find their training budgets slashed or completely eviscerated. The unfortunate irony, of course, is that training is often exactly what these organizations need in order to “do more with less,” or at least to do less with less effectively, and to build the skills of existing staff. We are, therefore, going to spend considerable time in this chapter reviewing what we have read and what we have heard from colleagues involved in training programs. We also share stories that offer a wide variety of reactions to the question of whether evaluations are providing worthwhile results to trainer-teacher-learners and all they serve.

Kirkpatrick’s Levels of Evaluation

A first-rate starting point for any discussion about evaluation is the work of those who have preceded us; there is, after all, no reason any of us needs to re-create what already is in place. One writer often quoted in corporate training programs but not so recognizable among library and small-to-midlevel nonprofit organizations is Donald L. Kirkpatrick. Spend any time reading ASTD publications, including the monthly T+D magazine, or nearly any other in-depth examination of what contributes to a well-run learning program, and you will not go far before coming across references to his work—and for good reason. Kirkpatrick proposes that there are four levels through which we can evaluate the effectiveness of learning: reaction, learning, behavior, and results. What he looks for—and recommends we look for—are levels of change, since training and learning opportunities are generated to facilitate change among individuals and within organizations.

Kirkpatrick’s Level 1 evaluation is the level most familiar to those of us engaged in evaluating and documenting the results of what we provide. It measures how the audience reacts to the training. Commonly called “smile sheets” because they are designed to tell us whether learners are smiling when they complete the lessons we provide, Level 1 evaluations tell little about whether any learning actually took place. Instead, the participants’ initial reactions to the training are evaluated through questions that ask whether the participants are happy with the workshop, whether they believe the handouts are useful, and whether the room or method of online delivery is comfortable.

With Level 2, we begin to look for tangible effects by measuring what knowledge, skills, or attitudes are improved as a result of the learning that occurs. This can be measured through quizzes, more formal tests, or surveys administered before and after a learning opportunity has been completed. Although a Level 2 evaluation can prove that learning has occurred, it does not confirm that the learning will be applied back on the job, so Kirkpatrick takes us to a third level, where we look for behavioral changes in the workplace during the weeks and months following completion of a learning opportunity. If, for example, a learner completes a customer service course, we might look for proof that the learner has effectively demonstrated an ability to de-escalate an interaction with a hostile customer. If a learner has completed a workshop or course on a new piece of software, we look for proof that the software is being used more efficiently or effectively to the benefit of the individual, the organization, and those served by the organization.
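To make the before-and-after comparison concrete, here is a minimal sketch in Python, using entirely hypothetical learner identifiers and quiz scores, of how a Level 2 measurement might be tallied; it illustrates the pre- and post-assessment idea described above rather than prescribing any particular tool.

```python
# Minimal sketch of a Level 2 (learning) comparison: the same short quiz
# is given before and after a workshop, and we look at the average gain.
# All learner identifiers and scores are hypothetical.

pre_scores = {"learner_a": 4, "learner_b": 6, "learner_c": 5}   # out of 10
post_scores = {"learner_a": 8, "learner_b": 7, "learner_c": 9}  # out of 10

gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}

average_gain = sum(gains.values()) / len(gains)
improved = sum(1 for gain in gains.values() if gain > 0)

print(f"Average gain: {average_gain:.1f} points out of 10")
print(f"Learners who improved: {improved} of {len(gains)}")
```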

In a Level 4 evaluation, we reach the pinnacle of showing why workplace learning and performance efforts are worth far more than anyone will ever invest in them. We attempt, at this stage, to show long-term results to the organization and those it serves. If, for example, we have offered that customer service course, we might seek evidence that circulation of the library’s materials or use of its online resources has increased because of improved levels of service provided by the learner. We try to establish, through surveys, focus groups, or other evaluation methods, that the library’s or nonprofit organization’s customers express greater satisfaction with the service they are receiving than they did before the training occurred. We look for evidence that training provides other measurable, long-term benefits.

Most important, we work to develop a meaningful way to attach a dollar value or other quantifiable benefit that shows that the effort and expenditures produced worthwhile results. We might, for example, document that worker’s compensation costs have decreased after employees completed health and safety or ergonomic training, or that the costs of litigating sexual harassment charges have dropped significantly because harassment-prevention training reduced the problem in the workplace. Because this is often the most difficult level to evaluate and document within libraries and nonprofit organizations, few of our colleagues tell us that they routinely engage in this effort. That does not, however, mean that it is not worth pursuing. If we do not ask, what chance do we, ourselves, have of learning?
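For readers who want to see what attaching a dollar value might look like in practice, the sketch below works through the workers’ compensation example with figures we have invented purely for illustration; the calculation (savings minus program cost, divided by program cost) is one common way of expressing a return on training investment, not a formula drawn from the programs described in this chapter.

```python
# Hypothetical Level 4 calculation: did safety/ergonomic training pay for itself?
# Every figure here is invented for illustration.

training_cost = 12_000.00        # design, delivery, and staff time
comp_costs_before = 45_000.00    # workers' compensation costs, year before training
comp_costs_after = 28_000.00     # workers' compensation costs, year after training

savings = comp_costs_before - comp_costs_after   # savings attributed to the training
net_benefit = savings - training_cost
roi_percent = (net_benefit / training_cost) * 100

print(f"Savings attributed to training: ${savings:,.2f}")
print(f"Net benefit after training costs: ${net_benefit:,.2f}")
print(f"Return on investment: {roi_percent:.0f}%")
```

The hard part, of course, is not the arithmetic but establishing that the training, rather than some other change in the workplace, produced the savings.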

Breakthrough Training and Results

Because training is about change, and because change should produce documentable results, we are strong advocates of looking beyond facts and figures so we can see how our work ultimately affects those it is meant to serve. As we try to determine how our work affects learners, their organizations, and their customers, we start—rather than end—with simple results, including documenting how many people were served. The more important part of the process is to ask what ultimately happens to the learners and their customers as a result of the knowledge they acquired. We need to document how many of our colleagues actually attempted to use what they learned when they returned to their worksites, and then find ways to track how that learning experience filtered through our organizations so we can begin seeking, documenting, and learning from the more sustainable results a successful workplace learning and performance program produces.

We might, for example, ask whether a customer service training workshop led to better service to the customers themselves: As a result of receiving improved service, were they able to make some sort of measurable and positive change in their own lives? Did they find a resource they would not otherwise have found? Did they use that resource to their own—not the library’s or nonprofit organization’s—benefit? Did it produce something notable such as a job interview with a company they might otherwise not have found, or were they able to locate and utilize a service that added to the quality of their own lives? Did the improved customer service itself provide a model that the organization’s customers were able to adapt and spread in their own business, volunteer, or community activities?

We certainly cannot pretend that this level of evaluation and documentation is easy or even necessary for every learning opportunity we provide. We would, on the other hand, be underestimating the value of our own work if we did not at least consider these questions; they help us recognize results we might otherwise never have noticed, given all the effort we put into what we do. Those seeking more detailed guidance on using “outcome measurement” within libraries will find plenty of encouragement in Demonstrating Results, by Rhea Joyce Rubin, a consultant and former library director who has written extensively about conducting evaluations and measuring results.2

Another innovative and inspiring work is The Six Disciplines of Breakthrough Learning.3 This trainer’s guide supports the idea that evaluation starts even before the first day of learning—a concept also championed by Robert Brinkerhoff, an educator and consultant whose work is often cited by colleagues managing corporate workplace learning and performance programs.4 In the Six Disciplines model, no one registers for a learning opportunity without first discussing that opportunity with the manager or supervisor who works with the learner; that initial discussion is designed to establish what the learning opportunity offers, what the learner expects to bring back to the workplace, how the manager sees the learning coinciding with the learner’s and the organization’s goals, and what will be done to support the learner who is returning to the workplace. Using a simple follow-up method described in the book, the authors, for example, were able to document results even at the most elementary level, including a much greater awareness among managers as to what goals their employees were pursuing; 60 percent of the managers who did not use the follow-up were unaware, whereas 100 percent of those using the method had a clear idea of the goals employees were attempting to reach.5

Brinkerhoff offers us a sobering appraisal that demonstrates why we should be more concerned with evaluation and anything else that improves the results our efforts produce: 15 percent of those who attend training sessions do not use their learning at all; 15 percent produce “a concrete and valuable result”; and the remaining 70 percent generally use some of what they learn but soon abandon what they acquire because their efforts produce no results or they simply stop trying to apply what they learned.6 The rhetorical question this produces is, of course, where else other than in training programs would we accept a 15 percent success rate and an 85 percent failure rate?

The follow-up tool described in Six Disciplines—Friday5s—is worth examining if we want to alter those dismal results. Once learners complete the formal workshops or courses being offered and then return to their worksites, they engage in weekly follow-up exercises for up to three months. Receiving e-mail each Friday, they spend approximately five minutes documenting what they have accomplished that week by applying what they learned, setting goals for what they will accomplish during the following week, and sending copies of their responses to their supervisors as well as to those managing the learning process. Although the Friday5s system is an incredibly well-developed tool that has been refined over several years, the concept could easily be adapted for use within any library or nonprofit organization willing to make the commitment to assuring that what is learned does not simply remain abandoned and unused within the physical or virtual classroom but is, instead, nurtured by those who will most benefit from its use. Applying this sort of process to produce and document results reminds us, as the authors quote a corporate leadership training executive saying, that “we are not in the business of providing classes, learning tools, or even learning itself. We are in the business of facilitating improved business results.”7
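For those tempted to experiment with the concept before investing in a dedicated system, the sketch below shows one hypothetical, home-grown adaptation in Python: each Friday it drafts a short reflection prompt for every learner and copies the learner’s supervisor. It is not the Friday5s tool itself; the names, addresses, and prompts are invented, and the draft is simply printed rather than e-mailed.

```python
# A hypothetical, home-grown adaptation of the weekly follow-up idea:
# each Friday, draft a short reflection prompt for every learner and
# copy the learner's supervisor. Names and addresses are invented;
# the draft is printed instead of being sent.

from datetime import date

learners = [
    {"name": "A. Learner", "email": "alearner@example.org",
     "supervisor": "supervisor@example.org", "course": "Customer Service Basics"},
]

PROMPTS = [
    "What did you apply from the course this week?",
    "What result (for you, the library, or a customer) did it produce?",
    "What will you try next week?",
]

def draft_followup(learner, week_of):
    """Build the text of one weekly follow-up message."""
    lines = [
        f"To: {learner['email']}",
        f"Cc: {learner['supervisor']}",
        f"Subject: Weekly follow-up ({learner['course']}), week of {week_of}",
        "",
        "Please take about five minutes to answer:",
    ]
    lines += [f"  {i}. {question}" for i, question in enumerate(PROMPTS, start=1)]
    return "\n".join(lines)

for learner in learners:
    print(draft_followup(learner, date.today().isoformat()))
    print("-" * 60)
```

Whatever the mechanism, the point is the weekly rhythm of reflection, goal setting, and supervisor visibility that the authors describe.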

A great example of this level of engagement is demonstrated in the North Carolina Master Trainer project, which we describe in more detail in chapter 8. To apply to the program, a staff member must have a letter from the library director stating how the skills learned will be used to benefit the library and its customers. The director must also commit to providing the support the staff member needs to be successful in the program—including time away from work to complete Master Trainer projects. Throughout the extensive program, facilitators communicate individually with participants’ managers to let them know what changes to look for and expect, and how they can support the participant. The managers are also surveyed periodically to determine what improvements or changes they see on the job as a result of the training.

There is no reason we cannot adapt this approach within libraries and nonprofit organizations so that the goals against which we evaluate success extend beyond providing learning opportunities; we, as leaders, need to embrace the possibility that we facilitate results that ultimately are as meaningful to the customers our organizations serve as they are to the learners themselves. There is also no reason we cannot take the next step recommended by those who have documented results from learners: work to assure that what is learned will actually be supported in the workplace when the learners return. If we are supporting our colleagues’ participation in face-to-face or online learning opportunities and working to create the possibility of both formal and informal learning occurring within our organizations, we need to remember that results come from far more than just putting together a great lesson and then moving on to something else. Learners who are not supported in their workplace soon return to their previous behavior if that is what is rewarded.

“The reality is that … non-training, performance system factors are the principal determinants of impact from training and can, if they are not aligned and integrated, easily overwhelm even the very best training,” Brinkerhoff suggests, and to ignore him is to miss an opportunity worth taking. “Our vast experience in evaluating training programs and the conclusions of many research studies on training transfer and impact all lead to the same conclusions: the principal barriers to achieving greater application of learning and subsequent business results lie in the performance environment of the trainees, not in flaws (though there may be some) in the training programs and interventions themselves.”8

What Our Colleagues Say and Do

Jay Turner is one of our most diligent colleagues in his pursuit of effective evaluations. He is also one of the few we have encountered who begins planning for the process at the same time he begins planning a workshop or course. He completes at least a rudimentary level of evaluation for every learning opportunity he provides to the staff at the Gwinnett County Public Library.

Discussing how he develops e-learning sessions, he notes that “a successful e-learning program corresponds with the library’s business drivers, so that it meets a real need. It’s not training just for the sake of training.” If he is going to include a Level 2 evaluation, he tries to “test the students before, during, and after the program to see if they learned the content”:

If my goal is to have learners walk away with a tangible skill, then an even higher level of feedback is in order. Perhaps I’ll test them and then, say, a month later, check in with a random sample of attendees and see if or how they are using their new skill imparted from the e-learning program. I like to wait at least two weeks before doing this level of follow-up to ensure that the halo has worn off and the learning has really taken hold.

It is, he admits, “difficult to get in the habit of higher-level feedback, but there’s no other way to know if training is making a difference. I’ll be completely honest and say that I don’t go to Level 3 and 4 evaluations on everything. It really depends on what business drivers we’re trying to address.” He does, on the other hand, seek to document those behavioral and result-based elements produced through the Level 3 and Level 4 evaluations for any offering that addresses “any library system-wide performance concern, such as training for policies, guidelines, and new-hire onboarding: I think it’s important to look at learning and development programs as works in progress; you have to continually evaluate your offerings to see if you are hitting your goals for learning and if staff members are truly benefiting from your efforts.”

Some of us who are aware of Kirkpatrick’s work never get around to his higher levels of evaluation simply because we feel we lack the time and do not sense a pressing need for it. Catherine Vaughn, for example, conducts simple evaluations within the Lee County Library System—“but not routinely”—and notes that she has “never been asked nor is it expected of me to conduct ‘formal’ evaluations” one or two months after learners have returned to work. She distributes questionnaires to learners, often before a session begins, so they can make notes of positive or negative reactions while a workshop or course is under way. She also informally checks with learners’ supervisors to see whether what she offers is making any difference: “I determine success when products are used, e.g., our intranet, the number of complaints has declined because positive comments are up from our customers. … The biggest one is when staff aren’t as frustrated and don’t complain as much—‘I don’t get it’ or ‘This is hard to understand,’ ‘Why did we have to change, the old system was fine.’”

As she considers possibilities, she suggests that she would like to “have each supervisor evaluate their employees on certain skills and topics that were discussed in the workshop that employee recently participated in, and [she] would like to know more about how learners apply lessons to their jobs and what, in learning opportunities, are not meeting their needs.”

“At this point,” Vaughn continues, “I see that we don’t come full circle and that leaves us open to questions like, ‘Is this session something we need to be doing or is it to fill time?’ ‘Can we eliminate this session or do we keep it because we have offered it forever?’ ‘Are staff learning new skills or improving the ones they have so that it benefits everyone—employees and customers?’”

“I am only one person and am currently stretched in four directions—library CE [continuing education] coordinator, volunteer coordinator, task force facilitator, and County Prepare Training certified instructor,” she adds, describing a situation faced by many in library and nonprofit organization training programs.

Other colleagues, reflecting on their use or lack of use of formal evaluations, note that they have tried it both ways. Louise Whitaker regularly conducted evaluations, stopped for a year, and was in the process of reinstituting a revised form of evaluation for the Pioneer Library System when we spoke to her. She took a yearlong hiatus from conducting evaluations because “people were too nice: they never said what they really thought about the training, so everything was always good. Not every training is good; let’s be honest. The training doesn’t fit everyone’s needs all the time, and when they say it [always] does, something is wrong.”

The revised form moves Pioneer closer to Kirkpatrick’s model:

We’re going to tie the evaluations more immediately after the training and online so they can be anonymous and then follow up with a second evaluation a couple of weeks after the training to see if they’re actually using any skills that were talked about during the training. We hope that will give us an idea—if they’re not using the skills, is it because there was nothing there that applied to the job, or are they unclear on how to use them?

Jason Puckett, instruction librarian for user education technologies at Georgia State University Library in Atlanta, is among those who readily admit to feeling that they should be doing more to evaluate the effects of learning opportunities: “Part of the problem is that I don’t like to take up my very limited class time with evaluation, but then if I’m a guest speaker in someone’s class like I often am, I don’t have a way to follow up afterward. I’m about to start doing some online workshops and I’ll be collecting e-mail addresses to send a follow-up evaluation survey made on SurveyMonkey or something similar.”

Rachel Vacek, web services coordinator of the University of Houston Libraries, believes that evaluations are important: “You can’t continue to go blindly into the future without having some sense of whether your previous workshops were successful or not.” She also, however, cites the same constraints voiced by her Georgia State University Library colleague and says that “sometimes you just want to present information.”

Peter Bromberg takes a measured approach to evaluating programs, as shown by his comments about working for the South Jersey Regional Library Cooperative:

I pay close attention to big events and workshops that we do and I usually do a custom online evaluation form for those events. From the standard paper evaluations, I pretty much want to get a sense of thumbs up or down, why, and what other classes they’d like me to schedule. I’m really not positioned to look closely or monitor whether or not they apply what they’ve learned in the long term. That’s more appropriate at the library level. My big-picture goal is to help library staff acquire the skills and abilities they need to create great customer experiences for the library users of South Jersey.

Reflecting on the efficacy of the evaluation form he uses, Bromberg says, “The form I use now I pretty much inherited eight years ago. I’m not crazy about it, but I haven’t changed it—which,” he jokingly adds, “gives you some sense of how close attention I pay to the evaluation process.”

The question of whether evaluations are necessary and used effectively by organizations leads to some interesting ruminations among those who initially say they believe they should be doing more. Janet Hildebrand, for example, begins by saying that the Contra Costa County Library system does survey staff about training and learning needs and has done follow-up with colleagues involved in special projects such as a computer competency effort: each work unit involved in the project continued to work with team members in the six months following completion of the all-staff trainings. Overall, however, her initial reaction during a discussion about evaluation was that “we’re not strong on formal evaluation tools and could do better in this area.”

Reflecting on the obvious successes the training program in Contra Costa County produces leads her to suggest that those successes happen because

this open learning environment, the enthusiasm and participation of so many peer trainers, the verbal articulations of appreciation and amazement from new staff build the trust staff need to continue to want to learn and therefore they continue to learn and teach others. What comes out of that is a courage to take on new things and an ease of organizational growth and change. So the details matter less, and the testing of learning is not the point really.

She ultimately does agree that there must be some method of accounting for performance as long as learning rather than the testing of learning remains the focus of our efforts: “We are each responsible for our learning. If an individual does not participate, or is not meeting those expectations, then the supervisor must arrange for the training and coaching that the employee needs and requests, and the employee is accountable for his own learning. On that level, we do expect evaluation to be clear, and proactive, and formal.”

Because there is no denying the power of a story to suggest success, informal documentation remains an important part of the evaluation process. Turner, for example, recalls offering an online one-on-one session to help a colleague at Gwinnett learn more about using PowerPoint in the workplace if the colleague would share, with others, what she learned:

At the end of that hour session, this librarian was absolutely stoked. I could hear it in her voice. … Months pass. I don’t hear much from her, so I naturally assume all is well and that she would reach out to me if she needed any more help. Well, a while later, she asks to borrow my portable projector because she was doing a PowerPoint presentation for her branch. I was ecstatic. She sent me the PowerPoint to review before the presentation and I could see that she used tips we went over in the one-hour session.

Vaughn, in a similar vein, recalls a staff member who attended a reference workshop session on science sources. One month after the workshop, this employee was on a library reference desk when “a frantic mother and her elementary school child came into the library … looking for information on manatees and needed a picture also.” The staff member remembered an e-resource covered in the workshop and was able to provide the information and some pictures that met the child’s needs. “The staff member told me that, had it not been for the workshop, she would have just tried the encyclopedia and not thought about e-resources,” Vaughn concludes.

Not everyone feels that evaluations are worth the time required to complete them. Management consultant Pat Wagner bluntly states: “Most trainer evaluations are useless. I’ve given customer service training where the evaluations said it was both the best and worst class people attended. They hated my stories; they loved my stories. The whole process was a waste of time.” As a consultant, she needs feedback during the session to make sure she is on track with what the person who hired her wants her to do. “If I get a pile of evaluations three weeks later, what can I do?” she asks.

Wagner suggests that evaluations can be tainted by what motivates a learner to attend a workshop or course. A person who is sent to training and does not really need what is being offered may give a good evaluation because he or she appreciates training, or he or she may give a bad evaluation because attending the training is a waste of time. The person who is sent to training and needs the training may give a good evaluation because he or she likes the instructor’s sense of humor or may give a bad evaluation because of a feeling—accurate or inaccurate—that the training is a disciplinary action.

Wagner thinks that the best evaluations come from trained trainers who are in the class. For her, the heart of evaluation is in demonstrating success to the person who hires her. To achieve this, she is explicit from the beginning in outlining the expected outcomes of the training. Because most of her clients are repeat customers, she keeps in touch with them to find out whether they believe her training has been successful.

Fulfilling Great Expectations

The bottom line is that, as leaders in workplace learning and performance, we need to be prepared to justify the expense, time, and effort our work requires so we can continue to provide learning opportunities to the learners in our organizations; the best way to do this is through measurable outcomes obtained via some form of evaluation and through evidence that what we provide has far-reaching and significant effects. We need to have ways of assuring ourselves and those we serve—learners as well as those who benefit from what learners provide and produce—that our resources are not being wasted.

Everyone we interviewed does some form of evaluation, whether formal or informal. There is clearly a wide variety of opinions about using evaluations to measure the success of a training session or program. Furthermore, there is, as Janet Hildebrand suggests, a need to remember that the purpose of conducting evaluations goes far beyond simply testing for learning.

Once we have gathered the information provided through evaluations, we must pay attention to what is done with the data we have generated. If no one is looking at the data and using that information, then we have to wonder whether it is worth the time and effort required to gather it in the first place.
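One modest way to make sure the data is actually looked at is to summarize it on a regular schedule. The sketch below, using invented session names and ratings, simply averages the responses collected for each offering so that whoever reviews the program can spot sessions that may need attention; the rating scale and field names are our assumptions, not a standard.

```python
# Hypothetical summary of collected evaluation responses: average the
# 1-5 ratings recorded for each offering so the results are reviewed
# rather than filed away. Session names and ratings are invented.

from collections import defaultdict
from statistics import mean

responses = [
    {"session": "Customer Service Basics", "rating": 4},
    {"session": "Customer Service Basics", "rating": 5},
    {"session": "Intro to E-Resources", "rating": 3},
    {"session": "Intro to E-Resources", "rating": 2},
]

by_session = defaultdict(list)
for response in responses:
    by_session[response["session"]].append(response["rating"])

for session, ratings in sorted(by_session.items()):
    print(f"{session}: average {mean(ratings):.1f} from {len(ratings)} responses")
```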

Academic librarians who serve as trainers are particularly challenged when attempting to evaluate their training sessions formally, as Jason Puckett notes. Since most library instruction is offered during the beginning of the semester, there is pressure to do a lot in a limited amount of time. When that constraint is coupled with the fact that academic librarians are reaching out to an audience of students whom they may see only one time, the challenge appears even greater. Puckett sums the challenge up nicely by asking whether even a small part of the single hour he has with his learners should be taken up with evaluations.

The problem we must overcome is that we know evaluations should be done even though many of us are already stretched too thin and wearing too many hats. The irony here is that it is only by conducting effective evaluations that we can gather the data needed to show administrators that training works and that dollars and time spent for learning provide wonderful results. As Peter Bromberg reminds us, our goal is to create great experiences for all who use the services of the organizations we staff. We should expect no less.

Notes

1. Donald L. Kirkpatrick and James D. Kirkpatrick, Evaluating Training Programs: The Four Levels (San Francisco: Berrett-Koehler, 2006).

2. Rhea Joyce Rubin, Demonstrating Results: Using Outcome Measurement in Your Library (Chicago: American Library Association, 2006).

3. Calhoun W. Wick, Roy V. H. Pollock, Andrew McK. Jefferson, and Richard D. Flanagan, The Six Disciplines of Breakthrough Learning: How to Turn Training and Development into Business Results (San Francisco: Pfeiffer, 2006).

4. Robert O. Brinkerhoff, Telling Training’s Story: Evaluation Made Simple, Credible, and Effective (San Francisco: Berrett-Koehler, 2006).

5. Wick et al., Six Disciplines, 128.

6. Brinkerhoff, Telling Training’s Story, 19.

7. Wick et al., Six Disciplines, 13.

8. Brinkerhoff, Telling Training’s Story, 38.