The L&D profession has made steady progress over the last 60 years defining processes and introducing new concepts, systems, and tools. Many practitioners, though, still don’t know where to start or how best to proceed with measurement, particularly with data analytics and reporting. For example, many are asked to create a measurement strategy or show the value of their investment in learning, but don’t know how. And most L&D professionals have limited resources, which makes the task all the more challenging, especially considering the more than 180 measures we have identified that are available just for L&D. (We cover 120 in this book!)
We propose to simplify measurement, analytics, and reporting by applying a framework called the Talent Development Reporting principles (TDRp). Like any good framework, TDRp provides a common language for measures and the reasons for measuring. It recommends grouping measures into three categories, which will facilitate both the discussion and selection of measures. It also recommends a framework and common language for reporting based on the reasons for measuring. Moreover, TDRp breaks new ground by recommending the adoption of three standard reports for the management of individual programs and the entire department. TDRp also provides practical guidance on how to use the measures and reports to meet the needs of the various stakeholders in your organization.
Let’s first offer a little history of how we got here before exploring TDRp in detail.
In 2010, a group of L&D leaders began discussing the need to create standards for measurement, reporting, and management within the industry. The discussions started casually among like-minded colleagues at conferences and centered on two key questions: Why is measurement capability in most organizations underdeveloped, and why does every organization spin its wheels creating its measurement approaches from scratch? The answer to both questions was the same: a lack of standards for measurement and reporting. Without standards, every talent function essentially had to start from scratch to first identify the measures that made sense in their organization and then design the appropriate reports that would provide insights for decisions and corrective action.
As the discussions widened to include a broader group of industry thought leaders, the realization hit: The Generally Accepted Accounting Principles (GAAP) employed by accountants in the United States (and the International Financial Reporting Standards employed by accountants in the rest of the world) could serve as the inspiration for the effort. The rationale was: “If the GAAP framework for measurement and reporting works so well for accountants, who also have hundreds of measures and numerous reports, why don’t we have something like it for the learning profession?”
Moreover, they argued, accounting is not the only profession that has adopted frameworks and standards to organize vast amounts of data and provide a common language. Chemistry uses the periodic table; biology classifies organisms into kingdoms, phyla, families, and species; medicine classifies humans (and animals) by systems and organs. The founders of TDRp thought the time had come to develop a comparable framework for learning.
The working group consisted of about 30 thought leaders and prominent practitioners, with Kent Barnett and Tamar Elkeles leading the effort (see sidebar). They recruited the experts shown in appendix A, including your authors, who helped conduct the research on current practices and write the white paper. After numerous revisions to the white paper, the group agreed on the key assumptions and principles, the three types of measures, and the recommended management reports (one of the five types of reports).
The Origins of TDRp
By Kent Barnett
In the fall of 2010, Tamar Elkeles, at that time the CLO of Qualcomm, and I, then CEO of KnowledgeAdvisors, were at lunch celebrating the retirement of Frank Anderson. Frank, the outgoing president of Defense Acquisition University, was a visionary and highly respected learning leader, so it was the perfect place to launch a strategic industry initiative. During our conversation, we agreed it was time to create standards to help us measure the impact and performance of L&D. We realized that the financial world had standardized reporting. By looking at the income statement, balance sheet, and cash flow statement, one could analyze the financial performance of any organization. Shouldn’t we be able to do the same thing in learning?
Tamar and I agreed to co-chair a new council with the goal of creating standardized reporting for talent development. More than 30 thought leaders and leading organizations joined our effort, and out of that grew the Talent Development Reporting principles (TDRp). Most importantly, early on in the process Dave Vance accepted our offer to join us. As our work progressed, Dave took the lead and spearheaded the efforts to create the Center for Talent Reporting.
Ten years later, the TDRp framework is being adopted around the world, and Dave Vance has turned the Center for Talent Reporting into an integral part of our industry’s advancement.
The working group focused initially on L&D but quickly extended the principles to all core talent processes, defined as those processes that directly contribute to achieving high-level organizational outcomes. By mid-2012, we expanded TDRp to include talent acquisition, performance management, leadership development, capability development, and total rewards. (See appendix A for more detail.) In this book, we concentrate only on L&D. You can find the measures and sample reports for the other HR processes at CenterforTalentReporting.org.
Now that we had developed TDRp, it needed a home. The Center for Talent Reporting (CTR) was created in 2012 to be such a home and to advocate for TDRp’s adoption. CTR would also provide resources to help the profession implement TDRp, including webinars, written guidance, workshops, and an annual conference.
With this background, let’s turn to Measurement Demystified, which we wrote to help you, the L&D practitioner, better measure, analyze, report, and manage learning at both a program and department level, with the ultimate aim of delivering greater value to your organization.
Our approach outlined in the following chapters will work for both small and large organizations, even if yours is an L&D staff of only one or two. Typically, smaller organizations will have fewer programs, so the number of measures and reports will also be smaller. Larger organizations will have greater complexity and require greater effort, so they will have to set some priorities. Even so, the guidance remains the same: Start small and grow. The approach also works for all types of organizations—for profit, nonprofit, government, education, and the military.
Our outlook on each topic is very practical. We all have limited resources, including limited time and imperfect data. We all operate in an environment of continual change and uncertainty. As practitioners, our goal is to do the best we can with what we have to help our organizations succeed. Consequently, we use imperfect and often incomplete data because that is usually better than the alternative, which is to do nothing. We plan, estimate, and forecast knowing that we will be wrong but, if we do it smartly, the effort will be worthwhile and contribute to our organization’s success.
With that approach in mind, each chapter builds on the preceding chapters. You can jump directly to a chapter that interests you but, if you are not already familiar with all the reasons for measuring and the TDRp framework, we advise you to read chapter 1 first. Likewise, since we present a new framework for reporting, it will be helpful if you read chapter 8 before other chapters on reporting. After you are familiar with the framework and measures, you can use the book for performance support and go to the relevant section for definitions of measures or guidance on reports. Here is a description of the chapters:
In chapter 1, we start by discussing the many reasons to measure and then share the TDRp framework, classifying the reasons to measure into four categories to simplify communication and understanding. We provide a maturity model for measurement, employing the four broad reasons to measure, and classify measures into three types and reports into five types. The chapter ends with a discussion of the recently released International Organization for Standardization’s (ISO) Human Capital Reporting Standards and their integration with TDRp.
Chapter 2 completes our foundational discussion of measurement by explaining the importance of including all three types of measures in a measurement strategy. In chapter 3 we begin our detailed discussion of measures by introducing efficiency (or activity) measures, which are by far the most numerous in the profession. We provide definitions and recommendations on 107 of these foundational measures, including those benchmarked by the Association for Talent Development in its annual State of the Industry report and those recommended by the ISO.
Chapters 4 and 5 explore effectiveness and outcome measures, the subjects of many books on evaluation. We provide a comprehensive introduction to these important measures and a discussion of the key differences between the Kirkpatrick and Phillips approaches. We define each measure and detail the options for calculation. We also include a list of measures that are commonly benchmarked.
In chapter 6, we provide guidance on how to create a robust measurement strategy, including all the key elements. Then in chapter 7, we incorporate what we’ve learned so far to guide the reader in selecting the right measures based on their purpose for measuring. We provide examples of recommended measures for common programs and improvement initiatives.
Chapter 8 revisits the TDRp framework to explore the five different types of reports, employing the measures we’ve described so far. We suggest how to select the proper report to meet a specific need. Chapter 9 focuses on one type of report, the management report, and details the three specific management reports recommended for use in managing learning programs and department initiatives.
Chapters 10, 11, and 12 complete the exploration of reporting, first by providing guidance on creating a reporting strategy, and second by providing instruction on how to create values for the selected measures, including planning and forecasting. Some readers will find chapters 11 and 12 challenging, not because the concepts or measurements are difficult, but because there are so many options.
We end by sharing implementation guidance in chapter 13 and pulling all the elements of the book together in chapter 14. In addition to a history of TDRp’s adoption, the appendix includes an example document of roles and responsibilities for L&D and goal owners, a sample measurement and reporting strategy, a sample project implementation plan, and a glossary.
To see how the concepts fit together, review the chapter layout in Figure I-1.
The glossary provides definitions for more than 190 terms; here we share some of the most basic and important terms we use in the book.
We use the term measure as a noun to be synonymous with metric and KPI (key performance indicator). At one time, KPI might have been reserved for only the few key or important measures, but today it is commonly used for any measure.
While many in the profession consider any operation involving numbers to be analytics, we reserve the term to mean higher-level analysis, often involving statistical tools. For example, we will not refer to determining the number of participants or courses as analytics. The same is true for reporting the average participant reaction for a program. In both cases, the value of the measure is simply the total or average of the measured values, so no analysis is required. In contrast, we will refer to a detailed examination of the values of these measures (perhaps using their frequency distribution), the use of regression to forecast the value of a measure, or the use of correlation to discover relationships among measures as data analytics or analysis.
Think of it this way: measurement provides the quantification of the measure, which is typically an input for analysis (the old term for analytics). There are exceptions, however, and sometimes analysis is required to determine the value of a measure (isolated impact, for example). In summary, simply measuring and reporting the value of a measure does not generally rise to the level of analytics; more than arithmetic is required to be considered analytics. (Figure I-2 describes the connections among measurement, analytics, methodologies, and reporting.)
With this context, we suggest the following definitions for these important terms:
• Measure (synonymous with metric and KPI). As a noun, it is the name associated with a particular indicator. For example, the number of participants is a measure. As a verb, it is the act of finding the value of the indicator.
• Measurement. The process of measuring or finding values for indicators.
• Analytics (synonymous with analysis). An in-depth exploration of the data, which may include advanced statistical techniques, such as regression, to extract insights from the data or discover relationships among measures.
The Institute for Operations Research and the Management Sciences defines analytics as “the scientific process of transforming data into insights for making better decisions,” but we believe this definition is overly restrictive. We agree that the intent of analytics is to provide insights, but the effort is not always directed toward decision making. Sometimes the goal is simply a better understanding of the data or of relationships among multiple measures. Furthermore, an analytics effort may not always provide insights or lead to better decisions, just as an experiment may not always produce the hypothesized result.
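To make the distinction concrete, here is a minimal sketch using hypothetical survey data (not drawn from the book): computing totals or averages is measurement, while checking whether two measures move together crosses into analytics.

```python
# A minimal sketch with hypothetical data contrasting measurement with analytics.
from statistics import mean

# Hypothetical post-course results for five classes
reaction_scores   = [4.2, 3.8, 4.5, 3.9, 4.6]       # average participant reaction per class
application_rates = [0.55, 0.48, 0.68, 0.50, 0.72]  # share of participants who applied the learning

# Measurement: the value of each measure is simply an average -- no analysis required
print("Average reaction:", round(mean(reaction_scores), 2))
print("Average application rate:", round(mean(application_rates), 2))

# Analytics: a simple Pearson correlation to see whether higher reaction scores
# are associated with higher application rates
def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print("Reaction vs. application correlation:", round(pearson(reaction_scores, application_rates), 2))
```

The first two lines of output are measurement; only the correlation step, which looks for a relationship among measures, qualifies as analytics in our usage.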
While terms such as program, initiative, course, and class are often used interchangeably, we will define each specifically, borrowing from academic terminology:
• Program. A course or series of courses with similar learning objectives designed to accomplish a business or HR goal or meet an organizational need. For example, a program to improve leadership may comprise four related courses over a six-month period. At the university level, a program leading to a degree in economics may require 12 courses over a four-year period.
• Course. A class or series of classes, an online module or series of online modules, prework, post-work, performance support, discussion boards, and other types of learning to convey related and integrated content. For example, a course on leadership may consist of four hours of prereading, two online modules, four instructor-led classes, an online discussion board, and performance support. In a corporate environment, each course will have a specific designation in the learning management system (LMS). At the university level, students will enroll in specific courses each term such as Economics 101.
• Class. Each physical or virtual meeting of students where content is conveyed by an instructor. A course may consist of just one class if the content can be conveyed in one sitting or it may require multiple classes to convey all the content. At the university level, a semester-long course like Econ 101 might meet for two classes per week for 10 weeks. It is also possible that the number of students enrolled in a course exceeds the optimum class size, which necessitates multiple classes even if the content can be conveyed in a single sitting. So, a one-hour instructor-led course for 150 employees will require six classes of 25 each. The analogy at the university level is 300 students taking Econ 101 where enrollment is limited to 100 per class. In this case there will be three sections with 100 in each.
• Online or e-learning module. A single session of computer, tablet, or mobile-based instruction that may last from five or 10 minutes to an hour or more. Each online module will typically require the user to log in, with completion recorded in the LMS.
• Initiative. May be used in place of program but may also designate a coordinated series of actions to improve the effectiveness or efficiency of the L&D department. For example, there may be an initiative to reduce complaints about the LMS, lower department costs, or improve the application rate of learning in general across all courses. In this book we will use the term program when the effort addresses business or HR goals, or organizational needs like onboarding or basic skills training. We will use the term initiative when the effort is not directly aligned to business or HR goals or organizational needs, but instead focuses more on improving the efficiency or effectiveness of L&D department processes and systems or all programs.
Here are several other key terms and their definitions, which we will use frequently:
• Learning and development. The name of the professional field and many training departments dedicated to increasing the knowledge, skills, and capabilities of the workforce. Other names for L&D departments include training, organization development, and talent development, although the last two may include additional responsibilities such as succession planning.
• Formal learning. Learning that is structured and organized or directed by someone other than the learner. This includes instructor-led training (ILT) where the instructor is physically located with the participants, virtual ILT (vILT) where the instructor is at a different location than the participants, e-learning, structured coaching, and structured mobile learning.
• Informal learning. Learning that is not structured, organized, or directed by someone else. The participant learns on their own through self-discovery. This includes social learning, knowledge sharing, on-the-job learning, unstructured coaching, and personal learning through Internet or library exploration.
• CLO (chief learning officer). The person ultimately responsible for learning in an organization. This position may also be named vice president of training or director of training. If the person also has responsibility for other aspects of talent, the position may be called chief talent officer (CTO) or chief human resources officer (CHRO).
• Employees or headcount. The unique count of all employees at a point in time. Part-time employees are counted as well as full-time employees. Note: If an organization uses many contingent workers (temporary employees and contract workers), consideration should be given to using the term workforce (employees plus contingent workers) in addition to, or as replacement for, number of employees.
• FTE (full-time equivalent). This is a way of measuring full-time effort (40 hours per week x 52 weeks per year) when some employees are part-time and do not work 40 hours per week or 52 weeks per year. For example, if two part-time employees each work half-time, the full-time equivalent of their effort is 1.0 FTE.
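As a minimal sketch of this calculation (with hypothetical hours, assuming 40 hours per week for 52 weeks, or 2,080 hours, as one full-time year):

```python
# A minimal sketch of the FTE calculation using hypothetical annual hours.
# Full-time effort is assumed to be 40 hours/week x 52 weeks/year = 2,080 hours.
FULL_TIME_HOURS = 40 * 52

# One full-time employee plus two half-time employees
annual_hours = [2080, 1040, 1040]

fte = sum(annual_hours) / FULL_TIME_HOURS
print(f"Headcount: {len(annual_hours)}, FTE: {fte:.1f}")  # Headcount: 3, FTE: 2.0
```

Note how headcount (three employees) and FTE (2.0) tell different stories, which is why both measures appear in this book.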
Finally, throughout this book, we discuss the connections among the four foundational elements of TDRp:
• Reporting, which we define as an approach to structure measures and analysis to share results with stakeholders.
• Measurement methodologies, which we define as a process and suite of standards and tools that guide how practitioners execute a specific approach. The learning measurement profession uses several well-known methodologies such as:
Kirkpatrick Four Levels of Evaluation
Phillips ROI Methodology
Brinkerhoff Success Case Method.
• Analytics and measurement, the third and fourth components we defined previously.
In the interplay of these relationships, measurement is at the base, supplying the inputs to our methodologies (for example, Kirkpatrick or Phillips) as well as the analytics we employ (Figure I-2). In some cases, however, the methodologies will dictate which measures we must use. Or, as we mentioned earlier, analytics may be required to compute a specific measure, such as the isolated impact of learning.
Reporting provides a way to display our data, ascertain trends, and provide insights into progress against targets or goals. Reports will often trigger a request for a deeper dive. Depending on how we have structured the reports, we may be able to drill down and get answers to our questions. In other cases, the reports may require additional analysis to understand the root causes behind observed results.
Conversely, a learning leader might formulate a hypothesis such as, “Learners with low levels of manager support are less likely to apply the learning.” Through an impact study or ad hoc analysis, we can confirm or refute this hypothesis about manager support. The insights from the analysis may suggest ongoing reporting of new measures (for example, manager support). Moreover, the reports enable us to monitor results and determine whether the hypothesis holds over time.
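A minimal sketch of such an ad hoc analysis, using hypothetical survey responses, might simply compare application rates for learners reporting low versus high manager support:

```python
# A minimal sketch with hypothetical data: compare application rates for
# learners reporting low vs. high manager support.
from statistics import mean

# Each record: (manager support is high?, learner applied the learning?)
responses = [
    (False, False), (False, False), (False, True), (False, False),
    (True, True), (True, True), (True, False), (True, True),
]

low_support  = [applied for high, applied in responses if not high]
high_support = [applied for high, applied in responses if high]

print(f"Application rate, low manager support:  {mean(low_support):.0%}")
print(f"Application rate, high manager support: {mean(high_support):.0%}")
# A large, persistent gap would support adding manager support as an ongoing reported measure.
```

If the gap is large and persists across programs, that is a signal to add manager support to the reports you monitor over time.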
Understanding the interplay among the four elements of reporting, methodologies, analytics, and measures will help you see how you can navigate the implementation of TDRp within your own organization.
With all this in mind, let’s get started.