What is the best way to manage the process of designing and implementing a performance management system? Who should be involved in this process? How can you overcome the resistance that often builds up against such systems and build support for them instead? What are the most frequent problems that plague efforts to implement measurement systems, and how can you avoid or surmount them? This chapter discusses organization and management issues involved in designing and implementing performance management systems and suggests strategies for success in developing systems that are useful and cost-effective.
The expectation that performance management should contribute to higher levels of organizational performance and, in particular, better outcomes is almost universal among both proponents and critics of the performance movement (Ammons, 1995; Behn, 2003; Epstein, 1984; Halachmi & Bouckaert, 1996; Hatry, 2006; Heinrich & Marschke, 2010; Moynihan, 2008; Poister, 2003; Radin, 2006; Wholey & Hatry, 1992). Although interest in performance management is largely predicated on the assumption that it will lead to performance improvement, empirical research testing this relationship has been relatively sparse to date. A few celebrated cases of effective performance management have been documented, such as CompStat in New York City (Bratton & Malinowski, 2008) and CitiStat in Baltimore (Behn, 2007), and a growing number of case studies demonstrate great potential for using performance information to manage for results (Sanger, 2008; Ammons & Rivenbark, 2008; Gore, 1993; Holzer & Yang, 2004; de Lancer Julnes, Berry, Aristigueta, & Yang, 2007; Barnow & Smith, 2004). For example, New York City's CompStat, a data-driven system, has been credited with effectively fighting crime (Smith & Bratton, 2001), and Baltimore's CitiStat, which expanded the concept to a wide array of municipal services fostering program improvements, won the 2004 Kennedy School of Government's Innovations in American Government Award (Behn, 2006). Similarly, the ongoing North Carolina municipal benchmarking program has been credited with enabling individual cities participating in the project to make significant improvements in efficiency and service delivery (Ammons & Rivenbark, 2008).
However, Ammons and Rivenbark (2008) contend that “most claims of performance measurement's value in influencing decisions and improving services tend to be broad and disappointingly vague,” and they point out that “hard evidence documenting performance measurement's impact on management decisions and service improvements is rare” (p. 305). Poister, Pasha, and Edwards (2013) state “the fairly meager empirical research to date on the impact of performance management practices has produced decidedly mixed results, with four of the eight studies employing cross-sectional analysis across comparable public organizations showing positive effects on performance. Therefore, the question of the effectiveness of performance measurement or management practices in actually improving performance is very much still at issue” (p. 628).
Beyond case studies, a small but growing stream of quantitative research has examined the impact of performance management–related approaches on performance itself, and that work has produced mixed results. Some of these studies have produced evidence that performance management can make a positive difference. The study by Poister et al. (2013) is a recent example finding evidence that both strategic planning and performance measurement, the principal components of performance management in public organizations, “do contribute to improved performance in small and medium-sized transit systems in the United States” (p. 632). Earlier, Walker, Damanpour, and Devece (2011) found that elements of performance management such as building ownership and understanding of organizational mission and goals, specification of appropriate performance measures and targets, devolution of control to service managers, and taking corrective action when results deviate from plans in English local government authorities led to higher core service performance scores constructed by the British Audit Commission. In the field of education, on the one hand, similar research has found positive effects of performance management practices on student test scores in England (Boyne & Chen, 2007) and Denmark (Andersen, 2008), as well as the United States (Sun & Van Ryzin, 2014). On the other hand, a recent longitudinal review of a large number of US cities “at the forefront” of performance measurement activities found significant advancement in the reporting of performance data but little support for the proposition that it has been a catalyst for improved performance. Clearly additional empirical research is needed to identify the kinds of contextual factors and implementation strategies that facilitate the ability of performance management systems to make a difference in government and nonprofit organizations.
In designing and implementing an effective performance management system, we need to consider the elements that are key to its success. Wholey (1999), an early proponent of performance management, identified three essential elements of effective performance-based management:
This chapter begins by discussing ways to manage the process and implement these essential elements.
The practice of strategic planning is often used to develop reasonable agreement among key stakeholders regarding agency mission, goals, and strategies. Strategic planning is concerned with optimizing the fit between an organization and the environment in which it operates, and, most important, for public agencies that means strengthening performance in providing services to the public (Poister et al., 2013). Goal setting drives performance and serves as a motivator because it diverts energy and attention away from goal-irrelevant activities toward goal-relevant efforts and energizes people to put forth greater effort to achieve these goals (Latham, 2004; Fried & Slowik, 2004). This is especially important in the public sector, where problems of goal ambiguity can impede performance (Chun & Rainey, 2005). Therefore, setting goals, objectives, or targets regarding organization or program performance will keep the organization focused on priorities, outcomes, and results and thereby facilitate improvements in performance (Ammons, 2008; Kelly, 2003; Poister, 2003; Van Dooren, Bouckaert, & Halligan, 2010). Strategic planning establishes goals and objectives, many of which are likely to be performance related, and creating and implementing strategies designed to achieve them is expected to lead to improved performance (Bryson, 2011; Niven, 2003; Poister, 2010; Walker et al., 2011). (This process is discussed in more detail in chapter 8 of this book.)
Good performance measures, particularly outcome measures, signal what the real priorities are, and they motivate people to work harder and smarter to accomplish organizational objectives. Tools like the balanced scorecard and logic models explored in chapter 8 help organizations decide on measures. For example, Neely and Bourne (2000) found that as more organizations started to use the balanced scorecard, they found enormous value in the act of deciding what to measure.
Quality performance measures require a balance between the cost of data collection and the need to ensure that the data are complete, accurate, and consistent enough to document performance and support decision making at various organizational levels (Wholey, 2006). The technical quality of a performance measurement system requires that its measures be valid and reliable. Validity means that the performance indicators measure what is important to the recipients of the information, measure what they claim to measure, and are shared in a timely manner. Reliability requires consistency: repeated measurements yield similar results.
Sterck and Bouckaert (2008) identify the major challenges to collecting quality performance measures: (1) objectives that are not sufficiently specific and measurable; (2) heavy dependence on secondary data collection; (3) no established causal relationship between inputs, activities, outputs, and outcomes; (4) missing information on unit costs; (5) difficulties in consolidating information; (6) expensive data collection practices; and (7) time lags between inputs, activities, outputs, and outcomes. To alleviate these challenges, quality standards should be established, data quality monitored, and problems in data quality addressed. These steps correspond to the second element on Wholey's (1999) list, discussed in the previous section: implementing performance measurement systems of sufficient quality.
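Routine data-quality monitoring of the kind recommended here can be partly automated. The following is a minimal sketch of a completeness and consistency screen for incoming performance records; the field names, records, and rules are hypothetical examples, not part of Sterck and Bouckaert's framework.

```python
# Illustrative data-quality screen for incoming performance records.
# Field names, sample records, and rules are hypothetical.
records = [
    {"unit": "District A", "output": 120, "outcome": 45},
    {"unit": "District B", "output": None, "outcome": 30},  # missing output
    {"unit": "District C", "output": 80, "outcome": 95},    # outcome > output
]

def quality_issues(rec):
    """Return a list of completeness and consistency problems in one record."""
    issues = []
    for field in ("unit", "output", "outcome"):
        if rec.get(field) is None:
            issues.append(f"missing {field}")
    # Consistency rule (hypothetical): outcomes counted should not
    # exceed outputs delivered.
    if rec.get("output") is not None and rec.get("outcome") is not None:
        if rec["outcome"] > rec["output"]:
            issues.append("outcome exceeds output")
    return issues

for rec in records:
    problems = quality_issues(rec)
    if problems:
        print(rec["unit"], "->", "; ".join(problems))
```

Screens like this catch routine data entry problems cheaply, leaving analysts free to investigate the harder challenges on the list, such as attributing outcomes to activities.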
Measurement systems provide managers and decision makers with information regarding performance that they use to manage agencies and programs more effectively, redirecting resources and making adjustments in operations and service delivery systems to produce better results. And the performance data generated by measurement systems provide information to elected officials that can be useful in setting goals and priorities, making macrolevel budget decisions, and holding public agencies and managers accountable.
In the nonprofit sector as well, outcome measures can help agencies improve services and overall program effectiveness, increase accountability, guide managers in allocating resources, and help funding organizations make better decisions. At an operational level, performance measures provide feedback to staff, focus board members on policy and programmatic issues, identify needs for training and technical assistance, pinpoint service units and participant groups that need attention, compare alternative service delivery strategies, identify potential partners for collaboration, recruit volunteers, attract customers, set targets for future performance, and improve an agency's public image (Plantz, Greenway, & Hendricks, 1997).
Behn (2003, p. 586) counts eight purposes for performance measurement but contends that one of the eight, fostering improvement, is “the core purpose behind the other seven.” Those other seven—evaluate, control, budget, motivate, promote, celebrate, and learn—are means to the desired end and core purpose, which is to improve.
In studying CitiStat programs, Behn (2006) found that to foster improvement in performance, the meetings had to produce decisions and commitments. But without follow-up, those decisions and commitments did not significantly improve agency operations.
The follow-up needs to have five critical traits:
Fostering improvement benefits from providing incentives. For example, Hatry (2006) recommends the use of rewards in performance contracts—rewards for meeting or exceeding targets and reductions in fees for failing to meet targets. The Food and Nutrition Service uses a variety of cash and nonmonetary awards to reinforce its desired results. It has unique internal peer and spirit awards, certificates of merit, spot awards, and time-off awards. In its quarterly newsletter, the region lists the names of award recipients for the quarter and includes information about each recipient and his or her contribution to performance (https://www.opm.gov/policy-data-oversight/performance-management/reference-materials/more-topics/what-a-difference-effective-performance-management-makes/).
But success does not come easily. It is a grand understatement to say that designing and implementing a performance measurement system in a public or nonprofit agency is a challenging process. Obviously the technical aspects of identifying appropriate performance criteria, defining valid and reliable indicators that are resistant to goal displacement and gaming, deciding on useful comparisons and reporting formats, and developing workable software support present many challenges from a methodological perspective, and these are largely the focus of this book. However, installing a system in an organization and building commitment to it, using it effectively on an ongoing basis, and embedding it in other management and decision-making processes present an even more daunting challenge.
Many governmental agencies and nonprofit organizations have set out to design a performance monitoring system but abandoned the effort before it was completed, or they completed the design but failed to move on to the implementation stage; still others have gone through the motions of installing a system but to no avail. Sometimes a promising measurement system is implemented in an agency but fails to take hold or be used in any meaningful way; then it may be maintained in a halfhearted way or be abandoned at some point. Still others are installed and maintained, but they never make a significant contribution to improved management, decision making, or performance.
Moynihan (2009) also points out that in contrast to purposeful use of the data by public managers and decision makers to improve performance, some actors in these performance regimes are likely to make passive use of performance data—minimally complying with procedural requirements without actually working to improve performance—or political use or even perverse use of performance information to support self-serving interests, which might actually work against achieving the kinds of results a program is intended to produce.
There are numerous reasons that these things happen. In some cases, the measurement system as designed does not meet the needs of the managers it is intended to serve. Or implementing the system and maintaining it on an ongoing basis may consume too much time and too many resources, and the information the system provides is not viewed as being worth the effort. There may be considerable resistance from within the organization to a new system, and the resulting lack of support and cooperation may stifle its effective implementation. Sometimes such systems wither before they get off the ground for lack of champions who can build support for them and persistently guide the organization through the process of system design and implementation.
In yet other cases, public programs do not lend themselves readily to performance measurement (Bouckaert & Balk, 1991; Leeuw, 2000; Radin, 2006) because of complexity, goal ambiguity, or outputs and outcomes that are difficult to measure and control (Jennings & Haist, 2004). The problem may also lie with the choice of measures: they may not generate useful information, may lack validity and reliability, may not be actionable, may focus on past performance and therefore lack timeliness, or may be descriptive rather than prescriptive (Hatry, 2006; Heinrich, 2007; Marr, 2012; Poister, 2003).
Resource constraints also often present barriers to performance improvement (Boyne, 2003) by “limiting training and development activities, discouraging innovation and experimentation with newer programmatic approaches, or preventing implementation of new strategies aimed at generating increased outcomes” (Poister et al., 2013, p. 626). Finally, an organization may not have the kinds of performance-oriented management systems, such as performance budgeting processes, performance contracting or grant management, process improvement, or customer service processes, that may be required to make meaningful use of the performance data (Poister et al., 2013).
Installing a performance measurement system and embedding it in management processes involve bringing about organizational change, and this can be difficult. Even technically sound systems may face substantial problems in effective implementation. Obviously successful design and implementation will not occur automatically, but several factors can elevate the probability of success significantly. Best practices among both public agencies and private firms reveal the following ingredients for successful performance measurement programs:
In working to build these elements of success into a performance measurement program, public and nonprofit managers should (1) ensure strong leadership and support for the effort by involving a variety of stakeholders in developing the system, (2) follow a deliberate process in designing and implementing it, and (3) use project management tools to keep the process on track and produce a suitable measurement system.
In a small agency, a performance measurement system could conceivably be designed and implemented by a single individual, but this approach is not likely to produce a workable system in most cases. A wide variety of stakeholders usually have an interest in, and may well be affected by, a performance measurement system—for example:
Stakeholders in the Performance Measurement Process

| Governmental agencies | Nonprofit organizations |
| --- | --- |
| Agency or program managers and staff | Agency or program managers and staff |
| Employees | Employees |
| Labor unions | Volunteers |
| Contractors, grantees, and suppliers | Contractors, grantees, and suppliers |
| Elected officials | Governing board members |
| Clients and customers | Clients and customers |
| Advocacy groups | Advocacy groups |
| Other governmental units | Local chapters |
| Citizens and community organizations | Community organizations and the public |
| Funding organizations | Funding organizations |
| Management analysts and data specialists | Management analysts and data specialists |
Including at least some of these stakeholders in the design and implementation process will have two big advantages. First, they will raise issues and make suggestions that might not otherwise surface, and ultimately this will result in a better system. Second, because they have had a chance to participate in the process, voice their concerns, and help shape a system that serves their needs or at least is sensitive to issues that are important to them, they will be more likely to support the system that emerges. Thus, although it may be somewhat more cumbersome and time consuming, involving a variety of stakeholders in the process is likely to produce a more effective system and build ownership for that system along the way.
In a public or nonprofit organization of any size and complexity, therefore, it usually makes sense at the outset to form a working group to guide the process of designing and implementing a performance measurement system. Normally this group should be chaired by the top manager—the chief executive officer, agency head, division manager, or program director—of the organizational unit or program for which the system is being designed or another line or staff manager whom that individual delegates. Although the makeup of this working group, task force, or steering committee may vary, at a minimum it needs to include managers or staff from whatever agencies, subunits, or programs are to be covered by the performance measurement system. In the case of agencies or programs where service delivery is highly decentralized, it is advisable to include managers from field offices or local chapters in addition to those from the central office or headquarters. As Swiss (1991, p. 337) notes, a measurement system should be “designed to bring the most usable information to bear on the most pressing problems facing managers. Only the managers of each agency can say what their most pressing problems are and what kinds of information would be most useful in attacking them.” In addition, public agencies might well be advised to include an elected official or staff representative from the appropriate legislative body on the steering group; nonprofit agencies should include members of their governing boards in such a group.
The following are some other internal stakeholders who might be included in this steering group:
The steering committee also needs to have a resident measurement expert on board. In a large organization, this might be someone from a staff unit such as an office of planning and evaluation or a management analysis group. If such technical support is not available internally, this critical measurement expertise can be provided by an outside consultant, preferably one who is familiar with the agency or the program area in question.
In addition, it may be helpful to include external stakeholders in the steering group. With respect to programs that operate through the intergovernmental system, for example, representatives from sponsoring or grantee agencies, or from other agencies cooperating in program delivery, might make significant contributions. Private firms and nonprofits working as contractors in service delivery should perhaps also be included. Furthermore, it may be helpful to invite consumer groups or advocacy groups to participate on the steering committee to represent the customer's perspective or the field at large.
Finally, if it is anticipated that the performance measurement issues may be particularly difficult to work through or that the deliberations may be fairly contentious, it may be advisable to engage the services of a professionally trained facilitator to lead at least some of the group's meetings.
The primary role of the steering committee should be to guide the process of developing the measurement system through to a final design and then to oversee its implementation. As is true of any such group process, the members need to be both open-minded and committed to seeing it through to the successful implementation of an effective system.
A recommended process for designing and implementing a performance management system was discussed in chapter 2 and is presented here again. Although these steps and the sequence can be modified and tailored to fit the needs of a particular agency or program, all the tasks listed, with the exception of the optional pilot, are essential in order to achieve the goal of implementing and using an effective measurement system on an ongoing basis. Thus, very early on in its deliberations, the steering group should adopt, and perhaps further elaborate, an overall process for designing and implementing a performance measurement system like the one shown here.
Process for Designing and Implementing Performance Management Systems
Developing such systems can be an arduous undertaking, and it is easy to get bogged down in the details of data and specific indicators and to lose sight of what the effort is really about. Thus, having agreed on the overall design and implementation process can help members of the steering group keep the big picture in mind and track their own progress along the way. It will also help them think ahead to next steps—to anticipate issues that might arise and prepare to deal with them beforehand. Along these lines, one of the most important steps in this process is the first one: clarifying the purpose and scope of the measurement system to be developed.
Clearly identifying the purpose of a particular system (for example, tracking the agency's progress in implementing strategic initiatives, as opposed to monitoring the effectiveness of a particular program or measuring workforce productivity on an ongoing basis) establishes a clear target that can then be used to discipline the process as the committee moves through it. In other words, for the steering group to work through the process deliberately and thus accomplish its objective more efficiently and effectively, it would do well to ask continually whether undertaking certain steps or approaching individual tasks in a particular way will advance its objective of developing a measurement system to serve this specific, clearly established purpose.
A clearly identified purpose will help the steering committee manage the design and implementation process as a project, using standard project management tools for scheduling work, assigning responsibilities, and tracking progress. Although in certain cases it may be possible to get a system up and running in fairly short order, more often it will take a year or two to design and implement a new system, and more complex systems may require three or four years to move into full-scale operation, especially if a pilot is to be conducted. This is a complicated process, and over the course of that period, the steering group (or some subgroup or other entity) will have to develop several products, including the following:
The committee will also conduct and evaluate the pilot, if necessary, and be responsible for at least early-stage evaluation and possible modification of the full-scale system once it is being used. It usually helps to sketch a rough schedule of the overall process over a year or multiyear period, stating approximate due dates when each of these products, or deliverables, will be completed. Although the schedule may change substantially along the way, thinking it through will give the steering group a clearer idea of what the process will involve and help establish realistic expectations about what will be accomplished by when.
Managing the project also entails detailing the scope of work by defining specific tasks and subtasks to be completed. The steering group might elaborate the entire scope of work at the outset, partly in the interest of developing a more realistic schedule; alternatively, it may detail the tasks one step at a time, projecting a rough schedule on the basis of only a general idea of what will be involved at each step. Detailing the project plan sooner rather than later is advantageous in that it will help clarify what resources, what expertise, what levels of effort, and what other commitments will be necessary in order to design and implement the system, again developing a more realistic set of expectations about what is involved in this process.
The project management approach also calls for assigning responsibilities for leading and supporting each step in the process. It may be that the steering group decides to conduct all the work by committee as a whole, but it might well decide on a division of labor whereby various individuals or subgroups take lead responsibility for different tasks. In addition, some individual or work unit may be assigned responsibility for staffing the project and doing the bulk of the detailed work between committee meetings. Furthermore, the steering group may decide to work through subcommittees or involve additional stakeholders in various parts of the process along the way. Typically the number of participants grows as the project moves forward and particular kinds of expertise are required at different points along the way, and a number of working groups may spin off the core steering committee in order to get the work done more efficiently and effectively. A further advantage of involving more participants along the way is that they may serve as “envoys” back to the organizational units or outside groups they represent and thus help build support for the system.
Finally, project management calls for monitoring activities and tracking progress in the design and implementation process along the way. This is usually accomplished by getting reports from working groups or subcommittees and comparing progress against the established schedule. It also means evaluating the process and deliverables produced, noting problems, and making adjustments as appropriate. The overall approach should be somewhat pragmatic, especially when members of the steering group have little experience in developing such systems, and no one should be surprised to have to make adjustments in the scope of work, schedule, and assignments as the group moves through the process. Nevertheless, managing the overall effort as a project from beginning to end will help the steering committee keep the process on track and work in a more deliberate manner to install an effective measurement system.
Software is available to track projects. For example, Microsoft Project tracks projects with due dates and the person responsible. The main modules of Microsoft Project are project work and project teams, schedules, and finances. It allows its users to create projects, track tasks, and report results.
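For groups without access to dedicated project management software, the core of this tracking discipline—deliverables, due dates, responsible parties, and overdue flags—can be sketched very simply. The deliverables, dates, and owners below are hypothetical illustrations, not a prescribed schedule.

```python
# Minimal sketch of deliverable tracking for a measurement-system project.
# All deliverables, owners, and dates are hypothetical.
from datetime import date

deliverables = [
    {"name": "Purpose and scope statement", "owner": "Steering committee",
     "due": date(2024, 3, 1), "done": True},
    {"name": "Draft set of performance measures", "owner": "Measurement subgroup",
     "due": date(2024, 6, 15), "done": False},
    {"name": "Pilot evaluation report", "owner": "Analysis staff",
     "due": date(2024, 10, 1), "done": False},
]

def status_report(items, today):
    """Flag overdue deliverables and summarize overall progress."""
    overdue = [d["name"] for d in items if not d["done"] and d["due"] < today]
    completed = sum(d["done"] for d in items)
    return {"completed": completed, "total": len(items), "overdue": overdue}

report = status_report(deliverables, today=date(2024, 7, 1))
print(f"{report['completed']}/{report['total']} deliverables complete")
for name in report["overdue"]:
    print("overdue:", name)
```

Even a lightweight report like this, reviewed at each steering committee meeting, supports the monitoring and adjustment cycle described above.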
As the work of governments and nonprofits continues to involve multiple partners or networks, thought must be given to managing the performance of these networks and collaborations. Milward and Provan (2006) describe four uses for performance management in networks: service implementation, information diffusion, problem solving, and community capacity building. Understanding the kinds of networks and the role of leaders within them is essential to developing a performance management system. For example, managing accountability at the network level includes determining who is responsible for which outcomes, whereas accountability for those delivering services within a network means monitoring one's own agency's involvement and accomplishments.
The Government Accountability Office (2014) studied best practices in collaborative approaches in the federal government and identified the key considerations and implementation approaches shown in table 15.1.
Table 15.1 Implementing Interagency Collaborations
Source: US Government Accountability Office (2014, p. 1).
| Key Considerations for Implementing Interagency Collaborative Mechanisms | Implementation Approaches from Select Interagency Groups |
| --- | --- |
| Outcomes | |
| Accountability | |
| Leadership | |
| Resources | |
In setting up a system that involves collaborations, whether intergovernmental, with private contractors, or with nonprofits, it will be useful to plan for measuring these joint accomplishments. The key considerations and implementation approaches identified by the Government Accountability Office for interagency collaborations are a useful starting point.
Structuring the design and implementation effort with committee oversight, using a deliberate process, and using project management tools constitutes a rational approach to installing an effective measurement system, but by no means does this approach guarantee success. Implementing any new management system is an exercise in managing change, and a performance measurement system is no different. This places the challenge of designing and implementing a measurement system outside a purely technical sphere and in the realm of managing people, culture, organizations, and relationships. Indeed, research finds that even though decisions by public organizations to adopt measurement systems tend to be based on technical and analytical criteria, the ways in which systems are implemented are influenced more strongly by political and cultural factors (de Lancer Julnes & Holzer, 2001).
Clearly both technical and managerial issues are important in designing and implementing performance measurement systems. Proponents and observers of performance measurement in government have noted a number of problems in implementing such systems and proposed strategies to overcome them (Kamensky & Fountain, 2008; Poister, 2003; Aristigueta & Zarook, 2011; Hatry, 1999, 2002, 2006; Kassoff, 2001; Wholey, 2002). Others have summarized lessons learned by nonprofit agencies in developing measurement systems and made suggestions for ensuring success in implementing such systems in the nonprofit sector (Plantz, Greenway, & Hendricks, 1997; Sawhill & Williamson, 2001; Newcomer, 2008). Ammons and Rivenbark (2008, p. 307) suggest three factors that effectively encourage the productive use of performance information: “the collection of and reliance on higher-order measures—that is, outcome measures (effectiveness) and especially measures of efficiency—rather than simply output measures (workload); the willingness of officials to embrace comparison with other governments or service producers; and the incorporation of performance measures into key management systems.”
Although the process of developing performance measurement systems is similar for both public and nonprofit organizations, such efforts may be even more challenging for nonprofit managers because of several factors:
At the same time, nonprofit agencies may enjoy one advantage: because most are governed by boards of directors that focus more closely on the work of their particular agencies than legislatures do on individual public agencies, it may be easier for them to align the expectations of managers and the governing body regarding performance and to build meaningful commitment to using the measurement system.
Despite these differences between public and nonprofit agencies, for the most part both face similar kinds of issues in developing a measurement system, including problems concerning the information produced, the time and effort required to implement and support the system, the lack of subsequent use of the measurement system by managers and decision makers, the lack of stakeholder support for it, internal resistance to it, undesirable consequences that might arise from putting certain measures in place, and possible abuses of such a system. Thus, this concluding section presents thirty strategies that address these problems and help ensure the successful design and implementation of performance measurement systems in both public and nonprofit agencies.
Performance management systems will be used only if they provide worthwhile information to managers and decision makers, but many systems do not provide relevant and useful information. Sometimes they are simply not well conceived in terms of focusing on the kinds of results that are of concern to managers. If, for example, the measures are not consistent with an agency's strategic agenda, they are unlikely to be relevant to managers. In other cases, measures are selected on the basis of what data are readily available, but this approach rarely provides decision makers with a well-rounded picture of program performance. To ensure that measurement systems do provide relevant information that will help manage agencies and programs more effectively, those who commission measurement systems as well as those who take the lead in designing them should be sure to
Performance management systems may require too much time and effort, especially when they call for original data collection instruments, new data collection procedures, or substantial data input from the field. Measurement systems are not free, and they should be viewed as an investment of resources that is expected to generate a worthwhile payoff. The objective is to develop a system that is cost-effective, but at the beginning of the development process, system planners often underestimate the time, effort, and expenditures required, and the actual costs can turn out to be much greater than expected. This leads to frustration and can result in systems whose benefit is not worth the cost. To avoid this situation, system planners should take the following into consideration:
Even when they are relevant, performance measures can be ignored. They will not be used automatically. Although in some cases this is due to a lack of interest or outright resistance on the part of managers, it may also result from poor system design. Managers often feel overwhelmed by systems that include too many measures and seem to be unnecessarily complex. Another problem is that some systems track appropriate measures but do a poor job of presenting the performance data in ways that are understandable, interesting, and convincing. More generally, some systems simply are not designed to serve the purpose for which they were intended. The following guidelines are aimed at maximizing the useful content of performance data:
A wide variety of stakeholders have interests in performance measurement systems, and the perceived legitimacy of a system depends in large part on the extent to which these stakeholders buy into it. If stakeholders fail to buy into a measurement system because they do not believe that the measures are meaningful, that the data are reliable, or that the results are being used appropriately, the system will lose credibility. It will then be less effective in influencing efforts to improve performance or in demonstrating accountability. Thus, in developing a measurement system, the agency should
Managers and employees may resist the implementation of performance measures because they feel threatened by them. Employees often view performance monitoring systems as “speed-up” systems intended to force them to work harder or allow the organization to reduce the workforce. Middle-level managers may see such systems as attempts to put increased pressure on them to produce added results and hold them accountable for standards beyond their control. Even higher-level managers may resist the implementation of measurement systems if they perceive them as efforts to force them to give up authority to those above and below them. Because the success of measurement systems depends on the cooperation of managers at all levels, and sometimes of rank-and-file employees as well, in feeding data to the system and working to register improvement on the measures, avoiding or overcoming this kind of internal resistance is critical. Thus, administrators wanting to install measurement systems should
Performance measurement systems can encourage undesirable behavior. As discussed in chapter 5, unbalanced sets of measures can focus undue attention on some performance criteria to the detriment of others, producing undesirable consequences. When managers and employees strive to perform well on less-than-optimal measures while ignoring other, more important goals because those goals are not reflected in the measures, goal displacement occurs and overall performance suffers. In other instances, performance standards or incentives are so poorly specified that they allow certain entities to game the system, looking good on the measures without really achieving the underlying goals. Thus, in designing performance measurement systems, it is important to
Performance measurement systems can also be abused. Data indicating suboptimal performance, for example, can be used to penalize managers and staff unfairly, and performance data in general can be used to reward or penalize certain managers and employees on a selective basis. Or, less blatantly, authoritarian-style managers can use performance measures and the added power they provide over employees to micromanage their units even more closely, in ways that are unpleasant for the employees and counterproductive overall. In order to avoid such problems, higher-level managers should
At this point in its evolution, performance management appears to be here to stay, as the work of government becomes more complex and the delivery of services involves multiple stakeholders. The following reflects the changes we expect in the performance management landscape:
The performance orientation of public management is here to stay. It is essential for successful government. Societies are now too complex to be managed only by rules of input and process and a public-spirited culture. The performance movement has increased formalized planning, reporting and control across many governments. This has improved the information available to managers and policy makers. But experience shows that this can risk leading to a new form of bureaucratic sclerosis. More attention needs to be given to keeping performance transactions costs in check, and to making optimal use of social and internalized motivators and controls. (Organisation for Economic Co-operation and Development, 2005, p. 81)
Performance management is essential to managing for results in government and nonprofit organizations. Although measurement can aid greatly in the quest to maintain and improve performance, it is by no means a panacea. High-quality performance measures can provide managers and policymakers with valid, reliable, and timely information on how well or how poorly a given program is performing, but then it is up to those managers and policymakers to respond deliberately and effectively to improve performance.
Clearly the time for performance management in the public and nonprofit sectors has arrived, and agencies are installing new measurement systems and fine-tuning existing systems on an ongoing basis. Yet a substantial amount of skepticism remains about both the feasibility and the utility of measurement systems (Radin, 2006; Moynihan, 2008), and numerous fallacies and misperceptions about the efficacy of performance measurement still prevail in the field (Ammons, 2002; Hatry, 2002, 2006). Nevertheless, tracking the results produced by public and nonprofit programs, including those delivered through collaborative networks, and using the information to improve performance as well as to provide accountability to higher-level authorities is a commonsense approach to management based on simple but irrefutable logic.
Although many public and nonprofit agencies have developed and implemented performance management systems in recent years solely in response to mandates from elected chief executives, legislative bodies, and governing boards, many of these systems have proved to be beneficial to the agencies themselves, and many public and nonprofit managers have become converts. We can expect to see efforts continue to proliferate along these lines, and that should be good news for those who are interested in promoting results-oriented management approaches. However, it must always be understood that performance measurement is a necessary but insufficient condition for results-oriented management or results-oriented government. For measurement to be useful, it must be effectively linked to other management and decision-making processes, as discussed in chapter 1 of this book.
Thus, public and nonprofit managers at all levels, in cooperation with elected officials and governing bodies, must build and use effective measurement systems as components that are carefully integrated into processes for strategic planning and management, operational planning, budgeting, performance management, quality and productivity improvement, and other purposes. Without strong linkages to such vital management and decision-making processes, performance measurement systems may generate information that is nice to know but will not lead to better decisions, improved performance, or more effective accountability and control, which are necessary to engage in performance management.
This book has dealt with a number of components of performance management systems from a technical design perspective, and this concluding chapter has discussed issues concerning the implementation of such systems from an organizational and managerial perspective, all with an eye to helping you install the most effective system you can. Yet you need to understand that difficulties abound in this area, that real challenges are likely to persist, and that the perfect measurement system doesn't exist. Although you should work to implement the best measurement system possible and address the kinds of problems discussed in this chapter, you will also need to make the necessary pragmatic trade-offs regarding system quality and usefulness versus cost and level of effort in order to install a workable, affordable, and effective measurement system. Although this may not produce the perfect system, it will clearly be preferable to the alternatives of not having a workable system or having no system at all.