CHAPTER FIFTEEN
Designing and Implementing Effective Management Systems

What is the best way to manage the process of designing and implementing a performance management system? Who should be involved in this process? How can you overcome the resistance that often builds up against such systems and build support for them instead? What are the most frequent problems that plague efforts to implement measurement systems, and how can you avoid or surmount them? This chapter discusses organization and management issues involved in designing and implementing performance management systems and suggests strategies for success in developing systems that are useful and cost-effective.

Expectations that performance management should contribute to higher levels of organizational performance and, in particular, better outcomes are almost universal among both proponents and critics of the performance movement (Ammons, 1995; Behn, 2003; Epstein, 1984; Halachmi & Bouckaert, 1996; Hatry, 2006; Heinrich & Marschke, 2010; Moynihan, 2008; Poister, 2003; Radin, 2006; Wholey & Hatry, 1992). While interest in performance management is largely predicated on the assumption that it will lead to performance improvement, empirical research testing this relationship has been relatively sparse to date. A few celebrated cases of effective performance management have been documented, such as CompStat in New York City (Bratton & Malinowski, 2008) and CitiStat in Baltimore (Behn, 2007), and a growing number of case studies demonstrate great potential for using performance information to manage for results (Sanger, 2008; Ammons & Rivenbark, 2008; Gore, 1993; Holzer & Yang, 2004; de Lancer Julnes, Berry, Aristigueta, & Yang, 2007; Barnow & Smith, 2004). For example, New York City's CompStat, a data-driven system, has been credited with effectively fighting crime (Smith & Bratton, 2001), and Baltimore's CitiStat, which expanded the concept to a wide array of municipal services and fostered program improvements, won the Kennedy School of Government's Innovations in American Government Award in 2004 (Behn, 2006). Similarly, the ongoing North Carolina municipal benchmarking program has been credited with enabling individual cities participating in the project to make significant improvements in efficiency and service delivery (Ammons & Rivenbark, 2008).

However, Ammons and Rivenbark (2008) contend that “most claims of performance measurement's value in influencing decisions and improving services tend to be broad and disappointingly vague,” and they point out that “hard evidence documenting performance measurement's impact on management decisions and service improvements is rare” (p. 305). Poister, Pasha, and Edwards (2013) state “the fairly meager empirical research to date on the impact of performance management practices has produced decidedly mixed results, with four of the eight studies employing cross-sectional analysis across comparable public organizations showing positive effects on performance. Therefore, the question of the effectiveness of performance measurement or management practices in actually improving performance is very much still at issue” (p. 628).

Beyond case studies, a small but growing stream of quantitative research has examined the impact of performance management–related approaches on performance itself, and that work has produced mixed results. Some of these studies have produced evidence that performance management can make a positive difference. The study by Poister et al. (2013) is a recent example finding evidence that both strategic planning and performance measurement, the principal components of performance management in public organizations, “do contribute to improved performance in small and medium-sized transit systems in the United States” (p. 632). Earlier, Walker, Damanpour, and Devece (2011) found that in English local government authorities, elements of performance management such as building ownership and understanding of organizational mission and goals, specifying appropriate performance measures and targets, devolving control to service managers, and taking corrective action when results deviate from plans led to higher core service performance scores constructed by the British Audit Commission. In the field of education, similar research has found positive effects of performance management practices on student test scores in England (Boyne & Chen, 2007), Denmark (Andersen, 2008), and the United States (Sun & Van Ryzin, 2014). On the other hand, a recent longitudinal review of a large number of US cities “at the forefront” of performance measurement activities found significant advancement in the reporting of performance data but little support for the proposition that it has been a catalyst for improved performance. Clearly, additional empirical research is needed to identify the kinds of contextual factors and implementation strategies that enable performance management systems to make a difference in government and nonprofit organizations.

Managing the Process

In designing and implementing an effective performance management system, we need to consider the elements that are key to its success. Wholey (1999), an early proponent of performance management, identified three essential elements of effective performance-based management:

  1. Developing a reasonable level of agreement among key stakeholders regarding agency mission, goals, and strategies
  2. Implementing performance measurement systems of sufficient quality
  3. Using performance information to improve program effectiveness, strengthen accountability, and support decision making

This chapter begins by discussing ways to manage the process and implement these essential elements.

Mission, Goals, and Strategies

The practice of strategic planning is often used to develop reasonable agreement among key stakeholders regarding agency mission, goals, and strategies. Strategic planning is concerned with optimizing the fit between an organization and the environment in which it operates, and, most important for public agencies, that means strengthening performance in providing services to the public (Poister et al., 2013). Goal setting drives performance and serves as a motivator because it diverts energy and attention away from goal-irrelevant activities toward goal-relevant efforts and energizes people to put forth greater effort to achieve these goals (Latham, 2004; Fried & Slowik, 2004). This is especially important in the public sector, where problems of goal ambiguity can impede performance (Chun & Rainey, 2005). Therefore, setting goals, objectives, or targets regarding organization or program performance will keep the organization focused on priorities, outcomes, and results and thereby facilitate improvements in performance (Ammons, 2008; Kelly, 2003; Poister, 2003; Van Dooren, Bouckaert, & Halligan, 2010). Strategic planning establishes goals and objectives, many of which are likely to be performance related, and creating and implementing strategies designed to achieve them is expected to lead to improved performance (Bryson, 2011; Niven, 2003; Poister, 2010; Walker et al., 2011). (This process is discussed in more detail in chapter 8 of this book.)

Quality Performance Measures

Good performance measures, particularly outcome measures, signal what the real priorities are, and they motivate people to work harder and smarter to accomplish organizational objectives. Tools like the balanced scorecard and logic models explored in chapter 8 help organizations decide on measures. For example, Neely and Bourne (2000) observed that as more organizations adopted the balanced scorecard, they discovered enormous value in the act of deciding what to measure.

Quality performance measures require a balance between the cost of data collection and the need to ensure that the data are complete, accurate, and consistent enough to document performance and support decision making at various organizational levels (Wholey, 2006). Technical quality requires that the measures be valid and reliable. Validity means that the performance indicators measure what is important to the recipients of the information, measure what they claim to measure, and are reported in a timely manner. Reliability requires consistency: repeated measurements yield similar results.

Sterck and Bouckaert (2008) find that the major challenges to collecting quality performance measures arise when (1) objectives are not specific and measurable enough; (2) there is high dependence on secondary data collection; (3) there is no established causal relationship between inputs, activities, outputs, and outcomes; (4) information on unit costs is missing; (5) difficulties arise in consolidating information; (6) data collection practices are expensive; and (7) time lags arise between inputs, activities, outputs, and outcomes. To alleviate these challenges, quality standards should be established, data quality monitored, and problems in data quality addressed. These safeguards correspond to the second element on Wholey's (1999) list in the previous section: implementing performance measurement systems of sufficient quality.
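
Much of the routine data quality monitoring described above can be partially automated. The following Python sketch is offered as an illustration only: it checks a batch of hypothetical monthly reports for completeness, out-of-range values, and duplicate submissions. The record structure, field names, and valid ranges are assumptions invented for this example, not a standard drawn from any particular agency's system.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PerformanceRecord:
        unit: str                      # reporting unit, e.g., a field office
        month: str                     # reporting period, e.g., "2024-01"
        clients_served: Optional[int]  # output measure
        outcome_rate: Optional[float]  # outcome measure, expected between 0 and 1

    def audit(records):
        """Flag missing values, out-of-range values, and duplicate submissions."""
        problems, seen = [], set()
        for r in records:
            key = (r.unit, r.month)
            if key in seen:
                problems.append(f"duplicate report: {r.unit} {r.month}")
            seen.add(key)
            if r.clients_served is None or r.outcome_rate is None:
                problems.append(f"incomplete report: {r.unit} {r.month}")
            elif not 0.0 <= r.outcome_rate <= 1.0:
                problems.append(f"outcome rate out of range: {r.unit} {r.month}")
        return problems

    sample = [
        PerformanceRecord("North Office", "2024-01", 120, 0.83),
        PerformanceRecord("North Office", "2024-01", 120, 0.83),   # duplicate entry
        PerformanceRecord("South Office", "2024-01", None, 0.74),  # missing output
    ]
    for problem in audit(sample):
        print(problem)

Checks of this kind do not replace quality audits that trace data back to source documents, but they can catch routine reporting problems cheaply and early.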

Measurement systems provide managers and decision makers with information regarding performance that they use to manage agencies and programs more effectively, redirecting resources and making adjustments in operations and service delivery systems to produce better results. And the performance data generated by measurement systems provide information to elected officials that can be useful in setting goals and priorities, making macrolevel budget decisions, and holding public agencies and managers accountable.

In the nonprofit sector as well, outcome measures can help agencies improve services and overall program effectiveness, increase accountability, guide managers in allocating resources, and help funding organizations make better decisions. At an operational level, performance measures provide feedback to staff, focus board members on policy and programmatic issues, identify needs for training and technical assistance, pinpoint service units and participant groups that need attention, compare alternative service delivery strategies, identify potential partners for collaboration, recruit volunteers, attract customers, set targets for future performance, and improve an agency's public image (Plantz, Greenway, & Hendricks, 1997).

Program Improvement

Behn (2003, p. 586) counts eight purposes for performance measurement but contends that one of the eight, fostering improvement, is “the core purpose behind the other seven.” Those other seven—evaluate, control, budget, motivate, promote, celebrate, and learn—are means to the desired end and core purpose, which is to improve.

In studying the CitiStat programs, Behn (2006) found that to foster an improvement in performance, the meetings had to produce some decisions and some commitments. But unless there was follow-up, those decisions and commitments did not significantly improve agency operations.

The follow-up needs to have five critical traits (a minimal tracking sketch in code follows the list):

  1. Subsequent assignments phrased as performance targets with due dates
  2. Assignments made by the authority figure running the meeting
  3. Assignments communicated at the meeting, possibly supplemented in writing
  4. Preparation by agencies for CitiStat meetings with AgencyStat
  5. Review of the results of the assignment at the next CitiStat meeting
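
As promised above, here is one minimal way such follow-up could be tracked in code. The sketch is illustrative only; the Assignment structure and the sample entries are assumptions, not Behn's specification, but they capture the first, second, and fifth traits: assignments phrased as performance targets with due dates, a named assigning authority, and review at the next meeting.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Assignment:
        agency: str          # agency receiving the assignment
        target: str          # assignment phrased as a performance target
        due: date            # due date attached to the target
        assigned_by: str     # authority figure who made the assignment
        completed: bool = False

    def review_agenda(assignments, meeting_date):
        """List open assignments due for review at the given meeting."""
        return [a for a in assignments
                if not a.completed and a.due <= meeting_date]

    log = [
        Assignment("Public Works", "Cut pothole repair backlog below 100",
                   date(2024, 3, 1), "Mayor's Office"),
        Assignment("Sanitation", "Raise on-time pickup rate to 95 percent",
                   date(2024, 4, 1), "Mayor's Office"),
    ]
    for a in review_agenda(log, date(2024, 3, 15)):
        print(f"Review at this meeting: {a.agency} - {a.target}")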

Fostering improvement benefits from providing incentives. For example, Hatry (2006) recommends the use of rewards in performance contracts: rewards for meeting or exceeding targets and reductions in fees for failing to meet them. The Food and Nutrition Service uses a variety of cash and nonmonetary awards to reinforce its desired results, including internal peer and spirit awards, certificates of merit, spot awards, and time-off awards. In its quarterly newsletter, the region lists the names of award recipients for the quarter and includes information about each recipient and his or her contribution to performance (https://www.opm.gov/policy-data-oversight/performance-management/reference-materials/more-topics/what-a-difference-effective-performance-management-makes/).

Barriers to Success

But success does not come easily. It is a grand understatement to say that designing and implementing a performance measurement system in a public or nonprofit agency is a challenging process. Obviously the technical aspects of identifying appropriate performance criteria, defining valid and reliable indicators that are resistant to goal displacement and gaming, deciding on useful comparisons and reporting formats, and developing workable software support present many challenges from a methodological perspective, and these are largely the focus of this book. However, installing a system in an organization and building commitment to it, using it effectively on an ongoing basis, and embedding it in other management and decision-making processes present an even more daunting challenge.

Many governmental agencies and nonprofit organizations have set out to design a performance monitoring system but abandoned the effort before it was completed, or they completed the design but failed to move on to the implementation stage; still others have gone through the motions of installing a system but to no avail. Sometimes a promising measurement system is implemented in an agency but fails to take hold or be used in any meaningful way; then it may be maintained in a halfhearted way or be abandoned at some point. Still others are installed and maintained, but they never make a significant contribution to improved management, decision making, or performance.

Moynihan (2009) also points out that, in contrast to purposeful use of the data by public managers and decision makers to improve performance, some actors in these performance regimes are likely to make passive use of performance data, minimally complying with procedural requirements without actually working to improve performance. Others may make political or even perverse use of performance information to support self-serving interests, which might actually work against achieving the kinds of results a program is intended to produce.

There are numerous reasons that these things happen. In some cases, the measurement system as designed does not meet the needs of the managers it is intended to serve. Or implementing the system and maintaining it on an ongoing basis may consume too much time and too many resources, and the information the system provides is not viewed as being worth the effort. There may be considerable resistance from within the organization to a new system, and the resulting lack of support and cooperation may stifle its effective implementation. Sometimes such systems wither before they get off the ground for lack of champions who can build support for them and persistently guide the organization through the process of system design and implementation.

In yet other cases, public programs do not lend themselves readily to performance measurement (Bouckaert & Balk, 1991; Leeuw, 2000; Radin, 2006) because of complexity, goal ambiguity, or outputs and outcomes that are difficult to measure and difficult to control (Jennings & Haist, 2004). The problem may also lie with the choice of measures: they may not generate useful information, may lack validity and reliability, may not be actionable, may focus on past performance and therefore lack timeliness, or may be descriptive rather than prescriptive (Hatry, 2006; Heinrich, 2007; Marr, 2012; Poister, 2003).

Resource constraints also often present barriers to performance improvement (Boyne, 2003) by “limiting training and development activities, discouraging innovation and experimentation with newer programmatic approaches, or preventing implementation of new strategies aimed at generating increased outcomes” (Poister et al., 2013, p. 626). Finally, an organization may not have the kinds of performance-oriented management systems, such as performance budgeting processes, performance contracting or grant management, process improvement, or customer service processes, that may be required to make meaningful use of the performance data (Poister et al., 2013).

Elements of Success

Installing a performance measurement system and embedding it in management processes involve bringing about organizational change, and this can be difficult. Even technically sound systems may face substantial problems in effective implementation. Obviously successful design and implementation will not occur automatically, but several factors can elevate the probability of success significantly. Best practices among both public agencies and private firms reveal the following ingredients for successful performance measurement programs:

  • Leadership is critical in designing, deploying, and maintaining effective performance measurement and management systems. Clear, consistent, and visible involvement by senior executives and managers is a necessary part of successful performance measurement and management systems.
  • A conceptual framework is needed for the performance measurement and management system. Every organization needs a clear and cohesive performance measurement framework that is understood by all levels of the organization and that supports objectives and the collection of results.
  • Effective internal and external communication and participation of key stakeholders are the keys to successful performance measurement. Effective communication with employees, process owners, customers, and stakeholders is vital to the successful development and deployment of performance measurement and management systems.
  • Performance measures should be developed primarily by the program that will be responsible for the data, with input from upper management. Input and feedback should also be obtained from citizens, customers, and other stakeholders to make sure the measures include those of importance to these groups.
  • Agencies and their programs need to track and distinguish types of measures. The logic models and the balanced scorecard discussed throughout this book are useful techniques for this task.
  • Accountability for results must be clearly assigned and well understood. High-performance organizations make sure that all managers and employees understand what they are responsible for in achieving organizational goals.
  • Data processing and analytic support are critical, and data processing personnel should be brought into the planning stages early. Data must be analyzed by knowledgeable individuals to ensure their reliability and validity.
  • Performance measurement systems must provide intelligence for decision makers, not just compile data. Measures should be limited to those that relate to strategic goals and objectives and yield timely, relevant, and concise information that decision makers at all levels can use to assess progress in achieving goals.
  • Compensation, rewards, and recognition should be linked to performance measurements. Such a linkage sends a clear and unambiguous message to the organization as to what is important.
  • Performance measurement systems should be positive, not punitive. The most successful measurement systems are not “gotcha” systems, but rather are learning systems that help identify what works and what does not so as to continue with and improve what is working and repair or replace what is not working.
  • Results and progress toward program commitments should be openly shared with employees, customers, and stakeholders.

In working to build these elements of success into a performance measurement program, public and nonprofit managers should (1) ensure strong leadership and support for the effort by involving a variety of stakeholders in developing the system, (2) follow a deliberate process in designing and implementing it, and (3) use project management tools to keep the process on track and produce a suitable measurement system.

Leadership and Stakeholder Support and Involvement

In a small agency, a performance measurement system could conceivably be designed and implemented by a single individual, but this approach is not likely to produce a workable system in most cases. A wide variety of stakeholders usually have an interest in, and may well be affected by, a performance measurement system—for example:

Stakeholders in the Performance Measurement Process
Governmental agencies                       Nonprofit organizations
Agency or program managers and staff        Agency or program managers and staff
Employees                                   Employees
Labor unions                                Volunteers
Contractors, grantees, and suppliers        Contractors, grantees, and suppliers
Elected officials                           Governing board members
Clients and customers                       Clients and customers
Advocacy groups                             Advocacy groups
Other governmental units                    Local chapters
Citizens and community organizations        Community organizations and the public
Funding organizations                       Funding organizations
Management analysts and data specialists    Management analysts and data specialists

Including at least some of these stakeholders in the design and implementation process will have two big advantages. First, they will raise issues and make suggestions that might not otherwise surface, and ultimately this will result in a better system. Second, because they have had a chance to participate in the process, voice their concerns, and help shape a system that serves their needs or at least is sensitive to issues that are important to them, they will be more likely to support the system that emerges. Thus, although it may be somewhat more cumbersome and time consuming, involving a variety of stakeholders in the process is likely to produce a more effective system and build ownership for that system along the way.

In a public or nonprofit organization of any size and complexity, therefore, it usually makes sense at the outset to form a working group to guide the process of designing and implementing a performance measurement system. Normally this group should be chaired by the top manager—the chief executive officer, agency head, division manager, or program director—of the organizational unit or program for which the system is being designed, or by another line or staff manager to whom that individual delegates this role. Although the makeup of this working group, task force, or steering committee may vary, at a minimum it needs to include managers or staff from whatever agencies, subunits, or programs are to be covered by the performance measurement system. In the case of agencies or programs where service delivery is highly decentralized, it is advisable to include managers from field offices or local chapters in addition to those from the central office or headquarters. As Swiss (1991, p. 337) notes, a measurement system should be “designed to bring the most usable information to bear on the most pressing problems facing managers. Only the managers of each agency can say what their most pressing problems are and what kinds of information would be most useful in attacking them.” In addition, public agencies might well be advised to include an elected official or staff representative from the appropriate legislative body on the steering group; nonprofit agencies should include members of their governing boards in such a group.

The following are some other internal stakeholders who might be included in this steering group:

  • A representative of the central executive office (e.g., the city manager's office, the secretary or commissioner's office)
  • Representatives from central office administrative or support units, such as the budget office, the personnel department, or a quality or productivity center
  • A systems person who is knowledgeable about information processing and the agency's existing systems
  • A representative from the labor union if the employees are unionized

The steering committee also needs to have a resident measurement expert on board. In a large organization, this might be someone from a staff unit such as an office of planning and evaluation or a management analysis group. If such technical support is not available internally, this critical measurement expertise can be provided by an outside consultant, preferably one who is familiar with the agency or the program area in question.

In addition, it may be helpful to include external stakeholders in the steering group. With respect to programs that operate through the intergovernmental system, for example, representatives from either sponsoring or grantee agencies, or other agencies cooperating in program delivery, might make significant contributions. Private firms or nonprofits working as contractors in service delivery should perhaps also be included. Furthermore, it may be helpful to invite consumer groups or advocacy groups to participate on the steering committee to represent the customer's perspective or the field at large.

Finally, if it is anticipated that the performance measurement issues may be particularly difficult to work through or that the deliberations may be fairly contentious, it may be advisable to engage the services of a professionally trained facilitator to lead at least some of the group's meetings.

The primary role of the steering committee should be to guide the process of developing the measurement system through to a final design and then to oversee its implementation. As is true of any such group process, the members need to be both open-minded and committed to seeing it through to the successful implementation of an effective system.

Deliberate Process

A recommended process for designing and implementing a performance management system was discussed in chapter 2 and is presented here again. Although these steps and the sequence can be modified and tailored to fit the needs of a particular agency or program, all the tasks listed, with the exception of the optional pilot, are essential in order to achieve the goal of implementing and using an effective measurement system on an ongoing basis. Thus, very early on in its deliberations, the steering group should adopt, and perhaps further elaborate, an overall process for designing and implementing a performance measurement system like the one shown here.

Process for Designing and Implementing Performance Management Systems

  1. Clarify the purpose of the system.
  2. Assess organizational readiness.
  3. Identify relevant external stakeholders.
  4. Organize the system development process.
  5. Identify key purposes and parameters for initiating performance management.
  6. Define the components of the performance management system, performance criteria, and use.
  7. Define, evaluate, and select indicators.
  8. Develop data collection procedures.
  9. Specify the system design.
    • Identify reporting frequencies and channels.
    • Determine analytical and reporting formats.
    • Develop software applications.
    • Assign responsibilities for maintaining the system.
  10. Conduct a pilot if necessary.
  11. Implement the full-scale system.
  12. Use, modify, and evaluate the system.
  13. Share the results with stakeholders.

Developing such systems can be an arduous undertaking, and it is easy to get bogged down in the details of data and specific indicators and to lose sight of what the effort is really about. Thus, having agreed on the overall design and implementation process can help members of the steering group keep the big picture in mind and track their own progress along the way. It will also help them think ahead to next steps—to anticipate issues that might arise and prepare to deal with them beforehand. Along these lines, one of the most important steps in this process is the first one: clarifying the purpose and scope of the measurement system to be developed.

Clearly identifying the purpose of a particular system—for example, tracking the agency's progress in implementing strategic initiatives, as opposed to monitoring the effectiveness of a particular program or measuring workforce productivity on an ongoing basis—establishes a clear target that can then be used to discipline the process as the committee moves through it. In other words, for the steering group to work through the process deliberately and thus accomplish its objective more efficiently and effectively, it would do well to ask continually whether undertaking certain steps or approaching individual tasks in a particular way will advance its objective of developing a measurement system to serve this specific, clearly established purpose.

Project Management

A clearly identified purpose will help the steering committee manage the design and implementation process as a project, using standard project management tools for scheduling work, assigning responsibilities, and tracking progress. Although in certain cases it may be possible to get a system up and running in fairly short order, more often it will take a year or two to design and implement a new system, and more complex systems may require three or four years to move into full-scale operation, especially if a pilot is to be conducted. This is a complicated process, and over the course of that period, the steering group (or some subgroup or other entity) will have to develop several products, including the following:

  • A clear statement of the scope and purpose of the measurement system
  • A description of the performance criteria to be captured by the system
  • Definitions of each measure to be incorporated in the system and documentation of constituent elements, data sources, and computations
  • Documentation of data collection procedures
  • A plan for ensuring the quality and integrity of the data
  • A plan for reporting particular results to specified audiences at certain frequencies
  • Prototype analytical and reporting formats
  • Software programs and hardware configurations to support the system
  • Identification of responsibilities for data collection and input, data processing, report preparation, system maintenance, and utilization
  • A plan for full-scale implementation of the measurement system

The committee will also conduct and evaluate the pilot, if necessary, and be responsible for at least early-stage evaluation and possible modification of the full-scale system once it is being used. It usually helps to sketch a rough schedule of the overall process over a year or multiyear period, stating approximate due dates when each of these products, or deliverables, will be completed. Although the schedule may change substantially along the way, thinking it through will give the steering group a clearer idea of what the process will involve and help establish realistic expectations about what will be accomplished by when.

Managing the project also entails detailing the scope of work by defining specific tasks and subtasks to be completed. The steering group might elaborate the entire scope of work at the outset, partly in the interest of developing a more realistic schedule; alternatively, it may detail the tasks one step at a time, projecting a rough schedule on the basis of only a general idea of what will be involved at each step. Detailing the project plan sooner rather than later is advantageous in that it will help clarify what resources, what expertise, what levels of effort, and what other commitments will be necessary in order to design and implement the system, again developing a more realistic set of expectations about what is involved in this process.

The project management approach also calls for assigning responsibilities for leading and supporting each step in the process. The steering group may decide to conduct all the work as a committee of the whole, but it might well decide on a division of labor whereby various individuals or subgroups take lead responsibility for different tasks. In addition, some individual or work unit may be assigned responsibility for staffing the project and doing the bulk of the detailed work between committee meetings. Furthermore, the steering group may decide to work through subcommittees or involve additional stakeholders in various parts of the process. Typically the number of participants grows as the project moves forward and particular kinds of expertise are required at different points, and a number of working groups may spin off from the core steering committee in order to get the work done more efficiently and effectively. A further advantage of involving more participants along the way is that they may serve as “envoys” back to the organizational units or outside groups they represent and thus help build support for the system.

Finally, project management calls for monitoring activities and tracking progress in the design and implementation process along the way. This is usually accomplished by getting reports from working groups or subcommittees and comparing progress against the established schedule. It also means evaluating the process and deliverables produced, noting problems, and making adjustments as appropriate. The overall approach should be somewhat pragmatic, especially when members of the steering group have little experience in developing such systems, and no one should be surprised to have to make adjustments in the scope of work, schedule, and assignments as the group moves through the process. Nevertheless, managing the overall effort as a project from beginning to end will help the steering committee keep the process on track and work in a more deliberate manner to install an effective measurement system.

Software is available to support this kind of project tracking. Microsoft Project, for example, allows users to create projects, assign tasks with due dates and responsible persons, track progress, and report results; its main modules cover project work and teams, schedules, and finances.
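
For steering groups that prefer a lightweight alternative to commercial software, even a short script can track the deliverables listed earlier against due dates and responsible parties. The sketch below is a minimal illustration; the deliverable names echo this chapter's list, while the owners and dates are hypothetical.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Deliverable:
        name: str     # product the steering group must produce
        owner: str    # individual or subgroup with lead responsibility
        due: date
        done: bool = False

    def status_report(plan, today):
        """Compare each deliverable's status against today's date."""
        for d in plan:
            if d.done:
                status = "complete"
            elif d.due < today:
                status = "OVERDUE"
            else:
                status = f"due {d.due.isoformat()}"
            print(f"{d.name:<40} {d.owner:<22} {status}")

    plan = [
        Deliverable("Statement of scope and purpose", "Steering committee",
                    date(2024, 2, 1), done=True),
        Deliverable("Definitions of each measure", "Measurement subgroup",
                    date(2024, 5, 1)),
        Deliverable("Prototype reporting formats", "Data systems staff",
                    date(2024, 8, 1)),
    ]
    status_report(plan, today=date(2024, 6, 1))

A spreadsheet serves the same purpose; what matters is that every deliverable has an owner, a due date, and a visible status.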

Networks and Collaborations

As the work of governments and nonprofits continues to involve multiple partners or networks, thought must be given to managing the performance of these networks and collaborations. Milward and Provan (2006) describe four uses for performance management in networks: service implementation, information diffusion, problem solving, and community capacity building. Understanding the kind of network and the role of leaders within it is essential to developing a performance management system. For example, managing the accountability of a network as a whole includes determining who is responsible for which outcomes, whereas accountability for those delivering services within the network means monitoring one's own agency's involvement and accomplishments.

The Government Accountability Office (2014) studied best practices in collaborative approaches in the federal government and identified the key considerations and implementation approaches shown in table 15.1.

Table 15.1  Implementing Interagency Collaborations

Source:  US Government Accountability Office (2014, p. 1).

For each area, the key considerations for implementing interagency collaborative mechanisms are listed first, followed by implementation approaches from select interagency groups.

Outcomes
  Key considerations:
  • Have short-term and long-term outcomes been clearly defined?
  Implementation approaches:
  • Started group with most directly affected participants and gradually broadened to others
  • Conducted early outreach to participants and stakeholders to identify shared interests
  • Held early in-person meetings to build relationships and trust
  • Identified early wins for the group to accomplish
  • Developed outcomes that represented the collective interests of participants
  • Developed a plan to communicate outcomes and track progress
  • Revisited outcomes and refreshed the interagency group

Accountability
  Key considerations:
  • Is there a way to track and monitor progress?
  Implementation approaches:
  • Developed performance measures and tied them to shared outcomes
  • Identified and shared relevant agency performance data
  • Developed open and transparent methods to report on the group's progress
  • Incorporated interagency group activities into individual performance expectations

Leadership
  Key considerations:
  • Has a lead agency or individual been identified?
  • If leadership will be shared between one or more agencies, have roles and responsibilities been clearly identified and agreed on?
  Implementation approaches:
  • Designated group leaders exhibited collaboration competencies
  • Ensured participation from high-level leaders in regular, in-person group meetings and activities
  • Rotated key tasks and responsibilities when leadership of the group was shared
  • Established clear and inclusive procedures for leading the group during initial meetings
  • Distributed leadership responsibility for group activities among participants

Resources
  Key considerations:
  • How will the collaborative mechanism be funded?
  • How will the collaborative mechanism be staffed?
  Implementation approaches:
  • Created an inventory of resources dedicated toward interagency outcomes
  • Leveraged related agency resources toward the group's outcomes
  • Pilot tested new collaborative ideas, programs, or policies before investing resources

In setting up a system that involves collaborations, whether intergovernmental, with private contractors, or with nonprofits, it is useful to plan for measuring these shared accomplishments. The key considerations and implementation approaches identified by the Government Accountability Office for interagency collaborations are a useful starting point.

Strategies for Success

Structuring the design and implementation effort with committee oversight, using a deliberate process, and using project management tools constitute a rational approach to installing an effective measurement system, but by no means does this approach guarantee success. Implementing any new management system is an exercise in managing change, and a performance measurement system is no different. This places the challenge of designing and implementing a measurement system outside a purely technical sphere and into the realm of managing people, culture, organizations, and relationships. Indeed, research finds that even though decisions by public organizations to adopt measurement systems tend to be based on technical and analytical criteria, the ways in which systems are implemented are influenced more strongly by political and cultural factors (de Lancer Julnes & Holzer, 2001).

Clearly both technical and managerial issues are important in designing and implementing performance measurement systems. Proponents and observers of performance measurement in government have noted a number of problems in implementing such systems and proposed strategies to overcome them (Kamensky & Fountain, 2008; Poister, 2003; Aristigueta & Zarook, 2011; Hatry, 1999, 2002, 2006; Kassoff, 2001; Wholey, 2002). Others have summarized lessons learned by nonprofit agencies in developing measurement systems and made suggestions for ensuring success in implementing such systems in the nonprofit sector (Plantz, Greenway, & Hendricks, 1997; Sawhill & Williamson, 2001; Newcomer, 2008). Ammons and Rivenbark (2008, p. 307) suggest three factors that effectively encourage the productive use of performance information: “the collection of and reliance on higher-order measures—that is, outcome measures (effectiveness) and especially measures of efficiency—rather than simply output measures (workload); the willingness of officials to embrace comparison with other governments or service producers; and the incorporation of performance measures into key management systems.”

Although the process of developing performance measurement systems is similar for both public and nonprofit organizations, such efforts may be even more challenging for nonprofit managers because of several factors:

  • Many nonprofit agencies rely heavily on volunteers to deliver services, and volunteers may be particularly leery of attempts to evaluate their performance.
  • Local chapters often have a high degree of autonomy, and it may be more difficult to implement uniform reporting procedures for roll-up or comparison purposes.
  • Nonprofit agencies are often funded by a variety of sources and are often highly dependent on a changing mix of grants for funding, creating a more fluid flow of services that may be more difficult to track with ongoing monitoring systems.
  • Funders may prescribe the indicators nonprofits must report, leaving very little freedom for selecting measures or involving stakeholders.
  • Many nonprofit agencies have relatively limited managerial and analytical resources to support performance measurement systems.

At the same time, because most nonprofit agencies are governed by boards of directors that are more closely focused on the work of their particular agencies than is the case with legislatures and individual public agencies, they may have an advantage in terms of ensuring alignment of the expectations of the managerial and governing bodies regarding performance as well as building meaningful commitments to use the measurement system.

Despite these differences between public and nonprofit agencies, for the most part both face similar kinds of issues in developing a measurement system, including problems concerning the information produced, the time and effort required to implement and support the system, the lack of subsequent use of the measurement system by managers and decision makers, the lack of stakeholder support for it, internal resistance to it, undesirable consequences that might arise from putting certain measures in place, and possible abuses of such a system. Thus, this concluding section presents strategies that address these problems and help ensure the successful design and implementation of performance measurement systems in both public and nonprofit agencies.

Usefulness of the Information Produced

Performance management systems will be used only if they provide worthwhile information to managers and decision makers, but many systems do not provide relevant and useful information. Sometimes they are simply not well conceived in terms of focusing on the kinds of results that are of concern to managers. If, for example, the measures are not consistent with an agency's strategic agenda, they are unlikely to be relevant to managers. In other cases, measures are selected on the basis of what data are readily available, but this approach rarely provides decision makers with a well-rounded picture of program performance. To ensure that measurement systems do provide relevant information that will help manage agencies and programs more effectively, those who commission measurement systems as well as those who take the lead in designing them should be sure to

  1. Clarify mission, strategy, goals and objectives, and program structure as a prelude to measurement. Use this strategic framework to focus the scope of the performance measurement system on what is truly important to the organization and its stakeholders.
  2. Establish the purpose of the performance management system and the mechanisms through which performance information will be used to improve management and decision making regarding strategy, resource allocation, quality and process improvements, grants and contract management, stakeholder engagement, or other similar uses. Clearly define the critical elements with respect to how the information will be used to inform management and decisions aimed at improving performance.
  3. Develop logic models to identify the linkages between programmatic activity and outputs and outcomes, and use this framework to define appropriate measures. These logic models help sort out the myriad variables involved in a program and identify what the important results really are. When the focus is on agency rather than programmatic performance, develop balanced scorecards or use other goal structures as the performance framework. When the focus is on quality or process improvement, develop flowcharts and use them as performance frameworks. (A minimal sketch of a logic model in code follows this list.)
  4. Be results driven rather than data driven in the search for relevant measures. Do not include measures simply because the data are already available. Use need and usefulness rather than data availability or ease of measurement as the principal criteria for selecting measures.
  5. Work toward omnidirectional alignment across various management processes. Ensure that programmatic and lower-level goals and objectives are consistent with strategic objectives, that budget priorities are consistent with strategic objectives, and that individual and organizational unit objectives derive ultimately from higher-level goals and objectives. Then develop performance measures that are directly tied to these objectives.
  6. Periodically review the measures and revise them as appropriate. Performance measurement systems are intended to monitor trends over time, which is why it is important to maintain consistency in the measures over the long run. However, this should not be taken to mean that the measures are cast in stone. Over time, the relevance of some measures may diminish substantially, and the needs for other indicators may emerge. In addition, the reliability of some indicators may erode and require adjustment or replacement. It is therefore imperative to review both the quality and the usefulness of the measures and make changes as needed.
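
As a concrete illustration of the logic models recommended in item 3, a model can be represented as a simple mapping from inputs through activities and outputs to outcomes, with candidate measures attached at each stage. The program, stages, and measures below are invented for a hypothetical job training program, not drawn from any actual system.

    # A minimal logic model for a hypothetical job training program.
    logic_model = {
        "inputs": {
            "description": "funding, staff, training facilities",
            "measures": ["program expenditures", "instructor FTEs"],
        },
        "activities": {
            "description": "classroom instruction and placement assistance",
            "measures": ["training hours delivered"],
        },
        "outputs": {
            "description": "participants completing training",
            "measures": ["number of completers"],
        },
        "outcomes": {
            "description": "participants employed in related jobs",
            "measures": ["placement rate at six months", "average starting wage"],
        },
    }

    # Walking the model in order surfaces the measures each stage implies.
    for stage in ("inputs", "activities", "outputs", "outcomes"):
        info = logic_model[stage]
        print(f"{stage}: {info['description']}")
        for measure in info["measures"]:
            print(f"  candidate measure: {measure}")

Making the stages explicit in this way helps keep the search for measures results driven: each candidate indicator must attach to a stage in the causal chain rather than merely to an available data source.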

Resource Requirements

Performance management systems may require too much time and effort, especially when they require original data collection instruments, new data collection procedures, or substantial data input from the field. Measurement systems are not free, and they should be viewed as an investment of resources that will generate worthwhile payoff. The objective is to develop a system that is cost-effective, but at the beginning of the development process, system planners often underestimate the time, effort, and expenditures required, which then turn out to be much greater than expected. This leads to frustration and can result in systems whose benefit is not worth the cost. To avoid this situation, system planners should take the following into consideration:

  1. Be realistic in estimating how long it will take to design and implement a particular measurement system. The design and implementation process itself involves a substantial amount of work, and creating realistic expectations at the outset about what it will require can help avoid disillusionment with the value of performance measurement overall.
  2. Develop a clear understanding of the full cost of supporting and maintaining a measurement system, and keep it reasonable in relation to the information produced. Conversely, try to ascertain your resource constraints at the outset and then work to maximize the information payoff from available resources. This approach creates fair expectations regarding what investments are necessary and is more likely to result in a system whose benefits exceed its costs.
  3. Use existing or readily available data whenever appropriate, and avoid costly new data collection efforts unless they are essential. Some potentially expensive data collection procedures may need to be instituted, but only when it is clear that they add real value to the measurement system.

Lack of Use

Even when they are relevant, performance measures can be ignored. They will not be used automatically. Although in some cases this is due to a lack of interest or outright resistance on the part of managers, it may also result from poor system design. Managers often feel overwhelmed by systems that include too many measures and seem to be unnecessarily complex. Another problem is that some systems track appropriate measures but do a poor job of presenting the performance data in ways that are understandable, interesting, and convincing. More generally, some systems simply are not designed to serve the purpose for which they were intended. The following guidelines are aimed at maximizing the useful content of performance data:

  1. Be clear about why you are developing performance measures and how you will use them. Tailor the measures, reporting frequencies, and presentation formats to the intended use so as to encourage use.
  2. Provide regular review sessions to engage select policymakers or managers at various levels in discussing the performance information and its implications for policy, programming, and management. Focus the discussion in part on identifying weak or eroding areas of performance, identifying the causes of problematic performance, and developing plans for corrective action.
  3. Focus on a relatively small number of important measures of success. Managers often feel inundated by large numbers of measures and by detailed reports and thus will often disregard them. There is no magic number of measures to include, and sometimes you will need additional measures to provide a more balanced portrait of performance or to balance other measures in the effort to avoid problems of goal displacement. Everything else being equal, though, it is preferable to have fewer measures rather than too many.
  4. Keep measures and presentations as simple and straightforward as possible. The KISS principle (Keep It Simple, Stupid) applies here because so many higher-level managers who are the intended audiences for the performance data will not have the time or the inclination to wade through complex charts, tables, and graphs.
  5. Emphasize comparisons in the reporting system. Showing trends over time, gauging actual performance against targets, breaking the data down across operating units, comparing results against other counterpart agencies or programs, breaking results out by client groups, or some combination of these is what makes the performance data compelling. Make sure that the comparisons you provide are the most relevant ones, given the intended users. (See the sketch following this list.)
  6. Develop multiple sets of measures, if necessary, for different audiences. The data might be rolled up from operating units through major divisions to the organization as a whole, providing different levels of detail for different levels of management. Alternatively, different performance measures can be reported to managers with different responsibilities or to different external stakeholders.
  7. Identify “results owners,” the individuals or organizational units with responsibility for maintaining or improving performance on key output and outcomes measures. Holding particular people accountable for improving performance on specific measures encourages them to pay attention to the system.
  8. Informally monitor the usefulness and cost-effectiveness of the measurement system itself and make adjustments accordingly. Again, the system design is not cast in stone, and getting feedback from managers and other intended users helps you identify how the measurement system might be improved to better serve their needs.
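
The comparisons recommended in item 5 can be generated with very little machinery. The sketch below computes actual-versus-target and year-over-year comparisons for a few hypothetical measures; the measure names and figures are invented for illustration.

    # Hypothetical data: measure -> (prior year, current year, target)
    data = {
        "Response time (minutes)":   (9.2, 8.5, 8.0),
        "Customer satisfaction (%)": (78.0, 83.0, 85.0),
        "Cost per client ($)":       (412.0, 398.0, 400.0),
    }

    print(f"{'Measure':<28}{'Prior':>8}{'Actual':>8}{'Target':>8}"
          f"{'vs Target':>12}{'vs Prior':>11}")
    for measure, (prior, actual, target) in data.items():
        vs_target = 100 * (actual - target) / target  # percent above/below target
        vs_prior = 100 * (actual - prior) / prior     # percent change from prior year
        print(f"{measure:<28}{prior:>8.1f}{actual:>8.1f}{target:>8.1f}"
              f"{vs_target:>+11.1f}%{vs_prior:>+10.1f}%")

Note that the signed percentages keep the comparison compact: a reader can see at a glance whether each measure is ahead of or behind both its target and its prior-year level.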

Lack of Stakeholder Buy-In

A wide variety of stakeholders have interests in performance measurement systems, and the perceived legitimacy of a system depends in large part on the extent to which these stakeholders buy into it. If stakeholders fail to buy into a measurement system because they do not think the measures are meaningful, the data reliable, or the results appropriately used, it will lose credibility. The system will then be less than effective in influencing efforts to improve performance or in demonstrating accountability. Thus, in developing a measurement system, the agency should

  1. Build ownership by involving stakeholders in identifying performance criteria, measures, targets, and data collection systems. This can be done by including some internal stakeholders, and even some external stakeholders, on the steering group developing the system and on subcommittees or other working groups it establishes. The steering committee can solicit input and feedback from other stakeholder groups as well.
  2. Consider clients, customers, and other relevant stakeholders throughout the process, and involve them when practical. In addition to ensuring that the resulting system will include measures that are responsive to customer needs and concerns, this will also develop buy-in from these important stakeholders.
  3. Generate leadership to develop buy-in for the measures, and demonstrate executive commitment to using them. One of the best ways to develop buy-in on the part of internal stakeholders, and sometimes external stakeholders as well, is to show that the agency's top managers are committed to the measurement system and that they are personally involved in developing and then using it.
  4. Develop and nurture a performance culture within the organization through leadership, incentive systems, and ongoing management processes to emphasize involvement with performance measures and commitment to using them with an eye toward maintaining and improving performance.

Internal Resistance

Managers and employees may resist the implementation of performance measures because they feel threatened by them. Employees often view performance monitoring systems as “speed-up” systems intended to force them to work harder or allow the organization to reduce the workforce. Middle-level managers may see such systems as attempts to put increased pressure on them to produce added results and hold them accountable for standards beyond their control. Even higher-level managers may resist the implementation of measurement systems if they perceive them as efforts to force them to give up authority to those above and below them. Because the success of measurement systems depends on the cooperation of managers at all levels, and sometimes of rank-and-file employees as well, in feeding data to the system and working to register improvement on the measures, avoiding or overcoming this kind of internal resistance is critical. Thus, administrators wanting to install measurement systems should

  1. Be sure to communicate to managers and employees how and why measures are being used. Take every opportunity to educate internal stakeholders about the purpose of a new system and explain what kinds of measures will be monitored and how they will be used to improve the performance of agency programs; doing so will serve to reduce fear of the unknown and help build credibility for the new system and a higher level of comfort with it in the organization.
  2. Provide early reassurance that the system will not produce across-the-board punitive actions such as budget cuts, layoffs, or furloughs. This is often a very real fear among managers and employees, and alleviating it early on will help preempt opposition and gain greater acceptance of any new management system. If reductions in force do in fact result from productivity gains, they can probably be accomplished through attrition rather than firing.
  3. Consider implementing the system in layers, or by division or program, to work out problems and demonstrate success. In addition to allowing time to work out the bugs before going full scale, implementing the system incrementally—and perhaps beginning in parts of the organization that are most likely to readily accept it—can also be an opportunity to show not only that the performance measures really can be useful but also that they are not harmful to the workforce.
  4. Make sure that program managers and staff see performance data first and have a chance to check and correct them before sending reports up to the executive level. Asking program managers to verify the data first not only strengthens the accuracy and integrity of the reporting system but also helps reinforce their role as process owners rather than self-perceived victims of it.
  5. Include fields in the reporting formats for explanatory comments along with the quantitative data. The use of such comment fields gives higher-level managers a much fuller understanding of why performance is going up or down while also giving program managers and staff a safeguard—that is, allowing them the opportunity to shape realistic expectations and point out factors beyond their control that might be negatively affecting performance. (A brief sketch follows this list.)
  6. Delegate increased authority and flexibility to both program managers and staff administrators in exchange for holding them accountable for results. This is a critical mechanism for allowing monitoring systems to translate into positive action: holding managers responsible for bottom-line results while giving them wider discretion in how they manage to achieve those results. The added flexibility can also help managers accept a system that they may view as putting more pressure on them to perform.
  7. To the extent possible, tie the performance appraisal system, incentive system, and recognition program to the measurement system. Tying these reward systems to the performance measures puts more muscle in the monitoring system by giving managers and employees added incentive to work harder and smarter in order to perform well on the measures. In tying rewards directly to measures, top management can build additional credibility for the system and positively reinforce improved performance.
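To make points 4 and 5 concrete, the following Python sketch shows one way a reporting record could carry a manager-verification flag and a free-text comment field alongside the quantitative result. It is purely illustrative; the class and field names are our own assumptions, not a prescribed schema.

```python
# Illustrative only: a minimal record format supporting points 4 and 5
# above. All class and field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceRecord:
    program: str                       # program or unit reporting the data
    measure: str                       # name of the performance indicator
    period: str                        # reporting period, e.g., "2024-Q3"
    value: float                       # the quantitative result
    target: Optional[float] = None     # agreed-upon target, if one exists
    manager_verified: bool = False     # set True only after the program
                                       # manager has checked the data (point 4)
    explanatory_comment: str = ""      # context on factors affecting
                                       # performance (point 5)

def ready_for_executive_report(record: PerformanceRecord) -> bool:
    """Escalate a record only after the program manager has verified it."""
    return record.manager_verified
```

The design point is simply that no record reaches the executive level before its owner has signed off on it, and that the narrative context travels with the numbers rather than being lost along the way.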

Goal Displacement and Gaming

Performance measurement systems can encourage undesirable behavior. As discussed in chapter 5, unbalanced sets of measures can focus undue attention on some performance criteria to the detriment of others, producing undesirable consequences. When managers and employees strive to perform well on less-than-optimal measures while ignoring more important goals that are not reflected in the measures, goal displacement occurs and overall performance suffers. In other instances, performance standards or incentives are specified poorly, in ways that allow individuals or units to game the system: to look good on the measures without really achieving the underlying goals. Thus, in designing performance measurement systems, it is important to

  1. Anticipate possible problems of goal displacement and gaming, and avoid them by balancing measures. The most systematic approach is to probe the likely impact of each measure by asking the following question: If people perform to the extreme on this particular measure, what adverse impacts, if any, are likely to arise? The antidote is usually to define additional measures that counterbalance whatever potential adverse impacts are identified in this way. Along these lines, managers would do well to heed the adage, "Measure the wrong things, and that's what you will be held accountable for."
  2. Install quality assurance procedures to ensure the integrity of the data, and impose sanctions to minimize cheating. Problems with data reliability can arise for a variety of reasons, ranging from sloppy reporting to willful cheating. Quality assurance procedures, perhaps tracing the data trail for a very small random sample of records in a quality audit (see the sketch following this list), are usually sufficient to keep the system honest, particularly when everyone knows there is a policy of imposing serious sanctions on anyone found to have falsified data or otherwise cooked the books.
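As one illustration of point 2, the sketch below draws a small random sample of reported records for a data-trail audit. It is a hypothetical example; the record identifiers and the 2 percent default sample rate are assumptions for illustration, not prescriptions from the text.

```python
# Hypothetical sketch: select a small random sample of reported records
# for a data-trail quality audit. Names and the default rate are assumed.
import random
from typing import List, Optional

def select_audit_sample(record_ids: List[str],
                        sample_rate: float = 0.02,
                        seed: Optional[int] = None) -> List[str]:
    """Randomly choose roughly sample_rate of records to audit.

    Even a very small audited fraction signals that any record might be
    traced back to its source documents, which discourages cheating.
    """
    if not record_ids:
        return []
    rng = random.Random(seed)  # seeded for a reproducible audit trail
    k = min(len(record_ids), max(1, round(len(record_ids) * sample_rate)))
    return rng.sample(record_ids, k)

# Example: audit roughly 2 percent of this quarter's reported records.
# to_audit = select_audit_sample(all_record_ids, seed=2024)
```

Because the sample is random and the seed can be logged, managers cannot predict which records will be traced, which is what gives a very small audit its deterrent value.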

System Abuse

Performance measurement systems can also be abused. Data indicating suboptimal performance, for example, can be used to penalize managers and staff unfairly, and performance data in general can be used selectively to reward or penalize particular managers and employees. Less blatantly, authoritarian-style managers can use performance measures, and the added power they provide over employees, to micromanage their units even more closely, in ways that are unpleasant for employees and counterproductive overall. To avoid such problems, higher-level managers should

  1. Be wary of misinterpretation and misuse of measures. Higher-level managers should not only review the performance data reported up to their level and act on them accordingly but also monitor informally how the measurement system is being used at lower levels of the organization. If they become aware that some managers are making inappropriate use of the measures or manipulating the system to abuse employees, they need to make clear that such behavior will not be tolerated.
  2. Use measurement systems constructively, not punitively, at least until it is clear that sanctions are needed. Top managers need to model this constructive use of performance measures down through the chain of command, relying on positive reinforcement as the inducement to improve performance, and they must insist that their subordinates use the system with their own employees in the same manner. When the data show that performance is subpar, the most productive response is to engage managers in diagnosing the source of the problem and identifying remedies rather than punishing people for failing to achieve their goals.
  3. Above all, recognize and use the measures as indicators only. Although measures can be invaluable in enabling managers and others to track the performance of agencies and programs, they cannot tell the whole story by themselves. Rather, they serve as one additional source of information on performance; the data they generate are purely descriptive and provide only a surface-level view of how well or poorly programs are actually doing. Managers should therefore interpret the results within the fuller context of what they already know or can find out about a program's performance, and they should not let the measures themselves dictate actions.

Prospects for Progress in Performance Management

At this point in its evolution, performance management appears to be here to stay, as the work of government becomes more complex and the delivery of services increasingly involves multiple stakeholders. We expect the performance management landscape to continue to change accordingly.

A Final Comment

Performance management is essential to managing for results in government and nonprofit organizations. Although measurement can aid greatly in the quest to maintain and improve performance, it is by no means a panacea. High-quality performance measures can provide managers and policymakers with valid, reliable, and timely information on how well or how poorly a given program is performing, but then it is up to those managers and policymakers to respond deliberately and effectively to improve performance.

Clearly the time for performance management in the public and nonprofit sectors has arrived, and agencies are installing new measurement systems and fine-tuning existing systems on an ongoing basis. Yet a substantial amount of skepticism remains about both the feasibility and the utility of measurement systems (Radin, 2006; Moynihan, 2008), and numerous fallacies and misperceptions about the efficacy of performance measurement still prevail in the field (Ammons, 2002; Hatry, 2002, 2006). Nevertheless, tracking the results produced by public and nonprofit programs, including those delivered through collaborative networks, and using that information both to improve performance and to provide accountability to higher-level authorities is a commonsense approach to management based on simple but irrefutable logic.

Although many public and nonprofit agencies have developed and implemented performance management systems in recent years solely in response to mandates from elected chief executives, legislative bodies, and governing boards, many of these systems have proved beneficial to the agencies themselves, and many managers have become converts. We can expect such efforts to continue to proliferate, which should be good news for those interested in promoting results-oriented management approaches. However, it must always be understood that performance measurement is a necessary but not sufficient condition for results-oriented management or results-oriented government. For measurement to be useful, it must be effectively linked to other management and decision-making processes, as discussed in chapter 1 of this book.

Thus, public and nonprofit managers at all levels, in cooperation with elected officials and governing bodies, must build effective measurement systems and integrate them carefully into processes for strategic planning and management, operational planning, budgeting, performance management, quality and productivity improvement, and other purposes. Without strong linkages to these vital management and decision-making processes, performance measurement systems may generate information that is nice to know but that will not lead to better decisions, improved performance, or more effective accountability and control, the outcomes on which performance management depends.

This book has dealt with a number of components of performance management systems from a technical design perspective, and this concluding chapter has discussed issues in implementing such systems from an organizational and managerial perspective, all with an eye to helping you install the most effective system you can. Yet you need to understand that difficulties abound in this area, that real challenges are likely to persist, and that the perfect measurement system does not exist. Although you should work to implement the best measurement system possible and address the kinds of problems discussed in this chapter, you will also need to make pragmatic trade-offs between system quality and usefulness on the one hand and cost and level of effort on the other in order to install a workable, affordable, and effective measurement system. The result may not be a perfect system, but it will clearly be preferable to an unworkable system or no system at all.

References

  1. Ammons, D. N. (1995). Overcoming the inadequacies of performance measurement in local government: The case of libraries and leisure services. Public Administration Review, 55, 37–47.
  2. Ammons, D. N. (2002). Performance measurement and managerial thinking. Public Performance & Management Review, 25(4), 344–347.
  3. Ammons, D. N. (Ed.). (2008). Leading performance management in local government. Washington, DC: International City/County Management Association Press.
  4. Ammons, D. N., & Rivenbark, W. C. (2008). Factors influencing the use of performance data to improve municipal services: Evidence from the North Carolina Benchmarking Project. Public Administration Review, 68, 304–318.
  5. Andersen, S. C. (2008). The impact of public management reforms on student performance in Danish schools. Public Administration, 86(2), 541–558.
  6. Aristigueta, M. P., & Zarook, F. N. (2011). Managing for results in six states: Progress in over a decade demonstrates that leadership matters. Public Performance & Management Review, 35, 177–201.
  7. Barnow, B., & Smith, J. (2004). Performance management of US job training programs: Lessons from the Job Training Partnership Act. Public Finance and Management, 4, 247–287.
  8. Behn, R. D. (2003). Why measure performance? Different purposes require different measures. Public Administration Review, 63, 586–606.
  9. Behn, R. D. (2006). The varieties of CitiStat. Public Administration Review, 66, 322–340.
  10. Behn, R. D. (2007). What all mayors would like to know about Baltimore's CitiStat performance strategy. Washington, DC: IBM Center for the Business of Government.
  11. Bouckaert, G., & Balk, W. (1991). Public productivity measurement: Diseases and cures. Public Productivity & Management Review, 15, 229–235.
  12. Boyne, G. A. (2003). Sources of public service improvement: A critical review and research agenda. Journal of Public Administration Research and Theory, 13, 367–394.
  13. Boyne, G., & Chen, A. A. (2007). Performance targets and public service improvement. Journal of Public Administration Research and Theory, 17(3), 455–477.
  14. Bratton, W. J., & Malinowski, S. W. (2008). Police performance management in practice: Taking Compstat to the next level. Policing, 2, 259–265.
  15. Bryson, J. M. (2011). Strategic planning for public and nonprofit organizations: A guide to strengthening and sustaining organizational achievement. San Francisco: Jossey-Bass.
  16. Chun, Y. H., & Rainey, H. G. (2005). Goal ambiguity and organizational performance in US federal agencies. Journal of Public Administration Research and Theory, 15, 529–557.
  17. de Lancer Julnes, P., Berry, F. S., Aristigueta, M. P., & Yang, K. (2007). International handbook of practice-based performance management. Thousand Oaks, CA: Sage.
  18. de Lancer Julnes, P., & Holzer, M. (2001). Promoting the utilization of performance measures in public organizations: An empirical study of factors affecting adoption and implementation. Public Administration Review, 61(6), 693–708.
  19. Epstein, P. D. (1984). Using performance measurement in local government: A guide to improving decisions, performance, and accountability. New York: Van Nostrand Reinhold.
  20. Fried, Y., & Slowik, L. H. (2004). Enriching goal-setting theory with time: An integrated approach. Academy of Management Review, 29, 404–422.
  21. Gore, A. (1993). From red tape to results: Creating a government that works better and costs less: Report of the National Performance Review. Darby, PA: Diane Books Publishing Company.
  22. Halachmi, A., & Bouckaert, G. (Eds.). (1996). Organizational performance and measurement in the public sector: Toward service, effort, and accomplishment reporting. Westport, CT: Quorum Books.
  23. Hatry, H. P. (1999). Performance measurement: Getting results. Washington, DC: Urban Institute Press.
  24. Hatry, H. P. (2002). Performance measurement: Fashions and fallacies. Public Performance & Management Review, 25(4), 352–358.
  25. Hatry, H. P. (2006). Performance measurement: Getting results. Washington, DC: Urban Institute Press.
  26. Heinrich, C. J. (2007). Evidence-based policy and performance management challenges and prospects in two parallel movements. American Review of Public Administration, 37, 255–277.
  27. Heinrich, C. J., & Marschke, G. (2010). Incentives and their dynamics in public sector performance management systems. Journal of Policy Analysis and Management, 29, 183–208.
  28. Holzer, M., & Yang, K. (2004). Performance measurement and improvement: An assessment of the state of the art. International Review of Administrative Sciences, 70, 15–31.
  29. Jennings, E. T., & Haist, M. P. (2004). Putting performance measurement in context. In P. W. Ingraham & L. E. Lynn Jr. (Eds.), The art of governance: Analyzing management and administration (pp. 173–194). Washington, DC: Georgetown University Press.
  30. Kamensky, J., & Fountain, J. (2008). Creating and sustaining a results-oriented performance management framework. In P. de Lancer Julnes, F. Berry, M. Aristigueta, & K. Yang (Eds.), International handbook of practice-based performance management (pp. 489–508). Thousand Oaks, CA: Sage.
  31. Kassoff, H. (2001). Implementing performance measurement in transportation agencies. In Performance measures to improve transportation systems and agency operations. Washington, DC: National Academy Press.
  32. Kelly, J. M. (2003). Citizen satisfaction and administrative performance measures. Urban Affairs Review, 38, 855–866.
  33. Latham, G. P. (2004). The motivational benefits of goal-setting. Academy of Management Executive, 18, 126–129.
  34. Leeuw, F. L. (2000). Unintended side effects of auditing: The relationship between performance auditing and performance improvement and the role of trust. In W. Raub & J. Weesie (Eds.), The management of durable relations. Amsterdam: Thela Thesis.
  35. Lukensmeyer, C., & Hasselblad Torres, L. (2006). Public deliberation: A manager's guide to citizen engagement. Washington, DC: IBM Center for the Business of Government.
  36. Marr, B. (2012). Managing and delivering performance. New York: Routledge.
  37. Milward, B., & Provan, K. (2006). A manager's guide to choosing and using collaborative networks. Washington, DC: IBM Center for the Business of Government.
  38. Moynihan, D. P. (2008). The dynamics of performance management: Constructing information and reform. Washington, DC: Georgetown University Press.
  39. Moynihan, D. P. (2009). Through a glass, darkly. Public Performance & Management Review, 32(4), 592–603.
  40. Neely, A., & Bourne, M. (2000). Why measurement initiatives fail. Measuring Business Excellence, 4(4), 3–6.
  41. Newcomer, K. (2008). Assessing performance in nonprofit service agencies. In P. de Lancer Julnes, F. Berry, M. Aristigueta, & K. Yang (Eds.), International handbook of practice-based performance management (pp. 25–44). Thousand Oaks, CA: Sage.
  42. Niven, P. R. (2003, April 22). Adapting the balanced scorecard to fit the public and non-profit sectors. Primerus Consulting Report.
  43. Organisation for Economic Co-operation and Development. (2005). Modernising government: The way forward. Paris: Author.
  44. Plantz, M. C., Greenway, M. T., & Hendricks, M. (1997). Outcome measurement: Showing results in the nonprofit sector. In K. E. Newcomer (Ed.), Using performance measurement to improve public and nonprofit programs. New Directions for Evaluation, no. 75. San Francisco: Jossey-Bass.
  45. Poister, T. H. (2003). Measuring performance in public and nonprofit organizations. San Francisco: Jossey-Bass.
  46. Poister, T. H. (2010). The future of strategic planning in the public sector: Linking strategic management and performance. Public Administration Review, 70, 246–254.
  47. Poister, T. H., Pasha, O. Q., & Edwards, L. H. (2013). Does performance management lead to better outcomes? Evidence from the US public transit industry. Public Administration Review, 73, 625–636.
  48. Radin, B. A. (2006). Challenging the performance movement: Accountability, complexity, and democratic values. Washington, DC: Georgetown University Press.
  49. Sanger, M. B. (2008). From measurement to management: Breaking through the barriers to state and local performance. Public Administration Review, 68, S70–S85.
  50. Sawhill, J. C., & Williamson, D. (2001). Mission impossible? Measuring success in nonprofit organizations. Nonprofit Management and Leadership, 11(3), 371–386.
  51. Smith, D. C., & Bratton, W. J. (2001). Performance management in New York City: CompStat and the revolution in police management. In D. W. Forsythe (Ed.), Quicker, better, cheaper? Managing performance in American government (pp. 453–482). Albany, NY: Rockefeller Institute.
  52. Sterck, M., & Bouckaert, G. (2008). Performance information of high quality: How to develop a legitimate, functional and sound performance measurement system? In P. de Lancer Julnes, F. Berry, M. Aristigueta, & K. Yang (Eds.), International handbook of practice-based performance management (pp. 433–454). Thousand Oaks, CA: Sage.
  53. Sun, R., & Van Ryzin, G. G. (2014). Are performance management practices associated with better public outcomes? Empirical evidence from New York public schools. American Review of Public Administration, 44(3), 324–338.
  54. Swiss, J. E. (1991). Public management systems: Monitoring and managing government performance. Englewood Cliffs, NJ: Prentice-Hall.
  55. US Government Accountability Office. (2014). Implementation approaches used to enhance collaboration in interagency groups (GAO-14–220). Washington, DC: Author.
  56. Van Dooren, W., Bouckaert, G., & Halligan, J. (2010). Performance management in the public sector. New York: Routledge.
  57. Walker, R. M., Damanpour, F., & Devece, C. A. (2011). Management innovation and organizational performance: The mediating effect of performance management. Journal of Public Administration Research and Theory, 21, 367–386.
  58. Wholey, J. S. (1999). Performance-based management: Responding to the challenges. Public Productivity & Management Review, 22, 288–307.
  59. Wholey, J. S. (2002). Making results count in public and nonprofit organizations: Balancing performance with other values. In K. E. Newcomer, E. T. Jennings, C. A. Broom, & A. Lomax (Eds.), Meeting the challenges of performance-oriented government. Washington, DC: Center for Accountability and Performance of the American Society for Public Administration.
  60. Wholey, J. S. (2006). Quality control: Assessing the accuracy and usefulness of performance measurement systems. In H. P. Hatry, Performance measurement: Getting results (pp. 267–286). Washington, DC: Urban Institute Press.
  61. Wholey, J. S., & Hatry, H. P. (1992). The case for performance monitoring. Public Administration Review, 52, 604–610.