Chapter 2
The Myths of Performance Measurement

Since the second edition was published I have become increasingly aware that key performance indicators (KPIs) in many organizations are a broken tool. Measures are often a random collection prepared with little expertise, signifying nothing. KPIs should be measures that link daily activities to the organization's critical success factors (CSFs), thus supporting an alignment of effort within the organization in the intended direction. I call this alignment the El Dorado of management. Poorly defined KPIs, however, cost the organization dearly. Some examples: measures gamed to benefit executive pay, to the detriment of the organization; teams encouraged to perform tasks that run contrary to the organization's strategic direction; costly “measurement and reporting” regimes that lock up valuable staff and management time; and six-figure consultancy assignments resulting in a “door stop” report or a balanced scorecard that does not function well.

Let us now look at the myths surrounding performance measures.

Myth #1: Most Measures Lead to Better Performance

Every performance measure can have a negative consequence or encourage an unintended action that leads to inferior performance. Well over half the measures in an organization may well be encouraging unintended negative behavior. To make measures work, one needs to anticipate the likely human behavior that will result from a measure's adoption and endeavor to minimize the potential negative impact.

KPIs are like the moon: they have a dark side. It is imperative that, before a measure is used, the measure is:

  • Discussed with the relevant staff: “If we measure this, what will you do?”
  • Piloted before it is rolled out.
  • Abandoned if its dark side creates too much adverse performance.

To emphasize the significance of this myth I have set aside Chapter 3 to cover unintended consequences, the dark side of measures.

Myth #2: All Measures Can Work Successfully in Any Organization, At Any Time

Contrary to common belief, it is a myth to think that all measures can work successfully in any organization, at any time. The reality is that there needs to be, as Spitzer has so clearly argued, a positive “context of measurement” for measures to deliver their potential. To this end I have established seven foundation stones that need to be in place in order to have an environment where measurement will thrive. These seven foundation stones are explained at length in Chapter 7 and are:

  1. Partnership with the staff, unions, and third parties
  2. Transfer of power to the front line
  3. Measure and report only what matters
  4. Source KPIs from the critical success factors
  5. Abandon processes that do not deliver
  6. Appointment of a home-grown chief measurement officer
  7. Organization-wide understanding of winning KPI definition

Myth #3: All Performance Measures Are KPIs

Throughout the world, from Iran to the United States and back to Asia, organizations have been using the term KPI for all performance measures. No one seemed to worry that no one had defined what a KPI actually was. Thus measures that were truly key to the enterprise were being mixed with measures that were completely flawed.

Let's break the term down. Key means key to the organization; performance means that the measure will assist in improving performance.

From the research I have performed, from workshop feedback across diverse industries, and as a by-product of writing this book, I have come to the conclusion that there are four types of performance measures, and that these four types fall into two groups, as shown in Exhibit 2.1.

Exhibit 2.1 The Difference Between Result And Performance Indicators

The two groups of measures, and the two types of measures in each group:

Result Indicators (RIs) and Key Result Indicators (KRIs): Result indicators reflect the fact that many measures are a summation of more than one team's input. These measures are useful for looking at the combined teamwork but, unfortunately, do not help management fix a problem, as it is difficult to pinpoint which teams were responsible for the performance or nonperformance.

Performance Indicators (PIs) and Key Performance Indicators (KPIs): Performance indicators are measures that can be tied to a team or a cluster of teams working closely together for a common purpose. Good or bad performance is now the responsibility of one team. These measures thus give clarity and ownership.

The differences between these measures are explained in Chapter 1.

Myth #4: By Tying KPIs to Remuneration You Will Increase Performance

It is a myth that the primary driver for staff is money and that an organization must design financial incentives in order to achieve great performance. Recognition, respect, and self-actualization are more important drivers. In all types of organizations there is a tendency to believe that the way to make KPIs work is to tie them to an individual's pay. But when KPIs are linked to pay, they create key political indicators (not key performance indicators), which will be manipulated to enhance the probability of a larger bonus. KPIs should be used to align staff to the organization's critical success factors and should show, 24/7, daily, or weekly, how teams are performing. They are too important to be manipulated by individuals and teams seeking to maximize bonuses. KPIs are so important to an organization that performance in this area is a given, or as Jack Welch says, “a ticket to the game.”1

Performance bonus schemes are often flawed on a number of counts. The balanced scorecard is often based on only four perspectives, ignoring the important environment and community, and staff satisfaction, perspectives. The measures chosen are open to debate and manipulation. There is seldom a link to progress within the organization's CSFs. Weighting of measures leads to crazy performance agreements, such as the one in Exhibit 2.2.

Exhibit 2.2 Performance-Related Pay Systems That Will Never Work

Scorecard perspective (perspective weighting): performance measures (measure weighting)

Financial Results (60%): Economic value added (25%), Unit's profitability (20%), Market share growth (15%)
Customer Focus (20%): Customer satisfaction survey (10%), Dealer satisfaction survey (10%)
Internal Process (10%): Ranking in external quality survey (5%), Decrease in dealer delivery cycle time (5%)
Innovation and Learning (10%): Employee suggestions implemented (5%), Employee satisfaction survey (5%)

The message is: find a way to manipulate these numbers and you will get your “bonus.” The damage done by such schemes is often only discovered in subsequent years.
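To make the arithmetic behind such schemes concrete, here is a minimal sketch, in Python, of how the measure scores roll up into the single composite number the bonus hangs on. The weightings follow Exhibit 2.2; the scores, variable names, and function name are hypothetical, chosen purely for illustration. It shows the gaming incentive in numbers: a 10-point manipulation of the heavily weighted economic value added measure moves the composite five times as far as a genuine 10-point improvement in the lightly weighted employee satisfaction survey.

# A minimal sketch (hypothetical scores; weightings as per Exhibit 2.2) of how
# a weighted scorecard rolls individual measure scores up into one composite
# bonus score.

measures = {
    # measure name: (measure weighting, score achieved out of 100)
    "Economic value added": (0.25, 70),
    "Unit's profitability": (0.20, 70),
    "Market share growth": (0.15, 70),
    "Customer satisfaction survey": (0.10, 70),
    "Dealer satisfaction survey": (0.10, 70),
    "Ranking in external quality survey": (0.05, 70),
    "Decrease in dealer delivery cycle time": (0.05, 70),
    "Employee suggestions implemented": (0.05, 70),
    "Employee satisfaction survey": (0.05, 70),
}

def composite_score(scores):
    """Weighted sum of measure scores: the single number the bonus is paid on."""
    return sum(weight * score for weight, score in scores.values())

base = composite_score(measures)

# Gaming the heavily weighted financial measure by 10 points...
gamed = {**measures, "Economic value added": (0.25, 80)}

# ...pays five times more than a genuine 10-point gain in staff satisfaction.
improved = {**measures, "Employee satisfaction survey": (0.05, 80)}

print(f"Base composite score:               {base:.1f}")                      # 70.0
print(f"After gaming EVA (+10 points):      {composite_score(gamed):.1f}")    # 72.5
print(f"After real staff gain (+10 points): {composite_score(improved):.1f}") # 70.5

The weighting structure itself tells staff where gaming will pay off most; the scheme rewards manipulating the numbers rather than improving the underlying performance.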

Myth #5: We Can Set Relevant Year-End Targets

It is a myth that we know what good performance will look like before the year starts and, thus, it is a myth that we can set relevant annual targets. In reality, as Jack Welch,2 former CEO of General Electric, says, “it leads to constraining initiative, stifling creative thought processes and promotes mediocrity rather than giant leaps in performance.” All forms of annual targets are doomed to failure. Far too often management spends months arguing about what a realistic target is, when the only sure thing is that it will be wrong: either too soft or too hard. I am a follower of Jeremy Hope's work. He and his co-author Robin Fraser were the first writers to clearly articulate that a fixed annual performance contract was doomed to fail. Far too frequently, organizations end up paying incentives to management when, in fact, they have lost market share; in other words, rising sales did not keep up with the growth rate in the marketplace. As Hope and Fraser point out, not setting an annual target beforehand is not a problem as long as staff members are given regular updates on how they are progressing against their peers and the rest of the market. Hope argues that if you do not know how hard you have to work to get a maximum bonus, you will work as hard as you can.

Hope and Fraser also pointed out that the annual budgeting process was doomed to fail. If you set an annual target during the planning process, typically 15 or so months before the last month of that year, you will never know whether it was appropriate, given that the particular conditions of that year can never be guessed correctly.

Myth #6: Measuring Performance Is Relatively Simple and the Appropriate Measures Are Obvious

There will not be a reader of this book who has not, at some time in the past, been asked to come up with some measures with little or no guidance. Organizations, in both the private and public sectors, are being run by managers who have not received any formal education in performance measurement. Many managers have been trained in the basics of finance, human resources, and information systems, and they have been ably supported by qualified professionals in these three disciplines. The lost soul is performance measurement, which receives only scant mention in the curricula of business degrees and in the professional qualifications obtained by finance, human resources, and information systems professionals.

Performance measurement has been an orphan of business theory and practice. While writers such as Deming, Wheatley and Kellner-Rogers, Hamel, Hope, and Spitzer have for some time been pointing out the dysfunctional nature of performance measurement, their message has not yet permeated into business practice.

Performance measurement is worthy of more intellectual rigor in every organization on the journey from average to good and then to great performance. The appointment of a chief measurement officer was first proposed by Dean Spitzer,3 an expert on performance measurement. The chief measurement officer would be part psychologist, part teacher, part salesman, and part project manager. They would be responsible for setting all performance measures, assessing the potential “dark side” of each measure, abandoning broken measures, and leading all balanced scorecard initiatives. Naturally this person would report directly to the CEO and have a status equivalent to that of the CFO, the CIO, or the GM HR, befitting the diverse blend of skills required for the position.

Myth #7: KPIs Are Financial and Nonfinancial Indicators

I firmly believe that KPIs in countries as diverse as Canada, the United States, the United Kingdom, and Romania are all nonfinancial. In fact, I believe that there is not a financial KPI on this planet.

Financial measures are a quantification of an activity that has taken place; we have simply placed a value on the activity. Thus, behind every financial measure is an activity. I call financial measures result indicators: they are summary measures. It is the activity that you will want more or less of, and it is the activity that drives the dollars, pounds, or yen. Thus financial measures cannot possibly be KPIs.

When you put a pound or dollar sign on a measure, you have not dug deep enough. Sales made yesterday will be a result of sales calls made previously to existing and prospective customers, advertising, product reliability, the amount of contact with key customers, and so on. I group all sales indicators expressed in monetary terms as result indicators.

Myth #8: You Can Delegate a Performance Management Project to a Consulting Firm

For the past 15 years or so, many organizations have commenced performance measurement initiatives, and these have frequently been led by consultants. Commonly, a balanced scorecard approach has been adopted based on the work of Kaplan and Norton. The approach, as I will argue, is too complex and leads to a consultant-driven exercise, full of very clever consultants, with inadequate involvement of the client's staff. Although this approach has worked well in some cases, there have been many failures.

The winning KPIs methodology clearly states, “You can do this in-house.” If you cannot, no one else can. KPI projects are in-house projects run by skilled individuals who know the organization and its success factors. They have been unburdened from the daily grind to concentrate on this important project. In other words, these staff members have moved their family photographs, and the picture of the 17-hand stallion or their beloved dog, onto their desks in the project office, leaving the daily chore of firefighting in their sphere of operations to their second-in-charge, who has now moved into the boss's office, on a temporary basis of course!

The Myths Around the Balanced Scorecard

The groundbreaking work of Kaplan and Norton4 brought to management's attention the fact that an organization should have a balanced strategy and that its performance needed to be measured in a more holistic way, through a balanced scorecard (BSC). Kaplan and Norton suggested four perspectives in which to review performance: financial, customer, internal process, and learning and growth. There was an immediate acceptance that reporting performance in a balanced way made sense, and a whole new consultancy service was born. Unfortunately, many of these initiatives have failed for the reasons set out below.

BSC Myth #1: The Balanced Scorecard Was First Off the Blocks

The Hoshin Kanri business methodology, a balanced approach to performance management and measurement, was around well before the balanced scorecard (BSC). It has been argued that the BSC originated as an adaptation of Hoshin Kanri.

As I understand it, Hoshin Kanri translates, loosely, as a business methodology for direction and alignment. The approach was developed in a complex Japanese multinational, where it is necessary to achieve an organization-wide collaborative effort in key areas.

One tenet behind Hoshin Kanri is that all employees should incorporate into their daily routines a contribution to the key corporate objectives. In other words, staff members need to be made aware of the critical success factors and then prioritize their daily activities to maximize their positive contribution in these areas.

In the traditional form of Hoshin Kanri, there is a grouping of four perspectives, and it is no surprise that the balanced scorecard perspectives are mirror images of them (see Exhibit 2.3). An informative paper comparing Hoshin Kanri and the balanced scorecard has been written by Witcher and Chau,5 and it is well worth reading.

Exhibit 2.3 Similarities between Hoshin Kanri and Balanced Scorecard Perspectives

Hoshin Kanri | Balanced Scorecard
Quality objectives and measures | Customer focus
Cost objectives and measures | Financial
Delivery objectives and measures | Internal process
Education objectives and measures | Learning and growth

BSC Myth #2: There Are Only Four Balanced Scorecard Perspectives

For almost 20 years, Kaplan and Norton have consistently reiterated the four perspectives listed in their original work (Financial, Customer, Internal Process, and Learning and Growth), right through to the present day.

I recommend that these four perspectives be expanded with two more (Staff Satisfaction, and Environment and Community) and that the Learning and Growth perspective revert to its original name, Innovation and Learning (see Exhibit 2.4).

Exhibit 2.4 The Suggested Six Perspectives of a Balanced Scorecard

FINANCIAL RESULTS
Asset utilization, sales growth, risk management, optimization of working capital, cost reduction
CUSTOMER FOCUS
Increase customer satisfaction, targeting customers who generate the most profit, getting close to noncustomers
ENVIRONMENT AND COMMUNITY
Employer of first choice, linking with future employees, community leadership, collaboration
INTERNAL PROCESS
Delivery in full on time, optimizing technology, effective relationships with key stakeholders
STAFF SATISFACTION
Right people on the bus, empowerment, retention of key staff, candor, leadership, recognition
INNOVATION AND LEARNING
Innovation, abandonment, increasing expertise and adaptability, learning environment

BSC Myth #3: The Balanced Scorecard Can Report Progress to Both Management and the Board

One certainly needs to show the minister or the board the state of progress. However, it is important that governance information is shown rather than management information. The measures that should be reported to the board are key result indicators.

We need to ensure that the “management-focused” performance measures (KPIs, result indicators, and performance indicators) are reported only to management and staff.

BSC Myth #4: Measures Fit Neatly into One Balanced Scorecard Perspective

When an organization adopts the balanced scorecard, which is certainly a step in the right direction, staff members are frequently in a dilemma over measures that seem to influence more than one balanced scorecard perspective. Where do I put this measure? Debates go on, and the resolution is often unclear.

Measures do not fit neatly into one perspective or another. In fact, when you find a measure that transcends several perspectives you should get excited, as you are zeroing in on a possible KPI. To illustrate this point, let's look at where late planes in the sky should be reported. Should it be under the customer, financial, or internal process perspective? In fact this measure affects all six perspectives, as shown in Exhibit 2.5.

Exhibit 2.5 How Late Planes Impact Most, If Not All, of the Six Perspectives

Measure | Financial | Customer satisfaction | Staff satisfaction | Innovation & learning | Internal process | Environment & community
Planes in the sky more than two hours late | ✓ | ✓ | ✓ | ✓ | ✓ | Possible

BSC Myth #5: Indicators Are Either Lead (Performance Driver) or Lag (Outcome) Indicators

I am not sure where the lead/lag labels came from, but I do know that they have caused a lot of problems and are fundamentally flawed. The distinction assumes that a measure is either about the past or about the future. It ignores the fact that some measures, in particular KPIs, are about both the past and the future.

I have lost count of the number of times I read Kaplan and Norton's6 original masterpiece trying to understand the lead/lag indicator argument, until I realized that my difficulty in understanding it was the result of flawed logic.

I have presented to thousands of people on KPIs, and I always ask, “Is the late-planes-in-the-air KPI a lead or a lag indicator?” The vote count is always evenly split. The measure has clearly arisen out of past events and will have a major impact on future events: the late arrival will make the plane leave late.

I recommend that we dispense with the terms lag (outcome) and lead (performance driver) indicators. We should instead see measures as past measures, current measures (yesterday's or today's activities, the here and now), or future measures (monitoring, now, the planning and preparation for events or actions that should occur in the future), as shown in Exhibit 2.6.

Exhibit 2.6 Alternative to the Lead/Lag Debate

Past Measures (past week/two weeks/month/quarter) | Current Measures (24/7 and daily) | Future Measures (next day/week/two weeks/month/quarter)
Number of late planes last week/last month | Planes more than two hours late (updated continuously) | Number of initiatives, to be commenced in months one, two, and three, to target the areas causing late planes
Date of last sales visit to key customers | Key customer order cancellations (today) | Date of next visit to key customers and date of next social interaction with key customers
New product sales in last month | Quality defects found today in new products | Number of improvements to new products to be implemented next month, and in months two and three

Current measures are those monitored 24/7 or daily. I include yesterday's activities, as the data may not be available any earlier (e.g., late or incomplete deliveries to key customers made yesterday).

Future measures record when a future commitment or action is to take place (e.g., the date of the next meeting with a key customer, the date of the next product launch, the date of the next social interaction with key customers). In your organization, you will find that your KPIs are either current- or future-oriented measures.

BSC Myth #6: Strategy Mapping Is a Vital Requirement

If strategy maps help management make some sense out of their strategy, then, as working documents, they must be useful. However, I am concerned about the “simplified” use of cause-and-effect relationships, a major component of strategy mapping (see Exhibit 2.7). I believe it has led to the demise of many performance measurement initiatives. From these oversimplified relationships come the strategic initiatives and the cascading performance measures. Strategy mapping, in the wrong hands, can give birth to a monster.


Exhibit 2.7 Strategy Mapping

The “cause and effect” diagrams of strategy mapping, where initiatives and success factors neatly fit into a single balanced scorecard perspective and create one or possibly two cause-and-effect relationships, are full of intellectual thought signifying nothing in many cases. They seem to argue that every action or decision has a predictable effect elsewhere in the organization, and that you can boil cause-and-effect relationships down to just one or two. Jeremy Hope believed that strategy maps are seductive models of how we like to think organizations work and are dangerous weapons in the wrong hands. He summed it up beautifully in his white paper “Hidden Costs”:

“If you think an organization is a machine with levers that you can pull and buttons that you can press to cause a predictable action and counter-action elsewhere (as in a car engine), then cause-and-effect is an idea that works.”

Jeremy Hope, “Hidden Costs” white paper, 2004

These strategy map diagrams are flawed on a number of counts:

  • Success factors do not fit neatly within one perspective; the more important they are, the more perspectives they impact. Some success factors would therefore need to be drawn across the whole page of a strategy map, which is clearly too untidy for the strategy map designers.
  • If you are bright enough, you can argue a totally different causal route for the arrows in your strategy map. Every action a company takes has a myriad of impacts. Restricting yourself to one or two relationships in strategy mapping is at best too simplistic, at worst totally naive.
  • When I ask attendees to map the impact of late planes on the success factors of an airline, they come up with at least twenty impacts. Strategy mapping cannot cope with multiple relationships and thus cannot cope with the reality of day-to-day business.
  • The actions that employees take on a daily basis are influenced by many factors; they cannot be simplified into one or two causal impacts. The secret is to understand which employee actions lead to success or failure and then direct staff to move in the right direction, one consistent with the organization's long-term strategy.

BSC Myth #7: Measures Are Cascaded Down the Organization

This was probably the most damaging process used in the balanced scorecard approach. It assumes that, by analyzing a measure such as “return on capital employed,” you can break it down into a myriad of measures relevant to each team or division.

It also assumes that each and every team leader, with minimal thought, would arrive at relevant performance measures. Kaplan and Norton ignored the crucial fact that team leaders and the senior management team need to know the organization's critical success factors and to understand the potential for a performance measure to have a “dark side,” an unintended consequence.

Having first ascertained the organization's CSFs, it is best to build the balanced scorecard from the ground up, at the team level within operations (level 4 in Exhibit 2.8). It is at the operational team level that KPIs will be found. Find me an accounting team with a winning KPI! Like many support functions, such a team will work with PIs and RIs. This sends a clear message: finish the monthly and annual accounts quickly and spend more time helping the teams who are working directly on the organization's KPIs.


Exhibit 2.8 Interrelated Levels of Performance Measures in an Organization

By cascading up, not down, CEOs are saying that finding the right measures, those that link to the CSFs, is important. It is the El Dorado of management when you have every employee, every day, aligning themselves with the organization's CSFs. Very few organizations have achieved this magical alignment between effort and effectiveness, Toyota being a shining light.

BSC Myth #8: Performance Measures Are Mainly Used to Help Manage Implementation of Strategic Initiatives

The balanced scorecard approach sees the purpose of performance measures as helping to implement the strategic initiatives. It is argued that, in order to implement the strategies, you report and manage the performance measures that best reflect progress, or the lack of it, within the strategic initiatives. With the BSC approach, each team beneath the senior management team (SMT), in turn, then looks for measures consistent with the summary measures the SMT is looking at. In other words, measures cascade down from one another.

While this looks logical, it leads to mayhem. The cascading of measures has led to a myriad of balanced scorecard applications with hundreds of measures in some form of matrix, helping the organization go nowhere quickly.

I do not believe performance measures are on this planet to implement strategies. Performance measures are here to ensure that staff members spend their working hours focused primarily on the organization's critical success factors.

The winning KPI process states:

  • Measures are derived from the critical success factors first and then from the other success factors.
  • There is no cascading down of measures.
  • Monthly measures will never be important to management, as they report progress too late.
  • It is the critical success factors that influence the day-to-day running of the business, not the strategic initiatives.

Exhibit 2.9 shows that strategic initiatives, while their progress will be monitored, are not as fundamental to the business as the day-to-day alignment with the organization's CSFs.


Exhibit 2.9 How Strategy and the CSFs Work Together

The winning KPI methodology states that you derive your measures from the CSFs. Deriving measures from your strategic initiatives instead will create a large number of unimportant measures while largely ignoring the important daily “business as usual” issues.

Many strategic initiatives are controlled by special project teams undertaking secretive work, such as acquiring new operations or technologies. They will monitor their progress through project reporting. These new initiatives will become “business as usual” only when the new business or product is part of daily activities.

While some strategic initiatives will impact directly on “business as usual,” the impact of these initiatives can be better managed by monitoring measures within the CSFs.

Notes