Why do KPIs need to be regularly reviewed?
Most leadership teams rely on key performance indicator (KPI) reports, produced weekly or monthly or accessed via a data reporting platform. Board members use this data to hold managers to account, and Executive teams use it to assess how well policies and strategies are being delivered. The not unreasonable assumption is that the data is correct.
Over the last couple of years, we have come across a number of Executive team and Board level contacts who have discovered that the numbers being reported to them in their monthly KPI reports were not a true reflection of actual performance. From completeness of mandatory training in care and healthcare situations, to compliance on servicing and repairs in housing associations, to fundraising activity in charities, later analysis has shown that the data being reported was incomplete, misleading or plain wrong.
If Executive team or Board members cannot rely on the data presented to them, they are clearly unable to fulfil the function they are there to perform. Why does this happen, and what can be done about it?
What can go wrong?
Our experience of working with many different types of organisation shows that KPIs can go wrong in three ways, and often in a combination of two, or all three, of these errors.
- The performance indicators are technically unsound
– The wrong things are being measured, they are being measured in the wrong way, or they are being reported in the wrong way.
- The performance indicators are culturally unsound
– The measures encourage teams to manipulate the process to create good-looking numbers; teams feel forced, or encouraged, to manipulate numbers to create apparent success; or failure is deemed ‘unacceptable’ and therefore goes unreported.
- Performance is bad, but is not recognised as such, or nothing is done about it
– Managers do not recognise or identify problems early enough, or just fail to react to them.
Very few teams set out to deliberately lie to their leaders, but the pressure to deliver good numbers, or sometimes just any numbers, makes people do things that, in the cold light of day, look all wrong. This is often coupled with a misunderstanding from leaders about what impact the addition of a new measure or KPI has on the teams they are managing. Leaders tend to assume that the addition of a new target will focus staff on improving performance in that area. However, it often just focuses people on making sure that they achieve the target, even if the base level of performance does not improve. Why would they do that?
Well, the sad fact for leaders is that,
‘Just creating a new target does not automatically create the means of achieving that target’.
If leaders expect a new target to improve performance simply by coming into existence, they must assume either that staff did not know what ‘good’ looked like, or that they were not trying to deliver a good service in the first place. The first is often the case and can be addressed by work on ‘purpose’ – described later in this piece. The second is generally not true, unless the corporate culture is already poor, in which case adding a new target is unlikely to change behaviours or culture in a positive way.
Let us take a classic example from the housing sector, which is the time taken to relet a property when a previous tenant has moved out, known as the void period. Voids have a direct impact on the income of a housing provider, and on people waiting to move into a property, and so average void time is a universal target/KPI in housing. Staff and managers are under pressure to drive down the reported average voids period.
Many providers we know have a specific target for properties classified as ‘minor voids’. Other voids, the ‘major voids’, are deemed more complex and so are given longer for completion. What could possibly go wrong? In one provider, the target for the average minor voids period was ten days, and performance was consistently reported to leaders and the board as hitting that target. What was missing was that the proportion of major voids had grown from 10% of total properties to over 50% as real performance deteriorated. Leaders were blissfully ignorant because the wrong things were being measured in the wrong way, and pressure to hit the target led staff to flex the definition of ‘complex’.
In another organisation, where minor voids were due to be completed in an average of ten days and major voids in twenty, the defining factor was the cost of the work: under £5,000 meant the void was minor, over £5,000 major. In our diagnostic we found many examples of properties where a need to replaster walls was identified on day 9 of the void. Plastering is expensive, and took the properties over the £5,000 limit, giving the team another ten days to complete the void. The net effect for the organisation was that customers had to wait longer for new properties, the cost of each void was higher, and the provider lost rent for another ten days.
Finally, where performance on both minor and major voids was inadequate, we came across one organisation whose property services team created a category of ‘policy void’, where a policy decision had to be made about the future of that property. The voids period of properties in this category was not reported to leaders or the board. So what happened? Yes, more and more properties were categorised as policy voids and disappeared completely from the KPIs.
In our work on voids, we generally recommend moving to a few simple measures: the total rent loss in that week due to empty properties, the cost of voids work, and customer satisfaction. We drop the major/minor distinction and make sure that everyone is focused on getting the right customers into the right properties as soon as possible, at minimum cost.
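To make the shift concrete, here is a minimal sketch of how those simple measures could be computed from a void register. All record fields, names and figures are illustrative assumptions, not any provider's actual data model.

```python
# Hypothetical void records; field names and values are invented for illustration.
voids = [
    {"property": "12 Oak St",  "weekly_rent": 120.0, "empty": True,  "works_cost": 3200.0},
    {"property": "7 Elm Rd",   "weekly_rent": 95.0,  "empty": True,  "works_cost": 5400.0},
    {"property": "3 Birch Cl", "weekly_rent": 110.0, "empty": False, "works_cost": 2100.0},
]

def weekly_rent_loss(voids):
    """Total rent lost this week across all currently empty properties."""
    return sum(v["weekly_rent"] for v in voids if v["empty"])

def total_works_cost(voids):
    """Total spend on voids work - no major/minor split left to game."""
    return sum(v["works_cost"] for v in voids)

print(weekly_rent_loss(voids))   # 215.0
print(total_works_cost(voids))   # 10700.0
```

Because rent loss and works cost are simple sums over every property, there is no classification boundary for staff to flex: reclassifying a void changes nothing in the reported numbers.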
In a charity, we came across a team that wanted to raise funds by engaging the public in fundraising activity. In order to drive the right behaviours, it was felt that the correct measure would be the number of people engaged by each regional team, the belief being that the more people engaged, the more money would be raised. Unfortunately, it led to staff signing up as many people as possible, irrespective of how much they were actually likely to raise, rather than focusing on the people who would generate the most funds. Performance became a numbers game built around the number of names on a list, rather than the funds actually raised. Unsurprisingly, fundraising dipped and staff morale dropped: staff knew they were doing the wrong things, but felt forced to do them by the reporting regime in place. The measure was changed to a basket of ‘activity’ measures based on definitions of the right things to do, agreed between teams and leaders, plus the important measure of funds actually raised.
How do we start?
Developing a coherent and effective performance management system is not something that can be done piecemeal, with each team or each corporate level creating the metrics that they think will be useful for them or necessary for the Exec team or Board to review. A capable performance management system has to be built from the top down – what do the Board and Exec care about? How do these corporate priorities translate into divisional and team priorities? From that, we can identify the true purpose of each team and the important measures for the team.
Why talk about Purpose?
Ad Esse always starts work on a team’s performance measures by checking that the team understands its purpose (and that it fits with the leadership’s understanding). Once this is defined, we move to the measures that will tell the team they are achieving that purpose, and then to the performance measures that will tell the team they are doing the right things in the right way on a day-to-day basis. For example, a compliance team might say ‘Our purpose is to keep our customers safe and to ensure the organisation can demonstrate that we are meeting our legal obligations’. This would then lead to the creation of a basket of measures that demonstrates all of this. Starting with purpose would lead a team to say ‘we are here to help and encourage supporters to raise funds’ rather than ‘we are here to sign up as many people as supporters as possible’.
Building the measures
As we build the measures and metrics to be used by each team, we need to ask a series of questions about the proposed basket of measures for each process or team:
- Are the measures technically sound? Do they reflect what is important and will they tell us the whole story about performance?
– Beware of only measuring one dimension of what is important. If you are looking at a compliance regime to see if something has been tested on time, also look at whether the actions resulting from the test are recorded and acted upon. There is no point having a perfect test regime if you have hundreds of outstanding actions waiting to be completed.
– If we are measuring customer satisfaction with a service, do we get feedback from successful and unsuccessful service users? We may have major problems that we are unaware of if we discount the opinions of those who did not make it through the process.
– Are some of the items being processed missing? Like the voids examples earlier, do we know how many things went into the process and how many our KPI is reporting performance on? Tracking numbers in and out of processes as part of the KPIs is essential so that the Executive team and board can check the completeness of their measures.
- Are the measures culturally sound? Can we identify any behaviours that may result from the measure or target that will drive some of the wrong as well as right behaviours?
– Is the target achievable with the current process? If not, what do you think is going to change to allow staff to hit the new target? If you don’t know, then don’t be surprised if staff find a way of improving the metric that does not actually result in improved performance.
– Avoid league-table targets. Putting someone at the top or bottom of a list only makes staff think about how to move up the list, rather than how to improve service or performance.
– Again, beware of a single measure being tracked to the exclusion of anything else. This will always skew behaviour of staff and lead to unexpected consequences.
– Think about measures along the whole process. What do you need to know to tell you that the right things are being done, rather than just that a KPI has improved?
– Finally, use as much customer insight as you can. Manipulating metrics rarely fools customers, and the more information you gather from them, the better the sense-check you can get on your own internal metrics.
- Do we know how to react when the metrics are telling us that something is going wrong?
– Who is monitoring the numbers, who has the ownership of process performance and is empowered to raise the alarm and make changes if performance dips? If this is not clear then your KPIs serve no purpose at all. Make sure that staff and managers understand what the numbers are telling them, and that they know how to respond if it changes. All too often we have gone through KPI reports with managers only to find that they do not really understand what the numbers mean, or how they relate to the fundamentals of service, satisfaction and efficiency.
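The completeness question above – did everything that entered the process actually appear in the KPI? – can be sketched as a simple reconciliation. The identifiers and figures below are invented for illustration, echoing the ‘policy voids’ example where items vanished from reporting.

```python
def completeness_check(entered_ids, reported_ids):
    """Compare what went into the process with what the KPI reports on.
    Anything entered but not reported is a gap leaders should see."""
    entered, reported = set(entered_ids), set(reported_ids)
    missing = entered - reported
    coverage = len(entered & reported) / len(entered) if entered else 1.0
    return {"missing": sorted(missing), "coverage": coverage}

# Illustrative IDs: five voids started in the period, but two (say, the
# unreported 'policy voids') never appeared in the KPI report.
result = completeness_check(
    entered_ids=["V1", "V2", "V3", "V4", "V5"],
    reported_ids=["V1", "V2", "V3"],
)
print(result)  # {'missing': ['V4', 'V5'], 'coverage': 0.6}
```

Reporting the coverage figure alongside the headline KPI gives the Executive team and board a direct way to check the completeness of what they are being shown.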
Measurement is hard. But don’t give up. A bit of thought and an understanding of purpose and the frailties of human behaviour can help you design a much more effective suite of KPIs.