Why measures often mislead
Here’s a task. Put one leg in a bucket of boiling water and the other in a bucket of freezing water. On average, it’s the perfect temperature. Herein lies a major issue with measurement that I commonly observe: the belief that aggregated data is an insightful measure of performance. Of course, aggregation has some value as a high-level performance indicator, but without interrogating the data beneath it, it can be very misleading.
A powerful illustration is Simpson’s Paradox, a concept I regularly explain in client assignments: a trend that appears in several separate groups of data can disappear, or even reverse, when those groups are combined.
As a true example, a university in the USA was taken to court by a young woman who claimed gender bias, on the basis that the annual admissions data showed that significantly more boys than girls were being admitted. Sounds like a fair claim, yes?
However, analysis of the data showed that girls were generally applying for the most competitive courses, whereas boys were more attracted to the less competitive ones. In reality, a higher proportion of girls than boys was being admitted to both the competitive and the less competitive courses, but when the numbers were aggregated, more boys were admitted overall. Simpson’s Paradox.
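The reversal is easy to see with numbers. Here is a minimal sketch with invented, illustrative figures (not the real case data) showing how a group can have the higher admission rate in every course yet the lower rate in aggregate:

```python
# Hypothetical admission figures (illustrative only, not the real case data).
# Each tuple is (applied, admitted).
data = {
    "competitive":      {"girls": (800, 160), "boys": (200, 30)},
    "less competitive": {"girls": (200, 180), "boys": (800, 640)},
}

def rate(applied, admitted):
    """Admission rate as a fraction of applicants."""
    return admitted / applied

# Per-course rates: girls lead in BOTH courses.
for course, groups in data.items():
    for group, (applied, admitted) in groups.items():
        print(f"{course:16s} {group:5s} {rate(applied, admitted):.0%}")

# Aggregated rates: boys appear to lead overall.
for group in ("girls", "boys"):
    applied = sum(data[c][group][0] for c in data)
    admitted = sum(data[c][group][1] for c in data)
    print(f"overall          {group:5s} {rate(applied, admitted):.0%}")
```

With these figures, girls are admitted at 20% vs 15% on the competitive course and 90% vs 80% on the less competitive one, yet overall at 34% vs 67%, because far more girls applied to the course that rejects most applicants.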
So, whenever I am shown aggregated data (which, on most scorecards I see, tends to be color-coded green, as with most objectives/KPIs, but that’s another blog), I always ask, “But what does this mean?” Typically the answer I get is, “It shows we are performing well.” I then explain that maybe it does, maybe it doesn’t: I have no idea without looking at the underlying data.
The importance of analysis
And herein lies another major issue with measurement: the belief that the reported KPI score is sufficient information for decision-making purposes. It is not. The top-level KPI “number” does not provide the full picture of performance; it is only an “indicator” of performance. Once organizations have collected data, they must analyze it before they can work out what it means and how they may need to change things to improve the likelihood of success against key strategic goals. Too often, organizations simply collect and distribute performance data without conducting any meaningful analysis, or any analysis at all. Performance management analytics provide the tools and techniques that enable organizations to convert their performance data into relevant information and knowledge. Without this, the whole performance management exercise is of little or no value to the organization.
So, when I work with organizations to build scorecards, I always stress that those who work with measures need to understand at least the basics of how measures work. In assignments, I often have to explain the importance of understanding confidence levels and intervals when using surveys. This is not rocket science.
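To make the point concrete, here is a minimal sketch of what a survey confidence interval looks like, using the standard normal-approximation (Wald) interval for a proportion; the respondent numbers are invented for illustration:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a survey
    proportion. z=1.96 corresponds to a 95% confidence level."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# Hypothetical survey: 360 of 500 respondents rate the service "good".
low, high = proportion_ci(360, 500)
print(f"72% satisfied, 95% CI: {low:.1%} to {high:.1%}")
```

The point for scorecard users: a reported “72% satisfied” from 500 respondents really means “somewhere between roughly 68% and 76%, with 95% confidence”, so a quarter-on-quarter move of a point or two may be nothing more than sampling noise.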
With the basics understood, I then progress to the basics of analytics; in time, organizations can mature to a more advanced understanding of measurement and analytics. But understanding even the basics means they do not spend endless time collecting KPIs and then providing commentary that is at best of limited value or, not uncommonly, downright dangerous, as the so-called analysis leads to strategic, and often expensive, improvement interventions that fail to address the problem (and perhaps exacerbate it).
It remains a mystery to me how organizations that are, as is typically the case, obsessed with measurement do not invest the time and money to teach those who work with measures even the basics of the underpinning science. This is an issue we need to address.
As always feedback is welcome.