Tuesday, 9 March 2010

An example of a powerful metric in use

Mortality ratios

Yesterday’s posting about performance metrics in the UK’s NHS system provoked much comment and discussion. The BBC’s Panorama investigated the discrepancies between certain hospitals’ own assessments of their performance and independent assessments of the same. In rather too many cases the discrepancies were significant.

One metric that caught my attention was discussed by Professor Brian Jarman of Imperial College London. The hospital standardized mortality ratio compares the number of in-patient deaths expected at a hospital with the number that actually occur. Professor Jarman’s response to the interviewer’s questions was particularly instructive. When asked about the relevance of the metric he replied that it could indicate:
  1. Data had been incorrectly entered
  2. Something was wrong with the calculation
  3. There was a problem with patient care
And he listed these possibilities in that order. In other words, the metric was an indicator that highlighted the need for further investigation; it was not cut-and-dried proof of a problem. He went on to say that in one instance the ratio had been highlighting a problem for ten years.
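
To make the arithmetic concrete, here is a minimal sketch of the calculation in Python, assuming the common formulation in which the ratio is observed deaths divided by expected deaths, scaled so that 100 means deaths were exactly as expected. The figures and names are illustrative assumptions, not NHS data.

  # Standardized mortality ratio, assuming SMR = 100 * observed / expected.
  # "Expected" deaths come from a case-mix risk model; figures are made up.
  def mortality_ratio(observed_deaths: int, expected_deaths: float) -> float:
      if expected_deaths <= 0:
          raise ValueError("expected_deaths must be positive")
      return 100.0 * observed_deaths / expected_deaths

  # Hypothetical hospital: the risk model predicts 80 deaths; 104 occurred.
  print(f"SMR = {mortality_ratio(104, 80.0):.0f}")  # SMR = 130

A figure well above 100 is exactly the sort of flag Professor Jarman describes: a prompt to check the data entry first, then the calculation, and only then the care itself.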

This is how performance metrics are designed to be used:
  • An alert to a potential problem
  • A need to investigate further
  • Trends (upwards, downwards, consistently bad, consistently good) tell you more than any single reading, as the sketch below illustrates.
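
On that last point, a minimal sketch with hypothetical figures: the helper below flags a ratio that has sat above the expected level for several consecutive years, the pattern Professor Jarman described persisting for a decade. The threshold, window, and history are illustrative assumptions.

  # A persistent trend is more telling than any single reading.
  # Threshold of 100 (deaths as expected) and the history are made up.
  def persistently_high(yearly_ratios: list[float], threshold: float = 100.0,
                        min_years: int = 3) -> bool:
      recent = yearly_ratios[-min_years:]
      return len(recent) >= min_years and all(r > threshold for r in recent)

  history = [98.0, 112.0, 109.0, 118.0, 121.0]  # hypothetical yearly ratios
  print(persistently_high(history))  # True: several bad years in a row
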
From my own experience I know that the NHS is not the only example of this mismatch between reported performance and actual performance. If there is any justice for taxpayers this will run and run.
