A Disappointing Article

All that glitters in the fabled New England Journal of Medicine is not gold. A recent article by Dale and colleagues is a masterclass in pleasing-sounding statements and truisms that go precisely nowhere, yet impress the undiscerning reader.[1] They write in favour of using quality metrics to improve care. Then they show that process measures may focus attention on things that can be counted at the expense of more important things that cannot. So they say we should count “what’s important to patients”. Then they point out that in most cases where outcomes are used, the signal will never emerge from the noise – patients value not dying from cancer, but you can never judge your clinician’s performance in screening by cancer death rates. They advocate a ‘balanced mixture’ of measures and advertise their own, but they do not show that they have the right balance. And they admit that using payment to change behaviour is effete, yet they say it is a good idea. The whole thing is a muddle. The truth is, no one knows how to use metrics in performance management. But we advocate task-based (clinical process) measures to ensure that the essentials are in place. We think outcome measures are a poor idea, except for patient satisfaction and perhaps the outcomes of a very small number of highly technical procedures.[2]
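
To make the signal-to-noise point concrete, here is a minimal sketch (all figures below are illustrative assumptions, not data from the article): the standard two-proportion sample-size formula shows how many screened patients would be needed before a difference in cancer death rates could be attributed to the clinician rather than to chance.

```python
# Illustrative sketch of why cancer death rates cannot judge an individual
# clinician's screening performance. All numbers are hypothetical.
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate patients per group needed to detect a difference between
    two proportions (normal approximation, two-sided test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical: 30 cancer deaths per 10,000 patients over a decade at baseline,
# and a clinician whose screening achieves a 20% relative reduction.
baseline, improved = 0.0030, 0.0024
print(f"Patients needed per group: {n_per_group(baseline, improved):,.0f}")
# ~117,000 per group -- orders of magnitude beyond any clinician's list,
# so the outcome signal never emerges from the noise at individual level.
```

No individual clinician sees anything like that many patients, which is why task-based process measures, which turn over far faster, are the better target for performance management.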

— Richard Lilford, CLAHRC WM Director

References:

  1. Dale CR, Myint M, Compton-Phillips AL. Counting Better – The Limits and Future of Quality-Based Compensation. N Engl J Med. 2016; 375(7): 609-11.
  2. Lilford RJ. Risk Adjusted Outcomes – Again! NIHR CLAHRC West Midlands News Blog. 24 April 2015.

3 thoughts on “A Disappointing Article”

  1. Maybe I am “the undiscerning reader”, but isn’t this a bit harsh?
    Firstly, this is a “Perspective” article, not a research submission. I certainly don’t agree with all of the perspectives offered, but a lot of what I understand of the authors’ arguments resonates with me, and also with much of what the Director has written on this topic.
    I particularly agree with the problem, identified by the authors but still evident in our system, of trying to build quality measurement – and, even worse, “Pay for Performance” – by drawing only on existing datasets collected for another purpose. More than a decade ago I led the “Better Metrics” project with the then National Clinical Directors precisely to address this problem, but with limited success.
    Secondly, as the Director has published, outcome measurement, whilst beguiling and necessary for other purposes, can very rarely be used to measure the quality of clinical programmes in real time, and has limited utility in improvement programmes.
    The approach these authors advocate sounds similar to that previously proposed by the Director, based on a locally agreed basket of clinical process measures (impact measures where the evidential link to a subsequent outcome is robust). The basket should include balancing measures to try to identify unintended consequences of an intervention early (for example, where an intervention has an impact on length of stay, tracking unplanned re-admissions would be a balancing measure).
    The authors also recognise the need, still widely ignored across the NHS, to use appropriate methods to distinguish the “signal from the noise”. Board quality reports I see are still littered with meaningless Red-Amber-Green (RAG) ratings based on no understanding of intrinsic variation (a minimal sketch of this distinction follows at the end of this comment).

    I would have thought that a group of clinicians trying to employ basic clinical epidemiology tools to estimate effect size, before acceding to the requests of their commissioners, would be welcomed.

    Finally, in the prevailing culture in which the authors practise, any shift from fee-for-service towards a system which attempts to incentivise improved clinical quality is surely a step in the right direction. Our culture is different, but the QOF, CQUINs and fines for performance seem unlikely to disappear, so trying to improve them seems pragmatic. The reference to the insights from behavioural economics in changing practice is important, not least because this discipline challenges the place of crude financial incentives in designing improvement.
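
    On the signal-versus-noise point above, here is a minimal sketch with made-up monthly figures (not anyone’s real data) of how three-sigma control limits on a p-chart separate common-cause variation from a genuine signal, where a fixed RAG threshold flags both alike:

    ```python
    # Illustrative sketch (hypothetical data): monthly readmission rates judged
    # by a fixed RAG threshold versus three-sigma p-chart control limits.
    from math import sqrt

    events     = [22, 31, 27, 25, 35, 29, 24, 30, 48, 26]   # e.g. readmissions
    admissions = [400, 410, 395, 405, 420, 400, 390, 415, 405, 400]

    p_bar = sum(events) / sum(admissions)  # centre line: overall proportion

    for month, (x, n) in enumerate(zip(events, admissions), start=1):
        p = x / n
        sigma = sqrt(p_bar * (1 - p_bar) / n)  # binomial SE for this month's denominator
        ucl, lcl = p_bar + 3 * sigma, max(0.0, p_bar - 3 * sigma)
        rag = "RED" if p > 0.07 else "GREEN"   # arbitrary fixed threshold
        spc = "signal" if (p > ucl or p < lcl) else "common-cause"
        print(f"Month {month:2d}: {p:.3f}  RAG={rag:5s}  SPC={spc}")
    # Half the months breach the arbitrary threshold, but only month 9 breaches
    # the control limits; the rest is intrinsic (common-cause) variation.
    ```

    Half of these hypothetical months would be painted red on a RAG report, yet only one represents special-cause variation worth investigating.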

  2. I’m with Dr Crump. But I’m really writing to ask the Director: What does “effete” mean in this context? As in “using payment to change behaviour is effete”.

    Surely you don’t mean ineffective? It works for me every day.
