
An Interesting Report of Quality of Care Enhancement Strategies Across England, Germany, Sweden, the Netherlands, and the USA

An interesting paper from the Berlin University of Technology compares quality enhancement systems across the above countries with respect to measuring, reporting, and rewarding quality.[1] The paper is an excellent resource for policy and health service researchers. Of the five countries, the US has the most developed system of quality-related payments (pay-for-performance, P4P). England wisely uses only process measures to reward performance, while the US and Germany also include patient outcomes. The latter are unfair because of signal-to-noise issues,[2] and the risk-adjustment fallacy.[3] [4] Above all, remember Lilford’s axiom: never base rewards or sanctions on a measurement over which service providers do not feel they have control.[5] It is true, as the paper argues, that rates of adherence to a single process seldom correlate with outcome. But this is itself a signal-to-noise problem: ‘proving’ that a process is valid takes a huge RCT, even when the process is applied to 0% (control arm) vs. approaching 100% (intervention arm) of patients. So how could an improvement from, say, 40% to 60% in adherence to a clinical process show up in routinely collected data?[6] I have to keep on saying it: collect outcome data, but when rewarding or penalising institutions on the basis of comparative performance, it is process, process, process.
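
To put rough numbers on that argument, here is a minimal Python sketch using a standard two-proportion sample-size formula. All of the inputs (an 8% baseline mortality and a relative risk of 0.8 for the process) are invented for illustration and do not come from the paper; the point is only the ratio between the two sample sizes.

```python
# Minimal sketch: why a 40% -> 60% rise in process adherence is nearly
# invisible in routine outcome data. All clinical numbers are hypothetical.
from math import ceil
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm to detect p1 vs p2 (two-sided test)."""
    z = NormalDist()
    z_a, z_b = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

p_without = 0.08   # assumed mortality when the process is NOT applied
p_with = 0.064     # assumed mortality when it IS applied (relative risk 0.8)

# The 'proving' RCT: 0% adherence (control) vs ~100% adherence (intervention).
print(n_per_arm(p_without, p_with))     # ~4,100 patients per arm

# Routine data: adherence moves from 40% to 60%, so each group's mortality
# is a mixture of treated and untreated patients.
def mix(adherence: float) -> float:
    return adherence * p_with + (1 - adherence) * p_without

print(mix(0.40), mix(0.60))             # 0.0736 vs 0.0704
print(n_per_arm(mix(0.40), mix(0.60)))  # ~102,000 per arm, roughly 25x more
```

Diluting the risk difference five-fold inflates the required sample size roughly 25-fold, which is why the signal from a realistic improvement in adherence disappears into the noise of routinely collected outcomes.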

— Richard Lilford, CLAHRC WM Director

References:

  1. Pross C, Geissler A, Busse R. Measuring, Reporting, and Rewarding Quality of Care in 5 Nations: 5 Policy Levers to Enhance Hospital Quality Accountability. Milbank Q. 2017; 95(1): 136-83.
  2. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21: 1052-6.
  3. Mohammed MA, Deeks JJ, Girling A, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ. 2009; 338: b780.
  4. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ. 2010; 340: c2016.
  5. Lilford RJ. Important evidence on pay for performance. NIHR CLAHRC West Midlands News Blog. 20 November 2015.
  6. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.

Another Potential Problem with SMRs

Readers of this CLAHRC WM News Blog will know that the Director has pointed out the limitations of standardised mortality ratios (SMRs) for a decade. He has explicated the case-mix adjustment fallacy,[1] the constant risk-adjustment fallacy,[2] and the signal-to-noise issue.[3] Now another potential problem with case-mix adjustment has come to light: Simpson’s paradox. This paradox arises when an association found in each of several groups is reversed once the groups are aggregated. It can happen when baseball batters are compared. Consider a scenario where batter 1 has many more at-bats than batter 2 in year one, and vice versa in year two. Batter 1 can then have a better batting average in both years, yet a lower average when the two years are simply pooled. In a brilliant editorial, Drs Perla Marang-van de Mheen and Kaveh Shojania show how the same reversal can happen when outcomes are aggregated over doctors and hospitals.[4] Simpson’s paradox would also arise in meta-analyses if all the good and bad outcomes were simply added up before applying a statistical test. Of course, standard meta-analytic methods avoid this problem, and the Director wonders whether there is an analogous statistical approach that could be used in baseball and in the comparison of SMRs.
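
The reversal is easy to reproduce with made-up numbers. In the short Python sketch below (all hit and at-bat counts are invented), batter 1 has the better batting average in each year taken separately, yet the worse average once the two years are pooled, because his at-bats are concentrated in his weaker year:

```python
# Simpson's paradox with two batters; all counts are invented for illustration.
seasons = {
    "year 1": {"batter 1": (25, 100), "batter 2": (2, 10)},   # (hits, at-bats)
    "year 2": {"batter 1": (4, 10),   "batter 2": (35, 100)},
}

# Within each year, batter 1 is ahead.
for year, batters in seasons.items():
    for name, (hits, at_bats) in batters.items():
        print(f"{year} {name}: {hits / at_bats:.3f}")
# year 1: batter 1 .250 vs batter 2 .200
# year 2: batter 1 .400 vs batter 2 .350

# Pooled across years, the ranking reverses.
for name in ("batter 1", "batter 2"):
    hits = sum(seasons[year][name][0] for year in seasons)
    at_bats = sum(seasons[year][name][1] for year in seasons)
    print(f"overall {name}: {hits / at_bats:.3f}")
# overall: batter 1 .264 vs batter 2 .336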

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet. 2004; 363(9415): 1147-54.
  2. Mohammed MA, Deeks JJ, Girling A, Rudge G, Carmalt M, Stevens AJ, Lilford RJ. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ. 2009; 338: b780.
  3. Girling A, Hofer TP, Wu J, Chilton P, Nicholl J, Mohammed MA, Lilford RJ. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  4. Marang-van de Mheen P, Shojania KG. Simpson’s paradox: how performance measurement can fail even with perfect risk adjustment. BMJ Qual Saf. 2014; 23: 701-5.