Tag Archives: Case note review

Measuring Quality of Care

Measuring quality of care is not a straightforward business:

  1. Routinely collected outcome data tend to be misleading because of very poor ratios of signal to noise.[1]
  2. Clinical process (criterion based) measures require case note review and miss important errors of omission, such as diagnostic errors.
  3. Adverse events also require case note review and are prone to measurement error.[2]

Adverse event review is widely practised, usually involving a two-stage process:

  1. A screening phase (sometimes looking for warning features, or ‘triggers’).
  2. A definitive phase to drill down in more detail and confirm or refute (and classify) the event.

A recent HS&DR report [3] is important for two particular reasons:

  1. It shows that a one-stage process is as sensitive as the two-stage process. So triggers are not needed; just as many adverse events can be identified if notes are sampled at random.
  2. In contrast to (other) triggers, deaths really are associated with a high rate of adverse events (apart, of course, from the death itself). In fact not only are adverse events more common among patients who have died than among patients sampled at random (nearly 30% vs. 10%), but the preventability rates (probability that a detected adverse event was preventable) also appeared slightly higher (about 60% vs. 50%).

This paper has clear implications for policy and practice, because if we want a population ‘enriched’ for high adverse event rates (on the ‘canary in the mineshaft’ principle), then deaths provide that enrichment. The widely used trigger tool, however, serves no useful purpose – it does not identify a higher-than-average risk population, and it is more resource-intensive. It should be consigned to history.

Lastly, England and Wales have mandated a process of death review, and the adverse event rate among such cases is clearly of interest. A word of caution is in order here. The reliability (inter-observer agreement) in this study was quite high (Kappa 0.5), but not high enough for comparisons across institutions to be valid. If cross-institutional comparisons are required, then:

  1. A set of reviewers must review case notes across hospitals.
  2. At least three reviewers should examine each case note.
  3. Adjustment must be made for reviewer effects, as well as prognostic factors.

The statistical basis for these requirements is laid out in detail elsewhere.[4] It is clear that reviewers should not review notes from their own hospitals, if any kind of comparison across institutions is required – the results will reflect the reviewers rather than the hospitals.
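For readers unfamiliar with the kappa statistic quoted above, it compares the observed agreement between two reviewers with the agreement expected by chance alone. A minimal sketch on made-up verdicts (illustrative only, not data from the study):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    # Chance agreement: probability both raters independently pick the same category
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical verdicts on 10 case notes (1 = adverse event, 0 = none)
reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
reviewer_2 = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # 0.4
```

A kappa of 0.5 thus means agreement is roughly halfway between chance and perfect, which is why averaging over several reviewers is needed before institutions can be compared.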

— Richard Lilford, CLAHRC WM Director


  1. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  2. Lilford R, Mohammed M, Braunholtz D, Hofer T. The measurement of active errors: methodological issues. Qual Saf Health Care. 2003; 12(s2): ii8-12.
  3. Mayor S, Baines E, Vincent C, et al. Measuring harm and informing quality improvement in the Welsh NHS: the longitudinal Welsh national adverse events study. Health Serv Deliv Res. 2017; 5(9).
  4. Manaseki-Holland S, Lilford RJ, Bishop JR, Girling AJ, Chen YF, Chilton PJ, Hofer TP; UK Case Note Review Group. Reviewing deaths in British and US hospitals: a study of two scales for assessing preventability. BMJ Qual Saf. 2016. [ePub].

A Good Summary on Preventable Death

Identifying preventable deaths is an obvious target for quality improvement. But how to do it – case-note review, routine data, or proxy measures? For an overview of the problems, see a recent succinct summary by Helen Hogan.[1] Case note review suffers from poor reliability, and summary statistics from poor signal-to-noise ratios. The CLAHRC WM Director has long argued for proxy measures in the form of adherence to evidence-based tenets of good care – that is to say, clinical process measures.[2]

— Richard Lilford, CLAHRC WM Director


  1. Hogan H. The problem with preventable deaths. BMJ Qual Saf. 2016; 25: 320-3.
  2. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 3. End points and measurement. Qual Saf Health Care. 2008; 17: 170-7.

Patient Safety Really is Improving

Research carried out by CLAHRC WM colleagues showed, mainly on the basis of process measures, that hospital care in the UK became safer over the ‘Blair Decade’.[1] [2] Now an even larger Dutch study, 2005-2013,[3] has produced corroborating findings with respect to adverse events. Both studies were based on case-note review. The Dutch study found an approximately one-third reduction in adverse events on retrospective review of nearly 16,000 case-notes. So, there are now two separate studies that have used a consistent methodology over time and both suggest that care is becoming safer. This is probably the result of national initiatives and diffusion of safety ideas among clinicians. Indeed one of the reasons put forward for failure to find a statistically significant effect from the Safer Patients Initiative in the UK was the system-wide temporal trend, or ‘rising tide’.[4] There are good arguments to conduct a further follow-up of safety in UK hospitals to see if the improvement noted over the first decade of the millennium has been sustained. This might be the last chance, since case-note review may become more difficult as the future case record is fragmented across hospital IT systems.

— Richard Lilford, CLAHRC WM Director


  1. Benning A, Ghaleb M, Suokas A, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ. 2011; 342: d195.
  2. Benning A, Dixon-Woods M, Nwulu U, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ. 2011; 342:d199.
  3. Baines R, Langelaan M, de Bruijne M, Spreeuwenberg P, Wagner C. How effective are patient safety initiatives? A retrospective patient record review study of changes to patient safety over time. BMJ Qual Saf. 2015; 24: 561-71.
  4. Chen Y, Hemming K, Stevens AJ, Lilford RJ. Secular trends and evaluation of complex interventions: the rising tide phenomenon. BMJ Qual Saf. 2015. [ePub].

Ranking Hospitals on Preventable Deaths – an Article that Everyone Should Read

The UK government plans to rank hospitals on avoidable mortality based on case reviews of 2,000 deaths in English hospitals each year. They plan to use the method developed by Hogan et al.,[1] which designates a death as preventable when the reviewer concludes that the probability that the death could have been prevented exceeds 50% (P>0.5).

A recent article [2] criticises the use of a single threshold of preventability (e.g. P < 0.5 vs. P ≥ 0.5) to determine whether or not a death was preventable. CLAHRC WM, in collaboration with Tim Hofer of the University of Michigan, has advocated the use of a six-point Likert scale or a sliding scale to overcome the loss of information from dichotomising preventability.[3]
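The information loss from dichotomising can be seen in a small worked example (all scores hypothetical): two hospitals can have identical counts of ‘preventable’ deaths under the P > 0.5 rule while differing markedly on the underlying continuous judgements.

```python
# Hypothetical reviewer scores: judged probability that each death was
# preventable. Dichotomising at 0.5 discards the strength of the judgement.
hospital_a = [0.55, 0.55, 0.55, 0.10, 0.10]  # three borderline cases
hospital_b = [0.95, 0.95, 0.95, 0.10, 0.10]  # three near-certain cases

def dichotomised(scores):
    """Count deaths classed as preventable under the P > 0.5 rule."""
    return sum(s > 0.5 for s in scores)

def mean_score(scores):
    """Average preventability on the continuous scale."""
    return round(sum(scores) / len(scores), 2)

# Both hospitals have 3 'preventable' deaths under the threshold rule,
# yet their mean preventability scores differ markedly.
print(dichotomised(hospital_a), mean_score(hospital_a))  # 3 0.37
print(dichotomised(hospital_b), mean_score(hospital_b))  # 3 0.61
```

A Likert or sliding scale preserves exactly the gradation that the threshold throws away.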

While preventable mortality rates provide real information on hospital performance, reviewer reliability is rather low (i.e. inter-observer variability is high).[3] This means that the signal can easily be obscured by noise.[4] Further modelling work may shed light on the extent to which this reduces the accuracy of league table approaches to identifying outliers. Meanwhile, it is clear that while measuring preventable deaths overcomes some of the problems associated with measuring all deaths,[5] it is nevertheless no panacea. Measurement of preventability should probably be used as a learning tool, rather than as a performance metric.
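The noise problem can be illustrated with a toy simulation (all numbers hypothetical): even before any reviewer disagreement is added, binomial sampling noise alone tends to scramble a league table of hospitals whose true preventable-death rates span a plausible range.

```python
import random

random.seed(1)  # reproducible toy example

# Hypothetical 'true' preventable-death proportions for 10 hospitals
true_rates = [0.03 + 0.005 * i for i in range(10)]  # 3% up to 7.5%

def observed_rank(rates, deaths_reviewed=100):
    """Rank hospitals by the share of reviewed deaths judged preventable,
    with binomial sampling noise only (reviewer disagreement would add more)."""
    observed = [
        sum(random.random() < p for _ in range(deaths_reviewed)) / deaths_reviewed
        for p in rates
    ]
    return sorted(range(len(rates)), key=lambda i: observed[i])

# The true ordering is 0..9; with ~100 reviewed deaths per hospital the
# observed league table is typically scrambled.
print(observed_rank(true_rates))
```

Adding reviewer measurement error on top of this sampling noise would degrade the ranking further, which is the core of the argument against league tables of preventable deaths.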

— Richard Lilford, CLAHRC WM Director


  1. Hogan H, Healey F, Neale G, et al. Preventable deaths due to problems in care in English acute hospitals: a retrospective case record review study. BMJ Qual Saf. 2012; 21(9): 737-45.
  2. Abel G, Lyratzopoulos G. Ranking hospitals on avoidable death rates derived from retrospective case record review: methodological observations and limitations. BMJ Qual Saf. 2015; 24: 554-7.
  3. Manaseki-Holland S, Lilford RJ, Bishop J, et al. Reviewing deaths in British and US hospitals: a study of case-note reviews. 2015. [Submitted].
  4. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  5. Lilford R & Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ. 2010; 340: c2016.

Accuracy of the Recording of Cause of Death in Hospitals is Low

Prosaic as it might sound, the above article is unusually important [1] – the medical examiner system in England will now examine case notes of many deaths, and case note review is the only real method of assessing the technical quality of medical care – so-called trigger tools miss most of the important stuff. This systematic review finds that between half and three-quarters of stated causes of death are wrong. This is not surprising, given that implicit case note review has high measurement error. The gold standard here is an external systematic review, such as the proposed medical examiner system. Even such a method misses many of the real causes; post-mortem is the ultimate standard. The authors provide extensive guidance for future investigators proposing case note reviews.

— Richard Lilford, CLAHRC WM Director


  1. Rampatige R, Mikkelsen L, Hernandez B, Riley I, Lopez AD. Systematic review of statistics on causes of deaths in hospitals: strengthening evidence for policy makers. Bull World Health Organ. 2014; 92: 807-16.

Preventable hospital deaths and other measures of safety

Readers of this blog may well know the views of the CLAHRC WM Director on using hospital mortality to compare hospital safety.[1] [2] Following the recommendations in the Keogh review, published in 2013, there was greater interest in looking at preventable hospital deaths in order to improve the NHS.

Helen Hogan and colleagues have recently published findings of a retrospective case record review that looked for relationships between preventable hospital deaths and eight other measures of safety in ten English acute hospital trusts.[3] Of the eight measures of safety they examined, only MRSA bacteraemia rate had a significant association with the proportion of preventable deaths (P<0.02). The Hospital Standardised Mortality Ratio (HSMR), widely used in the UK to measure safety, was not significantly associated (P=0.97). Additionally, the difference in the proportion of preventable deaths between hospitals, which varied from 3–8%, was not statistically significant (P=0.94). The authors are planning a larger study, with 24 additional UK hospitals, to confirm these findings.

— Richard Lilford, CLAHRC WM Director


  1. Girling AJ, Hofer TP, Wu J, Chilton PJ, Nicholl JP, Mohammed MA, Lilford RJ. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  2. Lilford RJ, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ. 2010; 340: c2016.
  3. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Relationship between preventable hospital deaths and other measures of safety: an exploratory study. Int J Qual Health Care. 2014; 26(3): 298-307.