Tag Archives: Hospital deaths

Patient’s experience of hospital care at weekends

The “weekend effect”, whereby patients admitted to hospital at weekends appear to experience higher mortality than patients admitted on weekdays, has received substantial attention from the health service community and the general public alike.[1] Evidence of the weekend effect was used to support the NHS’s introduction of the ‘7-day Services’ policy and the associated changes to junior doctors’ contracts,[2-4] which have further propelled debate about the nature and causes of the weekend effect.

Members of CLAHRC West Midlands are closely involved in the HiSLAC project,[5] an NIHR HS&DR Programme-funded project led by Professor Julian Bion (University of Birmingham) to evaluate the impact of introducing 7-day consultant-led acute medical services. We are undertaking a systematic review of the weekend effect as part of the project,[6] and one of our challenges is keeping up with a literature that is growing rapidly, fuelled by public and political attention. Although hundreds of papers on this topic have been published, there has been a distinct gap in the academic literature: most focus on comparing hospital mortality rates between weekends and weekdays, but virtually no study has quantitatively compared patients’ experience and satisfaction between weekends and weekdays. That was the case until we found a study recently published by Chris Graham of the Picker Institute, who had unique access to data not in the public domain, namely the dates of admission to hospital given by survey respondents.[7]

This interesting study examined data from two nationwide surveys of acute hospitals in England in 2014: the A&E department patient survey (39,320 respondents; 34% response rate) and the adult inpatient survey (59,083 respondents; 47% response rate). Patients admitted at weekends were less likely to respond than those admitted on weekdays, but this was accounted for by patient and admission characteristics (e.g. age group). Contrary to what hospital mortality rates would suggest about care quality, respondents attending hospital A&E departments at weekends actually reported better experiences with regard to ‘doctors and nurses’ and ‘care and treatment’ than those attending on weekdays. Patients admitted to hospital through A&E at weekends also rated the information given to them in A&E more favourably. No other significant differences in reported patient experience were observed between weekend and weekday A&E visits and hospital admissions.[7]

As always, some caution is needed when interpreting these intriguing findings. First, as the author acknowledged, patients who died following A&E visits or admissions were excluded from the surveys, so their experiences were not captured. Second, although potential differences in case mix (including age, sex, urgency of admission, the need for a proxy to complete the survey, and the presence of long-term conditions) were taken into account, the statistical adjustment did not include important factors such as main diagnosis and disease severity, which could confound patient experience. Readers may wonder whether adjusting for these factors could overturn the findings; even if not, the mechanism by which weekend admission might lead to improved satisfaction is unclear. It is possible that patients’ expectations of hospital care differ by day of the week, so they may rate the same level of care differently. The findings of this study are certainly a valuable addition to the growing literature that is beginning to unravel the complexity behind the weekend effect, and further testament that measuring care quality on mortality rates alone is unreliable and certainly insufficient, a point long highlighted by the Director of CLAHRC West Midlands and other colleagues.[8] [9] Our HiSLAC project continues to collect and examine qualitative,[10] quantitative,[5] [6] and economic [11] evidence related to this topic, so watch this space!

— Yen-Fu Chen, Principal Research Fellow


  1. Lilford RJ, Chen YF. The ubiquitous weekend effect: moving past proving it exists to clarifying what causes it. BMJ Qual Saf 2015;24(8):480-2.
  2. House of Commons. Oral answers to questions: Health. 2015. House of Commons, London.
  3. McKee M. The weekend effect: now you see it, now you don’t. BMJ 2016;353:i2750.
  4. NHS England. Seven day hospital services: the clinical case. 2017.
  5. Bion J, Aldridge CP, Girling A, et al. Two-epoch cross-sectional case record review protocol comparing quality of care of hospital emergency admissions at weekends versus weekdays. BMJ Open 2017;7:e018747.
  6. Chen YF, Boyal A, Sutton E, et al. The magnitude and mechanisms of the weekend effect in hospital admissions: a protocol for a mixed methods review incorporating a systematic review and framework synthesis. Syst Rev 2016;5:84.
  7. Graham C. People’s experiences of hospital care on the weekend: secondary analysis of data from two national patient surveys. BMJ Qual Saf 2017;29:29.
  8. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf 2012;21(12):1052-56.
  9. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ 2010;340:c2016.
  10. Tarrant C, Sutton E, Angell E, Aldridge CP, Boyal A, Bion J. The ‘weekend effect’ in acute medicine: a protocol for a team-based ethnography of weekend care for medical patients in acute hospital settings. BMJ Open 2017;7: e016755.
  11. Watson SI, Chen YF, Bion JF, Aldridge CP, Girling A, Lilford RJ. Protocol for the health economic evaluation of increasing the weekend specialist to patient ratio in hospitals in England. BMJ Open 2018:In press.

Declining Readmission Rates – Are They Associated with Increased Mortality?

I have always been a bit nihilistic about efforts to reduce hospital readmission rates.[1] [2] However, I may have been overly pessimistic. A new study confirms that it is possible to reduce readmission rates by imposing financial incentives.[3] Importantly, this does not seem to have caused an increase in mortality, as might occur if hospitals were biased against re-admitting sick patients in order to avoid a financial penalty. A false null result (type II error), do I hear you ask? Probably not, since the data are based on nearly seven million admissions. In fact, 30-day mortality rates were slightly lower among hospitals that reduced readmission rates.

— Richard Lilford, CLAHRC WM Director


  1. Lilford RJ. If Not Preventable Deaths, Then What About Preventable Admissions? NIHR CLAHRC West Midlands News Blog. 6 May 2016.
  2. Lilford RJ. Unintended Consequences of Pay-For-Performance Based on Readmissions. NIHR CLAHRC West Midlands News Blog. 13 January 2017.
  3. Joynt KE, & Maddox TM. Readmissions Have Declined, and Mortality Has Not Increased. The Importance of Evaluating Unintended Consequences. JAMA. 2017; 318(3): 243-4.

Yet Again, Low Proportion of Hospital Deaths Judged Preventable

Hogan and colleagues have reported another study on preventable mortality based on case-note review among 34 hospitals.[1] Only 3.6% of deaths were thought to have been preventable on the balance of probability. Preventability rates did not vary widely between hospitals.

Of course, this might be something of an underestimate, because deaths where the probability of preventability was judged to be less than 50% were not counted. The CLAHRC WM Director prefers to calculate the expected number of preventable deaths as the sum, across all reviewed cases, of each case’s probability of having been preventable. He also likes to adjust for reviewer effects, to minimise the influence of unusually ‘hawkish’ reviewers.
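The difference between the two approaches can be sketched as a simple expected-value calculation. The review probabilities below are illustrative only, not taken from the study:

```python
# Illustrative sketch: six hypothetical deaths, each with a reviewer-judged
# probability that the death was preventable (made-up numbers).
review_probabilities = [0.1, 0.3, 0.6, 0.05, 0.8, 0.2]

# Threshold approach: count only deaths judged preventable on the
# balance of probability (probability > 50%).
threshold_count = sum(1 for p in review_probabilities if p > 0.5)

# Probability-weighted approach: sum the probabilities themselves to get
# the expected number of preventable deaths across all reviewed cases.
expected_preventable = sum(review_probabilities)

print(threshold_count)       # 2
print(expected_preventable)  # 2.05
```

With these (invented) numbers the two approaches agree closely, but the weighted sum also captures the contribution of the many deaths that fall below the 50% threshold, which the count discards entirely.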

Despite these precautions, preventability is “in the eye of the reviewer,”[2] and may be over-estimated because of hindsight bias, or under-estimated because some practices that may increase the risk of death cannot be discerned from case-notes.

— Richard Lilford, CLAHRC WM Director


  1. Hogan H, Zipfel R, Neuburger J, Hutchings A, Darzi A, Black N. Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis. BMJ. 2015; 351: h3239.
  2. Hayward RA, Hofer TP. Estimating hospital deaths due to medical errors: preventability is in the eye of the reviewer. JAMA. 2001; 286(4): 415-20.

Measuring Quality of Care

McGlynn and Adams [1] repeat a point frequently made by the CLAHRC WM Director – before using outcomes to judge the quality of care, first model plausible effects.[2] [3] Only a small fraction of an outcome may be amenable to improved care.

The rate of hospital deaths in the UK is about 3% of admissions. Allowing a generous 20% of those to be preventable sets an upper bound (‘headroom’) for improvement of 0.6 percentage points. So don’t expect quality of care to show up in mortality statistics. Or, to take another example, about 1% of hospital patients suffer a preventable medication-related adverse event.[4] So don’t expect improved medicines management to show up in quality of life scores among the hospital population.
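The headroom arithmetic above is simply the product of the two rates given in the text:

```python
# Headroom calculation from the text: if ~3% of hospital admissions end in
# death, and a generous ~20% of those deaths are preventable, then the
# maximum mortality reduction achievable through better care is 0.6
# percentage points.
hospital_death_rate = 0.03    # ~3% of admissions end in death
preventable_fraction = 0.20   # generous upper estimate of preventability

headroom = hospital_death_rate * preventable_fraction
print(f"{headroom:.1%}")  # 0.6%
```

Any real quality improvement can only move mortality within this 0.6-percentage-point band, which is easily swamped by case-mix differences and random variation.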

— Richard Lilford, CLAHRC WM Director


  1. McGlynn EA, Adams JL. What makes a good quality measure? JAMA. 2014; 312(15): 1517-8.
  2. Yao GL, Novielli N, Manaseki-Holland S,Chen YF, van der Klink M, Barach P, Chilton PJ, Lilford RJ. Evaluation of a predevelopment service delivery intervention: an application to improve clinical handovers. BMJ Qual Saf. 2012; 21(s1): i29-38.
  3. Girling AJ, Hofer TP, Wu J, Chilton PJ, Nicholl JP, Mohammed MA, Lilford RJ. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Quality & Safety. 2012; 21: 1052-6.
  4. de Vries EN, Ramrattan MA, Smorenburg SM, Gouma DJ, Boermeester MA. The incidence and nature of in-hospital adverse events: a systematic review. Qual Saf Health Care. 2008; 17(3): 216-23.

Preventable hospital deaths and other measures of safety

Readers of this blog may well know the views of the CLAHRC WM Director on using hospital mortality to compare hospital safety.[1] [2] Following the recommendations of the Keogh review, published in 2013, there has been greater interest in examining preventable hospital deaths as a means of improving the NHS.

Helen Hogan and colleagues have recently published findings from a retrospective case record review that examined the relationship between preventable hospital deaths and eight other measures of safety in ten English acute hospital trusts.[3] Of the eight measures examined, only the MRSA bacteraemia rate was significantly associated with the proportion of preventable deaths (P<0.02). The Hospital Standardised Mortality Ratio (HSMR), widely used in the UK as a safety measure, was not significantly associated (P=0.97). Moreover, the differences between hospitals in the proportion of preventable deaths, which ranged from 3% to 8%, were not statistically significant (P=0.94). The authors plan a larger study, with 24 additional UK hospitals, to confirm these findings.

— Richard Lilford, Director CLAHRC WM


  1. Girling AJ, Hofer TP, Wu J, Chilton PJ, Nicholl JP, Mohammed MA, Lilford RJ. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  2. Lilford RJ, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ. 2010; 340: c2016.
  3. Hogan H, Healey F, Neale G, Thomson R, Vincent C, Black N. Relationship between preventable hospital deaths and other measures of safety: an exploratory study. Int J Qual Health Care. 2014; 26(3): 298-307.