
Measuring the Quality of Health Care in Low-Income Settings

Measuring the quality of health care in High-Income Countries (HICs) is deceptively difficult, as shown by work carried out by many research groups, including CLAHRC WM.[1-5] However, a large amount of information is collected routinely by health care facilities in HICs. This data includes outcome data, such as Standardised Mortality Ratios (SMRs), death rates from ‘causes amenable to health care’, readmission rates, morbidity rates (such as pressure damage), and patient satisfaction, along with process data, such as waiting times, prescribing errors, and antibiotic use. There is controversy over many of these endpoints, and some are much better barometers of safety than others. While incident reporting systems provide a very poor basis for epidemiological studies (that is not their purpose), case-note review provides arguably the best and most widely used method for formal study of care quality – at least in hospitals.[3] [6] [7] Measuring safety in primary care is inhibited by the less comprehensive case-notes found in primary care settings as compared to hospital case-notes. Nevertheless, increasing amounts of process information are now available from general practices, particularly in countries (such as the UK) that collect this information routinely in electronic systems. It is possible, for example, to measure rates of statin prescriptions for people with high cardiovascular risk, and of anticoagulants for people with atrial fibrillation, as our CLAHRC has shown.[8] [9] HICs also conduct frequent audits of specific aspects of care – essentially by asking clinicians to fill in detailed pro formas for patients in various categories. For instance, National Audits in the UK have been carried out into all patients experiencing a myocardial infarction.[10] Direct observation of care has been used most often to understand barriers and facilitators to good practice, rather than to measure quality / safety in a quantitative way.
However, routine data collection systems provide a measure of patient satisfaction with care – in the UK people who were admitted to hospital are surveyed on a regular basis [11] and general practices are required to arrange for anonymous patient feedback every year.[12] Mystery shoppers (simulated patients) have also been used from time to time, albeit not as a comparative epidemiological tool.[13]

This picture is very different in Low- and Middle-Income Countries (LMICs) and, again, it is yet more difficult to assess the quality of out-of-hospital care than of hospital care.[14] Even in hospitals, routine mortality data may not be available, let alone process data. An exception is the network of paediatric centres established in Kenya by Prof Michael English.[15] Occasionally large-scale bespoke studies are carried out in LMICs – for example, a recent study, in which CLAHRC WM participated, measured 30-day post-operative mortality rates in over 60 hospitals across low-, middle- and high-income countries.[16]

The quality and outcomes of care in community settings in LMICs are woefully understudied. We are attempting to correct this ‘dearth’ of information in a study of nine slums spread across four African and Asian countries. One of the largest obstacles to such a study is the very fragmented nature of health care provision in community settings in LMICs – a finding confirmed by a recent Lancet commission.[17] There are no routine data collection systems, and even deaths are not registered routinely. Where to start?

In this blog post I lay out a framework for measuring the quality of care delivered by largely isolated providers, many of whom are unregulated, in a system with no routine data collection and no archive of case-notes. In such a constrained situation I can think of three (non-exclusive) types of study:

  1. Direct observation of the facilities where care is provided, without actually observing care or its effects. Such observation is limited to some of the basic building blocks of a health care system – what services are present (e.g. number of pharmacies per 1,000 population) and their availability (how often the pharmacy is open; how often a doctor / nurse / medical officer is available for consultation in a clinic). Such a ‘mapping’ exercise does not capture all care provided – e.g. it will miss hospital care and municipal / hospital-based outreach care, such as vaccination provided by Community Health Workers. It will also miss any IT-based care using apps or online consultations.
  2. Direct observation of the care process by external observers. Researchers can observe care from close up, for example during consultations. Such observations can cover the humanity of care (which could be scored) and/or technical quality (which again could be scored against explicit standards and/or on a holistic (implicit) basis).[6] [7] An explicit standard would have to be based mainly on ‘if-then’ rules – e.g. if a patient complained of weight loss, excessive thirst, or recurrent boils, did the clinician test their urine for sugar; if the patient complained of persistent productive cough and night sweats, was a test for TB arranged? Implicit standards suffer from low reliability (high inter-observer variation).[18] Moreover, community providers in LMICs are arguably likely to be resistant to what they might perceive as an intrusive or even threatening form of observation. Those who permit such scrutiny are unlikely to constitute a random sample. More indirect observations – say of the length of consultations – would have some value, but might still be seen as intrusive. Even where some providers did permit direct observation, their results may represent an ‘upper bound’ on performance.
  3. Quality as assessed through the eyes of the patient / members of the public. Given the lack of anamnestic records of clinical encounters in the form of case-notes, the absence of routine data, and the likely limits on access by independent direct observers, most information may need to be collected from patients themselves or, as we discuss, from people masquerading as patients (simulated patients / mystery shoppers). The following types of data collection methods can be considered:
    1. Questions directed at members of the public regarding preventive services. So, households could be asked about vaccinations, surveillance (say, for malnutrition), and their knowledge of screening services offered on a routine basis. This is likely to provide a fairly accurate measure of the quality of preventive services (provided the sampling strategy is carefully designed to yield a representative sample). This method could also provide information on advice and care provided through IT resources. This is a situation where some anamnestic data collection would be possible since, with the respondent’s permission, it would be possible to scroll back through the electronic ‘record’.
    2. Opinion surveys / debriefing following consultations. This method offers a viable alternative to observation of consultations and would be less expensive (though still not inexpensive). Information on the kindness / humanity of services could be easily obtained and quantified, along with ease of access to ambulatory and emergency care.[19] Measuring clinical quality would again rely on observations against a gold standard,[20] but given the large number of possible clinical scenarios, standardising quality assessment would be tricky. However, a coarse-grained assessment would be possible and, given the low quality levels reported anecdotally, failure to achieve a high degree of standardisation might not vitiate collection of important information. Such a method might provide insights into the relative merits and demerits of traditional vs. modern health care, private vs. public provision, etc., provided that these differences were large.
    3. Simulated patients offering standardised clinical scenarios. This is arguably the optimal method of technical quality assessment in settings where case-notes are perfunctory or not available. Again, consultations could be scored for humanity of care and clinical / technical competence, and again explicit and/or implicit standards could be used. However, we do not believe it would be ethical to use this method without obtaining assent from providers. There are some examples of successful use of this method in LMICs.[21] [22] However, if my premise is accepted that providers must assent to the use of simulated patients, then it is necessary first to establish trust between providers and academic teams, and this takes time. Again, there is a high probability that only the better providers will give assent, in which case observations would likely represent ‘upper bounds’ on quality.
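
The explicit ‘if-then’ standards mentioned above (point 2) lend themselves to simple encoding. A minimal sketch in Python, using the two example rules from the text; the rule names, field names, and scoring are hypothetical illustrations, not a validated audit instrument:

```python
# Hypothetical sketch: explicit 'if-then' audit rules held as data, so that a
# consultation record can be scored against them. Rules and names are invented.

RULES = [
    # (rule name, triggering symptoms, required clinical action)
    ("diabetes_screen",
     {"weight loss", "excessive thirst", "recurrent boils"},
     "urine_glucose_test"),
    ("tb_workup",
     {"persistent productive cough", "night sweats"},
     "tb_test"),
]

def score_consultation(symptoms, actions):
    """Return (rules_triggered, rules_met) for one consultation record."""
    triggered = met = 0
    for name, trigger, required in RULES:
        if trigger & symptoms:          # any triggering symptom present
            triggered += 1
            if required in actions:     # was the required action taken?
                met += 1
    return triggered, met

# Example: thirst and weight loss prompted a urine test, but a chronic
# productive cough prompted no TB test.
t, m = score_consultation(
    {"excessive thirst", "weight loss", "persistent productive cough"},
    {"urine_glucose_test"},
)
print(t, m)  # 2 rules triggered, 1 met
```

Holding the rules as data rather than hard-coding them makes it straightforward to extend the standard, or to report adherence rule by rule across providers.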

In conclusion, I think that the basic tools of quality assessment, in the current situation where direct observation and/or simulated patients are not acceptable, are a combination of:

  1. Direct observation of facilities that exist, along with ease of access to them, and
  2. Debriefing of people who have recently used the health facilities, or who might have received preventive services that are not based in these facilities.

We do not think that the above-mentioned shortcomings of these methods are a reason to eschew assessment of service quality in community settings (such as slums) in LMICs – after all, one of the most powerful levers for improvement is quantitative evidence of current care quality.[23] [24] The perfect should not be the enemy of the good. Moreover, if the anecdotes I have heard regarding care quality (providers who hand out only three types of pill – red, yellow and blue; doctors and nurses who do not turn up for work; prescription of antibiotics for clearly non-infectious conditions) are even partly true, then these methods would be more than sufficient to document standards and compare them across types of provider and different settings.

— Richard Lilford, CLAHRC WM Director

References:

  1. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 1. Conceptualising and developing interventions. Qual Saf Health Care. 2008; 17(3): 158-62.
  2. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 2. Study design. Qual Saf Health Care. 2008; 17(3): 163-9.
  3. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 3. End points and measurement. Qual Saf Health Care. 2008; 17(3): 170-7.
  4. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 4. One size does not fit all. Qual Saf Health Care. 2008; 17(3): 178-81.
  5. Brown C, Lilford R. Evaluating service delivery interventions to enhance patient safety. BMJ. 2008; 337: a2764.
  6. Benning A, Ghaleb M, Suokas A, Dixon-Woods M, Dawson J, Barber N, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ. 2011; 342: d195.
  7. Benning A, Dixon-Woods M, Nwulu U, Ghaleb M, Dawson J, Barber N, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ. 2011; 342: d199.
  8. Finnikin S, Ryan R, Marshall T. Cohort study investigating the relationship between cholesterol, cardiovascular risk score and the prescribing of statins in UK primary care: study protocol. BMJ Open. 2016; 6(11): e013120.
  9. Adderley N, Ryan R, Marshall T. The role of contraindications in prescribing anticoagulants to patients with atrial fibrillation: a cross-sectional analysis of primary care data in the UK. Br J Gen Pract. 2017. [ePub].
  10. Herrett E, Smeeth L, Walker L, Weston C, on behalf of the MINAP Academic Group. The Myocardial Ischaemia National Audit Project (MINAP). Heart. 2010; 96: 1264-7.
  11. Care Quality Commission. Adult inpatient survey 2016. Newcastle-upon-Tyne, UK: Care Quality Commission, 2017.
  12. Ipsos MORI. GP Patient Survey. National Report. July 2017 Publication. London: NHS England, 2017.
  13. Grant C, Nicholas R, Moore L, Salisbury C. An observational study comparing quality of care in walk-in centres with general practice and NHS Direct using standardised patients. BMJ. 2002; 324: 1556.
  14. Nolte E & McKee M. Measuring and evaluating performance. In: Smith RD & Hanson K (eds). Health Systems in Low- and Middle-Income Countries: An economic and policy perspective. Oxford: Oxford University Press; 2011.
  15. Tuti T, Bitok M, Malla L, Paton C, Muinga N, Gathara D, et al. Improving documentation of clinical care within a clinical information network: an essential initial step in efforts to understand and improve care in Kenyan hospitals. BMJ Global Health. 2016; 1(1): e000028.
  16. Global Surg Collaborative. Mortality of emergency abdominal surgery in high-, middle- and low-income countries. Br J Surg. 2016; 103(8): 971-88.
  17. McPake B, Hanson K. Managing the public-private mix to achieve universal health coverage. Lancet. 2016; 388: 622-30.
  18. Lilford R, Edwards A, Girling A, Hofer T, Di Tanna GL, Petty J, Nicholl J. Inter-rater reliability of case-note audit: a systematic review. J Health Serv Res Policy. 2007; 12(3): 173-80.
  19. Schoen C, Osborn R, Huynh PT, Doty M, Davis K, Zapert K, Peugh J. Primary Care and Health System Performance: Adults’ Experiences in Five Countries. Health Aff. 2004.
  20. Kruk ME & Freedman LP. Assessing health system performance in developing countries: A review of the literature. Health Policy. 2008; 85: 263-76.
  21. Smith F. Private local pharmacies in low- and middle-income countries: a review of interventions to enhance their role in public health. Trop Med Int Health. 2009; 14(3): 362-72.
  22. Satyanarayana S, Kwan A, Daniels B, Subbaraman R, McDowell A, Bergkvist S, et al. Use of standardised patients to assess antibiotic dispensing for tuberculosis by pharmacies in urban India: a cross-sectional study. Lancet Infect Dis. 2016; 16(11): 1261-8.
  23. Kudzma EC. Florence Nightingale and healthcare reform. Nurs Sci Q. 2006; 19(1): 61-4.
  24. Donabedian A. The end results of health care: Ernest Codman’s contribution to quality assessment and beyond. Milbank Q. 1989; 67(2): 233-56.

Patient Involvement in Patient Safety: Null Result from a High Quality Study

Most patient safety evaluations are simple before-and-after / time series improvement studies. So it is always refreshing to find a study with contemporaneous controls. Lawton and her colleagues report a nice cluster randomised trial covering 33 hospital wards in five hospitals.[1] They evaluate a well-known patient safety intervention based on the idea of giving patients a more active role in monitoring safety on their ward.

The trial produced a null result, but some of the measures of safety were in the right direction and there was a correlation between the enthusiasm/fidelity with which the intervention was implemented and measures of safety.

Safety is hard to measure (as the authors state), and improvement often builds on a number of small incremental changes. So it would be very nice to see this intervention replicated, possibly with measures to generate greater commitment from ward staff.

Here is the problem with patient safety research: on the one hand, the subject of patient safety is full of hubristic claims made on the basis of insufficient (weak) evidence; on the other hand, high-quality studies, such as the one reported here, often fail to find an effect. In many cases, as in the study reported here, there are reasons to suspect a type 2 error (a false negative result). Beware also the rising tide – the phenomenon that arises when a trial occurs in the context of a strong secular trend – this trend ‘swallows up’ the headroom for a marginal intervention effect.[2]

What is to be done? First, do not declare defeat too early. Second, be prepared to carry out larger studies or replication studies that can be combined in a meta-analysis. Third, make multiple measurements across a causal chain [3] and synthesise these disparate data using Bayesian networks.[4] Fourth, further to the Bayesian approach, do not dichotomise results into null and positive on the standard frequentist statistical convention. It is stupid to classify a p-value of 0.06 as null if other evidence supports an effect, or to classify a p-value of 0.04 as positive if other data point the opposite way. Knowledge of complex areas, such as service interventions to improve safety, should take account of patterns in the data and of information external to the index study. Bayesian networks provide a framework for such an analysis.[4] [5]
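
The point about not dichotomising at p = 0.05 can be illustrated with a toy normal-normal Bayesian update: combine the study estimate with external evidence and report a posterior probability of benefit instead of a verdict. All numbers below are invented for illustration; this is a sketch, not a reanalysis of the trial:

```python
# Toy normal-normal conjugate update: a 'just non-significant' study estimate,
# combined with mildly favourable external evidence, can still yield a high
# posterior probability of benefit. Every number here is invented.
from math import erf, sqrt

def posterior_prob_benefit(prior_mean, prior_sd, estimate, se):
    """P(effect > 0 | data) under a normal prior and normal likelihood."""
    w_prior, w_data = 1 / prior_sd**2, 1 / se**2          # precision weights
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * estimate)
    z = post_mean / sqrt(post_var)
    return 0.5 * (1 + erf(z / sqrt(2)))                   # standard normal CDF

# Study effect 0.20 with SE 0.105 (two-sided p just above 0.05), prior
# centred on a small benefit (mean 0.1, SD 0.2):
print(round(posterior_prob_benefit(0.1, 0.2, 0.20, 0.105), 3))
```

With a vague prior the posterior probability collapses back towards the frequentist one-sided p-value, which is the sense in which the Bayesian answer uses external information rather than discarding it.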

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Lawton R, O’Hara JK, Sheard L, et al. Can patient involvement improve patient safety? A cluster randomised control trial of the Patient Reporting and Action for a Safe Environment (PRASE) intervention. BMJ Qual Saf. 2017; 26: 622-31.
  2. Chen YF, Hemming K, Stevens AJ, Lilford RJ. Secular trends and evaluation of complex interventions: the rising tide phenomenon. BMJ Qual Saf. 2016; 25: 303-10.
  3. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  4. Watson SI & Lilford RJ. Essay 1: Integrating multiple sources of evidence: a Bayesian perspective. In: Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Southampton (UK): NIHR Journals Library, 2016.
  5. Lilford RJ, Girling AJ, Sheikh, et al. Protocol for evaluation of the cost-effectiveness of ePrescribing systems and candidate prototype for other related health information technologies. BMC Health Serv Res. 2014; 14: 314.

Declining Readmission Rates – Are They Associated with Increased Mortality?

I have always been a bit nihilistic about reducing readmission rates to hospitals.[1] [2] However, I may have been overly pessimistic. A new study confirms that it is possible to reduce readmission rates by imposing financial incentives.[3] Importantly, this does not seem to have caused an increase in mortality – as might occur if hospitals were biased against re-admitting sick patients in order to avoid a financial penalty. “False null result” (type 2 error), do I hear you ask? Probably not, since the data are based on nearly seven million admissions. In fact, 30-day mortality rates were slightly lower among hospitals that reduced readmission rates.

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. If Not Preventable Deaths, Then What About Preventable Admissions? NIHR CLAHRC West Midlands News Blog. 6 May 2016.
  2. Lilford RJ. Unintended Consequences of Pay-For-Performance Based on Readmissions. NIHR CLAHRC West Midlands News Blog. 13 January 2017.
  3. Joynt KE, & Maddox TM. Readmissions Have Declined, and Mortality Has Not Increased. The Importance of Evaluating Unintended Consequences. JAMA. 2017; 318(3): 243-4.

Predicting Readmissions on the Basis of a Well-Known Risk of Readmission Score

A recent NIHR CLAHRC West Midlands study examined a score based on Length of stay, Acuity of admission, Co-morbidities, and Emergency department use before the index admission – the LACE score.[1] The findings broadly corroborate the score and previous evidence – high scores are statistically associated with risk of readmission, but predictive accuracy is low and hardly likely to improve on clinical assessment; no doctor would use such a test to identify patients. This is an inpatient study based on over 90,000 admissions. We do not want every clinical action to be codified in a score – it is a waste of time. Moreover, most readmissions are caused by a new problem.[2] So a more sensible way forward, from my point of view, would be a general index of risk of deterioration to cover patients at all points in their journey. Might the ‘frailty index’ [3] [4] serve this purpose?
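
For readers unfamiliar with the index, here is a sketch of the LACE computation, using the point weights I understand to come from the original derivation work (van Walraven and colleagues); treat it as an illustrative sketch to be checked against the source, not a validated implementation:

```python
# Sketch of the LACE index (0-19): Length of stay, Acuity of admission,
# Charlson Comorbidity score, Emergency department visits in the prior
# six months. Weights are as reported in the derivation literature;
# verify before any real use.

def lace(length_of_stay, acute_admission, charlson, ed_visits_6m):
    """Return the LACE score; higher = greater predicted readmission risk."""
    if length_of_stay < 1:
        l = 0
    elif length_of_stay <= 3:
        l = length_of_stay            # 1, 2 or 3 points
    elif length_of_stay <= 6:
        l = 4
    elif length_of_stay <= 13:
        l = 5
    else:
        l = 7
    a = 3 if acute_admission else 0   # acuity: emergency admission
    c = charlson if charlson <= 3 else 5
    e = min(ed_visits_6m, 4)          # prior emergency department use
    return l + a + c + e

# Emergency admission, 8-day stay, Charlson score 2, one prior ED visit:
print(lace(8, True, 2, 1))  # -> 11
```

Even a score this easy to compute only stratifies risk coarsely, which is the blog's point: statistical association is not the same as clinically useful prediction.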

— Richard Lilford, CLAHRC WM Director

References:

  1. Damery S, Combes G. Evaluating the predictive strength of the LACE index in identifying patients at high risk of hospital readmission following an inpatient episode: a retrospective cohort study. BMJ Open. 2017; 7: e016921.
  2. Lilford RJ. Unintended Consequences of Pay-for-Performance Based on Readmissions. NIHR CLAHRC West Midlands News Blog. 13 January 2017.
  3. Lilford RJ. Future Trends in NHS. NIHR CLAHRC West Midlands News Blog.
  4. Clegg A, Bates C, Young J, et al. Development and validation of an electronic frailty index using routine primary care electronic health record data. Age Ageing. 2016.

Introducing Hospital IT systems – Two Cautionary Tales

The beneficial effects of mature IT systems, such as at the Brigham and Women’s Hospital,[1] Intermountain Health Care,[2] and University Hospitals Birmingham NHS Foundation Trust,[3] have been well documented. But what happens when a commercial system is popped into a busy NHS general hospital? Lots of problems, according to two detailed qualitative studies from Edinburgh.[4] [5] Cresswell and colleagues document problems with both stand-alone ePrescribing systems and with multi-modular systems.[4] The former drive staff crazy with multiple log-ins and duplicate data entry; nor does their frustration lessen with time. Neither system type (stand-alone or multi-modular) presented a comprehensive overview of the patient record. This has obvious implications for patient safety: how is a doctor expected to detect a pattern in the data if they are not presented in a coherent format? In their second paper the authors examine how staff cope with the above problems.[5] To enable them to complete their tasks, staff deployed ‘workarounds’, which frequently involved recourse to paper intermediaries. Staff often became overloaded with work and often did not have the necessary clinical information at their fingertips. Some workarounds were sanctioned by the organisation, others not. What do I make of these disturbing, but thorough, pieces of research? I would say four things:

  1. Move slowly and carefully when introducing IT and never, never go for heroic ‘big bang’ solutions.
  2. Employ plenty of IT specialists who can adapt systems to people – do not try to go the other way round. Eschew ‘business process re-engineering’, the risks of which are too high – be incremental.
  3. If you do not put the doctors in charge, make sure that they feel as if they are. More seriously – take your people with you.
  4. Forget integrating primary and secondary care, and social care and community nurses, and meals on wheels and whatever else. Leave that hubristic task to your hapless successor and introduce a patient held booklet made of paper – that’s WISDAM.[6]

— Richard Lilford, CLAHRC WM Director

References:

  1. Weissman JS, Vogeli C, Fischer M, Ferris T, Kaushal R, Blumenthal D. E-prescribing Impact on Patient Safety, Use and Cost. Rockville, MD: Agency for Healthcare Research and Quality. 2007.
  2. Bohmer RMJ, Edmondson AC, Feldman L. Intermountain Health Care. Harvard Business School Case 603-066. 2002
  3. Coleman JJ, Hodson J, Brooks HL, Rosser D. Missed medication doses in hospitalised patients: a descriptive account of quality improvement measures and time series analysis. Int J Qual Health Care. 2013; 25(5): 564-72.
  4. Cresswell KM, Mozaffar H, Lee L, Williams R, Sheikh A. Safety risks associated with the lack of integration and interfacing of hospital health information technologies: a qualitative study of hospital electronic prescribing systems in England. BMJ Qual Saf. 2017; 26: 530-41.
  5. Cresswell KM, Mozaffar H, Lee L, Williams R, Sheikh A. Workarounds to hospital electronic prescribing systems: a qualitative study in English hospitals. BMJ Qual Saf. 2017; 26: 542-51.
  6. Lilford RJ. The WISDAM* of Rupert Fawdry. NIHR CLAHRC West Midlands News Blog. 5 September 2014.

Length of Hospital Stay

The average length of hospital stay has ‘plummeted’ over the last thirty years, from 10 days in 1983 to 5 days in 2013.[1] However, the proportion of patients discharged to a nursing facility has quadrupled over the same period.[2] So, from the point of view of the patient, the stay away from home has not changed as much as might be inferred from an uncritical analysis of inpatient stays. How, then, have home-to-home times changed? This was assessed by Barnett et al.[3] on the basis of Medicare administrative claims for 82 million hospitalisations over the years 2004 to 2011 inclusive.

Yes, the mean length of hospital stay declined (from 6.3 to 5.7 days), but the mean length of stay in post-acute care facilities increased from 4.8 to 6.0 days. Total home-to-home time increased from 11.1 to 11.7 days. This is not necessarily a bad thing, but it must be taken into account in assessing the costs and benefits of care. Risks of iatrogenic harm, and costs, are generally lower in nursing facilities than in hospitals. However, the article cited here does not consider the possibility that these risks and costs are not lower for the group of people in nursing facilities who would otherwise have been cared for in hospital.
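
The home-to-home arithmetic is simply the sum of the two components, using the figures quoted from the Barnett et al. analysis:

```python
# Home-to-home time (days) = hospital stay + post-acute facility stay,
# using the mean values quoted in the text for 2004 and 2011.
hospital = {"2004": 6.3, "2011": 5.7}
post_acute = {"2004": 4.8, "2011": 6.0}

home_to_home = {y: round(hospital[y] + post_acute[y], 1) for y in hospital}
print(home_to_home)  # -> {'2004': 11.1, '2011': 11.7}
```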

— Richard Lilford, CLAHRC WM Director

References:

  1. Centers for Medicare and Medicaid Services. CMS program statistics: 2013 Medicare Utilization Section. 2017.
  2. Tian W (AHRQ). An All-Payer View of Hospital Discharge to Postacute Care, 2013. Rockville, MD: Agency for Healthcare Research and Quality; 2016.
  3. Barnett ML, Grabowski DC, Mehrotra A. Home-to-Home Time – Measuring What Matters to Patients and Payers. N Engl J Med. 2017; 377: 4-6.

Payment by Results – a Null Result!

Reimbursement levels for medical care in large US hospitals are reduced by up to 2% if compliance with evidence-based clinical care standards falls below threshold levels. Does this result in improved care? To find out, intervention hospitals were compared with control hospitals not exposed to the financial incentive.[1] The ‘value-based purchasing’ schemes were not introduced as a prospective experiment, and the controls (small rural hospitals) are very different in nature from the larger hospitals to whom the incentive applies. To mitigate potential bias, difference-in-difference approaches were used; hospitals were matched for previous performance; and the usual statistical adjustments were made. Adherence to appropriate clinical processes was increasing among both control and intervention hospitals before the intervention was implemented, and rates of adherence did not differ between intervention and control hospitals post-intervention. The clinical indicators related to three tracer conditions frequently used in studies of adherence to clinical standards – pneumonia, heart attack, and heart failure. Patient experience measures also did not differ between intervention and control hospitals, and while mortality improved for pneumonia, it did not for the other conditions. The effect on pneumonia deaths was regarded as a chance finding (alpha error), given the null result on mediating variables (i.e. clinical process variables). Arguably these results were null because the incentive was low (only 2% of total reimbursement) and distributed over a large number of outcomes. Alternatively, doctors are largely intrinsically motivated and do not need financial incentives to modify their performance. We will pick up on this issue in our next News Blog.
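
The difference-in-difference logic described above can be sketched in a few lines; the adherence rates here are made up purely to show how a shared secular trend is netted out:

```python
# Difference-in-differences: the change in the treated group over and above
# the change in the control group. Figures below are invented (% adherence).

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Estimated intervention effect, net of the common secular trend."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Both groups improve by 8 percentage points (the pre-existing upward trend),
# so the estimated incentive effect is null:
print(diff_in_diff(80.0, 88.0, 79.0, 87.0))  # -> 0.0
```

This is why a rising tide in both arms yields a null estimate even though adherence improved everywhere; the design attributes only the excess change to the incentive.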

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Ryan AM, Krinsky S, Maurer KA, Dimick JB. Changes in Hospital Quality Associated with Hospital Value-Based Purchasing. N Engl J Med. 2017; 376: 2358-66.

The Increasing Codification and Transparency of Hospital Practice

Ah, the halcyon days, when at the age of 26 I was the most senior obstetrician on site in a huge high-risk hospital. How times have changed. Now a fully accredited specialist must be on hand, she must follow checklists, and cognitive aids will try to pre-empt errors.[1] And now we hear that she will be under video surveillance for a substantial portion of her working life.[2] Of course all of this is a good thing – codification reduces error in all industries. And we must also realise that codification does not mean that the need for judgement is vitiated, or that medicine cannot still be fun and even heroic, as we have argued before.[3] And transparency is also no bad thing: as it punishes those who transgress, so it exonerates the many who are falsely accused.

— Richard Lilford, CLAHRC WM Director

References:

  1. Merry AF, Mitchell SJ. Advancing Patient Safety Through the Use of Cognitive Aids. BMJ Qual Saf. 2016; 25(10):733-5.
  2. Joo S, Xu T, Makary MA. Video transparency: a powerful tool for patient safety and quality improvement. BMJ Qual Saf. 2016; 25: 911-3.
  3. Lilford RJ. Can We Do Without Heroism in Health Care? NIHR CLAHRC West Midlands News Blog. March 20, 2015.

Clinical and Epidemic Outcomes from Implementation of Hospital-Based Antimicrobial Stewardship Programmes (ASPs)

The poor authors of this study had to read 24,917 citations to locate 26 studies with pre- and post-implementation comparisons.[1] The mean effect across these 26 ASPs was a 19% reduction in total antimicrobial consumption, a 27% reduction in use of ‘restricted’ antibiotic agents, and an 18.5% reduction in use of broad-spectrum antibiotics. Overall hospital costs decreased by no less than 34% (mainly due to a 9% reduction in length of stay). There was a reduction in infections with resistant organisms, but no overall reduction in infection-related adverse events. Of course, the interventions varied in nature and there was no attempt to classify them (say, by type and intensity of intervention) and analyse the results accordingly. The study designs are generally weak, not controlling for temporal trends. The health economics is short-term and (for understandable reasons) the potential benefits of a contingent decrease in antimicrobial resistance were not modelled.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Karanika S, Paudel S, Grigoras C, Kalbasi A, Mylonakis E. Systematic Review and Meta-analysis of Clinical and Economic Outcomes from the Implementation of Hospital-Based Antimicrobial Stewardship Programs. Antimicrob Agents Chemother. 2016; 60(8): 4840-52.

Unintended Consequences of Pay-For-Performance Based on Readmissions

Introducing fines for readmission rates crossing a certain threshold has been associated with reduced readmissions. Distilling a rather wordy commentary by Friebel and Steventon,[1] there are problems with the policy since it might not lead to optimal care:

  1. The link between quality of care and readmission is weak according to most studies, so there is a risk that patients who need readmission will not get it.
  2. In support of the above, less than a third of readmissions are for the condition that caused the previous admission (which is not to say that none are preventable, but it suggests that a high proportion might not be).
  3. Risk-adjustment is at best imperfect.
  4. And this probably explains why ‘safety net’ hospitals caring for the poorest clientele come off worst under the pay-for-performance system.

I refer you to my iron law of incentives – ‘only use them when providers truly believe that the target of the incentive lies within their control.’

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Friebel R, Steventon A. The multiple aims of pay-for-performance and the risk of unintended consequences. BMJ Qual Saf. 2016.