Bad Apples vs. Bad Systems

The bad apples versus bad systems argument has erupted again. It has been put forcefully in the Los Angeles Times by Philip Levitt.[1] He points out that:

  1. Error rates are not declining, despite humongous effort. This is not quite right; they declined quite markedly in England over the last decade,[2] and adherence was near 100% on many safety dimensions. Nevertheless, adverse events remain a substantial problem.
  2. Many interventions, such as surgical checklists [3] and antisepsis bundles,[4] yield positive results when first introduced, but these results cannot be replicated.[5] [6] [7]
  3. Analysis of the cognitive form of errors puts most of them down to individual failure rather than to the system – most are technical errors during procedures, or misdiagnoses.[8] [9]
  4. Many studies show that a small pool of doctors generates a large proportion of complaints (3% of doctors triggered half of all complaints in an Australian study).[10] Arguably this concentration would be reflected among adverse events as well.

So maybe we should re-think our basic safety science premises. Certainly, falls, pressure ulcers, hospital infections, and medication errors can be blamed in large part on the system. However, these are not the major safety issues; over three-quarters of serious adverse events result from misdiagnosis and errors during procedures. While the system may play a part in these failures, the CLAHRC WM Director, who practised at various times as physician and surgeon, is not convinced that the main problem lies in the system. No, diagnosis and safe surgery turn on individual skill. So we need to think about selection and about improving the performance of individual clinicians – most especially those who make diagnoses and carry out procedures (i.e. doctors). If the definition of the system is made very broad, then of course selection and training are included, but the solution then lies in medical schools and training programmes rather than in individual organisations. Can we identify error-prone doctors before they end up in court or before a complaints tribunal? Such an error-prone phenotype has proved elusive – as work carried out in our pilot CLAHRC discovered.[11]

— Richard Lilford, CLAHRC WM Director

References:

  1. Levitt P. When medical errors kill. Los Angeles Times. 15 March 2014.
  2. Benning A, Dixon-Woods M, Nwulu U, Ghaleb M, Dawson J, Barber N, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ. 2011; 342: d199.
  3. Haynes AB, Weiser TG, Berry WR, Lipsitz SR, Breizat AH, Dellinger EP, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009; 360(5): 491-9.
  4. Pronovost P, Needham D, Berenholtz S, Sinopoli D, Chu H, Cosgrove S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006; 355(26): 2725-32.
  5. Urbach DR, Govindarajan A, Saskin R, Wilton AS, Baxter NN. Introduction of surgical safety checklists in Ontario, Canada. N Engl J Med. 2014; 370(11): 1029-38.
  6. Reames BN, Scally CP, Thumma JR, Dimick JB. Evaluation of the Effectiveness of a Surgical Checklist in Medicare Patients. Med Care. 2015; 53(1): 87-94.
  7. Bion J, Richardson A, Hibbert P, Beer J, Abrusci T, McCutcheon M, et al. ‘Matching Michigan’: a 2-year stepped interventional programme to minimise central venous catheter-blood stream infections in intensive care units in England. BMJ Qual Saf. 2013; 22(2): 110-23.
  8. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991; 324(6): 370-6.
  9. Fabri PJ, Zayas-Castro JL. Human error, not communication and systems, underlies surgical complications. Surgery. 2008; 144(4): 557-65.
  10. Bismark MM, Spittal MJ, Gurrin LC, Ward M, Studdert DM. Identification of doctors at risk of recurrent complaints: a national study of healthcare complaints in Australia. BMJ Qual Saf. 2013; 22(7): 532-40.
  11. Coleman JJ, Hemming K, Nightingale PG, Clark IR, Dixon-Woods M, Ferner RE, Lilford RJ. Can an electronic prescribing system detect doctors more likely to make a serious prescribing error? J R Soc Med. 2011; 104(5): 208-18.

4 thoughts on “Bad Apples vs. Bad Systems”

  1. This makes me recall aviation safety incidents, in which errors were often blamed on the technical ability of the pilot to fly the aircraft, and individual blame was assigned. However, Crew Resource Management was developed in recognition of the role the system played in conspiring to allow the technical error to occur. Dave Beaty’s The Naked Pilot is an interesting read on the matter. The overall message seems to be to provide support to prevent the technical error from happening, rather than to attach blame following an incident.
