Measuring the Quality of Health Care in Low-Income Settings

Measuring the quality of health care in High-Income Countries (HIC) is deceptively difficult, as shown by work carried out by many research groups, including CLAHRC WM.[1-5] However, a large amount of information is collected routinely by health care facilities in HICs. These data include outcome data, such as Standardised Mortality Ratios (SMRs), death rates from ‘causes amenable to health care’, readmission rates, morbidity rates (such as pressure damage), and patient satisfaction, along with process data, such as waiting times, prescribing errors, and antibiotic use. There is controversy over many of these endpoints, and some are much better barometers of safety than others. While incident reporting systems provide a very poor basis for epidemiological studies (that is not their purpose), case-note review provides arguably the best and most widely used method for formal study of care quality – at least in hospitals.[3] [6] [7] Measuring safety in primary care is inhibited by case-notes that are less comprehensive than those kept in hospitals. Nevertheless, increasing amounts of process information are now available from general practices, particularly in countries (such as the UK) that collect this information routinely in electronic systems. It is possible, for example, to measure rates of statin prescriptions for people with high cardiovascular risk, and of anticoagulants for people with atrial fibrillation, as our CLAHRC has shown.[8] [9] HICs also conduct frequent audits of specific aspects of care – essentially by asking clinicians to fill in detailed pro formas for patients in various categories. For instance, National Audits in the UK have covered all patients experiencing a myocardial infarction.[10] Direct observation of care has been used most often to understand barriers and facilitators to good practice, rather than to measure quality / safety in a quantitative way. However, routine data collection systems provide a measure of patient satisfaction with care – in the UK, people who have been admitted to hospital are surveyed on a regular basis [11] and general practices are required to arrange for anonymous patient feedback every year.[12] Mystery shoppers (simulated patients) have also been used from time to time, albeit not as a comparative epidemiological tool.[13]

This picture is very different in Low- and Middle-Income Countries (LMIC) and, again, it is yet more difficult to assess the quality of out-of-hospital care than of hospital care.[14] Even in hospitals, routine mortality data may not be available, let alone process data. An exception is the network of paediatric centres established in Kenya by Prof Michael English.[15] Occasionally, large-scale bespoke studies are carried out in LMICs – for example, a recent study in which CLAHRC WM participated measured 30-day post-operative mortality rates in over 60 hospitals across low-, middle- and high-income countries.[16]

The quality and outcomes of care in community settings in LMICs are woefully understudied. We are attempting to correct this ‘dearth’ of information in a study of nine slums spread across four African and Asian countries. One of the largest obstacles to such a study is the very fragmented nature of health care provision in community settings in LMICs – a finding confirmed by a recent Lancet commission.[17] There are no routine data collection systems, and even deaths are not registered routinely. Where to start?

In this blog post I lay out a framework for measuring the quality of care delivered by largely isolated providers, many of whom are unregulated, in a system with no routine data collection and no archive of case-notes. In such a constrained situation I can think of three (non-exclusive) types of study:

  1. Direct observation of the facilities where care is provided, without actually observing care or its effects. Such observation is limited to some of the basic building blocks of a health care system – what services are present (e.g. the number of pharmacies per 1,000 population) and their availability (how often the pharmacy is open; how often a doctor / nurse / medical officer is available for consultation in a clinic). Such a ‘mapping’ exercise does not capture all care provided – e.g. it will miss hospital care and municipal / hospital-based outreach care, such as vaccination provided by Community Health Workers. It will also miss any IT-based care delivered through apps or online consultations.
  2. Direct observation of the care process by external observers. Researchers can observe care at close quarters, for example during consultations. Such observations can cover the humanity of care (which could be scored) and/or technical quality (which again could be scored against explicit standards and/or on a holistic (implicit) basis).[6] [7] An explicit standard would have to be based mainly on ‘if-then’ rules – e.g. if a patient complained of weight loss, excessive thirst, or recurrent boils, did the clinician test their urine for sugar; if the patient complained of a persistent productive cough and night sweats, was a test for TB arranged? (Sketches of such explicit rules, and of the inter-rater agreement statistic mentioned below, follow this list.) Implicit standards suffer from low reliability (high inter-observer variation).[18] Moreover, community providers in LMICs are arguably likely to resist what they might perceive as an intrusive or even threatening form of observation, and those who permit such scrutiny are unlikely to constitute a random sample. More indirect observations – say, of the length of consultations – would have some value, but might still be seen as intrusive. Where providers do permit direct observation, the results are therefore likely to represent an ‘upper bound’ on performance.
  3. Quality as assessed through the eyes of the patient / members of the public. Given the limitations of independent observation, the lack of anamnestic records of clinical encounters in the form of case-notes, and the absence of routine data, most information may need to be collected from patients themselves or, as discussed below, from people masquerading as patients (simulated patients / mystery shoppers). The following types of data collection method can be considered:
    1. Questions directed at members of the public regarding preventive services. Households could be asked about vaccinations, surveillance (say, for malnutrition), and their knowledge of screening services offered on a routine basis. This is likely to provide a fairly accurate measure of the quality of preventive services (provided the sampling strategy is carefully designed to yield a representative sample). This method could also provide information on advice and care provided through IT resources – a situation where some anamnestic data collection would be possible, since, with the respondent’s permission, it would be possible to scroll back through the electronic ‘record’.
    2. Opinion surveys / debriefing following consultations. This method offers a viable alternative to observation of consultations and would be less expensive (though still not inexpensive). Information on the kindness / humanity of services could easily be obtained and quantified, along with ease of access to ambulatory and emergency care.[19] Measuring clinical quality would again rely on observations against a gold standard,[20] but given the large number of possible clinical scenarios, standardising quality assessment would be tricky. However, a coarse-grained assessment would be possible and, given the low quality levels reported anecdotally, failure to achieve a high degree of standardisation might not vitiate collection of important information. Such a method might provide insights into the relative merits and demerits of traditional vs. modern health care, private vs. public provision, etc., provided that these differences were large.
    3. Simulated patients offering standardised clinical scenarios. This is arguably the optimal method of technical quality assessment in settings where case-notes are perfunctory or not available. Again, consultations could be scored for humanity of care and clinical / technical competence, and again explicit and/or implicit standards could be used. However, we do not believe it would be ethical to use this method without obtaining assent from providers. There are some examples of successful use of the method in LMICs.[21] [22] However, if my premise is accepted that providers must assent to the use of simulated patients, then it is necessary first to establish trust between providers and academic teams, and this takes time. Again, there is a high probability that only the better providers will assent, in which case observations would likely represent ‘upper bounds’ on quality.
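
To make the explicit ‘if-then’ approach concrete, here is a minimal sketch in Python of how encounters – whether captured by direct observation, debriefing, or simulated patients – might be scored against such rules. The rules, field names, and example data are illustrative assumptions, not a validated instrument.

```python
# Minimal sketch: scoring consultations against explicit 'if-then' standards.
# The rules and example data below are illustrative assumptions only.

# Each rule fires when any trigger symptom is reported; the encounter then
# meets the standard only if the required action was taken.
RULES = [
    ("urine glucose check",
     {"weight loss", "excessive thirst", "recurrent boils"},
     "urine tested for sugar"),
    ("TB work-up",
     {"persistent productive cough", "night sweats"},
     "TB test arranged"),
]

def score_encounter(symptoms, actions):
    """Return (rules triggered, rules satisfied) for one encounter."""
    triggered = satisfied = 0
    for _name, trigger_symptoms, required_action in RULES:
        if symptoms & trigger_symptoms:  # any trigger symptom reported
            triggered += 1
            if required_action in actions:
                satisfied += 1
    return triggered, satisfied

# Example: a simulated patient presenting classic diabetic symptoms, where
# the provider measured blood pressure but did not test the urine.
triggered, satisfied = score_encounter(
    symptoms={"weight loss", "excessive thirst"},
    actions={"blood pressure measured"},
)
print(f"{satisfied}/{triggered} applicable standards met")  # -> 0/1
```

Aggregated across encounters, such pass rates would give the coarse-grained but comparable measure of technical quality discussed above.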
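The low reliability of implicit (holistic) scoring noted in point 2 is usually quantified with an inter-rater agreement statistic such as Cohen’s kappa. The sketch below, using hypothetical ratings, shows the calculation; the modest kappa illustrates the inter-observer variation problem.

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same consultations
# as 'adequate' or 'inadequate' against an implicit standard.
# The ratings are hypothetical.
from collections import Counter

rater_a = ["adequate", "adequate", "inadequate", "adequate", "inadequate", "adequate"]
rater_b = ["adequate", "inadequate", "inadequate", "adequate", "adequate", "adequate"]

n = len(rater_a)
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from each rater's marginal proportions.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
p_chance = sum((counts_a[c] / n) * (counts_b[c] / n)
               for c in set(rater_a) | set(rater_b))

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Observed agreement: {p_observed:.2f}, kappa: {kappa:.2f}")
# -> Observed agreement: 0.67, kappa: 0.25
```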

In conclusion, I think that the basic tools of quality assessment, in the current situation where direct observation and/or simulated patients are not acceptable, are a combination of:

  1. Direct observation of the facilities that exist, along with ease of access to them (see the sketch after this list), and
  2. Debriefing of people who have recently used the health facilities, or who might have received preventive services that are not based in these facilities.
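
As an illustration of the first of these, a mapping exercise might be summarised as provider density and availability figures. The sketch below uses hypothetical data; the facility list and spot-check counts are assumptions for illustration.

```python
# Minimal sketch: summarising a facility-mapping exercise as density
# (providers per 1,000 population) and availability (share of spot checks
# at which the facility was open). All figures are hypothetical.

facilities = [
    # (facility type, times found open, number of spot checks)
    ("pharmacy", 5, 6),
    ("pharmacy", 2, 6),
    ("clinic",   6, 6),
]
population = 4200  # enumerated population of the study area (hypothetical)

n_pharmacies = sum(1 for kind, _, _ in facilities if kind == "pharmacy")
density = 1000 * n_pharmacies / population
availability = sum(opened / checks for _, opened, checks in facilities) / len(facilities)

print(f"Pharmacies per 1,000 population: {density:.2f}")            # -> 0.48
print(f"Mean availability across spot checks: {availability:.0%}")  # -> 72%
```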

We do not think that the above-mentioned shortcomings of these methods are a reason to eschew assessment of service quality in community settings (such as slums) in LMICs – after all, one of the most powerful levers for improvement is quantitative evidence of current care quality.[23] [24] The perfect should not be the enemy of the good. Moreover, if the anecdotes I have heard regarding care quality (providers who hand out only three types of pill – red, yellow and blue; doctors and nurses who do not turn up for work; prescription of antibiotics for clearly non-infectious conditions) are even partly true, then these methods would be more than sufficient to document standards and compare them across types of provider and different settings.

— Richard Lilford, CLAHRC WM Director

References:

  1. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 1. Conceptualising and developing interventions. Qual Saf Health Care. 2008; 17(3): 158-62.
  2. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 2. Study design. Qual Saf Health Care. 2008; 17(3): 163-9.
  3. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 3. End points and measurement. Qual Saf Health Care. 2008; 17(3): 170-7.
  4. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 4. One size does not fit all. Qual Saf Health Care. 2008; 17(3): 178-81.
  5. Brown C, Lilford R. Evaluating service delivery interventions to enhance patient safety. BMJ. 2008; 337: a2764.
  6. Benning A, Ghaleb M, Suokas A, Dixon-Woods M, Dawson J, Barber N, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ. 2011; 342: d195.
  7. Benning A, Dixon-Woods M, Nwulu U, Ghaleb M, Dawson J, Barber N, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ. 2011; 342: d199.
  8. Finnikin S, Ryan R, Marshall T. Cohort study investigating the relationship between cholesterol, cardiovascular risk score and the prescribing of statins in UK primary care: study protocol. BMJ Open. 2016; 6(11): e013120.
  9. Adderley N, Ryan R, Marshall T. The role of contraindications in prescribing anticoagulants to patients with atrial fibrillation: a cross-sectional analysis of primary care data in the UK. Br J Gen Pract. 2017. [ePub].
  10. Herrett E, Smeeth L, Walker L, Weston C, on behalf of the MINAP Academic Group. The Myocardial Ischaemia National Audit Project (MINAP). Heart. 2010; 96: 1264-7.
  11. Care Quality Commission. Adult inpatient survey 2016. Newcastle-upon-Tyne, UK: Care Quality Commission, 2017.
  12. Ipsos MORI. GP Patient Survey. National Report. July 2017 Publication. London: NHS England, 2017.
  13. Grant C, Nicholas R, Moore L, Salisbury C. An observational study comparing quality of care in walk-in centres with general practice and NHS Direct using standardised patients. BMJ. 2002; 324: 1556.
  14. Nolte E, McKee M. Measuring and evaluating performance. In: Smith RD, Hanson K (eds). Health Systems in Low- and Middle-Income Countries: An economic and policy perspective. Oxford: Oxford University Press; 2011.
  15. Tuti T, Bitok M, Malla L, Paton C, Muinga N, Gathara D, et al. Improving documentation of clinical care within a clinical information network: an essential initial step in efforts to understand and improve care in Kenyan hospitals. BMJ Global Health. 2016; 1(1): e000028.
  16. GlobalSurg Collaborative. Mortality of emergency abdominal surgery in high-, middle- and low-income countries. Br J Surg. 2016; 103(8): 971-88.
  17. McPake B, Hanson K. Managing the public-private mix to achieve universal health coverage. Lancet. 2016; 388: 622-30.
  18. Lilford R, Edwards A, Girling A, Hofer T, Di Tanna GL, Petty J, Nicholl J. Inter-rater reliability of case-note audit: a systematic review. J Health Serv Res Policy. 2007; 12(3): 173-80.
  19. Schoen C, Osborn R, Huynh PT, Doty M, Davis K, Zapert K, Peugh J. Primary Care and Health System Performance: Adults’ Experiences in Five Countries. Health Aff. 2004.
  20. Kruk ME, Freedman LP. Assessing health system performance in developing countries: A review of the literature. Health Policy. 2008; 85: 263-76.
  21. Smith F. Private local pharmacies in low- and middle-income countries: a review of interventions to enhance their role in public health. Trop Med Int Health. 2009; 14(3): 362-72.
  22. Satyanarayana S, Kwan A, Daniels B, Subbaraman R, McDowell A, Bergkvist S, et al. Use of standardised patients to assess antibiotic dispensing for tuberculosis by pharmacies in urban India: a cross-sectional study. Lancet Infect Dis. 2016; 16(11): 1261-8.
  23. Kudzma EC. Florence Nightingale and healthcare reform. Nurs Sci Q. 2006; 19(1): 61-4.
  24. Donabedian A. The end results of health care: Ernest Codman’s contribution to quality assessment and beyond. Milbank Q. 1989; 67(2): 233-56.

One thought on “Measuring the Quality of Health Care in Low-Income Settings”

  1. Our recent study of the quality of remote rural PHC units in Uganda corroborates much of what Richard described – a good summary, and I hope he publishes this in the ISQua journal.
    To evaluate the effects of a QI and incentives intervention on the quality of maternal care, we had to collect the following before, after, and later measures:
    1) Process measures from the health centers' maternity, ANC, and PNC OP clinic registers – we had to abstract the data manually, as the monthly summaries were unreliable. (This is a shame, as the monthly summaries are entered into the management information system, which is actually quite good and, if more reliable, would provide information for performance management and QI far better than in many high-resource countries.)
    2) Exit interviews with samples of mothers after ANC and PNC OP clinics
    3) Direct observation of births, using an observation collection tool I developed, based on standards set out in Uganda's national MOH statements.

    The data collection was very time-consuming and costly, even with the low wages of the trained midwife surveyors we used.

    The marginal improvement in some Q measures was not large enough to be attributable to the intervention (even with comparison control sites).

    Much was learned – if monthly Q indicator summaries are properly compiled and entered into the HMIS, then we have a viable and potentially powerful tool. They are very close to being able to do this in Uganda, but midwives' time for data reporting has to compete with clinical time seeing mothers, when there is often only one midwife at the center. There is much more to say, but well done Richard for an excellent summary of the main points – for Uganda, I am more hopeful that an effective Q measurement system is possible.
    Best wishes,
    John Øvretveit, Director of Research and Professor of Health Innovation Implementation and Evaluation
    LIME/MMC, Tomtebodavägen 18A. Karolinska Institutet, Stockholm 17177, Sweden
