Vertical Health Care Programmes or Health System Strengthening: A False Dichotomy

Health care development is sometimes classified as vertical or horizontal. Vertical programmes target specific diseases or disease clusters; tuberculosis, HIV and malaria, for example, are targeted by the Global Fund. Horizontal programmes, by contrast, seek to strengthen the system within which health care is embedded. Such programmes are concerned with human resources, financing, education, and supply chains, among many other functions.

There has been a strong push from many corners, including this News Blog, to move from vertical to horizontal programmes. Supporters of such a change in emphasis cannot but acknowledge the massive successes that vertical programmes have notched up, especially in the fields of infant health, maternal health, and infectious diseases.

However, the limitations of a purely disease-based approach have become increasingly evident. Logically, it is not even possible to instigate a vertical approach in a complete system vacuum. For example, it would be difficult, if not impossible, to instigate a programme to improve HIV care if the supply chain could not make drugs available and if the health system could not support basic diagnostic services. That said, vertical programmes should not be allowed to siphon off more than their fair share of the health services infrastructure.

A recent Lancet paper on health services in Ethiopia made a further important point:[1] vertical systems can make a very good platform from which to extend and deepen generic health systems. In fact, that is precisely what has happened in that country, with full support from the Global Fund and Gavi, the Vaccine Alliance. The authors refer to this combination of vertical and generic development as a “diagonal” investment approach. We would prefer to describe the relationship as one of symbiosis, in which vertical and horizontal programmes are designed to reinforce each other.

The Ethiopian initiative involved strengthening the system at multiple levels, encompassing health service financing, human resources policies, education, investment in primary care, and community outreach activities, along with support for community action and self-help (including the “IKEA model” previously described in this News Blog).[2] Certainly, Ethiopia, along with other countries such as Bangladesh, Thailand and Rwanda, stands out for having achieved remarkable improvements across many dimensions of health. In Ethiopia the reduction in mortality for children under the age of five years was 67% from the 1990 baseline, there was a 71% decline in the maternal mortality ratio, and deaths from malaria, tuberculosis and HIV were halved. This took place against a financial backdrop of declining international aid but increasing domestic expenditure. The combination of vertical programmes and health system strengthening seems to have ensured that the money was not wasted.

— Richard Lilford, CLAHRC WM Director

References:

  1. Assefa Y, Tesfaye D, Van Damme W, Hill PS. Effectiveness and sustainability of a diagonal investment approach to strengthen the primary health-care system in Ethiopia. Lancet. 2018; 392: 1473-81.
  2. Lilford RJ. Pre-payment Systems for Access to Healthcare. NIHR CLAHRC West Midlands News Blog. 18 May 2018.

Childhood IQ and Mortality

Many studies have shown an association between childhood intelligence and mortality. However, most studies have been conducted with male participants, and potential mechanisms for the putative association are poorly understood. A recent paper looked at a large sample of Swedish people in an attempt to clarify these issues.[1]

The authors looked at IQ data from 19,919 Swedes (9,817 of them women) whose IQ was measured at age 13, along with socioeconomic data from their childhood and middle age, over the following 53 years. The analysis found an association between lower IQ and increased all-cause mortality. A one standard deviation decrease in IQ was associated with increased risk of all-cause mortality in both men (hazard ratio 1.31, 95% CI 1.23-1.39) and women (HR 1.16, 95% CI 1.08-1.25). Most causes of death were associated with lower IQ in men, while in women a lower IQ was associated with an increased risk of death from cancer and cardiovascular disease. When the authors adjusted for childhood socioeconomic factors the associations were slightly attenuated, but they were further attenuated when adjusting for adulthood factors – considerably in men (overall mortality HR=1.17, 95% CI 1.08-1.26), and almost completely in women (HR 1.02, 95% CI 0.93-1.12). These results suggest that it is the social and socioeconomic circumstances in adulthood that contribute to the association between IQ and mortality, particularly in women, though the authors state that more research is needed to clarify the pathways linking childhood IQ and mortality across genders.
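
For readers who like to see the arithmetic, here is a minimal sketch (our illustration, not a calculation the authors report) of how such attenuation is commonly quantified, using the standard “percentage of excess hazard explained” heuristic and the point estimates above:

```python
def percent_attenuation(hr_crude: float, hr_adjusted: float) -> float:
    """Share of the excess hazard explained by adjustment, using the
    common heuristic (HR_crude - HR_adjusted) / (HR_crude - 1)."""
    return 100 * (hr_crude - hr_adjusted) / (hr_crude - 1)

# Per 1 SD decrease in IQ, before vs. after adjusting for adulthood factors:
print(percent_attenuation(1.31, 1.17))  # men: ~45% of excess hazard explained
print(percent_attenuation(1.16, 1.02))  # women: ~88%, "almost completely"
```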

— Peter Chilton, Research Fellow

Reference:

  1. Wallin AS, Allebeck P, Gustafsson J-E, Hemmingsson T. Childhood IQ and mortality during 53 years’ follow-up of Swedish men and women. J Epidemiol Community Health. 2018; 72(10): 926-32.

Health Service and Delivery Research – a Subject of Multiple Meanings

Never has there been a topic so subject to lexicological ambiguity as that of Service Delivery Research. Many of the terms it uses are subject to multiple meanings, making communication devilishly difficult; a ‘Tower of Babel’ according to McKibbon et al.[1] The result is that two people may disagree when they agree, or agree when they are fundamentally at odds. The subject is beset with ‘polysemy’ (one word means different things) and, to an even greater extent, ‘cognitive synonyms’ (different words mean the same thing).

Take the very words “Service Delivery Research”. The study by McKibbon, et al. found 46 synonyms (or near synonyms) for the underlying construct, including applied health research, management research, T2 research, implementation research, quality improvement research, and patient safety research. Some people will make strong statements as to why one of these terms is not the same as another – they will tell you why implementation research is not the same as quality improvement, for example. But seldom will two protagonists agree and give the same explanation as to why they differ, and textual exegesis of the various definitions does not support separate meanings – they all tap into the same concept, some focussing on outcomes (quality, safety) and others on the means to achieve those outcomes (implementation, management).

Let us examine some widely used terms in more detail. Take first the term “implementation”. The term can mean two quite separate things:

  1. Implementation of the findings of clinical research (e.g. if a patient has a recent onset thrombotic stroke then administer a ‘clot busting’ medicine).
  2. Implementation of the findings from HS&DR (e.g. do not use incentives when the service providers targeted by the incentive do not believe they have any control over the target).[2] [3]

Then there is my bête noire, “complex interventions”. This term conflates separate ideas, such as the complexity of the intervention vs. the complexity of the system (e.g. health system) with which the intervention interacts. Alternatively, it may conflate the complexity of the intervention components with the number of components the intervention includes.

It is common to distinguish between process and outcome, à la Donabedian.[4] But this conflates two very different things – clinical process (such as prescribing the correct medicine, eliciting the relevant symptoms, or displaying appropriate affect), and service level (upstream) process endpoints (such as favourable staff/patient ratios, or high staff morale). We have described elsewhere the methodological importance of this distinction.[5]

Intervention description is famously conflated with intervention uptake/fidelity/adaptation. The intervention description should be the intervention as described (like the recipe), while the way the intervention is assimilated in the organisation is a finding (like the process the chef actually follows).[6]

These are just a few examples of words with multiple meanings that cause health service researchers to trip over their own feet. Some have tried to forge consensus on these various terms, but widespread agreement has yet to be achieved. In the meantime, it is important to explain precisely what is meant when we talk about implementation, processes, complexity, and so on.

— Richard Lilford, CLAHRC WM Director

References:

  1. McKibbon KA, Lokker C, Wilczynski NL, et al. A cross-sectional study of the number and frequency of terms used to refer to knowledge translation in a body of health literature in 2006: a Tower of Babel? Implementation Science. 2010; 5: 16.
  2. Lilford RJ. Financial Incentives for Providers of Health Care: The Baggage Handler and the Intensive Care Physician. NIHR CLAHRC West Midlands News Blog. 2014 July 25.
  3. Lilford RJ. Two Things to Remember About Human Nature When Designing Incentives. NIHR CLAHRC West Midlands News Blog. 2017 January 27.
  4. Donabedian A. Explorations in quality assessment and monitoring. Health Administration Press, 1980.
  5. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  6. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 3. End points and measurement. Qual Saf Health Care. 2008; 17: 170-7.

Can Diet Help Maintain Brain Health?

A recent study in the journal Neurology looked at the long-term effects of high fruit and vegetable intake on a person’s cognitive function.[1] The authors were able to follow up 27,842 US men over a 26-year period. These men were middle-aged (mean age of 51 years) and were or had been health professionals.

Every four years, from 1986 to 2002, they completed questionnaires looking at their eating habits, and then completed subjective cognitive function questionnaires in 2008 and 2012. Logistic regression of the data found significant individual associations between higher intakes of vegetables (around six servings a day compared to two), fruits (around three servings a day compared to half) and fruit juice (once a day compared to less than once a month) and lower odds of moderate or poor subjective cognitive function. These associations remained significant after adjusting for non-dietary factors and total energy intake, though adjusting for dietary factors weakened the association with fruit intake. Daily consumption of orange juice (compared to less than one serving per month) was associated with much lower odds of poor subjective cognitive function, with an adjusted odds ratio of 0.53 (95% CI 0.43-0.67). Meanwhile the adjusted odds ratios for vegetables were 0.83 (95% CI 0.76-0.92) for moderate, and 0.66 (0.55-0.80) for poor subjective cognitive function. The authors also found that high intake of fruit and vegetables at the start of the study period was associated with a lower risk of poor subjective cognitive function at the end of the study. Although the study does not prove a causal link, the fact that the association lasted the length of the study supports the idea that vegetable and fruit consumption may help avert memory loss.
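
Odds ratios overstate relative risks when the outcome is common, so a rough translation can help interpretation. Here is a minimal sketch using the Zhang-Yu approximation; the 20% baseline risk is assumed purely for illustration and is not a figure from the paper:

```python
def or_to_rr(odds_ratio: float, baseline_risk: float) -> float:
    """Zhang & Yu approximation: convert an odds ratio to a risk ratio,
    given the outcome risk in the unexposed (reference) group."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

# If ~20% of the reference group had poor subjective cognitive function
# (an assumed figure), the reported OR of 0.53 for daily orange juice
# would correspond to a risk ratio of about:
print(or_to_rr(0.53, 0.20))  # ~0.59, i.e. roughly a 40% relative reduction
```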

— Peter Chilton, Research Fellow

Reference:

  1. Yuan C, Fondell E, Bhushan A, Ascherio A, Okereke OI, Grodstein F, Willett WC. Long-term intake of vegetables and fruits and subjective cognitive function in US men. Neurology. 2018.

Senior Doctors and In-hospital Care

Readers of this News Blog may be aware that we are involved in the HiSLAC (high-intensity, specialist-led acute care) project that examines the impact of increasing consultant presence on acute in-hospital care at weekends.[1-4] Professor Julian Bion, the Principal Investigator for the project, recently drew our attention to two studies from the US that have shown some interesting results in relation to the potential impact of senior doctors on the quality of care. One of the studies was a cross-over randomised controlled trial (RCT) conducted in general medical wards in which increased supervision by attending physicians (senior doctors) was compared with standard supervision;[5] the other was a retrospective cohort study in which the association between physicians’ age and patient outcomes was explored.[6]

In the RCT, the attending physicians in the increased supervision group joined residents and interns (doctors who are still in training) on their ward rounds to see previously admitted (i.e. not newly admitted) patients, while in the standard supervision group the attending physicians were available but did not join the ward rounds. Medical error rates did not differ significantly between increased and standard supervision (91 [95% CI 77 to 104] vs 108 [95% CI 86 to 134] events per 1,000 patient-days), but interns (the most junior doctors) spoke significantly less, and both residents and interns felt that they were less efficient and less autonomous in the ward rounds with increased supervision.[5]

The retrospective cohort study was undertaken using a 20% random sample of Medicare (a US federal health insurance program primarily for elderly people) beneficiaries admitted to hospital with a medical condition and treated by hospitalists (senior doctors specialised in the general care of patients in hospital). The association between the hospitalists’ age and 30-day mortality, 30-day re-admission and cost of care was explored with statistical adjustment covering patient characteristics, physician characteristics and hospital fixed effects (which essentially allows comparisons to be made within hospitals). Adjusted 30-day mortality was found to increase with doctors’ age: 10.8%, 11.1%, 11.3% and 12.1% for ages <40, 40-49, 50-59 and ≥60, respectively. The association appears robust under various sensitivity and subgroup analyses, with the exception that no such association was found among doctors with a high volume of patients. Re-admission rates were similar across doctors’ age groups, and costs of care were slightly higher among older doctors.[6]
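
For readers unfamiliar with fixed effects, the following sketch shows the general idea; the variable names and data file are invented, and the actual analysis in the paper was considerably more elaborate:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per admission, with the treating
# doctor's age band, patient covariates and a hospital identifier.
admissions = pd.read_csv("admissions.csv")

# C(hospital_id) adds a dummy variable for each hospital, so the age-band
# coefficients are identified only from variation *within* hospitals,
# which is the "hospital fixed effects" adjustment described above.
fe_model = smf.logit(
    "death30 ~ C(doctor_age_band) + C(sex) + comorbidity_score"
    " + C(hospital_id)",
    data=admissions,
).fit()
print(fe_model.params.filter(like="doctor_age_band"))
```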

What should we make of these findings? For the RCT, the observed effect (a reduction in medical errors) was in the expected direction, but the study was under-powered (the sample size was calculated to detect a 40% relative reduction in error rates vs. the 15% actually observed). However, the junior doctors clearly felt qualified to ‘fly solo’. For the observational study, the association between doctors’ age and the quality and outcomes of care warrants further scrutiny, but any causal interpretation remains highly speculative. Since an experimental study is not on the cards, cause-and-effect reasoning must await triangulation of multiple observations across the chain from cause to effect.[7] Such a study is currently under way with respect to the cause of the “weekend effect”.[8]

— Yen-Fu Chen, Principal Research Fellow

References:

  1. Watson SI, Chen YF, Bion JF, Aldridge CP, Girling A, Lilford RJ. Protocol for the health economic evaluation of increasing the weekend specialist to patient ratio in hospitals in England. BMJ Open. 2018; 8: e015561.
  2. Bion J, Aldridge CP, Girling A, et al. Two-epoch cross-sectional case record review protocol comparing quality of care of hospital emergency admissions at weekends versus weekdays. BMJ Open. 2017; 7: e018747.
  3. Chen Y-F, Boyal A, Sutton E, et al. The magnitude and mechanisms of the weekend effect in hospital admissions: A protocol for a mixed methods review incorporating a systematic review and framework synthesis. Syst Rev. 2016; 5(1): 84.
  4. Tarrant C, Sutton E, Angell E, Aldridge CP, Boyal A, Bion J. The ‘weekend effect’ in acute medicine: a protocol for a team-based ethnography of weekend care for medical patients in acute hospital settings. BMJ Open. 2017; 7(4): e016755.
  5. Finn KM, Metlay JP, Chang Y, et al. Effect of increased inpatient attending physician supervision on medical errors, patient safety, and resident education: a randomized clinical trial. JAMA Intern Med. 2018; 178(7): 952-59.
  6. Tsugawa Y, Newhouse JP, Zaslavsky AM, Blumenthal DM, Jena AB. Physician age and outcomes in elderly patients in hospital in the US: observational study. BMJ. 2017; 357: j1797.
  7. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  8. Lilford RJ, Chen YF. The ubiquitous weekend effect: moving past proving it exists to clarifying what causes it. BMJ Qual Saf. 2015; 24(8): 480-2.

A Casualty of Evidence-Based Medicine – Or Just One of Those Things? Balancing a Personal and Population Approach

My mother-in-law, Celia, died last Christmas. She died in a nursing care home after a short illness – a UTI that precipitated prescription of two courses of antibiotics, followed by an overwhelming C. difficile infection from which she did not recover. She had suffered from mild COPD after years of cigarette smoking, although she had given up more than 35 years previously, and she also had hypertension (high blood pressure) treated with a variety of different medications (more of which later). She was an organised and sensible Jewish woman who would not let you leave her flat without a food parcel of one kind or another, and who had arranged private health insurance to have her knees replaced and her cataracts removed in good time. Officially, medically, she had multimorbidity; unofficially her life was a full and active one, which she enjoyed. She moved house sensibly and in good time, to a much smaller warden-supervised flat with a stair lift, ready to enjoy her declining years in comfort and with support. She had a wide circle of friends, loved going out to matinées at the theatre, and was a passionate bridge player and doting grandma. So far so typical, but I wonder if indirectly she died of iatrogenesis – doctor-induced disease – and I have been worrying for some time about exactly how to understand and interpret the pattern of events that afflicted her.

A couple of weeks ago a case-control study was published in JAMA (I can already hear you say ‘case-control in JAMA!’ Yes – and it’s a good paper).[1] It helps to raise the problem of what may have happened to my son’s grandma and has implications for evidence use in health care. The important issue is that my mother-in-law also suffered from recurrent syncope, or fainting, and falls. It became inconvenient – actually more than inconvenient. She would faint after getting up from a meal, after going upstairs, after rising in the morning – in fact at any time when she stood up. She fell a lot, maybe ten times that I knew about, and perhaps there were more. She badly bruised her face once, falling onto her stair lift, and on three occasions she broke bones as a result of falling: her ankle (requiring surgical intervention), her arm, and her little finger. Her GP ordered a 24-hour ECG and referred her to a cardiologist, where she had a heap of expensive investigations.

Ever the over-enthusiastic, medically qualified, meddling epidemiologist, I went with her to see her cardiologist. We had a long discussion about my presumptive diagnosis: postural hypotension – low blood pressure on standing up – and her blood pressure readings confirmed my suspicion. Postural hypotension can be caused by rare abnormalities, but one of the commonest causes is antihypertensive medication – medication for high blood pressure. The cardiologist and the GP were interested in my view, but were unhappy to change her medication. As far as they were concerned, she definitely came into the category of high blood pressure, which should be treated.

The JAMA paper describes the mortality and morbidity experience of 19,143 treated patients matched to untreated controls in the UK, using CPRD data. Patients entered the study on an ‘index date’, defined as 12 months after the date of the third consecutive blood pressure reading in a specific range (140-159/90-99 mmHg). It says: “During a median follow-up period of 5.8 years (interquartile range, 2.6-9.0 years), no evidence of an association was found between antihypertensive treatment and mortality (hazard ratio [HR], 1.02; 95% CI, 0.88-1.17) or between antihypertensive treatment and CVD (HR, 1.09; 95% CI, 0.95-1.25). Treatment was associated with an increased risk of adverse events, including hypotension (HR, 1.69; 95% CI, 1.30-2.20; number needed to harm at 10 years [NNH10], 41), and syncope (HR, 1.28; 95% CI, 1.10-1.50; NNH10, 35).”

Translated into plain English, this implies that the high blood pressure medication did not make a difference to the outcomes that it was meant to prevent (cardiovascular disease or death). However, it did make a difference to the likelihood of getting adverse events including hypotension (low blood pressure) and syncope (fainting). The paper concludes: “This prespecified analysis found no evidence to support guideline recommendations that encourage initiation of treatment in patients with low-risk mild hypertension. There was evidence of an increased risk of adverse events, which suggests that physicians should exercise caution when following guidelines that generalize findings from trials conducted in high-risk individuals to those at lower risk.”
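
The NNH figures can be reproduced approximately from the hazard ratios if one assumes a baseline (untreated) 10-year risk. Here is a minimal sketch under proportional hazards; the 5% baseline risk is our assumption purely for illustration, as the paper reports NNH10 directly:

```python
def nnh_at_t(hazard_ratio: float, control_risk: float) -> float:
    """Number needed to harm at time t, assuming proportional hazards:
    treated risk = 1 - (1 - control_risk) ** HR (Altman & Andersen)."""
    treated_risk = 1 - (1 - control_risk) ** hazard_ratio
    return 1 / (treated_risk - control_risk)

# With an assumed 5% untreated 10-year risk of hypotension and HR 1.69:
print(nnh_at_t(1.69, 0.05))  # ~30, the same order as the reported NNH10 of 41
```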

Of course, there are plenty of possible criticisms that can never be completely ironed out of a retrospective case-control study relying on routine data, even by the eagle-eyed scrutineers at CLAHRC WM and the JAMA editorial office. Were there underlying pre-existing characteristics that differentiated cases and controls at inception into the study, which might affect their subsequent mortality or morbidity experience? Perhaps those who were the untreated controls were already ‘survivors’ in some way that could not be adjusted for. Was the follow-up period long enough for the participants to experience the relevant outcomes of interest? A median of 5.8 years is not long when considering the development of major cardiovascular illness. Was attention to methods of dealing with missing data adequate? For example, the study says: “Where there was no record of blood pressure lowering, statin or antiplatelet treatment, it was assumed that patients were not prescribed treatment.” Nevertheless, some patients might have been receiving prescriptions that, for whatever reason, were not properly recorded. The article is interesting, and food for thought. We must always bear in mind, however, that observational designs are subject to the play of those well-known, apparently causative variables, ‘confoundings.’[2]

What does all this mean for my mother-in-law? I did not have access to her full medical record and do not know the exact pattern of her blood pressure readings over the years. I am sure that current guidelines would clearly have stated that she should be prescribed antihypertensive medication. The risk of her having a cardiovascular event must have been high, but the falls devastated her life completely. Her individual GP and consultant took a reasonable, defensible and completely sensible decision to continue with her medication, and her falls continued. Finally, a family decision was taken that she couldn’t stay in her own home – she had to be watched 24 hours a day. Her unpredictable and devastating falls were very much a factor in the decision.

Celia hated losing her autonomy and she never really agreed with the decision. From the day that the decision was taken she went downhill. She stopped eating when she went into the nursing home and wouldn’t even take the family’s chicken soup (the Jewish antibiotic), however lovingly prepared. It was not surprising that after a few weeks, and within days of her 89th birthday, she finally succumbed to infection and died.

How can we rationalise all this? Any prescription for any medication should be a balance of risks and benefits, and we need to assess these at both the population level, for guidelines, and at the individual level, for individuals. It’s very hard to calculate precisely how the risk of possible future cardiovascular disease (heart attack or stroke) stacked up for my mother-in-law against the real and present danger of her falls. But I can easily see what apparently went wrong in her medical care, with the benefit of hindsight. I think that the conclusion has to be that in health care we should never lose sight of the individual. Was my mother-in-law an appropriately treated elderly woman experiencing the best of evidence-based medicine? Or was she the victim of iatrogenesis, a casualty of evidence-based medicine whose personal experiences and circumstances were not fully taken into account in the application of guidelines? Certainly, in retrospect it seems to me that I may have failed her – I wish I’d supported her more to have her health care planned around her life, rather than to have her shortened life planned around her health care.

— Aileen Clarke, Professor at Warwick Medical School

References:

  1. Sheppard JP, Stevens S, Stevens R, et al. Benefits and Harms of Antihypertensive Treatment in Low-Risk Patients With Mild Hypertension. JAMA Intern Med. 2018.
  2. Goldacre B. Personal communication. 2018.

Impact of Childcare on Children

Leaving your child crying at the nursery door is a difficult experience that can leave a working parent questioning whether they have the right priorities. When I first experienced this a few years ago, a good friend working at Cancer Research sent me a summary of research showing an inverse association between institutional childcare and childhood cancer (probably mediated by early childhood infections). “Don’t worry, going to nursery is doing at least some good for your child!” she said.

A new study using data from the EDEN mother-child cohort (based in France) gives additional reasons to alleviate working-parent guilt.[1] This study examined childcare arrangements in the first three years of life for 1,428 children, categorising these as: with a childminder, centre-based (i.e. nursery or crèche staffed with professionals), or informal (primarily parents, complemented with grandparents or other non-professionals). Emotional and behavioural development of the children was assessed at ages 3, 5.5 and 8 years. Confounders, including child factors (such as birthweight and duration of breastfeeding), parental sociodemographic factors (such as marital status and mother’s perception of partner support), and parents’ mental health, were considered in analyses through propensity scores and inverse probability weights.
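
For those unfamiliar with these methods, here is a minimal sketch of how propensity scores and inverse probability weights balance confounders; the file and variable names are invented, and the EDEN analysis itself was more sophisticated:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical flat file: one row per child, with childcare type (0/1),
# confounders, and an outcome score. All names here are invented.
df = pd.read_csv("childcare_cohort.csv")
confounders = ["birthweight", "breastfeeding_months",
               "maternal_education", "maternal_depression"]

# 1. Propensity score: modelled probability of centre-based care,
#    given the confounders.
ps = LogisticRegression(max_iter=1000).fit(
    df[confounders], df["centre_based"]).predict_proba(df[confounders])[:, 1]

# 2. Inverse probability weights: up-weight children whose observed
#    childcare type was unlikely given their confounder profile.
df["ipw"] = df["centre_based"] / ps + (1 - df["centre_based"]) / (1 - ps)

# 3. A weighted comparison of outcomes then approximates one in which
#    the measured confounders are balanced across childcare types.
print(df.groupby("centre_based").apply(
    lambda g: (g["emotional_symptoms"] * g["ipw"]).sum() / g["ipw"].sum()))
```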

Formal childcare was found to predict lower levels of emotional symptoms and peer-relationship problems, and higher levels of prosocial behaviour, persisting even at age 8. Children who were in centre-based childcare had the lowest levels of emotional symptoms and peer-relationship problems.

Surprisingly (to me), subgroup analyses showed that girls, children whose mothers had high levels of education, and those whose mothers were not depressed may benefit the most from formal childcare. The authors state that the result for girls is likely to be because childcare mainly reduces internalising problems, which are more prevalent in girls. The fact that the other ‘low-risk’ children fare better when exposed to formal childcare is suggested to be because the universal curriculum is most appropriate for those who do not have more severe emotional and social development issues.

Clearly there are many things to consider when deciding whether to work while also parenting a small child; even if a rule generally applies, only the person making the decision knows the context of their own family and what suits them best. It is also worth noting that this observational study cannot prove a causal relationship. But for those of us who do choose to leave a child in centre-based care, this paper offers some solace in those moments of ambivalence.

— Oyinlola Oyebode, Associate Professor

Reference:

  1. Gomajee R, El-Khoury F, Côté S, van der Waerden J, Pryor L, Melchior M; EDEN Mother-Child Cohort Study Group. Early childcare type predicts children’s emotional and behavioural trajectories into middle childhood. Data from the EDEN mother-child cohort study. J Epidemiol Community Health. 2018; 72(11): 1033-43.

Examining Quality of TB Care with Standardised Patients

Kwan and colleagues have recently published another study to add to their growing portfolio of research on the use of standardised patients (SPs) – actors trained to act as real patients and portray particular cases – to examine the quality of tuberculosis (TB) care in India.[1] This interesting paper builds on their prior work, some of which we have discussed in earlier editions of this blog.[2]

TB is a significant problem in India. The country accounts for a quarter of the world’s estimated 10.4 million new cases of TB annually, and nearly a third of the 1.7 million yearly TB deaths. The quality of healthcare provision in India’s private sector – the first point of contact for the bulk of symptomatic TB patients – is generally accepted to be suboptimal and highly variable.

This impressive study involved 2,652 SP-provider interactions across 1,203 health facilities and 1,288 provider practices in two economically disparate Indian cities (Mumbai and Patna) with a high prevalence of TB. It focused on healthcare providers both with and without formal medical training, and was covertly nested within a TB management improvement programme initiated by the Government of India. The authors trained 24 local actors (seven female and 17 male) to portray four scenarios representing various stages of diagnostic and disease progression of TB. Over a nine-month period, SPs undertook incognito visits to providers – with measures in place to protect against detection. Within two hours of each visit a field researcher administered exit questionnaires to SPs to record details of the interaction. The main outcome of interest in this study was case-specific correct management based on local clinical guidelines for the management of TB.

The key findings were that:

  • Only 25% of SP-provider interactions resulted in standards-compliant care.
  • Only 35% of cases were correctly managed; of these, 53% involved the provider ordering a chest X-ray, 36% a referral of the SP for further care (roughly an equal split between private and public sector providers), and 31% a microbiological test for diagnosis – a relatively infrequent occurrence across all case scenarios.
  • Medicines (mostly antibiotics) were very frequently prescribed or dispensed – the average rate was three per interaction.
  • Rather unsurprisingly, yet reassuringly, medically trained providers were almost three times more likely than non-medically trained providers to correctly manage cases, ask for chest X-ray and/or sputum tests, and initiate anti-TB treatment.
  • Differences in case management for medically and non-medically trained providers between Mumbai and Patna were minimal.

However, the important take-home message is that, in spite of providing relatively higher-quality care, medically trained providers still only correctly managed 54% of interactions, and were more likely than others to prescribe unnecessary or harmful antibiotics – a particularly worrying result amid a global epidemic of antibiotic resistance.

A key strength of this study is that it provided representative data on actual provider behaviour, thus addressing the widely acknowledged ‘know-do’ gap, though it also reiterates two important and recurrent considerations for the use of SPs in research studies:

  1. SPs are most useful for first visits and have not yet been used in repeat visits. But is it reasonable to assume that quality of care may be better at a follow-up visit? This is an issue worthy of investigation in future work.
  2. Should we be asking for prior consent from participating providers? A continuing issue of contention, particularly relevant to the use of SPs in real-life (not educational) settings.

— Navneet Aujla, Research Fellow

References:

  1. Kwan A, Daniels B, Saria V, et al. Variations in the quality of tuberculosis care in urban India: A cross-sectional, standardized patient study in two cities. PLoS Med. 2018; 15(9): e1002653.
  2. Lilford RJ. Private Consultations More Effective Than Public Provision in Rural India. NIHR CLAHRC West Midlands News Blog. 23 June 2017.

Mortality Rate Convergence between High- and Low-Income Countries

A recent Lancet commission led by Watkins and others examined the rate of convergence between high- and low-income countries for a number of conditions.[1] Huge progress has been made in reducing mortality among under-fives and from HIV/AIDS. Progress is less impressive for maternal mortality, and less impressive still for tuberculosis mortality. The authors argued for greater investment in the latter two areas. Other topics singled out for good reason include cervical cancer, hepatitis B and rheumatic heart disease, all on the grounds of great disparities between rich and poor populations. They also argue that more attention must be paid to preparing for pandemics, a topic covered by CLAHRC West Midlands.[2]

The authors argue for greater domestic spending and point out that the economic returns on investment arise both from increased productivity and from the improvement in human welfare, such as that captured in DALYs. But they are very keen to see better targeting of expenditure, which will require careful economic analysis, such as the work we are carrying out on ambulance services. The authors argue for more savvy procurement to shape markets, using Gavi, the Vaccine Alliance, as an excellent example. Following this model, rich countries could incentivise industry to develop new treatments for tuberculosis, for example. The authors make the excellent point that huge improvements could come from closing the delivery-practice gap through population, policy and implementation research. The spread of unhealthy products needs to be curtailed, following the model of the WHO Framework Convention on Tobacco Control.

A recurring theme is that many of the above objectives require international action: shaping markets, preparing for pandemics, and preventing the diffusion of unhealthy products, for example. I am writing this report from Kigali at the close of the NIHR Global Surgery Unit conference. This has been precisely the kind of international collaboration that the authors are arguing for.

— Richard Lilford, CLAHRC WM Director

References:

  1. Watkins DA, Yamey G, Schäferhoff M, et al. Alma-Ata at 40 years: reflections from the Lancet Commission on Investing in Health. Lancet. 2018; 392: 1434-60.
  2. Watson SI, Chen Y-F, Nguyen-Van-Tam JS, Myles PR, Venkatesan S, Zambon M, Uthman O, Chilton PJ, Lilford RJ. Evidence synthesis and decision modelling to support complex decisions: stockpiling neuraminidase inhibitors for pandemic influenza usage. F1000Res. 2016; 5: 2293.

Evidence-Based Guidelines and Practitioner Expertise to Optimise Community Health Worker Programmes

The rapid increase in scale and scope of community health worker (CHW) programmes highlights a clear need for guidance to help programme providers optimise programme design. A new World Health Organization (WHO) guideline in this area [1] is therefore particularly welcome, and provides a complement to existing guidance based on practitioner expertise.[2] The authors of the WHO guideline undertook an overview of existing reviews (N=122 reviews with over 4,000 references included), 15 separate systematic reviews of primary studies (N=137 studies included), and a stakeholder perception survey (N=96 responses). The practitioner expertise report was developed following a consensus meeting of six CHW programme implementers, a review of over 100 programme documents, a comparison of the standard operating procedures of each implementer to identify areas of alignment and variation, and interviews with each implementer.

The volume of existing research, in terms of the number of eligible studies included in each of the 15 systematic reviews, varied widely, from no studies for the review question “Should practising CHWs work in a multi-cadre team versus in a single-cadre CHW system?” to 43 studies for the review question “Are community engagement strategies effective in improving CHW programme performance and utilization?”. Across the 15 review questions, only two could be answered with “moderate” certainty of evidence (the remainder were “low” or “very low”): “What competencies should be included in the curriculum?” and “Are community engagement strategies effective?”. Only three review questions had a “strong” recommendation (as opposed to “conditional”): those based on Remuneration (do so financially), Contracting agreements (give CHWs a written agreement), and Community engagement (adopt various strategies). There was also a “strong” recommendation not to use marital status as a selection criterion.

The practitioner expertise report provided recommendations in eight key areas and included a series of appendices with examples of selection tools, supervision tools and performance management tools. Across the 18 design elements, there was alignment across the six implementers for 14, variation for two (Accreditation – although it is recommended that all CHW programmes include accreditation – and CHW:population ratio), and general alignment but one or more outliers for two (Career advancement – although supported by all implementers – and Supply chain management practices).

There was general agreement between the two documents in terms of the design elements that should be considered for CHW programmes (Table 1), although not including an element does not necessarily mean that the report authors do not think it is important. In terms of the specific content of the recommendations, the practitioner expertise document was generally more specific; for example, on the frequency of supervision the WHO recommend “regular support” and the practitioners “at least once per month”. The practitioner expertise report also included detail on selection processes, as well as selection criteria: not just what to select for, but how to put this into practice in the field. Both reports rightly highlight the need for programme implementers to consider all of the recommendations within their own local contexts; one size will not fit all. Both also highlight the need for more high-quality research. We recently found no evidence of the predictive validity of the selection tools used by Living Goods to select their CHWs,[3] although these tools are included as exemplars in the practitioner expertise report. Given the lack of high-quality evidence available to the WHO report authors, (suitably qualified) practitioner expertise is vital in the short term, and this should now be used in conjunction with the WHO report findings to agree priorities for future research.

Table 1: Comparison of design elements included in the WHO guideline and Practitioner Expertise report


— Celia Taylor, Associate Professor

References:

  1. World Health Organization. WHO guideline on health policy and system support to optimize community health worker programmes. Geneva, Switzerland: WHO; 2018.
  2. Community Health Impact Coalition. Practitioner Expertise to Optimize Community Health Systems. 2018.
  3. Taylor CA, Lilford RJ, Wroe E, Griffiths F, Ngechu R. The predictive validity of the Living Goods selection tools for community health workers in Kenya: cohort study. BMC Health Serv Res. 2018; 18: 803.