Lead Exposure and DALYs

It is well known that exposure to lead can cause a range of health problems, including cognitive impairment, cardiovascular disease and low birth weight. Exposure is also associated with decreased life expectancy and economic output. While many countries have banned the use of lead in products such as petrol and paint, leading to significant declines in the levels of lead recorded in people’s blood (termed blood lead levels – BLLs), numerous other sources of exposure remain. In India, for example, studies found elevated BLLs in the population more than ten years after leaded petrol was phased out; sources include lead smelting sites, some ayurvedic medicines, cosmetics, contaminated food, and contaminated tube wells, rivers and soil. In order to assess the extent of elevated BLLs in India, Ericson and colleagues conducted a meta-analysis of 31 studies totalling 67 samples.[1] Overall, they found a mean BLL of 6.86 μg/dL (95% CI: 4.38-9.35) in children, and 7.52 μg/dL (95% CI: 5.28-9.76) in adults who did not work with lead. As a reference point, the CDC deems a BLL of 5 μg/dL to require prompt medical investigation, “based on the 97.5% of BLL distribution among children… in the United States”.[2] From these figures the authors estimated that such high levels of exposure resulted in a loss of 4.9 million DALYs (95% CI: 3.9-5.6) in 2012. Further, data from other studies suggest that each 0.1-1.0 μg/dL of BLL contributes the loss of a single IQ point, meaning the levels seen in these children would result in an average loss of four IQ points (95% CI: 2.5-4.7).
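
For readers who like to see the arithmetic, here is a minimal sketch in Python. The linear slope is back-calculated from the article’s own figures and is an assumption – Ericson and colleagues used non-linear dose-response curves, so this is illustrative only:

```python
# ASSUMPTION: a linear dose-response back-calculated from the reported
# figures (4 IQ points lost at a mean BLL of 6.86 ug/dL); the curves used
# by Ericson et al. are non-linear, so this is illustrative only.

MEAN_CHILD_BLL = 6.86    # ug/dL, pooled mean BLL in children
REPORTED_IQ_LOSS = 4.0   # average IQ points lost at that mean BLL

iq_points_per_ug_dl = REPORTED_IQ_LOSS / MEAN_CHILD_BLL  # ~0.58

def estimated_iq_loss(bll_ug_dl: float) -> float:
    """Estimated IQ decrement for a given blood lead level (ug/dL)."""
    return bll_ug_dl * iq_points_per_ug_dl

# For example, a child at the CDC reference value of 5 ug/dL:
print(round(estimated_iq_loss(5.0), 1))  # ~2.9 IQ points
```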

The authors fear that a significant amount of the lead exposure stems from used lead-acid batteries from motor vehicles, which are often processed informally, and they therefore call for better regulation and larger studies.

— Peter Chilton, Research Fellow

References:

  1. Ericson B, Dowling R, Dey S, et al. A meta-analysis of blood lead levels in India and the attributable burden of disease. Environ Int. 2018; 121(1): 461-70.
  2. Centers for Disease Control and Prevention. CDC Response to Advisory Committee on Childhood Lead Poisoning Prevention Recommendations in “Low Level Lead Exposure Harms Children: A Renewed Call for Primary Prevention”. 2012.

Health Effects of Armed Conflict: A Truly Fascinating Study

The phenomenon that more people die from the indirect effects of warfare than are killed directly is widely recognised. Wagner and colleagues studied the effect of armed conflict on child mortality in Africa.[1] They used a geospatial approach, linking georeferenced data on armed conflicts to georeferenced data from the Demographic and Health Surveys. Their study covered two decades (1995-2015) and 35 African countries, and the outcome variable was child survival to the age of one year. Overall, there was a nearly 8% increase in the risk of child death during a year of conflict. Many of the conflicts were small, however, and for armed conflicts with more than 1,000 direct fatalities the increased risk of death before the age of one year exceeded 25%. The cumulative effect over eight years was up to four times larger than the contemporaneous increase, and the effect was greatly amplified for long-lasting conflicts. Effects were significantly stronger in rural than in urban areas. The authors also examined child growth and found an increased risk of stunting in relation to conflict.
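
The linkage step can be illustrated with a minimal sketch (the 50 km radius, the year-matching rule and the record fields are assumptions for illustration; the authors’ actual buffering rules are more elaborate):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def conflict_exposed(cluster, conflicts, radius_km=50, year=None):
    """Flag a survey cluster as conflict-exposed if any (optionally
    same-year) conflict event falls within radius_km of its location."""
    return any(
        (year is None or c["year"] == year)
        and haversine_km(cluster["lat"], cluster["lon"],
                         c["lat"], c["lon"]) <= radius_km
        for c in conflicts
    )

# Hypothetical records for illustration:
cluster = {"lat": 9.05, "lon": 7.49}                    # a DHS survey cluster
conflicts = [{"lat": 9.30, "lon": 7.60, "year": 2010}]  # a georeferenced event
print(conflict_exposed(cluster, conflicts, year=2010))  # True (~30 km away)
```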

Sadly, there was no shortage of armed conflicts in the 35 African countries studied – 15,441 armed conflicts were recorded in the Uppsala Conflict Data Program over the two decades. The results reported here represent a massive burden of disease, on a scale comparable with malnutrition.

Avoiding conflict is a tricky subject that lies outside the health domain; it is discussed in Paul Collier’s book ‘The Bottom Billion’.[2] Conflict is also very strongly associated with national poverty and is, arguably, the biggest threat confronting humankind – a subject we will discuss in the future.

— Richard Lilford, CLAHRC WM Director

References:

  1. Wagner Z, Heft-Neal S, Bhutta ZA, Black RE, Burke M, Bendavid E. Armed conflict and child mortality in Africa: a geospatial analysis. Lancet. 2018; 392: 857-65.
  2. Collier P. The Bottom Billion: Why the Poorest Countries are Failing and What Can Be Done About It. Oxford: Oxford University Press; 2007.

Estimating Mortality Due to Low-Quality Care

A recent paper by Kruk and colleagues attempts to estimate the number of deaths caused by sub-optimal care in low- and middle-income countries (LMICs).[1] They do so by selecting 61 conditions that are highly amenable to healthcare and estimating deaths from these conditions using the Global Burden of Disease studies. The proportion of deaths attributable to differences in health systems is estimated from the difference in death rates between LMICs and high-income countries (HICs). So if the death rate from stroke in people aged 70 to 75 is ten per 1,000 in HICs and 20 per 1,000 in LMICs, then ten deaths per 1,000 are deemed preventable. This ‘subtractive method’ of estimating deaths that could be prevented by improved health services simply answers the otiose question: “what would happen if low-income countries and their populations could be converted, by the wave of a wand, into high-income countries complete with populations enjoying high income from conception?” Such a reductionist approach simply replicates the well-known association between per capita GDP and life expectancy.[2]

The authors do try to isolate the effect of institutional care from access to facilities. To make this distinction they need to estimate utilisation of services, which they do from various household surveys conducted at selected sites around the world. These surveys contain questions about service use, so a further subtraction is performed: if half of all people deemed to be having a stroke utilise care, then half of the difference in stroke mortality is attributed to quality of care.
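
Putting the two subtractions together, the method amounts to something like the following sketch, using the stroke figures quoted above (a simplification of our description of the paper, not the authors’ actual calculation):

```python
# The stroke example from the text, per 1,000 people aged 70-75.
hic_rate = 10.0    # deaths per 1,000 in high-income countries
lmic_rate = 20.0   # deaths per 1,000 in LMICs
utilisation = 0.5  # proportion of stroke patients who access care

excess = lmic_rate - hic_rate                # 10 'amenable' deaths per 1,000

# Excess deaths among those who used care are attributed to poor quality;
# the remainder to failure to access care at all.
quality_deaths = excess * utilisation        # 5 per 1,000
access_deaths = excess * (1 - utilisation)   # 5 per 1,000

print(quality_deaths, access_deaths)         # 5.0 5.0
```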

Based on this methodology, the authors find that the lion’s share of deaths is caused by poor-quality care, not by failure to access care. This conclusion is flawed because:

  1. The link between the databases is at a very coarse level – there is no individual linkage.
  2. As a result, risk-adjustment is not possible.
  3. Further to the above, the method is crucially unable to account for delays in presentation, or for differences in access to care before presentation, which will inevitably result in large differences in prognosis at presentation.
  4. Socio-economic status and deprivation over a lifetime are associated with recovery from a condition, so differences in outcome are not due only to differences in care quality.[3]
  5. There are measurement problems at every turn. For example, the Global Burden of Disease is estimated in very different ways in HICs and LMICs – the latter rely heavily on verbal autopsy.
  6. Quality, as measured by crude subtractive methodologies, includes survival achieved by means of expensive, high-technology care. However, because of opportunity costs, the introduction of effective but expensive treatments will do more harm than good in LMICs (until they are no longer LMICs).

The issue of delay in presentation is crucial. Take, for example, cancer of the cervix. In HICs the great majority of cases are diagnosed at an early, if not pre-invasive, stage; in low-income countries almost all cases are already far advanced at presentation. To attribute the difference in death rates to the quality of care is inappropriate. Deep in the discussion the authors state that ‘comorbidity and disease history could be different between low and high income countries which can result in some bias.’ This is an understatement, and the problem cannot be addressed by a passing mention. Later they assert that all sensitivity analyses support the conclusion that poor healthcare is a larger driver of amenable mortality than under-utilisation of services, but it is difficult to believe such sensitivity analyses when this bias is treated so lightly.

Let us be clear: there is tons of evidence that care is, in many respects, very sub-optimal in LMICs, and we care about trying to improve it. But we think such dramatic results, based on excessively reductionist analyses, are simply not justifiable, and in seeking attention in this way they risk undermining broader support for the important goal of improving care in LMICs. In areas from global warming to mortality during the Iraq war we have seen the harm that marketing with unreliable methods, and generalising beyond the evidence, can do to a good cause by giving fodder to those who do not want to believe that there is a problem. What is needed are careful observations and direct measurements of care quality itself, along with evaluations of the cost-effectiveness of methods to improve care. Mortality is a crude measure of care quality.[4][5] Moreover, the extent to which healthcare reduces mortality is quite modest among older adults. The type of paper reported here topples over into marketing – it is as unsatisfying a scientific endeavour as it is sensational.

— Richard Lilford, CLAHRC WM Director

— Timothy Hofer, Professor in Division of General Medicine, University of Michigan

References:

  1. Kruk ME, Gage AD, Joseph NT, Danaei G, García-Saisó S, Salomon JA. Mortality due to low-quality health systems in the universal health coverage era: a systematic analysis of amenable deaths in 137 countries. Lancet. 2018.
  2. Rosling H. How Does Income Relate to Life Expectancy? Gapminder. 2015.
  3. Pagano D, Freemantle N, Bridgewater B, et al. Social deprivation and prognostic benefits of cardiac surgery: observational study of 44,902 patients from five hospitals over 10 years. BMJ. 2009; 338: b902.
  4. Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet. 2004; 363: 1147-54.
  5. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.

Monumental Study of Service Interventions to Drive Up the Quality of Care in Low- and Middle-Income Countries

Rowe and colleagues conducted a systematic review covering over half a century of studies of different methods to improve clinician practices.[1] By Jupiter, they scanned over 200,000 citations, selecting 337 studies of 18 improvement methods. Time series studies and studies with contemporaneous controls were included. Effects were measured as percentage-point differences in clinical practices – for example, the proportion of patients receiving unnecessary treatments.

In this particular study only comparisons of intervention versus control were used. Head-to-head comparisons of different interventions are to be reported separately.

Thirteen different intervention strategies were identified, including high-intensity training, supervision, and group problem solving. All studies were classified on a risk-of-bias scale; only a small proportion of studies were at low risk of bias.

Training alone had moderate effects on clinical behaviour, in the range of 10 to 16 percentage points, while training combined with supervision had somewhat larger effects, at about 18 percentage points. The effect of training was generally smaller – less than three percentage points – for community health workers. As you might expect, wide differences in methodology and context make comparisons difficult.

This is a large and complex study that bears careful reading. Here are some take-home messages from close colleagues:

  • Just as Oxman and colleagues demonstrated for physicians, there are no magic bullets.
  • How much research effort is sub-optimal given the lack of improvement in study quality over time?
  • A deep-dive into the studies to look at the role of context would say “it matters” – the question is how we can use the results to design more effective interventions in the future.

— Celia Taylor

  • Studies of strategies to improve health worker performance and quality in actual practice show that there is usually a performance/quality gap even after an intervention has been implemented.
  • Effect sizes varied widely for most strategies, demonstrating the difficulty of predicting how effective a strategy might be in a different context.
  • Training or supervision alone had small effects, so it is best to combine them with group problem solving, as these combinations had larger effect sizes.

— Jo Sartori

  • Effect sizes varied widely and there was a large risk of bias, so more rigorous studies are needed (panel 3). However, I also thought the finding that “strategies that included community support plus training, with or without other components, tended to have larger effect sizes” was interesting.
  • Finally, the ‘group problem solving’ strategy, which I had not heard of before, is something I would like to look into further, as the authors think this component may benefit other strategies, while alone it brings about only moderate effects.

— Maartje Klatter

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Rowe AK, Rowe SY, Peters DH, et al. Effectiveness of strategies to improve health-care provider practices in low-income and middle-income countries: a systematic review. Lancet Glob Health. 2018.

Childhood Diarrhoeal Diseases – Update of the Famous Wolf Review

Yes, the famous Wolf meta-analysis [1] of the impact of drinking water, sanitation and hand-washing (WASH) interventions on childhood diarrhoeal disease has been updated.[2] It is a monumental study and the clever authors had to wrestle to the ground the following issues:

  1. Multiple comparisons across different WASH interventions (14 for studies of drinking water alone).
  2. Different designs – before and after, time series, RCTs, etc.
  3. Problem of reactivity of the outcome measure.
  4. Multiple sources of potential variation, such as urban vs. rural; different levels of coverage and use achieved.

The results enable me to update the model we have already published in the Lancet.[3] Updating the model of drinking water intervention intensity vs. effect, we get:

[Figure: drinking water intervention intensity vs. effect on diarrhoea]

Improved sanitation reduces diarrhoea by about 25%, and hygiene interventions by a similar amount (though the latter are often ephemeral). Logically one would expect the effectiveness of hygiene interventions to decline in proportion to the effectiveness of water and sanitation interventions. The coverage and uptake achieved varies across studies, so it would be nice to make the above model three-dimensional to include the effect of coverage and the ‘herd effects’ that might be expected. The authors found that average effect sizes across sanitary interventions were much greater when coverage exceeded 75% (45%) than when it did not (24%), and, as this was part of a meta-regression, I assume they controlled for intervention type.
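
To see why, the reported effects can be placed on the risk-ratio scale. The sketch below makes the (strong) assumption that interventions act independently, so that risk ratios multiply:

```python
def risk_ratio(percent_reduction: float) -> float:
    """Convert a percentage reduction in diarrhoea into a risk ratio."""
    return 1 - percent_reduction / 100

# Reported effects: ~25% reduction each for sanitation and hygiene.
rr_sanitation = risk_ratio(25)
rr_hygiene = risk_ratio(25)

# Under independence, risk ratios multiply - so the combined effect is
# less than additive (~44%, not 50%), consistent with the expectation
# that hygiene matters less once water and sanitation improve.
combined = rr_sanitation * rr_hygiene
print(f"combined reduction: {(1 - combined) * 100:.0f}%")  # 44%

# The coverage finding in the same terms:
print(risk_ratio(45), risk_ratio(24))  # 0.55 (coverage >75%) vs 0.76
```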

The authors make a sensitivity adjustment for ‘reactivity’ – the tendency of people to be less likely to report diarrhoea in a survey if they discern that an intervention has been put in place. The fact that diarrhoea rates behave as expected suggests that the measurements are certainly better than meaningless.[4] The meta-regression showed no difference in effect sizes for water interventions across urban and rural areas.

— Richard Lilford, CLAHRC WM Director

References:

  1. Wolf J, Prüss-Ustün A, Cumming O, et al. Assessing the impact of drinking water and sanitation on diarrhoeal disease in low- and middle-income settings: systematic review and meta-regression. Trop Med Int Health. 2014; 19(8): 928-42.
  2. Wolf J, Hunter PR, Freeman MC, et al. Impact of drinking water, sanitation and handwashing with soap on childhood diarrhoeal disease: updated meta-analysis and meta-regression. Trop Med Int Health. 2018; 23(5): 508-25.
  3. Lilford RJ, Oyebode O, Satterthwaite D, et al. Improving the health and welfare of people who live in slums. Lancet. 2017; 389: 559-70.
  4. Lilford RJ. Important New Data on WASH and Nutritional Interventions from Kenya and Bangladesh. NIHR CLAHRC West Midlands News Blog. 18 May 2018.

Quality of Care on Removal of Financial Incentives in General Practices

Minchin, et al. report an interrupted time series analysis of electronic medical records, tracking the effect of the removal of financial incentives on provider behaviour.[1] Incentives were withdrawn for 12 quality of care indicators in 2014, while they were retained for six indicators.
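
For readers unfamiliar with the method, an interrupted time series of this kind is typically analysed with a segmented regression. Here is a minimal sketch on simulated data (not the study’s), estimating the step change at the point of withdrawal:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(48)
post = (months >= 24).astype(float)  # incentive withdrawn at month 24

# Simulated adherence (%): gentle secular trend plus a 10-point step down.
adherence = 90 - 0.05 * months - 10 * post + rng.normal(0, 1, size=48)

# Level-change model: adherence ~ intercept + trend + step at withdrawal.
X = np.column_stack([np.ones_like(months, dtype=float), months, post])
coef, *_ = np.linalg.lstsq(X, adherence, rcond=None)
print(f"estimated step change: {coef[2]:.1f} percentage points")  # ~ -10
```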

The results showed a sharp and almost immediate fall in adherence to the 12 indicators for which the incentive was withdrawn. There was no such drop in performance for the six indicators that were retained.

Many of the measurements of adherence were based on clinicians making an entry in the electronic record to confirm compliance – for example, confirming that advice on disease prevention had been given. It is therefore possible that clinicians continued to adhere to the tenets of good practice after withdrawal of the incentive, while simply omitting to record this detail in the electronic notes. However, not all measurements depended on active clinician entry – the record is populated automatically with blood test results, for example. Adherence fell for previously incentivised indicators where physician entry was bypassed, such as blood tests, as well as for those that required physician entry, although the fall was not as great for the former as for the latter.

The results reported here are broadly in-line with the literature; removal of financial incentives for clinical care standards is generally followed by a decline in performance.

What does this mean for the use of performance measures? One must assume that they cannot be retained in perpetuity; at some point the world must move on, even if only to implement a further set of performance measures. But my overarching impression is reconfirmed – the use of incentives, measurements and targets is of limited value. In the last analysis, the only way to bring about a sustained, lasting and self-perpetuating improvement in care is to win the hearts and minds of clinicians. It is important to kindle a set of high-rectitude values, and to select individuals with the right characteristics, i.e. highly principled people with a deep sense of altruism. This is, I am afraid, an ultra-long-term solution – a person’s attitudes start at their mother’s knee and are reinforced or suppressed by the totality of life experience. Inspiring teachers at medical school and good role models throughout life are critical. That is one reason I continue to argue that medical ethics and so-called ‘communication skills’ should be taught by doctors and not farmed out to philosophers and psychologists.[2] When I was a clinical professor these valuable colleagues taught me, but I taught the students.

— Richard Lilford, CLAHRC WM Director

References:

  1. Minchin M, Roland M, Richardson J, Rowark S, Guthrie B. Quality of Care in the United Kingdom after Removal of Financial Incentives. N Engl J Med. 2018; 379: 948-57.
  2. Lilford RJ. Doctor-Patient Communication in the NHS. NIHR CLAHRC West Midlands News Blog. 24 March 2017.

Unique Study of the Introduction of Commercial ePrescribing Systems Shows an Overall Reduction in Medication Error

This study [1] shows that the introduction of a commercial computerised decision support system resulted in an important reduction in prescribing errors across the three hospitals included in the time series. The reduction was highly significant in two of the hospitals, but in the third a small increase in errors was seen. The latter finding could be ascribed to an increased error rate for just two prescriptions, perhaps because staff relied too heavily on the system to guide them.

The study also showed that only about a third of the decision support capacity was enabled across the hospitals. Moreover, which decision algorithms were enabled varied considerably across hospitals.

It can be concluded that commercial systems have the potential to reduce error rates. They can almost eliminate errors in cases where the system refuses to prescribe a medicine because of the egregious nature of the potential error. Service providers need help in implementing these systems so that the most important decision support capabilities are consistently enabled. Lastly, staff should be educated as to which capabilities have been enabled and which have not, so that they do not come to place excessive reliance on the decision support system.
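
The ‘refusal’ behaviour described above is often called a hard stop. A toy sketch of the idea (the rule below is invented for illustration, not taken from the study system):

```python
def check_prescription(drug: str, frequency: str) -> str:
    """Accept an order unless it trips a hard-stop rule."""
    # Methotrexate is a weekly drug; a daily order is a classic
    # egregious error, so the system refuses rather than merely warns.
    if drug.lower() == "methotrexate" and frequency == "daily":
        raise ValueError("Blocked: methotrexate must not be prescribed daily.")
    return "order accepted"

try:
    check_prescription("Methotrexate", "daily")
except ValueError as err:
    print(err)  # the hard stop fires
```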

N.B. The CLAHRC WM Director was an applicant on the study reported here, though not an author of this particular paper.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Pontefract SK, Hodson J, Slee A, Shah S, Girling AJ, Williams R, Sheikh A, Coleman JJ. Impact of a commercial order entry system on prescribing errors amenable to computerised decision support in the hospital setting: a prospective pre-post study. BMJ Qual Saf. 2018; 27: 725-36.

Cognitive Bias Modification for Addictive Behaviours

It can be difficult to change health behaviours. Good intentions to quit smoking or drink less alcohol, for example, do not always translate into action – or, if they do, the change doesn’t last very long. A meta-analysis of meta-analyses suggests that intentions explain, at best, a third of the variation in actual behaviour change.[1] [2] What else can be done?

One approach is to move from intentions to inattention. Quite automatically, people who regularly engage in a behaviour like smoking or drinking alcohol pay more attention to smoking- and alcohol-related stimuli. To interrupt this process, ‘cognitive bias modification’ (CBM) can be used.

Amongst academics, the results of CBM have been called “striking” (p. 464),[3] have prompted questions about how such a light-touch intervention can have such strong effects (p. 495),[4] and have led to the development of online CBM platforms.[5]

An example of a CBM task for heavy alcohol drinkers is using a joystick to ‘push away’ pictures of beer and wine and ‘pull in’ pictures of non-alcoholic soft drinks. Alcohol-dependent in-patients who received just an hour of this type of CBM showed a relapse rate one year later that was 13 percentage points lower than that of those who did not – 50/108 patients in the experimental group versus 63/106 in the control group.[4]
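
The quoted difference is an absolute one, as a quick check of the arithmetic shows:

```python
relapse_control = 63 / 106  # ~59.4% relapsed without CBM
relapse_cbm = 50 / 108      # ~46.3% relapsed with one hour of CBM

difference = (relapse_control - relapse_cbm) * 100
print(f"absolute difference: {difference:.0f} percentage points")  # 13
```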

Debate about the efficacy of CBM is ongoing. It appears that CBM is more effective when administered in clinical settings rather than in a lab experiment or online.[6]

— Laura Kudrna, Research Fellow

References:

  1. Sheeran P. Intention-behaviour relations: A conceptual and empirical review. In: Stroebe W, Hewstone M (Eds.). European review of social psychology, (Vol. 12, pp. 1–36). London: Wiley; 2002.
  2. Webb TL, Sheeran P. Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychol Bull. 2006; 132(2): 249.
  3. Sheeran P, Gollwitzer PM, Bargh JA. Nonconscious processes and health. Health Psychol. 2013; 32(5): 460.
  4. Wiers RW, Eberl C, Rinck M, Becker ES, Lindenmeyer J. Retraining automatic action tendencies changes alcoholic patients’ approach bias for alcohol and improves treatment outcome. Psychol Sci. 2011; 22(4): 490-7.
  5. London School of Economics and Political Science. New brain-training tool to help people cut drinking. 18 May 2016.
  6. Wiers RW, Boffo M, Field M. What’s in a trial? On the importance of distinguishing between experimental lab studies and randomized controlled trials: The case of cognitive bias modification and alcohol use disorders. J Stud Alcohol Drugs. 2018; 79(3): 333-43.

A Fascinating Service Improvement Cluster Randomised Trial

Most of you know that, across over 150 randomised trials, the effect of simple audit and feedback is a very modest 5% improvement in compliance with a standard of care. Could this be improved by a more active form of feedback based on daily dashboards, as well as weekly performance review?

To find out, Patel and colleagues conducted a cluster randomised trial in which one group of hospital-based clinical teams (the control teams) received standard feedback, consisting of twice-monthly emails depicting performance on quality metrics.[1] The intervention teams received, in addition, access to daily updated performance dashboards and in-person review of performance data. The authors refer to this enhanced audit and feedback as ‘next-generation audit and feedback’. A total of 40 medical teams participated in the trial. The outcome was a composite of various performance criteria, such as medicine reconciliation and a timely discharge summary. The intervention mimics a method that produced improvements in medicine administration in our CLAHRC West Midlands.[2]

The trial showed a sharp increase in performance in the intervention group compared with the control teams. But that is not all. The investigators introduced a wash-out period – that is to say, they withdrew the intervention but continued to monitor performance. What do you think happened?

The intervention did not stick – the enhanced performance in the intervention group soon reverted to baseline. The improvement in performance was considerably greater than that observed in most quality improvement studies, but it was ephemeral. Other studies have also found that face-to-face performance review is more effective than more passive feedback methods. I really liked the idea of a wash-out – expect to see this in a CLAHRC near you! And the transient nature of the improvement provides further evidence in support of my conclusion – we should aim for deep change in attitudes, in addition to more surface-level approaches to behaviour change.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Patel S, Rajkomar A, Harrison JD, Prasad PA, Valencia V, Ranji SR, Mourad M. Next-generation audit and feedback for inpatient quality improvement using electronic health record data: a cluster randomised controlled trial. BMJ Qual Saf. 2018; 27: 691-9.
  2. Coleman JJ, Hodson J, Brooks HL, Rosser D. Missed medication doses in hospitalised patients: a descriptive account of quality improvement measures and time series analysis. Int J Qual Health Care. 2013; 25(5): 564-72.

Cannabis and Schizophrenia: Which Way Around Does Causality Run?

Does cannabis lead to schizophrenia, or does schizophrenia lead to the use of cannabis? That there is a strong, dose-related association between the use of cannabis and the development of schizophrenia is not in doubt. But association studies cannot prove causality. Furthermore, a dose response can be seen wherever exposure to the putative causative agent is correlated with the true causative agent.

Genes to the rescue! Power and colleagues looked to see whether genetic polymorphisms associated with cannabis use are also associated with schizophrenia in people not exposed to cannabis.[1] They found that genes that predisposed to cannabis use also predisposed to schizophrenia, independently of whether the person actually used cannabis. The strength of the association between cannabis-predisposing genes and schizophrenia was the same in people who used cannabis as in those who had never used the substance. Moreover, the risk was the same irrespective of the dose of cannabis consumed. The genetic predisposition to consume cannabis explained less than one-tenth of the variance in cannabis use. Nevertheless, this finding suggests that it is the predisposition to use cannabis, rather than the cannabis itself, that causes the psychiatric symptoms. If corroborated, this study has important implications for policy.
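
The logic can be made concrete with a small simulation of the implied causal structure – a gene that raises both cannabis use and schizophrenia risk, with no direct effect of cannabis itself (all parameters are invented for illustration):

```python
import random

random.seed(1)
n = 100_000
counts = {"users": [0, 0], "non_users": [0, 0]}  # [cases, total]

for _ in range(n):
    gene = random.random() < 0.3                      # predisposing genotype
    uses_cannabis = random.random() < (0.5 if gene else 0.2)
    # Schizophrenia risk depends on the gene ONLY - not on cannabis use.
    schizophrenia = random.random() < (0.02 if gene else 0.005)
    key = "users" if uses_cannabis else "non_users"
    counts[key][0] += schizophrenia
    counts[key][1] += 1

for key, (cases, total) in counts.items():
    print(f"{key}: {cases / total:.4f}")
# Users show a higher schizophrenia rate than non-users even though
# cannabis does nothing in this model: association without causation.
```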

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Power RA, Verweij KJH, Zuhair M, et al. Genetic predisposition to schizophrenia associated with increased use of cannabis. Mol Psychiatr. 2014; 19: 1201-4.