Tag Archives: Guidelines

Evidence-Based Guidelines and Practitioner Expertise to Optimise Community Health Worker Programmes

The rapid increase in scale and scope of community health worker (CHW) programmes highlights a clear need for guidance to help programme providers optimise programme design. A new World Health Organization (WHO) guideline in this area [1] is therefore particularly welcome, and provides a complement to existing guidance based on practitioner expertise.[2] The authors of the WHO guideline undertook an overview of existing reviews (N=122 reviews with over 4,000 references included), 15 separate systematic reviews of primary studies (N=137 studies included), and a stakeholder perception survey (N=96 responses). The practitioner expertise report was developed following a consensus meeting of six CHW programme implementers, a review of over 100 programme documents, a comparison of the standard operating procedures of each implementer to identify areas of alignment and variation, and interviews with each implementer.

The volume of existing research, in terms of the number of eligible studies included in each of the 15 systematic reviews, varied widely, from no studies for the review question “Should practising CHWs work in a multi-cadre team versus in a single-cadre CHW system?” to 43 studies for the review question “Are community engagement strategies effective in improving CHW programme performance and utilization?”. Across the 15 review questions, only two could be answered with “moderate” certainty of evidence (the remainder were “low” or “very low”): “What competencies should be included in the curriculum?” and “Are community engagement strategies effective?”. Only three review questions had a “strong” recommendation (as opposed to “conditional”): those on Remuneration (do so financially), Contracting agreements (give CHWs a written agreement), and Community engagement (adopt various strategies). There was also a “strong” recommendation not to use marital status as a selection criterion.

The practitioner expertise report provided recommendations in eight key areas and included a series of appendices with examples of selection tools, supervision tools and performance management tools. Across the 18 design elements, there was alignment across the six implementers for 14, variation for two (Accreditation – although it is recommended that all CHW programmes include accreditation – and CHW:population ratio), and general alignment but one or more outliers for two (Career advancement – although supported by all implementers – and Supply chain management practices).

There was general agreement between the two documents in terms of the design elements that should be considered for CHW programmes (Table 1), although not including an element does not necessarily mean that the report authors do not think it is important. In terms of the specific content of the recommendations, the practitioner expertise document was generally more specific; for example, on the frequency of supervision the WHO recommend “regular support” and practitioners “at least once per month”. The practitioner expertise report also included detail on selection processes, as well as selection criteria: not just what to select for, but how to put this into practice in the field. Both reports rightly highlight the need for programme implementers to consider all of the recommendations within their own local contexts; one size will not fit all. Both also highlight the need for more high-quality research. We recently found no evidence of the predictive validity of the selection tools used by Living Goods to select their CHWs,[3] although these tools are included as exemplars in the practitioner expertise report. Given the lack of high-quality evidence available to the WHO report authors, (suitably qualified) practitioner expertise is vital in the short term, and this should now be used in conjunction with the WHO report findings to agree priorities for future research.

Table 1: Comparison of design elements included in the WHO guideline and Practitioner Expertise report


— Celia Taylor, Associate Professor

References:

  1. World Health Organization. WHO guideline on health policy and system support to optimize community health worker programmes. Geneva, Switzerland: WHO; 2018.
  2. Community Health Impact Coalition. Practitioner Expertise to Optimize Community Health Systems. 2018.
  3. Taylor CA, Lilford RJ, Wroe E, Griffiths F, Ngechu R. The predictive validity of the Living Goods selection tools for community health workers in Kenya: cohort study. BMC Health Serv Res. 2018; 18: 803.

Interim Guidelines for Studies of the Uptake of New Knowledge Based on Routinely Collected Data

CLAHRC West Midlands and CLAHRC East Midlands use Hospital Episode Statistics (HES) to track whether the findings of effectiveness studies are implemented in practice. Acting on behalf of CLAHRCs, we have studied uptake of findings from the HTA programme over a five-year period (2011-15). We use HES to track uptake of study treatments where use of the treatment is recorded in the database – most often these are studies of surgical procedures. We conduct time series analyses to examine the relationship between publication of apparently clear-cut findings and the implementation (or not) of those findings. We have encountered some bear traps in this apparently simple task, which must be carried out with an eye for detail. Our work is ongoing, but here we alert practitioners to some things to look out for, based on the literature and our experience. First, note that using time series of routine data to study clinical practice is both similar to and different from the use of control charts in statistical process control. For the latter purpose, News Blog readers are referred to the American National Standard (2018).[1] Here are some bear traps/issues to consider when using databases for the former purpose – namely, scrutinising routine data for changes in treatment for a given condition:

  1. Codes. By a long way, the biggest problem you will encounter is the selection of codes. The HTA RCT on treatment of ankle fractures [2] described the type of fracture in completely different language to that used in the HES data. We did the best we could, seeking expert help from an orthopaedic surgeon specialising in the lower limb. Some thoughts:
    1. State the codes or code combinations used. In a recent paper, Costa and colleagues did not state all the codes used in the denominator for their statistics on uptake of treatment for fractures of the distal radius.[3] This makes it impossible to replicate their findings.
    2. Give the reader a comprehensive list of relevant codes highlighting those that you selected. This increases transparency and comparability, and can be included as an appendix.
    3. When uncertain, start with a narrow set of codes that seem to correspond most closely to indications for treatment in the research studies, but also provide results for a wider range – these may reflect ‘spill-over’ effects of study findings or miscoding. Again, the wider search can be included as an appendix, and serves as a kind of sensitivity analysis.
    4. If possible, examine coding practice by interrogating local databases that contain detailed clinical information alongside the routine codes generated by the same institution. This provides empirical information on coding accuracy. We did this with respect to the use of tight-fitting casts to treat unstable ankle fracture (found to be non-inferior to more invasive surgical plates [4]) and found that the procedure was coded in three different ways. We combined these codes in our study, although, to the extent that the codes are not specific, this increases measurement error (diluting the signal).
  2. Denominators.
    1. In some cases denominators cannot be ascertained. We encountered this problem in our analysis of surgery for oesophageal reflux, where surgery was found to be more effective than medical treatment.[5] The counterfactual here is medical therapy, which can be delivered in various settings and is not specific to the index condition. Here we simply had to examine the effect of the trial results on the number of operations carried out country-wide. Seasonal effects are a potential problem with such denominator-free data.
    2. For surgical procedures, the index procedure should be combined with the counterfactual procedure from the trial to create a denominator. The denominator can also be expanded to include other procedures for the same indication if this makes sense clinically.
  3. Data interval. The more frequent the index procedure, the shorter the appropriate interval. If the number of observations in an interval falls below a certain threshold, the data cannot be reported (to protect patient privacy) and a wider interval must be used. A six-month interval seemed suitable for many surgical procedures.
  4. Of protocols and hypotheses. We have found that the detailed protocol must emerge through an iterative process, including discussion with clinical experts. But we think there should be a ‘general’ prior hypothesis for this kind of work. So we specified the date of publication of the relevant HTA report as our pre-set time point – the equivalent of the primary hypothesis. We applied this date line for all of the procedures examined. However, solipsistic focus on this date line would obviously lead to an impoverished understanding, so we follow a three-phase process inspired by Fichte’s thesis-antithesis-synthesis model [6]:
    1. We test the hypothesis that a single linear model fits the data using a CUSUM (cumulative sum) test of recursive residuals. The null hypothesis is that the cumulative sum of recursive residuals has an expected value of 0. If the cumulative sum wanders outside the 95% confidence band at any point in time, this indicates that the coefficients have changed and a single linear model does not fit the data.
    2. If the above test indicates a change in the coefficients, we use a Wald test to identify the point at which the model has a break. We estimate two separate models before and after the break date and compare their slopes and intercepts (a minimal sketch of steps 1 and 2 is given after this list).
    3. Last, we ‘check by members’: we discuss the findings with experts who can tell us when guidelines emerged and when other relevant trials may have been published – ideally a literature review would complement this process.
  5. Interpretation. In the absence of contemporaneous controls, cause and effect inference must be cautious.
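
As a concrete illustration of phases one and two above, here is a minimal sketch in Python (using only NumPy and SciPy). The data, variable names and 20-interval series are invented for illustration, and it is not the code used in our analyses: it fits a single linear trend to a simulated uptake series, computes the CUSUM of recursive residuals with approximate 95% bands, and then tests for a break at the pre-specified publication interval. For simplicity it uses a Chow-type F test in place of the Wald test described above.

```python
import numpy as np
from scipy import stats


def recursive_residuals(y, X):
    """Standardised one-step-ahead (recursive) residuals (Brown-Durbin-Evans)."""
    n, k = X.shape
    w = []
    for t in range(k, n):
        X_t, y_t = X[:t], y[:t]
        XtX_inv = np.linalg.inv(X_t.T @ X_t)
        beta = XtX_inv @ X_t.T @ y_t
        x_new = X[t]
        scale = np.sqrt(1.0 + x_new @ XtX_inv @ x_new)
        w.append((y[t] - x_new @ beta) / scale)
    return np.asarray(w)


def cusum_test(y, X, a=0.948):
    """CUSUM of recursive residuals with approximate 95% bands (a = 0.948)."""
    n, k = X.shape
    w = recursive_residuals(y, X)
    W = np.cumsum(w) / w.std(ddof=1)                  # scaled cumulative sum
    band = a * np.sqrt(n - k) + 2.0 * a * np.arange(w.size) / np.sqrt(n - k)
    return W, band, bool(np.any(np.abs(W) > band))


def chow_test(y, x, break_idx):
    """Chow F-test for a structural break at a pre-specified interval."""
    def ssr(y_seg, x_seg):
        X_seg = np.column_stack([np.ones_like(x_seg), x_seg])
        beta, *_ = np.linalg.lstsq(X_seg, y_seg, rcond=None)
        resid = y_seg - X_seg @ beta
        return resid @ resid

    k, n = 2, y.size                                   # intercept + slope
    ssr_pooled = ssr(y, x)
    ssr_split = ssr(y[:break_idx], x[:break_idx]) + ssr(y[break_idx:], x[break_idx:])
    f_stat = ((ssr_pooled - ssr_split) / k) / (ssr_split / (n - 2 * k))
    return f_stat, stats.f.sf(f_stat, k, n - 2 * k)


# Invented example: uptake proportion per six-month interval, with a drop in
# uptake after a hypothetical HTA publication at interval 12.
rng = np.random.default_rng(0)
t = np.arange(20, dtype=float)
uptake = 0.30 + 0.002 * t - 0.05 * (t >= 12) + rng.normal(0, 0.01, t.size)

X = np.column_stack([np.ones_like(t), t])              # single linear model
W, band, crosses = cusum_test(uptake, X)
print("CUSUM crosses the 95% band:", crosses)          # phase 1
print("Chow F = %.2f, p = %.4f" % chow_test(uptake, t, break_idx=12))  # phase 2
```

In practice the break date would be the publication date of the relevant HTA report, and the series would be constructed from HES numerators and denominators built from the agreed code lists discussed above.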

This is an initial iteration of our thoughts on this topic. However, increasing amounts of data are being captured in routine systems, and databases are increasingly constructed in real time because they are used primarily as clinical tools. So we thought it would be helpful to start laying down some procedural rules for retrospective use of such data to determine long-term trends. We invite readers to comment on, enhance and extend this analysis.

— Richard Lilford, CLAHRC WM Director

— Katherine Reeves, Statistical Intelligence Analyst at UHBFT Health Informatics Centre

References:

  1. ASTM International. Standard Practice for Use of Control Charts in Statistical Process Control. Active Standard ASTM E2587. West Conshohocken, PA: ASTM International; 2018.
  2. Keene DJ, Mistry D, Nam J, et al. The Ankle Injury Management (AIM) trial: a pragmatic, multicentre, equivalence randomised controlled trial and economic evaluation comparing close contact casting with open surgical reduction and internal fixation in the treatment of unstable ankle fractures in patients aged over 60 years. Health Technol Assess. 2016; 20(75): 1-158.
  3. Costa ML, Jameson SS, Reed MR. Do large pragmatic randomised trials change clinical practice? Assessing the impact of the Distal Radius Acute Fracture Fixation Trial (DRAFFT). Bone Joint J. 2016; 98-B: 410-3.
  4. Willett K, Keene DJ, Mistry D, et al. Close Contact Casting vs Surgery for Initial Treatment of Unstable Ankle Fractures in Older Adults. A Randomized Clinical Trial. JAMA. 2016; 316(14): 1455-63.
  5. Grant A, Wileman S, Ramsay C, et al. The effectiveness and cost-effectiveness of minimal access surgery amongst people with gastro-oesophageal reflux disease – a UK collaborative study. The REFLUX trial. Health Technol Assess. 2008; 12(31): 1–214.
  6. Fichte J. Early Philosophical Writings. Trans. and ed. Breazeale D. Ithaca, NY: Cornell University Press; 1988.

Giving Feedback to Patient and Public Advisors: New Guidance for Researchers

Whenever we are asked for our opinion we expect to be thanked, and we also like to know whether what we have contributed has been useful. If a statistician/qualitative researcher/health economist has contributed to a project, they would (rightfully) expect some acknowledgement, and to be told whether their input had been incorporated. As patient and public contributors are key members of the research team, providing valuable insights that shape research design and delivery, it is right to assume that they should also receive feedback on their contributions. But a recent study led by Dr Elspeth Mathie (CLAHRC East of England) found that routine feedback to PPI contributors is the exception rather than the rule. The mixed-methods study (questionnaire and semi-structured interviews) found that feedback was given in a variety of formats, and that satisfaction with it varied. A key finding was that nearly 1 in 5 patient and public contributors (19%) reported never having received feedback on their involvement.[1]

How should feedback be given to public contributors?

There should be no ‘one size fits all’ approach to providing feedback to public contributors. The study recommends early conversations between researchers and public contributors to determine what kind of feedback should be given and when. A Public and Patient Lead can help to facilitate these discussions and ensure feedback is given and received throughout a research project. Three main categories of feedback were identified:

  • Acknowledgement of contributions – confirming that input was received and saying ‘thanks’;
  • Information about the impact of contributions – Whether input was useful and how it was incorporated into the project;
  • Study success and progress – Information on whether a project was successful (e.g. securing grant funding/gaining ethical approval) and detail about how the project is progressing.

 

What are the benefits of providing feedback to public contributors?

The study also explored benefits of giving feedback to contributors. Feedback can:

  • Increase motivation for public contributors to be involved in future research projects;
  • Help improve a contributor’s input into future projects (if they know what has been useful, they can provide more of the same);
  • Build the public contributor’s confidence;
  • Help the researcher reflect on public involvement and the impact it has on research.

 

What does good feedback look like?

Researchers, PPI Leads and public contributors involved in the feedback study have co-produced Guidance for Researchers on providing feedback for public contributors to research.[2] The guidance explores the following:

  • Who gives feedback?
  • Why is PPI feedback important?
  • When to include PPI feedback in the research cycle?
  • What type of feedback?
  • How to give feedback?

Many patient and public contributors get involved in research to ‘make a difference’. This Guidance will hopefully help ensure that all contributors learn how their contributions have made a difference and will also inspire them to continue to provide input to future research projects.

— Magdalena Skrybant, PPIE Lead

References:

  1. Mathie E, Wythe H, Munday D, et al. Reciprocal relationships and the importance of feedback in patient and public involvement: A mixed methods study. Health Expect. 2018.
  2. Centre for Research in Public Health and Community Care. Guidance for Researchers: Feedback. 2018

More Guidelines for Applied Researchers: This Time for Reporting of Feasibility Studies for Trials

CLAHRCs are, at least in part, vehicles for testing new service interventions on a small scale to see if they are likely to work at a grander scale. Guidelines for such ‘proof of principle’ studies are therefore important to CLAHRCs, and CONSORT has now published guidelines specifically for feasibility trials.[1] The paper hugely exceeds the BMJ word limit and is a somewhat tedious read, albeit clear and useful as a reference document. The distinguished authors define a pilot trial as a particular type of feasibility study in which ‘a future definitive study, or a part of it, is conducted on a smaller scale.’ Lancaster’s original point, that such a study should not be used to determine sample size for the definitive study, is made and should be taken to heart. Most of the examples are based on clinical treatments, rather than service/policy interventions. Anyway, it should be included in the growing library of guidelines that researchers need to be aware of, but let us not enslave ourselves to these voluminous documents. It is the principles behind them that should be deployed creatively by intelligent researchers.

— Richard Lilford, CLAHRC WM Director

References:

  1. Eldridge SM, Chan CL, Campbell MJ. CONSORT 2010 Statement: Extension to Randomised Pilot and Feasibility Trials. BMJ. 2016; 355: i5239.

Are electronic patient records a medico-legal time-bomb?

Litigation lawyers used to chase ambulances; soon they may be trawling patient records.

Electronic records can be an uncompromising record of when care departed from clinical guidelines. Once upon a time the Bolam test – acting in accordance with a body of medical opinion – was sufficient to define the required standard of care. But after 1997, the Bolitho case modified this to allow that a body of medical opinion could be challenged as irrational.[1] More recently, clinical guidelines have begun to inform the required standard of care.[2]

Now both the Medical Defence Union (MDU) and the Medical Protection Society (MPS) advise that doctors must be prepared to justify decisions and actions that depart from nationally recognised guidelines.[3] [4] Alongside this, the General Medical Council regard it as a professional responsibility to be familiar with guidelines.[5] So if a doctor departs from clinical guidelines without recording a reason and their patient suffers a foreseeable adverse outcome as a result, there is a basis for a medical negligence claim.

Do electronic patient records have implications for this? Quite possibly. By retrieving historical records it is relatively easy to identify whether a patient who suffered an adverse outcome was previously treated in accordance with guidelines. We can get an idea of the scale of this by looking at an example.

Anticoagulants halve the risk of stroke in atrial fibrillation, and guidelines recommend anticoagulant use in most patients with atrial fibrillation.[6] But in the UK half are untreated.[7] By contrast, 87% are treated in Germany and 92% in Switzerland.[8] [9] So it is not a question of contraindications. There are about 100,000 strokes a year in the UK, with about 80,000 ischaemic strokes.[10] Data from Sweden – with a similar undertreatment problem – indicates that about 29% of these will have atrial fibrillation.[11] So the UK has roughly 23,000 strokes a year in atrial fibrillation patients, half of whom were not on anticoagulants – 11,500 litigation opportunities. A case would be hard to defend without a documented rationale for withholding treatment.
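
For readers who want to check the arithmetic, the rough calculation above can be reproduced in a few lines of Python; the inputs are simply the figures quoted in the text and its references, and the small discrepancies are rounding.

```python
# Reproducing the rough figures quoted above (UK, per year).
ischaemic_strokes = 80_000      # of ~100,000 total strokes [10]
af_proportion = 0.29            # proportion of ischaemic strokes with atrial fibrillation [11]
untreated_proportion = 0.5      # ~half of UK AF patients not anticoagulated [7]

strokes_in_af = ischaemic_strokes * af_proportion          # 23,200, ~23,000 in the text
potential_claims = strokes_in_af * untreated_proportion    # 11,600, ~11,500 in the text
print("about 1 in", round(100_000 / potential_claims), "stroke patients")   # ~1 in 9
```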

From the ambulance-chasing perspective this means one in every nine stroke patients represents a business opportunity: certainly worth a letter requesting the records if only the patient will accede. The archive of patient data is easy to search. It is a treasure trove for medical negligence lawyers. It is just surprising it has not already happened.

— Tom Marshall, Deputy Director CLAHRC WM, Prevention and Detection of Diseases

References:

  1. Brazier M, Miola J. Bye-bye Bolam: a medical litigation revolution? Med Law Rev. 2000; 8: 85–114.
  2. Samanta A, Mello MM, Foster C, Tingle J, Samanta J. The role of clinical guidelines in medical negligence litigation: a shift from the Bolam standard? Med Law Rev. 2006; 14(3): 321-66.
  3. Medical Defence Union. Must Doctors Comply With Guidelines? 2010. [Online]. [Last accessed 11 July 2014].
  4. Medical Protection Society. Ignoring the guidelines Case Reports. 2013; 21(1). [Online]. [Last accessed 11 July 2014].
  5. General Medical Council. Good Medical Practice. 2013.[Online]. [Last accessed 11 July 2014].
  6. National Institute for Health and Care Excellence. NICE Guidelines CG180. Atrial fibrillation: the management of atrial fibrillation. 2014. [Online]. [Last accessed 11 July 2014].
  7. Holt TA, Hunter TD, Gunnarsson C, Khan N, Cload P, Lip GY. Risk of stroke and oral anticoagulant use in atrial fibrillation: a cross-sectional survey. Br J Gen Pract. 2012; 62(603): e710-7.
  8. Meiltz A, Zimmermann M, Urban P, Bloch A, Association of Cardiologists of the Canton of Geneva. Atrial fibrillation management by practice cardiologists: a prospective survey on the adherence to guidelines in the real world. Europace. 2008; 10(6): 674-80.
  9. Meinertz T, Kirch W, Rosin L, Pittrow D, Willich SN, Kirchhof P, ATRIUM investigators. Management of atrial fibrillation by primary care physicians in Germany: baseline results of the ATRIUM registry. Clin Res Cardiol. 2011; 100(10): 897-905.
  10. British Heart Foundation. Stroke Statistics 2009. 2009. [Online]. [Last accessed 11 July 2014].
  11. Björck S, Palaszewski B, Friberg L, Bergfeldt L. Atrial fibrillation, stroke risk, and warfarin therapy revisited: a population-based study. Stroke. 2013; 44(11): 3103-8.