
Why You Need to Know About Primary Care Networks

Amidst the constant organisational tinkering and occasional wholesale reform that is the NHS structure, it would be easy to note the emergence of a new construct without seeking to fully understand what it is and what it does. Often, by the time the new structural entity has determined its role and embedded this within the wider landscape of service provision, there are already plans in place to supersede it.

Primary care has been particularly prone to this in recent years, with Vanguards, Pioneers, Federations and Multispecialty Community Providers, to name but a few. So Primary Care Networks (or PCNs), which became legal entities at the beginning of July, may understandably have failed to arouse your interest. If so, then read on.

So what exactly are they? PCNs aim to address the long-standing challenge in primary care of economies of scale. By providing services at greater scale it is hoped they will help to address issues around workforce, the integration of non-acute services, and the increasingly ageing buildings from which care is delivered. These networks will mainly cover populations of 30,000-50,000 people (with some exceptions at both ends of the scale for particularly rural or populous areas). Almost all GP practices have signed up, creating around 1,300 networks across England and, whilst some do span boundaries, most sit within existing Clinical Commissioning Group (CCG) footprints. They provide the locality building blocks for primary care against the backdrop of 191 CCGs, a number that continues to decrease with mergers.

As referenced earlier, the longevity of PCNs is always likely to be an initial query. However, they look to have a strong future, at least in the medium term. They are heavily backed by strategy and policy (they are a core component of the NHS Long Term Plan), and are financially wedded to the core GP contract, itself backed by much of the funding announced for primary care within the Long Term Plan.

All good so far, but why might we in NIHR CLAHRC and ARC West Midlands be so interested in PCNs? Well, one of their key remits is to improve the integration of primary and community care services, and there has been an active effort to include social care and voluntary and third sector organisations and to increase social prescribing. This speaks to much of our planned work agenda around the management of long-term conditions, and around the acute interfaces of care within both physical and mental health. There will also be funding to help GPs increase their use of digital health, which will be interesting to consider alongside our evaluative work on electronic prescribing, patient-accessed health records and virtual outpatient consultations.

PCNs also potentially provide very nicely sized clusters for implementation. The opportunity to break down, for instance, two CCGs into seven or eight PCNs in order to conduct an intervention is an extremely attractive one. The more multidisciplinary approach to primary care delivery, which will undoubtedly include an element of skill substitution, will be ripe for the kind of evaluation that ARCs will be well placed to deliver.

Whether PCNs will deliver all that they promise remains to be seen. However, they are an interesting development, look as though they will have longevity, and we hope will offer a new range of evaluative opportunities for service delivery research.

— Paul Bird, Head of Programmes (engagement)

Increasing the use of Statistical Process Control (SPC) charts by healthcare organisations

Previous research and news blogs have highlighted that people make better decisions using statistical process control (SPC) charts than with standard run charts,[1] and that SPC charts are very poorly used within healthcare.[2] The last few months have seen CLAHRC WM busily working to improve the uptake of SPC charts at both operational and strategic levels. We have run three workshops for middle and senior managers at Trusts across the region, training over 80 managers in the process and equipping them with tools to make their own control charts. A number of further sessions are planned over the coming months, including with NHS Graduate Management Training Scheme trainees in the region, who we hope will take SPC chart methodology onwards and upwards with them in their careers.
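
For readers who would like to experiment, the arithmetic behind one of the simplest SPC charts, the individuals (XmR) chart, is shown below. This is a minimal sketch of our own, not the workshop material:

```python
import statistics

def xmr_limits(values):
    """Individuals (XmR) chart: the centre line is the mean, and the
    control limits are the mean +/- 2.66 times the average moving range
    (2.66 is the standard SPC constant for individuals charts)."""
    centre = statistics.mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.mean(moving_ranges)
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

# Invented monthly counts of an adverse event on one ward:
falls_per_month = [12, 9, 14, 11, 8, 13, 10, 15, 9, 12]
lcl, centre, ucl = xmr_limits(falls_per_month)
print(f"LCL = {lcl:.1f}, centre = {centre:.1f}, UCL = {ucl:.1f}")
```

Points falling outside the limits signal 'special cause' variation worth investigating; points within them are consistent with ordinary chance variation.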

In addition to this, we are working to build broader strategic support for adoption by organisations. On 30 April CLAHRC WM, in conjunction with the West Midlands Academic Health Science Network, hosted an evening workshop on this topic. We were delighted to be able to welcome Samantha Riley, Head of Healthcare Analytics at NHS England and NHS Improvement, to talk about her work in converting hospital boards to SPC methodology, and the excellent “Making Data Count” resource pack, featuring our original article, that she and her team have produced. We had great representation and engagement from Trusts across the region, with good discussion and a set of agreed actions to take the programme forwards. One of these will be to hold a “Making Data Count” Ambassadors workshop in the region in July to create a wider pool of individuals with knowledge of SPC charts. We will be sure to share details of this event in the news blog once finalised!

— Paul Bird, CLAHRC WM Head of Programme Delivery (Engagement)

References:

  1. Schmidtke KA, Watson DG, Vlaev I. The use of control charts by laypeople and hospital decision-makers for guiding decision making. Quart J Exp Psychol. 2017; 70(7): 1114-28.
  2. Schmidtke KA, Poots AJ, Carpio J, et al. Considering chance in quality and safety performance measures: an analysis of performance reports by boards in English NHS trusts. BMJ Qual Saf. 2017; 26: 61-9.

Mandatory Publication and Reporting of Research Findings

Publication bias refers to the phenomenon by which research findings that are statistically significant, or perceived to be interesting or desirable, are more likely to be published, and vice versa.[1] This bias is a major threat to scientific integrity and can have major implications for patient welfare and resource allocation. Progress has been made over the years in raising awareness and minimising the occurrence of such bias in clinical research: pre-registration of trials has been made compulsory by the editors of leading medical journals [2] and subsequently by regulatory agencies. Evidence of a positive impact on the registration and reporting of findings from trials used to support drug licensing has started to emerge.[3,4] So can this issue be consigned to history now? Unfortunately, the clear answer is no.

A recent systematic review showed that, despite gradual improvement over the past two decades, the mean proportion of pre-registered randomised controlled trials (RCTs) included in meta-epidemiological studies of trial registration only increased from 25% to 52% between 2005 and 2015.[5] A group of researchers led by Dr Ben Goldacre created the EU Trials Tracker (https://eu.trialstracker.net/), which uses automation to identify trials within the European Union Clinical Trials Register that are due to report their findings but have not done so.[6] Their estimates paint a similar picture: around half of completed trials have not reported their results. The findings of the Trials Tracker are presented in a league table that allows people to see which sponsors have the highest rates of unreported trials. You might suspect that pharmaceutical companies would be the top offenders, given high-profile cases of suppressing drug trial data in the past. In fact the opposite is now true – major pharmaceutical companies are among the best compliers with trial reporting, whereas some universities and hospitals have achieved fairly low reporting rates. While there may be practical issues and legitimate reasons behind the absence of, or delay in, reporting findings for some studies, the bottom line is that making research findings available is a moral duty for researchers irrespective of funding sources. With improved trial registration and the enhanced power of data science, leaving research findings to perish and be forgotten in a file drawer or folder is neither an acceptable nor a feasible option.
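
The logic behind such automation is simple. The sketch below flags trials whose reporting window has passed without results; the column names and the 12-month reporting window are illustrative assumptions, not the actual EU CTR schema or the Trials Tracker code:

```python
import pandas as pd

# Illustrative register extract (hypothetical column names):
trials = pd.DataFrame({
    "sponsor": ["University A", "Pharma B", "Hospital C"],
    "completion_date": pd.to_datetime(["2016-03-01", "2017-06-15", "2015-11-30"]),
    "results_posted": [False, True, False],
})

# Treat a trial as 'due' once 12 months have passed since completion.
due = trials["completion_date"] + pd.DateOffset(months=12) < pd.Timestamp("2019-01-01")
overdue = trials[due & ~trials["results_posted"]]
print(overdue.groupby("sponsor").size())  # unreported trials per sponsor
```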

With slow but steady progress in tackling publication bias in clinical research, you might wonder about health services research, which is close to the heart of our CLAHRC. Literature on publication bias in this field is scant, but over the past two years we have been funded by the NIHR HS&DR Programme to explore the issue, and some interesting findings are emerging. Interested readers can access further details, including conference posters reporting our early findings, on our project website (warwick.ac.uk/publicationbias). We will share further results with News Blog readers in the near future and, in due course, publish them all!

— Yen-Fu Chen, Associate Professor

References:

  1. Song F, Parekh S, Hooper L, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8):1-193.
  2. Laine C, De Angelis C, Delamothe T, et al. Clinical trial registration: looking back and moving ahead. Ann Intern Med. 2007;147(4):275-7.
  3. Zou CX, Becker JE, Phillips AT, et al. Registration, results reporting, and publication bias of clinical trials supporting FDA approval of neuropsychiatric drugs before and after FDAAA: a retrospective cohort study. Trials. 2018;19(1):581.
  4. Phillips AT, Desai NR, Krumholz HM, Zou CX, Miller JE, Ross JS. Association of the FDA Amendment Act with trial registration, publication, and outcome reporting. Trials. 2017;18(1):333.
  5. Trinquart L, Dunn AG, Bourgeois FT. Registration of published randomized trials: a systematic review and meta-analysis. BMC Medicine. 2018;16(1):173.
  6. Goldacre B, DeVito NJ, Heneghan C, et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ. 2018;362:k3218.

Engaging with Engagement

Engagement is easy. We are in a fortunate position in CLAHRC West Midlands that there is seemingly a long queue of people keen to talk to us about interesting and exciting health and social care projects. However, there is little point in engagement for engagement’s sake: our resources are too scarce to invest in projects or relationships with little or no return, and so meaningful engagement is much harder.

In putting together our application for hosting an Applied Research Collaboration we were faced with our perennial challenge of who to engage with and how. To do so we began to map our networks (see figure) and quickly realised even the number of NHS organisations (71) was too broad for us to work across in depth, never mind the wide range of academic, social care, voluntary sector and industry partners in the wider landscape beyond.

Our approach has been to work with partners who are keen to work with us; we make no apology for being a coalition of the willing. However, we have worked purposefully to ensure reach across all sectors, actively seeking out collaborators with whom we have had more limited interactions, but who we know can help deliver the reach we require for research and implementation. For instance, we have one of the best-performing and most forward-thinking ambulance services in the country, with paramedics working at the very interface between physical and mental health, social care and emergency medicine. Given that we know some of these problems are best addressed upstream, the ambulance service gives us the opportunity to head closer to where the river rises than ever before.

[Figure 1: mapping our networks across the region]

[1] Based on 2013/14 figures from RAWM
[2] Department for Business, Innovation and Skills, Business Population Estimates

In addition to this, we seek to use overarching bodies to help us reach across spaces that are too diffuse and fragmented to access directly (such as the voluntary, charitable and third sectors). Even using these we will have to be selective among the 21 that exist when we seek to engage with voluntary groups (for example around priority setting, Public and Community Involvement, Engagement and Participation, or co-production). Elsewhere, we utilise networks of networks, for example collaborating with the Membership Innovation Councils of the West Midlands Academic Health Science Network, which draw in representatives from a wide cross-section of organisations and professions who can then transmit our message to their respective organisations and local networks. Our experience tells us these vicarious contacts can often deliver some of the most useful engagement opportunities.

Finally, we have always been committed within CLAHRC to cross-site working and having our researchers and staff embedded as much as possible within healthcare organisations. This is in part to ensure our research remains grounded within the ‘real world’ of service delivery, rather than the dreaming spires (or concrete and glass tower blocks) of academia. However, we know that regardless of how well you plan and construct your network, some of the best ideas come about through chance encounters and corridor conversations. Nobel Prize-winning economist Elinor Ostrom, much beloved by the CLAHRC WM team, elegantly described the value of ‘cheap talk’ in relation to collectively owned resources.[3] The visibility of our team can often prompt a brief exchange to rule an idea in or out for CLAHRC where a formal contact or approach might not have been made, making our ‘cheap talk’ meaningful through its context. Perhaps this is how we should see ourselves in CLAHRC West Midlands: as a finite but shared resource for the health and social care organisations within our region.

— Paul Bird, Head of Programmes (engagement)

References:

  1. RAWM. The West Midlands Voluntary and Community Sector. 2015.
  2. Rhodes C. Business Statistics. Briefing Paper No. 06152. 2018.
  3. Ostrom E. Beyond Markets and States: Polycentric Governance of Complex Economic Systems. Am Econ Rev. 2010; 100(3): 641-72.

Barriers and Facilitators to Self-care of Leprosy in the Community – Report on a Stakeholder Consultation in Kathmandu, Nepal

The problem

Ulceration and deformity of the extremities, particularly the feet, are important complications of leprosy (known as Hansen’s disease in America). The pathophysiology of leprosy ulcers is similar to that of ulcers in diabetes mellitus – in both cases nerve damage leads to loss of sensation, which in turn leads to repetitive injury and ultimately ulceration. In addition, leprosy causes deformities, which increase the risk of repeated trauma and hence ulceration. Leprosy is a disease that affects the poorest of the poor, frequently those living in remote areas. The disease is highly stigmatising in the communities in which it occurs, leading to late presentation at healthcare facilities and hence a high incidence of ulceration among people who have contracted the disease. Once a person has had one ulcer, repeated ulceration is common, affecting at least 30% of patients.[1]

NIHR CLAHRC WM is working with The Leprosy Mission to develop interventions to prevent ulceration among high-risk leprosy patients – especially those who have had previous ulcers. To this end, we participated in a stakeholder meeting organised by our colleagues Drs Deanna Hagge and Indra Napit at the Anandaban Hospital in Kathmandu, Nepal on 14 December 2018.

[Photograph of participants at the stakeholder meeting]

©Paramjit Gill, 2018. Photo taken with permission of participants.

Stakeholders included leprosy-affected people, ulcer patients, administrative and clinical staff, representatives working on behalf of leprosy-affected people, and two government officials. Stakeholders were asked to speak not only about barriers to the prevention of ulcers but also about possible means to overcome these barriers. All voices were heard and the meeting lasted about two hours.

First, we report themes relating to barriers to prevention that emerged during the stakeholder meeting. Second, we arrange them according to the well-known COM-B model [2] encompassing Capability, Opportunity and Motivation as factors affecting Behaviour. Finally, we consider what may be done to overcome the barriers.

Themes

The following themes emerged during the consultation:

  • Poverty. All were agreed that the need to work to provide the essentials of life increased the risk of placing pressure on vulnerable foot surfaces and of repeated trauma. Pressure to provide for self and family also increased the risk of late presentation of ulcers or their prodromal signs (so-called ‘hotspots’). One stakeholder commented: “If a person cannot work for three months due to wound healing and not putting pressure on the ulcer then how do they live?”
  • Stigma. There was almost unanimous agreement that stigma was a problem, as it led to ‘denial’ (and hence late presentation) and failure to practise self-care and wear protective footwear, which might mark the wearer as a leprosy victim. The view was expressed that stigma reaches its highest intensity in remote rural locations – “some family members don’t know the person has leprosy so question self-care habits such as soaking the hands and feet… in rural areas patients need to hide the wounds and clean them in the night time so nobody sees.”
  • Poor information provision. Arguments regarding this barrier were more nuanced. While stakeholders acknowledged that communication, especially communication pending discharge, was seldom perfect, there was also a feeling that staff tried hard and made a reasonable fist of communicating the essentials of preventative self-care. One stakeholder commented that “leprosy workers are not successful in convincing patients that their body is their responsibility and they have to look after it”. However, convincing patients can be hard, as many people afflicted with leprosy have poor functional literacy. Bridging the gulf in cultural assumptions between care givers and service users may be difficult in a single hospital stay – a point we pursue below.

Analysis according to the ‘trans-theoretical’ COM-B model

We have arranged the above themes using the COM-B model in the Figure. This figure is used to inform the design of interventions that address multiple barriers to healthy behaviours.

[Figure: the themes arranged according to the COM-B model]

(With thanks to Dr Laura Kudrna for her advice).

Designing acceptable interventions

Principles not prescriptions

Interventions to improve care are like guidelines or recipes, not prescriptions. They should not be applied in a one-size-fits-all manner – local implementers should always adapt to context [3] – an intervention as promulgated is not identical to the intervention implemented.[4] Thus, an intervention description at the international level must leave plenty of scope for adaptation of principles to local circumstances that vary within and between countries. For example, peer groups may be more feasible in high than in low burden settings, while the development of local health services may determine the extent to which ‘horizontal’ integration between specialist leprosy and general services is possible.

Capability issues

Starting with Capability, there was a feeling that this was not where the main problem lay; patients generally left hospital with the equipment and knowledge that they needed to prepare for life in the community. Our stakeholders said that patients had access to protective footwear (although innovations, such as three-dimensional printing to adapt footwear to particular types of defect, would be welcome). Likewise, as stated above, the gains to be achieved by an ‘enhanced discharge’ process might be limited. This is for three reasons. First, patients usually receive rather thorough counselling in how to care for themselves during their hospital stay. Second, they usually know the measures to take. Third, understanding is seldom sufficient to bring about behaviour change – schoolgirls who become pregnant are seldom unaware of contraception, for example. In conclusion, a hospital-based intervention might not be the most propitious use of scarce resources. This, of course, does not preclude ongoing facility-based research to improve treatment and protective methods, nor does it preclude facility outreach activities, as we now discuss.

Enhancing ‘Opportunity’

The main barrier identified at the stakeholder meeting seemed to lie in the area of opportunity. Two important principles were established in the meeting. First, since ulcer prevention is an ongoing process, its centre of gravity must be located where people live, that is, in the community. Second, peer-supported self-care is a model of established success in leprosy,[5] [6] as it has been in diabetes.[7] Two corollaries flow from these considerations. First, where peer support has not been established, this deficiency should be corrected with support from facility-based leprosy services. This may take different forms in high burden areas, where groups of people can come together, compared to low burden settings; m-health, especially m-consulting, would be particularly useful in the latter. Second, where peer support exists (in the form of self-care groups) it should be strengthened, again with support from local facilities, which can provide know-how, materials and, we think, inspiration and leadership for the ongoing strengthening of locality-based support groups. Such support, it was argued, provides not only technical know-how but, importantly, psychological support, fostering resilience and mitigating the pernicious effects of stigma.

Telecommunication, where available, will have an important role in coordinating and supporting community self-care. We heard stories of people having to travel for three days to reach a facility, and of having to find their way back to remote rural locations with recently healed ulcers on deformed feet, completing their journeys on foot. There is a prima facie case that providing mobile telephones will be cost-effective (save in locations so remote that they fall outside mobile phone coverage). There was also considerable support in the stakeholder meeting for personalised care plans. While accepting the need to individualise, an individual’s needs are not stable: specific plans should be made at discharge, but it is in the community that adaptations must be made according to changing clinical circumstances, work requirements and personal preferences. In all of the above initiatives, the specialist leprosy services should act as a source of information and psychological/emotional support. Especially in low burden areas, they can act like a poisons reference service, providing know-how to patients and care providers as and when necessary.

Motivation

As per the legend to our figure, we think that promoting opportunity and motivation go hand in hand in the case of community and outreach services for patients with leprosy who are at risk of ulcers as a result of local anaesthesia and limb deformities. Stigma aggravates the practical and psychological effects of the disease and includes a loss of self-worth and ‘self-stigma’.[8] People with leprosy often have something akin to a ‘crushed spirit’ or ‘chronic depression’, depending on the label one wants to use. Peer-supported, facility-enabled self-care may improve motivation. Moreover, emotional support may enable people who have the stigmata left over from active infection to become ambassadors for the condition and help reduce social stigma.

Discussion

It is not enough to say that people suffering the stigma of leprosy should integrate with their communities rather than live in institutions or ‘colonies’, without taking steps to enable them to live in those communities. Such steps are of two types:

  • Community-level action in supporting/facilitating communities to replace stigma by embracing people with leprosy and actively supporting them.
  • Individual support for people with leprosy who are likely to encounter stigma, but who need to prevail in the face of discrimination.

Interventions need to be achievable at low unit cost. The plan, therefore, is to design an intervention for propagation across many health systems and to evaluate how it is assimilated locally and what effects it has, within and across multiple contexts. The intervention we propose will involve facility outreach to educate people and ‘teach the teachers’ in communities, with the aim of enhancing self-care. There are other actions that might be taken to support people with leprosy (and, for that matter, other people with disabilities) in the community. One set of measures is those that may alleviate the grinding poverty that many people with leprosy suffer, for instance by providing small loans, non-conditional cash transfers and enterprise coaching. Such interventions, targeting the poorest of the poor, have been evaluated and some have proven effective.[9] They may be applicable to people who bear the effects of leprosy, and we would be keen to join in the evaluation of such interventions. Information technology would seem to have a large role, as stated above. Diplomatic overtures to opinion formers, such as community leaders, may reduce stigma, especially if people who suffer from ulcers are themselves empowered to advocate on behalf of fellow sufferers. It may be that improving care for leprosy sufferers will have spill-over effects on other people with ulcer conditions or physical disabilities.

The CLAHRC WM Director would like to thank Drs Indra Napit and Deanna Hagge for organising an excellent meeting, and the attendees for giving their time and sharing their experiences.

— Prof Richard Lilford, Dr Indra Napit and Ms Jo Sartori

References:

  1. Kunst H. Predisposing factors for recurrent skin ulcers in leprosy. Lepr Rev. 2000;71(3):363-8.
  2. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6:42.
  3. Lilford RJ. Implementation science at the crossroads. BMJ Qual Saf. 2018; 27:331-2.
  4. Lilford RJ. Context is Everything in Service Delivery Research. NIHR CLAHRC West Midlands News Blog. 27 October 2017.
  5. Deepak S, Estivar Hansine P, Braccini C. Self-care groups of leprosy-affected people in Mozambique. Lepr Rev. 2013;84:4.
  6. Benbow C, Tamiru T. The experience of self-care groups with people affected by leprosy: ALERT, Ethiopia. Lepr Rev. 2001;72:311-21.
  7. Tricco AC, Ivers NM, Grimshaw JM, et al. Effectiveness of quality improvement strategies on the management of diabetes: a systematic review and meta-analysis. Lancet. 2012; 379: 2252-61.
  8. World Health Organization. Global Leprosy Strategy 2016-2020. Geneva: WHO; 2016.
  9. Banerjee A, Karlan D, Zinman J. Six Randomized Evaluations of Microcredit: Introduction and Further Steps. Am Econ J Appl Econ. 2015; 7(1): 1-21.

Do Poor Examination Results Predict That a Doctor Will Get into Trouble with the Regulator?

A recent paper by Richard Wakeford and colleagues [1] reports that better performance in the postgraduate examinations for membership of the Royal Colleges of General Practitioners and of Physicians (MRCGP and MRCP respectively) is associated with a reduced likelihood of being sanctioned by the General Medical Council for insufficient fitness to practise. The effect was stronger for examinations of clinical skills, as opposed to those of applied medical knowledge, but was statistically significant for all five examinations studied. The unweighted mean effect size (Cohen’s d) was -0.68 – i.e. doctors with sanctions had examination scores that were, on average, around two-thirds of a standard deviation below those of doctors without a sanction. The authors find a linear relationship between performance and the likelihood of a sanction, suggesting that there is no clear performance threshold below which the risk of a sanction rises sharply.
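
For readers unfamiliar with the statistic, Cohen’s d is simply the difference between two group means divided by their pooled standard deviation. A minimal sketch, with invented scores purely to illustrate the arithmetic:

```python
import statistics

def cohens_d(group1, group2):
    """Standardised mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Invented exam scores, not data from the paper:
sanctioned = [52, 58, 49, 61, 55]
not_sanctioned = [63, 70, 66, 74, 68]
print(cohens_d(sanctioned, not_sanctioned))  # negative: sanctioned doctors score lower
```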

The main analysis does not control for the timing of the examination attempt vis-à-vis the timing of the sanction, and the authors rightly point out that having a sanction could reduce subsequent examination performance due to the stress of being under investigation, for example. However, the results of a sub-analysis for two of the knowledge assessments (the MRCGP Applied Knowledge Test and MRCP Part 1) suggest a slightly larger effect size when only considering doctors whose examination attempt was at least two years before their sanction, so the “temporality” requirement for causation is not absent. We also know there is some stability in relative examination performance (and, plausibly, therefore, knowledge) over time,[2] so “reversed” timing may not be a critical bias.

This study is important as it suggests that performance on the proposed UK Medical Licensing Assessment (UKMLA) (which is likely to be similar in format to the examinations included in this study) may be a predictor of future standards of professional practice. However, the study also suggests that it may not be possible to find a pass mark for the UKMLA that has a significant impact on the number of doctors for whom sanctions are imposed (in comparison to other possible pass marks). Given the intention of the UKMLA as a pass/fail assessment and the low rate of sanctions amongst doctors on the GMC register (1.6% of those on the register in January 2017 had received one or more sanctions since September 2008, with an even lower rate amongst doctors in their first decade on the register), it is unlikely that the introduction of the UKMLA will make a detectable difference to the rate of sanctions. As a result, other outcome measures will be needed for an evaluation of its predictive validity, even with a large sample size (around 8,000 UK candidates per year).
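
To see why detecting such a shift is so hard, consider a back-of-envelope power calculation. The sketch below uses invented target rates (a drop from 1.6% to 1.2%), not figures from any UKMLA planning document:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group to detect a difference between
    two proportions, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: sanction rate falling from 1.6% to 1.2%
print(n_per_group(0.016, 0.012))  # roughly 13,500 doctors per group
```

With around 8,000 candidates per year, and sanctions accruing only slowly over a career, the follow-up required quickly becomes impractical.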

Nevertheless, given that at least some sanctions relate to communication (and not just clinical performance), the results of Wakeford and colleagues’ study also imply that there is not necessarily a trade-off between a doctor’s knowledge base and their skills relating to communication, empathy and bedside manner. This may have implications for those responsible for selection into and within the profession, as Richard Lilford and I suggested some time ago.[3] Taken to its limit, it could be argued that the expensive and often criticised situational judgement test, which is intended to evaluate the non-cognitive attributes of doctors, may not be required after all.

— Celia Brown, Associate Professor

References:

  1. Wakeford R, Ludka K, Woolf K, McManus IC. Fitness to practise sanctions in UK doctors are predicted by poor performance at MRCGP and MRCP(UK) assessments: data linkage study. BMC Medicine. 2018; 16: 230.
  2. McManus IC, Woolf K, Dacre J, Paice E, Dewberry C. The Academic Backbone: longitudinal continuities in educational achievement from secondary school and medical school to MRCP(UK) and the specialist register in UK medical students and doctors. BMC Medicine. 2013; 11: 242.
  3. Brown CA, Lilford RJ. Selecting medical students. BMJ. 2008; 336: 786.

A Casualty of Evidence-Based Medicine – Or Just One of Those Things. Balancing a Personal and Population Approach

My mother-in-law, Celia, died last Christmas. She died in a nursing care home after a short illness – a UTI that precipitated two courses of antibiotics, followed by an overwhelming C. diff infection from which she did not recover. She had suffered from mild COPD after years of cigarette smoking, although she had given up more than 35 years previously, and she also had hypertension (high blood pressure) treated with a variety of different medications (more of which later). She was an organised and sensible Jewish woman who would not let you leave her flat without a food parcel of one kind or another, and who had arranged private health insurance to have her knees replaced and cataracts removed in good time. Officially, medically, she had multimorbidity; unofficially her life was a full and active one, which she enjoyed. She moved house sensibly and in good time, to a much smaller warden-supervised flat with a stair lift, ready to enjoy her declining years in comfort and with support. She had a wide circle of friends, loved going out to matinées at the theatre, and was a passionate bridge player and doting grandma. So far so typical, but I wonder if indirectly she died of iatrogenesis – doctor-induced disease – and I have been worrying about exactly how to understand and interpret the pattern of events that afflicted her for some time.

A couple of weeks ago a case-control study was published in JAMA (I can already hear you say ‘case-control in JAMA!’ Yes – and it’s a good paper).[1] It helps to frame the problem of what may have happened to my son’s grandma and has implications for evidence use in health care. The important issue is that my mother-in-law also suffered from recurrent syncope (fainting) and falls. It became inconvenient – actually more than inconvenient. She would faint after getting up from a meal, after going upstairs, after rising in the morning – in fact at any time when she stood up. She fell a lot, maybe ten times that I knew about, and perhaps there were more. She badly bruised her face once, falling onto her stair lift, and on three occasions she broke bones as a result of falling: her ankle (requiring surgical intervention), her arm, and her little finger. Her GP ordered a 24-hour ECG and referred her to a cardiologist, where she had a heap of expensive investigations.

Ever the over-enthusiastic medically-qualified, meddling epidemiologist, I went with her to see her cardiologist. We had a long discussion about my presumptive diagnosis: postural hypotension – low blood pressure on standing up – and her blood pressure readings confirmed my suspicion. Postural hypotension can be caused by rare abnormalities, but one of the commonest causes is antihypertensive medication – medication for high blood pressure. The cardiologist and the GP were interested in my view, but were unhappy to change her medication. As far as they were concerned, she definitely came into the category of high blood pressure, which should be treated.

The JAMA paper describes the mortality and morbidity experience of 19,143 treated patients matched to untreated controls in the UK using CPRD data. Patients entered the study on an ‘index date’, defined as 12 months after the date of the third consecutive blood pressure reading in a specific range (140-159/90-99 mmHg). It says: “During a median follow-up period of 5.8 years (interquartile range, 2.6-9.0 years), no evidence of an association was found between antihypertensive treatment and mortality (hazard ratio [HR], 1.02; 95% CI, 0.88-1.17) or between antihypertensive treatment and CVD (HR, 1.09; 95% CI, 0.95-1.25). Treatment was associated with an increased risk of adverse events, including hypotension (HR, 1.69; 95% CI, 1.30-2.20; number needed to harm at 10 years [NNH10], 41), and syncope (HR, 1.28; 95% CI, 1.10-1.50; NNH10, 35).”

Translated into plain English, this implies that the high blood pressure medication did not make a difference to the outcomes that it was meant to prevent (cardiovascular disease or death). However, it did make a difference to the likelihood of getting adverse events including hypotension (low blood pressure) and syncope (fainting). The paper concludes: “This prespecified analysis found no evidence to support guideline recommendations that encourage initiation of treatment in patients with low-risk mild hypertension. There was evidence of an increased risk of adverse events, which suggests that physicians should exercise caution when following guidelines that generalize findings from trials conducted in high-risk individuals to those at lower risk.”
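
The number needed to harm is easier to grasp than a hazard ratio: it is simply the reciprocal of the absolute risk difference between the treated and untreated groups. A small illustration (the risks below are invented to show the arithmetic, not taken from the paper):

```python
def number_needed_to_harm(risk_treated, risk_untreated):
    """NNH = 1 / absolute risk difference (treated minus untreated)."""
    return 1 / (risk_treated - risk_untreated)

# Hypothetical 10-year risks of syncope in treated vs untreated patients:
print(round(number_needed_to_harm(0.098, 0.070)))  # ~36: one extra event per ~36 treated
```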

Of course, there are plenty of possible criticisms that can never be completely ironed out of a retrospective case-control study relying on routine data, even by the eagle-eyed scrutineers at CLAHRC WM and the JAMA editorial office. Were there underlying pre-existing characteristics that differentiated cases and controls at inception into the study, which might affect their subsequent mortality or morbidity experience? Perhaps those who were the untreated controls were already ‘survivors’ in some way that could not be adjusted for. Was the follow-up period long enough for the participants to experience the relevant outcomes of interest? A median of 5.8 years is not long when considering the development of major cardiovascular illness. Was attention to methods of dealing with missing data adequate? For example, the study says: “Where there was no record of blood pressure lowering, statin or antiplatelet treatment, it was assumed that patients were not prescribed treatment.” Nevertheless, some patients might have been receiving prescriptions that, for whatever reason, were not properly recorded. The article is interesting, and food for thought. We must always bear in mind, however, that observational designs are subject to the play of those well-known, apparently causative variables, ‘confoundings’.[2]

What does all this mean for my mother-in-law? I did not have access to her full medical record and do not know the exact pattern of her blood pressure readings over the years. I am sure that current guidelines would clearly have stated that she should be prescribed antihypertensive medication. The risk of her having a cardiovascular event must have been high, but the falls devastated her life completely. Her individual GP and consultant took a reasonable, defensible and completely sensible decision to continue with her medication, and her falls continued. Finally, a family decision was taken that she couldn’t stay in her own home – she had to be watched 24 hours a day. Her unpredictable and devastating falls were very much a factor in the decision.

Celia hated losing her autonomy and she never really agreed with the decision. From the day that the decision was taken she went downhill. She stopped eating when she went into the nursing home and wouldn’t even take the family’s chicken soup (the Jewish antibiotic), however lovingly prepared. It was not surprising that after a few weeks, and within days of her 89th birthday, she finally succumbed to infection and died.

How can we rationalise all this? Any prescription for any medication should be a balance of risks and benefits, and we need to assess these at both the population level, for guidelines, and at the individual level, for individuals. It’s very hard to calculate precisely how the risk of possible future cardiovascular disease (heart attack or stroke) stacked up for my mother-in-law against the real and present danger of her falls. But I can easily see what apparently went wrong in her medical care, with the benefit of hindsight. I think that the conclusion has to be that in health care we should never lose sight of the individual. Was my mother-in-law an appropriately treated elderly woman experiencing the best of evidence-based medicine? Or was she the victim of iatrogenesis, a casualty of evidence-based medicine whose personal experiences and circumstances were not fully taken into account in the application of guidelines? Certainly, in retrospect it seems to me that I may have failed her – I wish I’d supported her more to have her health care planned around her life, rather than to have her shortened life planned around her health care.

— Aileen Clarke, Professor at Warwick Medical School

References:

  1. Sheppard JP, Stevens S, Stevens R, et al. Benefits and Harms of Antihypertensive Treatment in Low-Risk Patients With Mild Hypertension. JAMA Intern Med. 2018.
  2. Goldacre B. Personal communication. 2018.

Evidence-Based Guidelines and Practitioner Expertise to Optimise Community Health Worker Programmes

The rapid increase in scale and scope of community health worker (CHW) programmes highlights a clear need for guidance to help programme providers optimise programme design. A new World Health Organization (WHO) guideline in this area [1] is therefore particularly welcome, and provides a complement to existing guidance based on practitioner expertise.[2] The authors of the WHO guideline undertook an overview of existing reviews (N=122 reviews with over 4,000 references included), 15 separate systematic reviews of primary studies (N=137 studies included), and a stakeholder perception survey (N=96 responses). The practitioner expertise report was developed following a consensus meeting of six CHW programme implementers, a review of over 100 programme documents, a comparison of the standard operating procedures of each implementer to identify areas of alignment and variation, and interviews with each implementer.

The volume of existing research, in terms of the number of eligible studies included in each of the 15 systematic reviews, varied widely: from no studies for the review question “Should practising CHWs work in a multi-cadre team versus in a single-cadre CHW system?” to 43 studies for the review question “Are community engagement strategies effective in improving CHW programme performance and utilization?”. Across the 15 review questions, only two could be answered with “moderate” certainty of evidence (the remainder were “low” or “very low”): “What competencies should be included in the curriculum?” and “Are community engagement strategies effective?”. Only three review questions had a “strong” recommendation (as opposed to “conditional”): those based on Remuneration (do so financially), Contracting agreements (give CHWs a written agreement), and Community engagement (adopt various strategies). There was also a “strong” recommendation not to use marital status as a selection criterion.

The practitioner expertise report provided recommendations in eight key areas and included a series of appendices with examples of selection tools, supervision tools and performance management tools. Across the 18 design elements, there was alignment across the six implementers for 14, variation for two (Accreditation – although it is recommended that all CHW programmes include accreditation – and CHW:population ratio), and general alignment but one or more outliers for two (Career advancement – although supported by all implementers – and Supply chain management practices).

There was general agreement between the two documents in terms of the design elements that should be considered for CHW programmes (Table 1), although not including an element does not necessarily mean that the report authors do not think it is important. In terms of the specific content of the recommendations, the practitioner expertise document was generally more specific; for example, on the frequency of supervision the WHO recommend “regular support” and practitioners “at least once per month”. The practitioner expertise report also included detail on selection processes, as well as selection criteria: not just what to select for, but how to put this into practice in the field. Both reports rightly highlight the need for programme implementers to consider all of the recommendations within their own local contexts; one size will not fit all. Both also highlight the need for more high quality research. We recently found no evidence of the predictive validity of the selection tools used by Living Goods to select their CHWs,[3] although these tools are included as exemplars in the practitioner expertise report. Given the lack of high quality evidence available to the WHO report authors, (suitably qualified) practitioner expertise is vital in the short term, and this should now be used in conjunction with the WHO report findings to agree priorities for future research.

Table 1: Comparison of design elements included in the WHO guideline and Practitioner Expertise report


— Celia Taylor, Associate Professor

References:

  1. World Health Organization. WHO guideline on health policy and system support to optimize community health worker programmes. Geneva, Switzerland: WHO; 2018.
  2. Community Health Impact Coalition. Practitioner Expertise to Optimize Community Health Systems. 2018.
  3. Taylor CA, Lilford RJ, Wroe E, Griffiths F, Ngechu R. The predictive validity of the Living Goods selection tools for community health workers in Kenya: cohort study. BMC Health Serv Res. 2018; 18: 803.

Re-thinking Medical Student Written Assessment

“Patients do not walk into the clinic saying ‘I have one of these five diagnoses. Which do you think is most likely?’” (Surry et al., 2017)

The predominant form of written assessment for UK medical students is the ‘best of five’ multiple choice question (Bo5). Students are presented with a clinical scenario (usually information about a patient), a lead-in or question such as “which is the most likely diagnosis?”, and a list of five possible answers, only one of which is unambiguously correct. Bo5 questions are incredibly easy to mark, particularly in the age of computer-read answer sheets (or even computerised assessment). This is critical when results must be turned round, ratified and fed back to students in a timely manner. Because Bo5s are relatively short (UK medical schools allow a median of 72 seconds per question, compared with short answer or essay questions, for which at least 10 minutes per question would be allowed), an exam comprising Bo5 questions can cover a broad sample of the curriculum. This helps to improve the reliability of the exam: a student’s grade is not contingent on ‘what comes up in the exam’, and would have been similar had a different set of questions covering the same curriculum been used. Students not only know that their (or others’) scores are not dependent on what came up; they are also reassured that they would get the same score regardless of who (or what) marked their paper. There are no hawk/dove issues in Bo5 marking.

On the other hand, Bo5 questions are notoriously difficult to develop. The questions used in the Medical Schools Council Assessment Alliance (MSCAA) Common Content project, where questions are shared across UK medical schools to enable passing standards for written finals exams to be compared,[1] go through an extensive review and selection process prior to inclusion (the general process for MSCAA questions is summarised by Melville, et al.[2]). Yet the data are returned for analysis with comments such as “There is an assumption made in this question that his wife has been faithful to the man” or “Poor distractors – no indication for legionella testing”. But perhaps the greatest problem with Bo5 questions is their poor representativeness of clinical practice. As the quotation at the top of this blog implies, patients do not come with a list of five possible pathologies, diagnoses, important investigations, treatment options, or management plans. While a doctor would often formulate such a list (e.g. a differential diagnosis) before determining the most likely or appropriate option, such formulation requires considerable skill. We all know that assessment drives learning, so by using Bo5s we may be inadvertently hindering students from developing the full set of clinical reasoning skills required of a doctor. There is certainly evidence that students use test-taking strategies, such as elimination of implausible answers and clue-seeking, when sitting Bo5-based exams.[3]

A new development in medical student assessment, the Very Short Answer question (VSA), therefore holds much promise. It shifts some of the academic/expert time from question development to marking but, by exploiting computer-based assessment technology, does so in a way that is not prohibitive given the turn-around times imposed by institutions. The VSA starts with the same clinical scenario as a Bo5. The lead-in changes from “Which is…?” to “What is…?” and this is followed by a blank space. Students are required to type between one and five words in response. A pilot of the VSA-style question showed that the list of acceptable answers for a question could be finalised by a clinical academic in just over 90 seconds for a cohort of 300 students.[4] With the finalised list automatically applied to all students’ answers, again there are no concerns regarding hawk/dove markers that would threaten the exam’s acceptability to students. While more time is required per question when using VSAs compared to Bo5s, the internal consistency of VSAs in the pilot was higher for the same number of questions,[4] so it should be possible to find an appropriate compromise between exam length and curriculum coverage that does not jeopardise reliability. The major gain with the use of VSA questions is in clinical validity; these questions are more representative of actual clinical practice than Bo5s, as was reported by the students who participated in the pilot.[4]
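
Mechanically, the automated marking step is straightforward. The sketch below is our own illustration of the principle (normalising a typed response and checking it against the finalised answer list), not the MSCAA marking system:

```python
def mark_vsa(response: str, accepted_answers: list[str]) -> bool:
    """Return True if the student's typed answer matches any accepted
    answer, ignoring case and extra whitespace."""
    normalised = " ".join(response.lower().split())
    return normalised in {" ".join(a.lower().split()) for a in accepted_answers}

# Hypothetical question: "What is the most likely diagnosis?"
accepted = ["pulmonary embolism", "pulmonary embolus", "PE"]
print(mark_vsa("  Pulmonary  embolism ", accepted))  # True
```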

To produce more evidence around the utility of VSAs, the MSCAA is conducting a large-scale pilot of VSA questions with final year medical students across the UK this autumn. The pilot will compare student responses and scores to Bo5 and VSA questions delivered electronically and assess the feasibility of online delivery using the MSCAA’s own exam delivery system. A small scale ‘think aloud’ study will run alongside the pilot, to compare students’ thought processes as they attempt Bo5 and VSA questions. This work will provide an initial test of the hypothesis that gains in clinical reasoning validity could be achieved with VSAs, as students are forced to think ‘outside the list of five’. There is strong support for the pilot from UK medical schools, so the results will have good national generalisability and may help to inform the design of the written component of the UK Medical Licensing Assessment.

We would love to know what others, particularly PPI representatives, think of this new development in medical student assessment.

— Celia Taylor, Associate Professor

References:

  1. Taylor CA, Gurnell M, Melville CR, Kluth DC, Johnson N, Wass V. Variation in passing standards for graduation‐level knowledge items at UK medical schools. Med Educ. 2017; 51(6): 612-20.
  2. Melville C, Gurnell M, Wass V. #5CC14 (28171) The development of high quality Single Best Answer questions for a national undergraduate finals bank. [Abstract] Presented at: The International Association for Medical Education AMEE 2015; 2015 Oct 22; Glasgow. p. 372.
  3. Surry LT, Torre D, Durning SJ. Exploring examinee behaviours as validity evidence for multiple‐choice question examinations. Med Educ. 2017; 51(10): 1075-85.
  4. Sam AH, Field SM, Collares CF, et al. Very-short-answer questions: reliability, discrimination and acceptability. Med Educ. 2018; 52(4): 447-55.

Cognitive Bias Modification for Addictive Behaviours

It can be difficult to change health behaviours. Good intentions to quit smoking or drink less alcohol, for example, do not always translate into action – or, if they do, the change doesn’t last very long. A meta-analysis of meta-analyses suggests that intentions explain, at best, a third of the variation in actual behaviour change.[1] [2] What else can be done?

One approach is to move from intentions to inattention. Quite automatically, people who regularly engage in a behaviour like smoking or drinking alcohol pay more attention to smoking- and alcohol-related stimuli. To interrupt this process, ‘cognitive bias modification’ (CBM) can be used.

Amongst academics, the results of CBM have been called “striking” (p. 464),[3] have prompted questions about how such a light-touch intervention can have such strong effects (p. 495),[4] and have led to the development of online CBM platforms.[5]

An example of a CBM task for heavy alcohol drinkers is using a joystick to ‘push away’ pictures of beer and wine and ‘pull in’ pictures of non-alcoholic soft drinks. Alcoholic in-patients who received just an hour of this type of CBM showed a 13% lower rate of relapse a year later than those who did not – 50/108 patients in the experimental group and 63/106 patients in the control group.[4]
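
That 13% figure is simply the absolute difference in relapse risk between the two arms:

```python
relapse_control = 63 / 106  # control group relapse rate (~59%)
relapse_cbm = 50 / 108      # CBM group relapse rate (~46%)
print(f"{relapse_control - relapse_cbm:.0%}")  # 13%
```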

Debate about the efficacy of CBM is ongoing. It appears that CBM is more effective when administered in clinical settings rather than in a lab experiment or online.[6]

— Laura Kudrna, Research Fellow

References:

  1. Sheeran P. Intention-behaviour relations: A conceptual and empirical review. In: Stroebe W, Hewstone M (Eds.). European review of social psychology, (Vol. 12, pp. 1–36). London: Wiley; 2002.
  2. Webb TL, Sheeran P. Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychol Bull. 2006; 132(2): 249.
  3. Sheeran P, Gollwitzer PM, Bargh JA. Nonconscious processes and health. Health Psychol. 2013; 32(5): 460.
  4. Wiers RW, Eberl C, Rinck M, Becker ES, Lindenmeyer J. Retraining automatic action tendencies changes alcoholic patients’ approach bias for alcohol and improves treatment outcome. Psychol Sci. 2011; 22(4): 490-7.
  5. London School of Economics and Political Science. New brain-training tool to help people cut drinking. 18 May 2016.
  6. Wiers RW, Boffo M, Field M. What’s in a trial? On the importance of distinguishing between experimental lab studies and randomized controlled trials: The case of cognitive bias modification and alcohol use disorders. J Stud Alcohol Drugs. 2018; 79(3): 333-43.