Tag Archives: Guest blog

Do Poor Examination Results Predict That a Doctor Will Get into Trouble with the Regulator?

A recent paper by Richard Wakeford and colleagues [1] reports that better performance in postgraduate examinations for membership of the Royal Colleges of General Practitioners and of Physicians (MRCGP and MRCP respectively) is associated with a reduced likelihood of being sanctioned by the General Medical Council for insufficient fitness to practise. The effect was stronger for examinations of clinical skills, as opposed to those of applied medical knowledge, but was statistically significant for all five examinations studied. The unweighted mean effect size (Cohen’s d) was -0.68 – i.e. doctors with sanctions had examination scores that were, on average, around two-thirds of a standard deviation below those of doctors without a sanction. The authors find a linear relationship between performance and the likelihood of a sanction, suggesting that there is no clear performance threshold at which the risk of a sanction changes appreciably.
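For readers less familiar with effect sizes, the calculation behind Cohen’s d can be sketched in a few lines of Python. The means, standard deviations and group sizes below are invented purely to illustrate an effect of -0.68; they are not figures from the paper.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised mean difference: (mean1 - mean2) / pooled SD."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical example: sanctioned doctors average 6.8 exam points
# below unsanctioned doctors, with an SD of 10 in both groups.
d = cohens_d(45.0, 10.0, 50, 51.8, 10.0, 1000)
print(round(d, 2))  # -0.68
```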

The main analysis does not control for the timing of the examination attempt vis-à-vis the timing of the sanction, and the authors rightly point out that having a sanction could reduce subsequent examination performance due to, for example, the stress of being under investigation. However, the results of a sub-analysis for two of the knowledge assessments (MRCGP Applied Knowledge Test, and MRCP Part 1) suggest a slightly larger effect size when only considering doctors whose examination attempt was at least two years before their sanction, so the “temporality” requirement for causation is at least partly met. We also know there is some stability in relative examination performance (and, plausibly, therefore, knowledge) over time [2] – so “reversed” timing may not be a critical bias.

This study is important as it suggests that performance on the proposed UK Medical Licensing Assessment (UKMLA), which is likely to be similar in format to both of the examinations included in this study, may be a predictor of future standards of professional practice. However, the study also suggests that it may not be possible to find a pass mark for the UKMLA that has a significant impact on the number of doctors for whom sanctions are imposed (in comparison to other possible pass marks). Given the intention that the UKMLA be a pass/fail assessment and the low rate of sanctions amongst doctors on the GMC register (1.6% of those on the register in January 2017 had one or more sanctions since September 2008, and the rate is even lower amongst doctors in their first decade since joining the register), it is unlikely that the introduction of the UKMLA will make a detectable difference to the rate of sanctions. As a result, other outcome measures will be needed for an evaluation of its predictive validity, even with a large sample size (around 8,000 UK candidates per year).

Nevertheless, given that at least some sanctions relate to communication (and not just clinical performance), the results of Wakeford and colleagues’ study also imply that there is not necessarily a trade-off between a doctor’s knowledge base and their skills relating to communication, empathy and bedside manner. This may have implications for those responsible for selection into and within the profession, as Richard Lilford and I suggested some time ago.[3] Taken to its limit, it could be argued that the expensive and often criticised situational judgement test, which is intended to evaluate the non-cognitive attributes of doctors, may not be required after all.

— Celia Brown, Associate Professor

References:

  1. Wakeford R, Ludka K, Woolf K, McManus IC. Fitness to practise sanctions in UK doctors are predicted by poor performance at MRCGP and MRCP(UK) assessments: data linkage study. BMC Medicine. 2018; 16: 230.
  2. McManus IC, Woolf K, Dacre J, Paice E, Dewberry C. The Academic Backbone: longitudinal continuities in educational achievement from secondary school and medical school to MRCP(UK) and the specialist register in UK medical students and doctors. BMC Medicine. 2013; 11: 242.
  3. Brown CA, & Lilford RJ. Selecting medical students. BMJ. 2008; 336: 786.

A Casualty of Evidence-Based Medicine – Or Just One of Those Things. Balancing a Personal and Population Approach

My mother-in-law, Celia, died last Christmas. She died in a nursing care home after a short illness – a UTI that precipitated prescription of two courses of antibiotics followed by an overwhelming C. diff infection from which she did not recover. She had suffered from mild COPD after years of cigarette smoking, although she had given up more than 35 years previously, and she also had hypertension (high blood pressure) treated with a variety of different medications (more of which later). She was an organised and sensible Jewish woman who would not let you leave her flat without a food parcel of one kind or another, and who had arranged private health insurance to have her knees and cataracts replaced in good time. Officially, medically she had multimorbidity; unofficially her life was a full and active one, which she enjoyed. She moved house sensibly and in good time, to a much smaller warden-supervised flat with a stair lift, ready to enjoy her declining years in comfort and with support. She had a wide circle of friends, loved going out to matinées at the theatre, and was a passionate bridge player and doting grandma. So far so typical, but I wonder if indirectly she died of iatrogenesis – doctor-induced disease – and I have been worrying about exactly how to understand and interpret the pattern of events that afflicted her for some time.

A couple of weeks ago a case-control study was published in JAMA (I can already hear you say ‘case-control in JAMA!’ Yes – and it’s a good paper).[1] It helps to raise the problem of what may have happened to my son’s grandma and has implications for evidence use in health care. The important issue is that my mother-in-law also suffered from recurrent syncope, or fainting, and falls. It became inconvenient – actually more than inconvenient. She would faint after getting up from a meal, after going upstairs, after rising in the morning – in fact at any time when she stood up. She fell a lot, maybe ten times that I knew about, and perhaps there were more. She badly bruised her face once, falling onto her stair lift, and on three occasions she broke bones as a result of falling: her ankle (requiring surgical intervention), her arm, and her little finger. Her GP ordered a 24-hour ECG and referred her to a cardiologist, where she had a heap of expensive investigations.

Ever the over-enthusiastic medically-qualified, meddling epidemiologist, I went with her to see her cardiologist. We had a long discussion about my presumptive diagnosis: postural hypotension – low blood pressure on standing up – and her blood pressure readings confirmed my suspicion. Postural hypotension can be caused by rare abnormalities, but one of the commonest causes is antihypertensive medication – medication for high blood pressure. The cardiologist and the GP were interested in my view, but were unhappy to change her medication. As far as they were concerned, she definitely came into the category of high blood pressure, which should be treated.

The JAMA paper describes the mortality and morbidity experience of 19,143 treated patients matched to untreated controls in the UK using CPRD data. Patients entered the study on an ‘index date’, defined as 12 months after the date of the third consecutive blood pressure reading in a specific range (140-159/90-99 mmHg). It says: “During a median follow-up period of 5.8 years (interquartile range, 2.6-9.0 years), no evidence of an association was found between antihypertensive treatment and mortality (hazard ratio [HR], 1.02; 95% CI, 0.88-1.17) or between antihypertensive treatment and CVD (HR, 1.09; 95% CI, 0.95-1.25). Treatment was associated with an increased risk of adverse events, including hypotension (HR, 1.69; 95% CI, 1.30-2.20; number needed to harm at 10 years [NNH10], 41), and syncope (HR, 1.28; 95% CI, 1.10-1.50; NNH10, 35).”

Translated into plain English, this implies that the high blood pressure medication did not make a difference to the outcomes that it was meant to prevent (cardiovascular disease or death). However, it did make a difference to the likelihood of getting adverse events including hypotension (low blood pressure) and syncope (fainting). The paper concludes: “This prespecified analysis found no evidence to support guideline recommendations that encourage initiation of treatment in patients with low-risk mild hypertension. There was evidence of an increased risk of adverse events, which suggests that physicians should exercise caution when following guidelines that generalize findings from trials conducted in high-risk individuals to those at lower risk.”
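To see how a hazard ratio translates into a number needed to harm over ten years (NNH10), the sketch below combines the paper’s HR for hypotension with an assumed baseline ten-year risk. The 3.7% baseline risk is our assumption for illustration, not a figure from the paper.

```python
def nnh_from_hr(baseline_risk, hazard_ratio):
    """Approximate NNH: convert baseline cumulative risk to the treated
    risk via the proportional-hazards relation 1 - (1 - p0)**HR, then
    invert the absolute risk difference."""
    treated_risk = 1 - (1 - baseline_risk) ** hazard_ratio
    return 1 / (treated_risk - baseline_risk)

# Assumed 3.7% baseline ten-year risk of hypotension, with the
# paper's HR of 1.69:
print(round(nnh_from_hr(0.037, 1.69)))  # 40 – close to the reported NNH10 of 41
```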

Of course, there are plenty of possible criticisms that can never be completely ironed out of a retrospective case-control study relying on routine data, even by the eagle-eyed scrutineers at CLAHRC WM and the JAMA editorial office. Were there underlying pre-existing characteristics that differentiated cases and controls at inception into the study, which might affect their subsequent mortality or morbidity experience? Perhaps those who were the untreated controls were already ‘survivors’ in some way that could not be adjusted for. Was the follow-up period long enough for the participants to experience the relevant outcomes of interest? A median of 5.8 years is not long when considering the development of major cardiovascular illness. Was attention to methods of dealing with missing data adequate? For example, the study says: “Where there was no record of blood pressure lowering, statin or antiplatelet treatment, it was assumed that patients were not prescribed treatment.” Nevertheless, some patients might have been receiving prescriptions that, for whatever reason, were not properly recorded. The article is interesting, and food for thought. We must always bear in mind, however, that observational designs are subject to the play of those well-known, apparently causative variables, ‘confoundings.’[2]

What does all this mean for my mother-in-law? I did not have access to her full medical record and do not know the exact pattern of her blood pressure readings over the years. I am sure that current guidelines would clearly have stated that she should be prescribed antihypertensive medication. The risk of her getting a cardiovascular event must have been high, but the falls devastated her life completely. Her individual GP and consultant took a reasonable, defensible and completely sensible decision to continue with her medication, and her falls continued. Finally, a family decision was taken that she couldn’t stay in her own home – she had to be watched 24 hours a day. Her unpredictable and devastating falls were very much a factor in the decision.

Celia hated losing her autonomy and she never really agreed with the decision. From the day that the decision was taken she went downhill. She stopped eating when she went into the nursing home and wouldn’t even take the family’s chicken soup (the Jewish antibiotic), however lovingly prepared. It was not surprising that after a few weeks, and within days of her 89th birthday, she finally succumbed to infection and died.

How can we rationalise all this? Any prescription for any medication should be a balance of risks and benefits, and we need to assess these at both the population level, for guidelines, and at the individual level, for individuals. It’s very hard to calculate precisely how the risk of possible future cardiovascular disease (heart attack or stroke) stacked up for my mother-in-law, against the real and present danger of her falls. But I can easily see what apparently went wrong in her medical care, with the benefit of hindsight. I think that the conclusion has to be that in health care we should never lose sight of the individual. Was my mother-in-law an appropriately treated elderly woman experiencing the best of evidence-based medicine? Or was she the victim of iatrogenesis, a casualty of evidence-based medicine whose personal experiences and circumstances were not fully taken into account in the application of guidelines? Certainly, in retrospect it seems to me that I may have failed her – I wish I’d supported her more to have her health care planned around her life, rather than to have her shortened life planned around her health care.

Aileen Clarke, Professor at Warwick Medical School

References:

  1. Sheppard JP, Stevens S, Stevens R, et al. Benefits and Harms of Antihypertensive Treatment in Low-Risk Patients With Mild Hypertension. JAMA Intern Med. 2018.
  2. Goldacre B. Personal communication. 2018.

Evidence-Based Guidelines and Practitioner Expertise to Optimise Community Health Worker Programmes

The rapid increase in scale and scope of community health worker (CHW) programmes highlights a clear need for guidance to help programme providers optimise programme design. A new World Health Organization (WHO) guideline in this area [1] is therefore particularly welcome, and provides a complement to existing guidance based on practitioner expertise.[2] The authors of the WHO guideline undertook an overview of existing reviews (N=122 reviews with over 4,000 references included), 15 separate systematic reviews of primary studies (N=137 studies included), and a stakeholder perception survey (N=96 responses). The practitioner expertise report was developed following a consensus meeting of six CHW programme implementers, a review of over 100 programme documents, a comparison of the standard operating procedures of each implementer to identify areas of alignment and variation, and interviews with each implementer.

The volume of existing research, in terms of the number of eligible studies included in each of the 15 systematic reviews, varied widely, from no studies for the review question “Should practising CHWs work in a multi-cadre team versus in a single-cadre CHW system?” to 43 studies for the review question “Are community engagement strategies effective in improving CHW programme performance and utilization?”. Across the 15 review questions, only two could be answered with “moderate” certainty of evidence (the remainder were “low” or “very low”): “What competencies should be included in the curriculum?” and “Are community engagement strategies effective?”. Only three review questions had a “strong” recommendation (as opposed to “conditional”): those based on Remuneration (do so financially), Contracting agreements (give CHWs a written agreement), and Community engagement (adopt various strategies). There was also a “strong” recommendation not to use marital status as a selection criterion.

The practitioner expertise report provided recommendations in eight key areas and included a series of appendices with examples of selection tools, supervision tools and performance management tools. Across the 18 design elements, there was alignment across the six implementers for 14, variation for two (Accreditation – although it is recommended that all CHW programmes include accreditation – and CHW:Population ratio), and general alignment but one or more outliers for two (Career advancement – although supported by all implementers – and Supply chain management practices).

There was general agreement between the two documents in terms of the design elements that should be considered for CHW programmes (Table 1), although not including an element does not necessarily mean that the report authors do not think it is important. In terms of the specific content of the recommendations, the practitioner expertise document was generally more specific; for example, on the frequency of supervision the WHO recommend “regular support” and practitioners “at least once per month”. The practitioner expertise report also included detail on selection processes, as well as selection criteria: not just what to select for, but how to put this into practice in the field. Both reports rightly highlight the need for programme implementers to consider all of the recommendations within their own local contexts; one size will not fit all. Both also highlight the need for more high quality research. We recently found no evidence of the predictive validity of the selection tools used by Living Goods to select their CHWs,[3] although these tools are included as exemplars in the practitioner expertise report. Given the lack of high quality evidence available to the WHO report authors, (suitably qualified) practitioner expertise is vital in the short term, and this should now be used in conjunction with the WHO report findings to agree priorities for future research.

Table 1: Comparison of design elements included in the WHO guideline and Practitioner Expertise report


— Celia Taylor, Associate Professor

References:

  1. World Health Organization. WHO guideline on health policy and system support to optimize community health worker programmes. Geneva, Switzerland: WHO; 2018.
  2. Community Health Impact Coalition. Practitioner Expertise to Optimize Community Health Systems. 2018.
  3. Taylor CA, Lilford RJ, Wroe E, Griffiths F, Ngechu R. The predictive validity of the Living Goods selection tools for community health workers in Kenya: cohort study. BMC Health Serv Res. 2018; 18: 803.

Re-thinking Medical Student Written Assessment

“Patients do not walk into the clinic saying ‘I have one of these five diagnoses. Which do you think is most likely?’” (Surry et al., 2017)

The predominant form of written assessment for UK medical students is the ‘best of five multiple choice question’ (Bo5). Students are presented with a clinical scenario (usually information about a patient), a lead-in or question such as “which is the most likely diagnosis?”, and a list of five possible answers, only one of which is unambiguously correct. Bo5 questions are incredibly easy to mark, particularly in the age of computer-read answer sheets (or even computerised assessment). This is critical when results must be turned round, ratified, and fed back to students in a timely manner. Because Bo5s are relatively short (UK medical schools allow a median of 72 seconds per question, compared with short answer or essay questions for which at least 10 minutes per question would be allowed), an exam comprising Bo5 questions can cover a broad sample of the curriculum. This helps to improve the reliability of the exam: a student’s grade is not contingent on ‘what comes up in the exam’, and should have been similar had a different set of questions covering the same curriculum been used. Students not only know that their (or others’) scores are not dependent on what came up, but they are also reassured that they would get the same score regardless of who (or what) marked their paper. There are no hawk/dove issues in Bo5 marking.

On the other hand, Bo5 questions are notoriously difficult to develop. The questions used in the Medical Schools Council Assessment Alliance (MSCAA) Common Content project, where questions are shared across UK medical schools to enable passing standards for written finals exams to be compared,[1] go through an extensive review and selection process prior to inclusion (the general process for MSCAA questions is summarised by Melville, et al. [2]). Yet the data are returned for analysis with comments such as “There is an assumption made in this question that his wife has been faithful to the man” or “Poor distractors – no indication for legionella testing”. But perhaps the greatest problem with Bo5 questions is how poorly they represent clinical practice. As the quotation at the top of this blog implies, patients do not come with a list of five possible pathologies, diagnoses, important investigations, treatment options, or management plans. While a doctor would often formulate such a list (e.g. a differential diagnosis) before determining the most likely or appropriate option, such formulation requires considerable skill. We all know that assessment drives learning, so by using Bo5 we may therefore be inadvertently hindering students from developing the full set of clinical reasoning skills required of a doctor. There is certainly evidence that students use test-taking strategies such as elimination of implausible answers and clue-seeking when sitting Bo5-based exams.[3]

A new development in medical student assessment, the Very Short Answer question (VSA), therefore holds much promise. It shifts some of the academic/expert time from question development to marking but, by exploiting computer-based assessment technology, does so in a way that is not prohibitive given the turn-around times imposed by institutions. The VSA starts with the same clinical scenario as a Bo5. The lead-in changes from “Which is…?” to “What is…?” and this is followed by a blank space. Students are required to type between one and five words in response. A pilot of the VSA-style question showed that the list of acceptable answers for a question could be finalised by a clinical academic in just over 90 seconds for a cohort of 300 students.[4] With the finalised list automatically applied to all students’ answers, again there are no concerns regarding hawk/dove markers that would threaten the exam’s acceptability to students. While more time is required per question when using VSAs compared to Bo5s, the internal consistency of VSAs in the pilot was higher for the same number of questions,[4] so it should be possible to find an appropriate compromise between exam length and curriculum coverage that does not jeopardise reliability. The major gain with the use of VSA questions is in clinical validity; these questions are more representative of actual clinical practice than Bo5s, as was reported by the students who participated in the pilot.[4]
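The compromise between exam length and reliability can be explored with the Spearman-Brown prophecy formula, which predicts reliability when a test is lengthened or shortened. The reliabilities and question counts below are illustrative only, not the pilot’s figures.

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability after changing test length by length_factor,
    assuming the added (or removed) items are of comparable quality."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# If a 100-question paper has reliability 0.80, cutting it to 60
# questions (length factor 0.6) predicts:
print(round(spearman_brown(0.80, 0.6), 2))  # 0.71
# So a question format with higher per-item consistency can afford a
# shorter paper while keeping overall reliability at an acceptable level.
```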

To produce more evidence around the utility of VSAs, the MSCAA is conducting a large-scale pilot of VSA questions with final-year medical students across the UK this autumn. The pilot will compare student responses and scores to Bo5 and VSA questions delivered electronically and assess the feasibility of online delivery using the MSCAA’s own exam delivery system. A small-scale ‘think aloud’ study will run alongside the pilot, to compare students’ thought processes as they attempt Bo5 and VSA questions. This work will provide an initial test of the hypothesis that gains in clinical reasoning validity could be achieved with VSAs, as students are forced to think ‘outside the list of five’. There is strong support for the pilot from UK medical schools, so the results will have good national generalisability and may help to inform the design of the written component of the UK Medical Licensing Assessment.

We would love to know what others, particularly PPI representatives, think of this new development in medical student assessment.

— Celia Taylor, Associate Professor

References:

  1. Taylor CA, Gurnell M, Melville CR, Kluth DC, Johnson N, Wass V. Variation in passing standards for graduation‐level knowledge items at UK medical schools. Med Educ. 2017; 51(6): 612-20.
  2. Melville C, Gurnell M, Wass V. #5CC14 (28171) The development of high quality Single Best Answer questions for a national undergraduate finals bank. [Abstract] Presented at: The International Association for Medical Education AMEE 2015; 2015 Oct 22; Glasgow. p. 372.
  3. Surry LT, Torre D, Durning SJ. Exploring examinee behaviours as validity evidence for multiple‐choice question examinations. Med Educ. 2017; 51(10): 1075-85.
  4. Sam AH, Field SM, Collares CF, et al. Very-short-answer questions: reliability, discrimination and acceptability. Med Educ. 2018; 52(4): 447-55.

Cognitive Bias Modification for Addictive Behaviours

It can be difficult to change health behaviours. Good intentions to quit smoking or drink less alcohol, for example, do not always translate into action – or, if they do, the change doesn’t last very long. A meta-analysis of meta-analyses suggests that intentions explain, at best, a third of the variation in actual behaviour change.[1] [2] What else can be done?

One approach is to move from intentions to inattention. Quite automatically, people who regularly engage in a behaviour like smoking or drinking alcohol pay more attention to smoking- and alcohol-related stimuli. To interrupt this process, ‘cognitive bias modification’ (CBM) can be used.

Amongst academics, the results of CBM have been called “striking” (p. 464),[3] prompted questions about how such a light-touch intervention can have such strong effects (p. 495),[4] and led to the development of online CBM platforms.[5]

An example of a CBM task for heavy alcohol drinkers is using a joystick to ‘push away’ pictures of beer and wine and ‘pull in’ pictures of non-alcoholic soft drinks. Alcoholic in-patients who received just an hour of this type of CBM showed a rate of relapse a year later that was 13 percentage points lower than that of those who did not – 50/108 patients in the experimental group versus 63/106 patients in the control group.[4]
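The quoted relapse figures can be unpacked into an absolute risk difference and a number needed to treat; the helper function here is just an illustration, not an analysis from the trial.

```python
def risk_difference(events_tx, n_tx, events_ctrl, n_ctrl):
    """Absolute risk difference (control minus treatment) and the
    corresponding number needed to treat."""
    arr = events_ctrl / n_ctrl - events_tx / n_tx
    return arr, 1 / arr

# 50/108 relapsed with CBM versus 63/106 without:
arr, nnt = risk_difference(50, 108, 63, 106)
print(round(arr * 100, 1))  # 13.1 (percentage points)
print(round(nnt, 1))        # 7.6 – roughly 8 patients treated per relapse prevented
```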

Debate about the efficacy of CBM is ongoing. It appears that CBM is more effective when administered in clinical settings rather than in a lab experiment or online.[6]

— Laura Kudrna, Research Fellow

References:

  1. Sheeran P. Intention-behaviour relations: A conceptual and empirical review. In: Stroebe W, Hewstone M (Eds.). European review of social psychology, (Vol. 12, pp. 1–36). London: Wiley; 2002.
  2. Webb TL Sheeran P. Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychol Bull. 2006; 132(2): 249.
  3. Sheeran P, Gollwitzer PM, Bargh JA. Nonconscious processes and health. Health Psychol. 2013; 32(5): 460.
  4. Wiers RW, Eberl C, Rinck M, Becker ES, Lindenmeyer J. Retraining automatic action tendencies changes alcoholic patients’ approach bias for alcohol and improves treatment outcome. Psychol Sci. 2011; 22(4): 490-7.
  5. London School of Economics and Political Science. New brain-training tool to help people cut drinking. 18 May 2016.
  6. Wiers RW, Boffo M, Field M. What’s in a trial? On the importance of distinguishing between experimental lab studies and randomized controlled trials: The case of cognitive bias modification and alcohol use disorders. J Stud Alcohol Drugs. 2018; 79(3): 333-43.

Giving Feedback to Patient and Public Advisors: New Guidance for Researchers

Whenever we are asked for our opinion we expect to be thanked, and we also like to know if what we have contributed has been useful. If a statistician, qualitative researcher or health economist has contributed to a project, they would (rightfully) expect some acknowledgement and to be told whether their input had been incorporated. As patient and public contributors are key members of the research team, providing valuable insights that shape research design and delivery, it’s right to assume that they should also receive feedback on their contributions. But a recent study led by Dr Elspeth Mathie (CLAHRC East of England) found that routine feedback to PPI contributors is the exception rather than the rule. The mixed-methods study (questionnaire and semi-structured interviews) found that feedback was given in a variety of formats, with variable satisfaction among contributors. A key finding was that nearly 1 in 5 patient and public contributors (19%) reported never having received feedback for their involvement.[1]

How should feedback be given to public contributors?

There should be no ‘one size fits all’ approach to providing feedback to public contributors. The study recommends early conversations between researchers and public contributors to determine what kind of feedback should be given to contributors and when. The role of a Public and Patient Lead can help to facilitate these discussions and ensure feedback is given and received throughout a research project. Three main categories of feedback were identified:

  • Acknowledgement of contributions – Confirming that input was received and saying ‘thanks’;
  • Information about the impact of contributions – Whether input was useful and how it was incorporated into the project;
  • Study success and progress – Information on whether a project was successful (e.g. securing grant funding/gaining ethical approval) and detail about how the project is progressing.


What are the benefits to providing feedback for public contributors?

The study also explored benefits of giving feedback to contributors. Feedback can:

  • Increase motivation for public contributors to be involved in future research projects;
  • Help improve a contributor’s input into future project (if they know what has been useful, they can provide more of the same);
  • Build the public contributor’s confidence;
  • Help the researcher reflect on public involvement and the impact it has on research.


What does good feedback look like?

Researchers, PPI Leads and public contributors involved in the feedback study have co-produced Guidance for Researchers on providing feedback for public contributors to research.[2] The guidance explores the following:

  • Who gives feedback?
  • Why is PPI feedback important?
  • When to include PPI feedback in the research cycle?
  • What type of feedback?
  • How to give feedback?

Many patient and public contributors get involved in research to ‘make a difference’. This Guidance will hopefully help ensure that all contributors learn how their contributions have made a difference and will also inspire them to continue to provide input to future research projects.

— Magdalena Skrybant, PPIE Lead

References:

  1. Mathie E, Wythe H, Munday D, et al. Reciprocal relationships and the importance of feedback in patient and public involvement: A mixed methods study. Health Expect. 2018.
  2. Centre for Research in Public Health and Community Care. Guidance for Researchers: Feedback. 2018.

Effective Collaboration between Academics and Practitioners Facilitates Research Uptake in Practice

New research has been conducted by Eivor Oborn, Professor of Entrepreneurship & Innovation at Warwick Business School, and Michael Barrett, Professor of Information Systems & Innovation Studies at Cambridge Judge Business School, to better understand the contribution of collaboration in bridging the gap between research and its uptake into practice (Inform Organ. 2018; 28[1]: 44-51).

Much has been written on the role of knowledge exchange in bridging the academic-practitioner divide. The common view is that academics ‘talk funny’, using specialised language, which often leads to the practical take-home messages being missed or ‘lost in translation’. The challenge for academics is to learn how to connect ‘theory- or evidence-driven’ knowledge with practitioners’ knowledge to ‘give sense’ and enable new insights to form.

The research examines four strategies by which academics may leverage their expertise in collaborative relationships with practitioners to realise, what the authors term: ‘Research Impact and Contributions to Knowledge’ (RICK).

  1. Maintain critical distance
    Academics may adopt a strategy of maintaining critical distance in how they engage in academic-practitioner relations for a variety of reasons, for example, to retain control of the subject of investigation.
  2. Prompt deeper engagement
    Academics who immerse themselves in a second (practical) domain become fluent in a new language and gain practical expertise in that domain. For example, in the Warwick-led NIHR CLAHRC West Midlands, academics are embedded within, and work closely with, their NHS counterparts. This provides academics with knowledge-sharing and knowledge-transfer opportunities, enabling them to better respond to the knowledge requirements of the health service and, in some scenarios, to co-design research studies and capitalise on opportunities to promote the use of evidence from their research activities.
  3. Develop prescience
    Prescience describes a process of anticipating what we need to know – almost akin to ‘horizon-scanning’. A strategy of prescience would aim to anticipate, conceptualize, and influence significant problems that might arise in domains over time. The WBS-led Enterprise Research Centre employs this strategy and seeks to answer one central question: ‘what drives SME growth?’
  4. Achieve hybrid practices
    Engaged scholarship allows academics to expand their networks and collaboration with other domains and in doing so generate an entirely new field of ‘hybrid’ practices.

The research also examines how the utility (such as practical or scientific usefulness) of contributions in academic-practitioner collaboration can be maximised. It calls for established journals to support a new genre of articles involving engaged scholarship, produced by multidisciplinary teams of academics, practitioners, and policymakers.

The research is published in the journal Information & Organization, together with a collection of articles on Research Impact and Contributions to Knowledge (RICK), a framework coined by Michael Barrett, co-author of the above research.

— Nathalie Maillard, WBS Impact Officer

New Framework to Guide the Evaluation of Technology-Supported Services

Health and care providers are looking to digital technologies to enhance care provision and fill gaps where resources are limited. A very large body of research on their use has been brought together in reviews that, among many other things, establish effectiveness in behaviour change for smoking cessation and in encouraging adherence to antiretroviral therapy (ART),[1] demonstrate improved utilisation of maternal and child health services in low- and middle-income countries,[2] and delineate the potential for improved access to health care for marginalised groups.[3] Frameworks to guide health and care providers considering the use of digital technologies are also numerous. Mehl and Labrique’s framework aims to help a low- or middle-income country consider how it can use digital mobile health innovation to succeed in the ambition of achieving universal health coverage.[4] The framework tells us what is somewhat obvious, but by bringing it together it provides a powerful tool for thinking, planning, and countering pressure from interest groups with other ambitions. The ARCHIE framework developed by Greenhalgh, et al.[5] is a similar tool, but for people with the ambition of using telehealth and telecare to improve the daily lives of individuals living with health problems. It sets out principles for people developing, implementing, and supporting telehealth and telecare systems so that they are more likely to work. It is a framework that, again, can be used to counter pressure from interest groups more interested in the product than in its impact on people and on the health and care service. Greenhalgh and team have now produced a further framework that is very timely, as it provides us with a tool for thinking through the potential for scale-up and sustainability of health and care technologies.[6]

Greenhalgh, et al. reviewed 28 previously published technology implementation frameworks in order to develop their framework, and used their own studies of digital assistive technologies to test it. Like the other frameworks, this provides health and care providers with a powerful tool for thinking, planning, and resisting. The domains in the framework include, among others, the health condition, the technology, the adopter system (staff, patients, carers), the organisation, and the domain of time: how the technology embeds and is adapted over time. For each domain the question is asked whether it is simple, complicated, or complex in relation to scale-up and sustainability of the technology. For example, the nature of the condition: is it well understood and predictable (simple), or poorly understood and unpredictable (complex)? Asking this question for each domain allows us to avoid the pitfall of thinking something is simple when it is in reality complex. For example, there may be a lot of variability in the health condition between patients, but the technology may have been designed with a simplified textbook notion of the condition in mind. I suggest that even where clinicians are involved in the design of interventions, it is easy for them to forget how often they see patients who are not like the textbook, as they, almost without thinking, deploy their skills to adapt treatment and management to the particular patient. Greenhalgh, et al. cautiously conclude that “it is complexity in multiple domains that poses the greatest challenge to scale-up, spread and sustainability”. They provide examples where unrecognised complexity stops the use of a technology in its tracks.
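
The domain-by-domain logic can be sketched in a few lines of code. This is a minimal illustration only, not the authors' instrument: the domain names follow the post, but the example ratings and the risk labels are invented for the sketch.

```python
# Sketch of the framework's per-domain simple/complicated/complex ratings.
# Domain names follow the post; the ratings below are invented examples.
COMPLEXITY_LEVELS = ("simple", "complicated", "complex")

def assess(ratings):
    """Flag scale-up risk, reflecting the authors' conclusion that
    complexity in multiple domains poses the greatest challenge."""
    for level in ratings.values():
        if level not in COMPLEXITY_LEVELS:
            raise ValueError(f"unknown rating: {level}")
    n_complex = sum(level == "complex" for level in ratings.values())
    if n_complex >= 2:
        return "high risk: complexity in multiple domains"
    if n_complex == 1:
        return "moderate risk: one complex domain"
    return "lower risk: no complex domains"

example = {
    "condition": "complex",       # poorly understood, unpredictable
    "technology": "simple",       # but designed to a textbook notion?
    "adopter system": "complex",  # staff, patients and carers all affected
    "organisation": "complicated",
    "time": "complicated",
}
print(assess(example))  # high risk: complexity in multiple domains
```

The point of the exercise is the one made in the paragraph above: forcing an explicit rating for every domain makes it harder to assume something is simple when it is in reality complex.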

— Frances Griffiths, Professor of Medicine in Society

References:

  1. Free C, Phillips G, Galli L. The effectiveness of mobile-health technology-based health behaviour change or disease management interventions for health care consumers: a systematic review. PLoS Med. 2013;10:e1001362.
  2. Sondaal SFV, Browne JL, Amoakoh-Coleman M, Borgstein A, Miltenburg AS, Verwijs M, et al. Assessing the Effect of mHealth Interventions in Improving Maternal and Neonatal Care in Low- and Middle-Income Countries: A Systematic Review. PLoS One. 2016;11(5):e0154664.
  3. Huxley CJ, Atherton H, Watkins JA, Griffiths F. Digital communication between clinician and patient and the impact on marginalised groups: a realist review in general practice. Br J Gen Pract. 2015;65(641):e813-21.
  4. Mehl G, Labrique A. Prioritising integrated mHealth strategies for universal health coverage. Science. 2014;345:1284.
  5. Greenhalgh T, Procter R, Wherton J, Sugarhood P, Hinder S, Rouncefield M. What is quality in assisted living technology? The ARCHIE framework for effective telehealth and telecare services. BMC Medicine. 2015;13(1):91.
  6. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A’Court C, et al. Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies. J Med Internet Res. 2017;19(11):e367.

Patients’ experiences of hospital care at weekends

The “weekend effect”, whereby patients admitted to hospital at weekends appear to experience higher mortality than patients admitted on weekdays, has received substantial attention from the health service community and the general public alike.[1] Evidence of the weekend effect was used to support the NHS’s introduction of the ‘7-day Services’ policy and associated changes to junior doctors’ contracting arrangements,[2-4] which have further propelled debates surrounding the nature and causes of the weekend effect.

Members of CLAHRC West Midlands are closely involved in the HiSLAC project,[5] an NIHR HS&DR Programme-funded project led by Professor Julian Bion (University of Birmingham) to evaluate the impact of introducing 7-day consultant-led acute medical services. We are undertaking a systematic review of the weekend effect as part of the project,[6] and one of our challenges is keeping up with the rapidly growing literature fuelled by the public and political attention. Although hundreds of papers on this topic have been published, there has been a distinct gap in the academic literature: most of the published papers focus on comparing hospital mortality rates between weekends and weekdays, but virtually no study has compared quantitatively the experience and satisfaction of patients between weekends and weekdays. This was the case until we found a study recently published by Chris Graham of the Picker Institute, who had unique access to data not in the public domain, i.e. the dates of admission to hospital given by the respondents.[7]

This interesting study examined data from two nationwide surveys of acute hospitals in England in 2014: the A&E department patient survey (39,320 respondents; a 34% response rate) and the adult inpatient survey (59,083 respondents; a 47% response rate). Patients admitted at weekends were less likely to respond than those admitted during weekdays, but this was accounted for by patient and admission characteristics (e.g. age group). Contrary to the inference about care quality that would be drawn from hospital mortality rates, respondents attending hospital A&E departments at weekends actually reported better experiences with regard to ‘doctors and nurses’ and ‘care and treatment’ than those attending during weekdays. Patients who were admitted to hospital through A&E at weekends also rated the information given to them in A&E more favourably. No other significant differences in reported patient experience were observed between weekend and weekday A&E visits and hospital admissions.[7]

As always, some caution is needed when interpreting these intriguing findings. First, as the author acknowledged, patients who died following their A&E visits/admissions were excluded from the surveys, and therefore their experiences were not captured. Second, although potential differences in case mix, including age, sex, urgency of admission (elective or not), requirement of a proxy for completing the surveys, and presence of long-term conditions, were taken into account in the aforementioned findings, the statistical adjustment did not include important factors such as main diagnosis and disease severity, which could confound patient experience. Readers may doubt whether these factors could overturn the finding; if it stands, the mechanisms by which weekend admission may lead to improved satisfaction are unclear. It is possible that patients have different expectations of the hospital care they receive by day of the week, and consequently may rate the same level of care differently. The findings from this study are certainly a very valuable addition to the growing literature that is starting to unfold the complexity behind the weekend effect, and are a further testament that measuring care quality based on mortality rates alone is unreliable and certainly insufficient, a point that has long been highlighted by the Director of CLAHRC West Midlands and other colleagues.[8] [9] Our HiSLAC project continues to collect and examine qualitative,[10] quantitative,[5] [6] and economic [11] evidence related to this topic, so watch this space!

— Yen-Fu Chen, Principal Research Fellow

References:

  1. Lilford RJ, Chen YF. The ubiquitous weekend effect: moving past proving it exists to clarifying what causes it. BMJ Qual Saf 2015;24(8):480-2.
  2. House of Commons. Oral answers to questions: Health. 2015. House of Commons, London.
  3. McKee M. The weekend effect: now you see it, now you don’t. BMJ 2016;353:i2750.
  4. NHS England. Seven day hospital services: the clinical case. 2017.
  5. Bion J, Aldridge CP, Girling A, et al. Two-epoch cross-sectional case record review protocol comparing quality of care of hospital emergency admissions at weekends versus weekdays. BMJ Open 2017;7:e018747.
  6. Chen YF, Boyal A, Sutton E, et al. The magnitude and mechanisms of the weekend effect in hospital admissions: A protocol for a mixed methods review incorporating a systematic review and framework synthesis. Syst Rev 2016;5:84.
  7. Graham C. People’s experiences of hospital care on the weekend: secondary analysis of data from two national patient surveys. BMJ Qual Saf 2017;29:29.
  8. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf 2012;21(12):1052-56.
  9. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ 2010;340:c2016.
  10. Tarrant C, Sutton E, Angell E, Aldridge CP, Boyal A, Bion J. The ‘weekend effect’ in acute medicine: a protocol for a team-based ethnography of weekend care for medical patients in acute hospital settings. BMJ Open 2017;7: e016755.
  11. Watson SI, Chen YF, Bion JF, Aldridge CP, Girling A, Lilford RJ. Protocol for the health economic evaluation of increasing the weekend specialist to patient ratio in hospitals in England. BMJ Open 2018:In press.

The reliability of ethical review committees

I recently submitted the same application for ethical review for a multi-country study to three ethical review panels, two of which were overseas and one in the UK. The three panels together raised 19 points to be addressed before full approval could be given. Of these 19 points, just one was raised by two committees and none was raised by all three. Given CLAHRC WM’s methodological interest in inter-rater reliability and my own interests in the selection and assessment of health care students and workers, I was left pondering a) whether different ethical review committees consistently have different pass/fail thresholds for different ethical components of a proposed research study; and b) whether others have had similar experiences (we would welcome any examples of either convergent or divergent decisions by different ethical review committees).
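
The degree of overlap described above can be made concrete with a short sketch. The point identifiers and their allocation to committees below are hypothetical (the post does not enumerate the individual points); only the headline numbers, 19 distinct points, one raised by two committees, none by all three, come from the text.

```python
from itertools import combinations

# Hypothetical point identifiers standing in for the 19 issues raised;
# the split across committees is invented to match the headline numbers.
committee_points = {
    "Overseas A": {f"P{i}" for i in range(1, 8)} | {"P19"},   # 8 points
    "Overseas B": {f"P{i}" for i in range(8, 14)} | {"P19"},  # 7 points, shares P19
    "UK":         {f"P{i}" for i in range(14, 19)},           # 5 points
}

all_points = set().union(*committee_points.values())
print(f"Distinct points raised: {len(all_points)}")  # 19

# How many committees raised each point?
counts = {p: sum(p in pts for pts in committee_points.values()) for p in all_points}
raised_by_two_or_more = [p for p, c in counts.items() if c >= 2]
raised_by_all = [p for p, c in counts.items() if c == 3]
print(f"Raised by two or more committees: {len(raised_by_two_or_more)}")  # 1
print(f"Raised by all three committees: {len(raised_by_all)}")            # 0

# Pairwise overlap (Jaccard index) between committees' sets of points
for (n1, s1), (n2, s2) in combinations(committee_points.items(), 2):
    print(f"{n1} vs {n2}: Jaccard = {len(s1 & s2) / len(s1 | s2):.2f}")
```

Even this toy version makes the scale of the disagreement visible: the best pairwise Jaccard overlap is 1 shared point out of 14, i.e. about 0.07.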

Let me explain with two examples. One point raised was the need for formal written client consent during observations of Community Health Workers’ day-to-day activities. We had argued that because the field worker would only be observing the actions of the Community Health Worker and not the client, then formal written client consent was not required, but that informal verbal consent would be requested and the field worker would withdraw if the client did not wish them to be present. The two overseas committees both required formal written client consent, but the UK committee was happy with our justification for not doing so. On the other hand, the UK committee did not think we had provided sufficient reassurance of how we would protect the health and safety of the field worker as they conducted the observations, which could involve travelling alone into remote rural communities. The two overseas committees, however, considered our original plans for ensuring field worker health and safety sufficient.

What are the potential implications if different ethical review committees have different “passing standards”? As with pass/fail decisions in selection and assessment, there could be false positives or false negatives if studies are reviewed by “dove-ish” or “hawk-ish” committees respectively. As with selection and assessment, a false positive is probably the more concerning of the two: a study is given ethical clearance when ethical issues that would concern most other committees have not been raised and addressed. Although it is probably very rare that a study never gets ethical approval, a false negative decision would mean that the research team is required to make potentially costly and time-consuming amendments that most other committees would consider excessive. I have no experience on the “other side” of an ethical review committee, but I expect there must be some consideration of balancing the need for the research findings against potential ethical risks to participants and the research team.
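
One way to picture the dove-ish/hawk-ish framing is to treat each committee as applying its own approval threshold to the same underlying level of ethical concern in a study. This is purely illustrative; the concern scores and thresholds below are invented, and real committees do not, of course, reduce a study to a single number.

```python
# Illustrative only: each committee applies a different approval threshold
# to the same hypothetical 'ethical concern' score (0 = none, 1 = severe).
def decision(concern, threshold):
    return "approve" if concern <= threshold else "require amendments"

consensus_threshold = 0.5  # what "most other committees" would apply
committees = {"dove-ish": 0.7, "hawk-ish": 0.3}

for concern in (0.4, 0.6):
    reference = decision(concern, consensus_threshold)
    for name, threshold in committees.items():
        verdict = decision(concern, threshold)
        if verdict == "approve" and reference != "approve":
            label = "false positive"   # cleared despite widely shared concerns
        elif verdict != "approve" and reference == "approve":
            label = "false negative"   # amendments most committees would not require
        else:
            label = "agrees with consensus"
        print(f"concern={concern}: {name} committee -> {verdict} ({label})")
```

In this toy model the dove-ish committee produces the false positive (approving at concern 0.6) and the hawk-ish committee the false negative (requiring amendments at concern 0.4), matching the mapping in the paragraph above.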

Two interesting research questions arise. The first is to examine how ethical review committees make their decisions and set passing standards for research studies. A study of this nature in undergraduate medical education is currently ongoing: Peter Yates at Keele University is qualitatively examining how medical schools set their standards for finals examinations. The second is to explore the extent of the difference in passing standards across ethical review committees, by asking a sample of committees to each review a set of identical applications and to compare their decisions. A similar study in undergraduate medical education investigated differences in passing standards for written finals examinations across UK medical schools.[1] To avoid significant bias due to the Hawthorne effect, the ethical review committees would really need to be unaware that they were the subjects of such research. This, of course, raises a significant ethical dilemma with respect to informed consent and deception. Therefore it is not known whether such a study would be given ethical approval (and if so, by which committees?).

— Celia Taylor, Associate Professor

Reference:

  1. Taylor CA, Gurnell M, Melville CR, Kluth DC, Johnson N, Wass V. Variation in passing standards for graduation‐level knowledge items at UK medical schools. Med Educ. 2017; 51(6): 612-20.