
Patient and Public Involvement: Direct Involvement of Patient Representatives in Data Collection

It is widely accepted that the public and patient voice should be heard loud and clear in the selection of studies, in the design of those studies, and in the interpretation and dissemination of the findings. But what about involvement of patients and the public in the collection of data? Before science became professionalised, all scientists could have been considered members of the public. Robert Hooke, for example, could have called himself architect, philosopher, physicist, chemist, or just Hooke. Today, the public are involved in data collection in many scientific enterprises. For example, householders frequently contribute data on bird populations, and Prof Brian Cox involved the public in the detection of new planets in his highly acclaimed television series. In medicine, patients have been involved in collecting data; for example, patients with primary biliary cirrhosis were the data collectors in a randomised trial.[1] However, the topic of public and patient involvement in data collection is deceptively complex, because numerous procedural safeguards govern access to users of the health service and restrict disbursement of the funds used to pay for research.

Let us consider first the issue of access to patients. It is not permissible to collect research data without undergoing certain procedural checks; in the UK it is necessary to be cleared by the Disclosure and Barring Service (DBS) and to have the necessary permissions from the institutional authorities. You simply cannot walk onto a hospital ward and start handing out questionnaires or collecting blood samples.

Then there is the question of training. Before collecting data from patients it is necessary to be trained in how to do so, covering both salient ethical and scientific principles. Such training is not without its costs, which takes us to the next issue.

Researchers are paid for their work and, irrespective of whether the funds are publicly or privately provided, access to payment is governed by fiduciary and equality/diversity legislation and guidelines. Access to scarce resources is usually governed by some sort of competitive selection process.

None of the above should be taken as an argument against patients and the public taking part in data collection. It does, however, mean that this needs to be a carefully managed process. Of course things are very much simpler if access to patients is not required. For example, conducting a literature survey would require only that the person doing it was technically competent and in many cases members of the public would already have all, or some, of the necessary skills. I would be very happy to collaborate with a retired professor of physics (if anyone wants to volunteer!). But that is not the point. The point is that procedural safeguards must be applied, and this entails management structures that can manage the process.

Research may be carried out by accessing members of the public who are not patients, or at least who are not accessed through the health services. As far as I know there are no particular restrictions on doing so, and I guess that such contact is governed by the common law covering issues such as privacy, battery, assault, and so on. The situation becomes different, however, if access is achieved through a health service organisation, or conducted on behalf of an institution, such as a university. Then presumably any member of the public wishing to collect data from other members of the public would fall under the governance arrangements of the relevant institution. The institution would have to ensure not only that the study was ethical, but that the data-collectors had the necessary skills and that funds were disbursed in accordance with the law. Institutions already deploy ‘freelance’ researchers, so I presume that the necessary procedural arrangements are already in place.

This analysis was stimulated by a discussion in the PPI committee of CLAHRC West Midlands, and represents merely my personal reflections based on first principles. It does not represent my final, settled position, let alone that of the CLAHRC WM, or any other institution. Rather it is an invitation for further comment and analysis.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Browning J, Combes B, Mayo MJ. Long-term efficacy of sertraline as a treatment for cholestatic pruritus in patients with primary biliary cirrhosis. Am J Gastroenterol. 2003; 98: 2736-41.

Cognitive Behavioural Therapy vs. Mindfulness Therapy

It is known that mindfulness therapy is effective in improving depression and, in many circumstances, in improving chronic pain (see later in News Blog). What is not so clear is whether it is better than the more standard therapy of cognitive behavioural therapy (CBT).

Cognitive behavioural therapy aims to abolish or reduce painful and harmful thoughts. Mindfulness therapy, on the other hand, does not seek to extirpate the depressing thoughts, but rather to help the person dissociate themselves from the harmful consequences of these thoughts. It often involves an element of meditation.

We have found three recent studies which compare CBT and mindfulness therapy head-to-head for depression.[1-3] In all three RCTs the two therapies were a dead heat. In short, both methods seem equally effective and certainly they are both better than nothing. But does this mean that they are equal; that the choice does not matter one way or the other?

In this article I argue that the fact that the two therapies are equally effective in improving mood does not mean that they are equivalent. This is because they are designed to have different effects – abolition of harmful thoughts in one case, learning to live with them in the other. So it is reasonable to ask which one would prefer: abolishing the painful thoughts, or simply learning not to be affected by them.

Philosophically, the argument behind CBT is that thoughts, at least at a certain level, are a kind of behaviour. They are a behaviour in the sense that they can be changed under conscious control. Mindfulness therapy does not attempt to ‘over-write’ thoughts. This means that the two therapies, in so far as they achieve their objectives, are not philosophically equivalent. Moreover, there are arguments in favour of removing the harmful thoughts, even if this does not result in any greater improvement in mood than the counterfactual. Consider a man whose wife is annoyed by certain movements that he is unable to control. It is surely much better, both from her point of view and from the point of view of the husband, that these painful thoughts should be removed altogether, rather than just tolerated. Alternatively, consider a person who is chronically distressed by a recurring memory of the painful death of a parent. Again, it is surely better that this person trains himself to think of another aspect of the parent’s life whenever the troubling thoughts recur, than simply to continue to remember the death, but not get upset by it.

So, I think that CBT is philosophically preferable to mindfulness therapy, even if it is no more effective in improving mood. From a philosophical point of view, it is important to develop a high rectitude way of thinking. When negative or morally questionable thoughts pop into the brain, as they do from time to time, these should be suppressed. A racist thought, for example, should be replaced with thoughts of higher rectitude. It is the purpose of the examined life to be able to control negative or bigoted thoughts and supplant them with more positive thoughts under conscious control. From this philosophical perspective CBT can be seen as an extension of the human ability to supplant negative or reprehensible thoughts with ones that are more positive or of higher rectitude. I choose CBT over mindfulness; for all that they might be equally effective in elevating mood, psychiatric treatments have implications that go beyond purely clinical outcomes – since they affect the mind there is always a philosophical dimension.

— Richard Lilford, CLAHRC WM Director

References:

  1. Manicavasagar V, Perich T, Parker G. Cognitive Predictors of Change in Cognitive Behaviour Therapy and Mindfulness-Based Cognitive Therapy for Depression. Behav Cogn Psychother. 2012; 40: 227-32.
  2. Omidi A, Mohammadkhani P, Mohammadi A, Zargar F. Comparing Mindfulness Based Cognitive Therapy and Traditional Cognitive Behavior Therapy With Treatments as Usual on Reduction of Major Depressive Symptoms. Iran Red Crescent Med J. 2013; 15(2): 142-6.
  3. Sundquist J, Lilja A, Palmér K, et al. Mindfulness group therapy in primary care patients with depression, anxiety and stress and adjustment disorders: randomised controlled trial. Br J Psychiatry. 2015; 206(2): 128-35.

The Beneficial Effects of Taking Part in International Research: an Old Chestnut Revisited

Two recent and well-written articles grapple with the question of whether or not taking part in clinical trials is beneficial, net of any benefit conferred by the therapeutic modalities evaluated in those trials.[1,2]

The first study, from the Netherlands, concerns the effect of taking part in clinical trials where controls are made up of people not participating in trials (presumably because they were not offered entry into a trial).[1] This is the topic of a rather extensive literature, including a study to which I contributed.[3] The latter study found that the putative ‘trial effect’ applied only in circumstances where the care given to control patients was not protocol-directed. In other words, our results suggested that the ‘trial effect’ was really a ‘protocol effect’. In that case the effect should be ephemeral and disappear as greater proportions of care become protocolised. And that is what appears to have happened – Liu, et al.[1] report no benefit to trial participants versus non-trial patients for the highly protocolised disease Hodgkin lymphoma. They speculate that while participation in trials does not affect individual patient care in the short-term, hosting trials does sensitise clinicians at an institutional level, so that they are more likely than clinicians from non-participating hospitals to practise evidence-based care. However, they offer no direct evidence for this assertion. Such evidence is, however, provided by the next study.

The effect of high participation rates in clinical trials at the hospital level is evaluated in an elegant study recently published in the prestigious journal ‘Gut’.[2] The team of authors (which includes prominent civil servants and many distinguished cancer specialists and statisticians) compared outcomes from colon cancer according to the extent to which the hospital providing treatment participated in trials. This ingenious study was accomplished by linking the NIHR’s data on clinical trials participation to cancer registry data and Hospital Episode Statistics. It turned out that survival was significantly better in the high-participation hospitals than in lower-participation hospitals, even after substantial risk-adjustment. “Residual confounding,” do I hear you say? Perhaps, but the authors have two further lines of evidence for the causal explanation. First, they documented a dose-response: the greater the level of participation, the greater the improvement in survival. Of course, an unknown confounder that was correlated with participation rates would produce just such a finding. The second line of evidence is more impressive – the longer the duration over which a hospital had sustained high participation rates, the greater the effect. Again, of course, this argument is not impregnable – duration might not serve as a good instrumental variable. How might the case be further strengthened (or refuted)? By unravelling the theoretical pathway between explanatory and outcome variables.[4] Since this is a database study, the process variables that might mediate the putative effect were not available to the authors. However, separate studies have indeed found an association between improved processes of care and trial participation.[5] Taken in the round, I think that a cause/effect explanation holds (>90% of my probability density favours the causal explanation).

— Richard Lilford, CLAHRC WM Director

References:

  1. Liu L, Giusti F, Schaapveld M, et al. Survival differences between patients with Hodgkin lymphoma treated inside and outside clinical trials. A study based on the EORTC-Netherlands Cancer Registry linked data with 20 years of follow-up. Br J Haematol. 2017; 176: 65-75.
  2. Downing A, Morris EJA, Corrigan N, et al. High hospital research participation and improved colorectal cancer survival outcomes: a population-based study. Gut. 2017; 66: 89-96.
  3. Braunholtz DA, Edwards SJ, Lilford RJ. Are randomized clinical trials good for us (in the short term)? Evidence for a “trial effect”. J Clin Epidemiol. 2001; 54(3): 217-24.
  4. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  5. Selby P. The impact of the process of clinical research on health service outcomes. Ann Oncol. 2011; 22(s7): vii2-4.

Private Consultations More Effective than Public Provision in Rural India

Doing work across high-income countries (CLAHRC WM) and lower-income countries (CLAHRC model for Africa) provides interesting opportunities to compare and contrast. For example, our work on user fees in Malawi [1] mirrors that in high-income countries [2] – in both settings, relatively small increments in out-of-pocket expenses result in a large decrease in demand, and do so indiscriminately (the severity of disease among those who access services is not shifted towards more serious cases). However, the effect of private versus public provision of health care is rather more nuanced.

News Blog readers are likely aware of the famous RAND study in the US.[3] People were randomised to receive their health care on a fee-for-service basis (‘privately’) vs. on a block contract basis (as in a public service). The results showed that fee-for-service provision resulted in more services being provided (interpreted as over-servicing), and that patients were more satisfied than those experiencing public provision. Clinical quality was no different. In contrast, a study from rural India [4] found that private provision results in markedly improved quality compared to public provision, albeit with a degree of over-servicing.

The Indian study used ‘standardised patients’ (SPs) to measure the quality of care during consultations covering three clinical scenarios – angina, asthma, and the parent of a child with dysentery. The care SPs received was scored against an ideal standard. Private providers spent more time and effort collecting the data essential for making a correct diagnosis, and were more likely to give treatment appropriate to the condition. First, the authors compared private providers with public providers and found that the former spent 30% more time gathering information from the SPs than the latter. Moreover, the private providers were more likely to be present when the patient turned up for a consultation. There was a positive correlation between the magnitude of fees charged by private providers and both the time spent eliciting symptoms and signs and the probability that the correct treatment would be provided. However, private providers are often not doctors, so this result could reflect a different professional mix, at least in part. To address this point, a second study was done in which the same set of doctors were presented with the same clinical cases – a ‘dual sample’. The results were even starker, with doctors spending twice as long with each patient when seen privately.

Why were these results from rural India so different from the RAND study? The authors suggest that taking a careful history and examination is part of the culture for US doctors, and that they had reached a kind of asymptote, such that context made little difference to this aspect of their behaviour. Put another way, there was little headroom for an incentive system to drive up quality of care. However, in low-income settings where public provision is poorly motivated and regulated, fee-for-service provision drives up quality. The same seems to apply to education, where private provision was found to be of higher quality than public provision in low-income settings – see a previous News Blog.[5]

However, it should be acknowledged that none of the available alternatives in rural India were good ones. For example, the probability of receiving the correct diagnosis varied across private and public providers, but never exceeded 15%, while the rate of correct treatment varied from 21% to about 50%. Doctors were more likely than other providers to provide the correct diagnosis. A great deal of treatment was inappropriate. CLAHRC West Midlands’ partner organisation in global health is conducting a study of service provision in slums with a view to devising affordable models of improving health care.[6]

— Richard Lilford, CLAHRC WM Director

References:

  1. Watson SI, Wroe EB, Dunbar EL, et al. The impact of user fees on health services utilization and infectious disease diagnoses in Neno District, Malawi: a longitudinal, quasi-experimental study. BMC Health Serv Res. 2016; 16: 595.
  2. Carrin G & Hanvoravongchai P. Provider payments and patient charges as policy tools for cost-containment: How successful are they in high-income countries? Hum Resour Health. 2003; 1: 6.
  3. Brook RH, Ware JE, Rogers WH, et al. The effect of coinsurance on the health of adults. Results from the RAND Health Insurance Experiment. Santa Monica, CA: RAND Corporation, 1984.
  4. Das J, Holla A, Mohpal A, Muralidharan K. Quality and Accountability in Healthcare Delivery: Audit-Study Evidence from Primary Care in India. Am Econ Rev. 2016; 106(12): 3765-99.
  5. Lilford RJ. League Tables – Not Always Bad. NIHR CLAHRC West Midlands News Blog. 28 August 2015.
  6. Lilford RJ. Between Policy and Practice – the Importance of Health Service Research in Low- and Middle-Income Countries. NIHR CLAHRC West Midlands News Blog. 27 January 2017.

And Today We Have the Naming of Parts*

Management research, health services research, operations research, quality and safety research, implementation research – a crowded landscape of words describing concepts that are, at best, not entirely distinct, and at worst synonyms. Some definitions are given in Table 1. Perhaps the easiest one to deal with is ‘operations research’, which has a rather narrow meaning: it describes mathematical modelling techniques to derive optimal solutions to complex problems, typically dealing with the flow of objects (people) over time. So it is a subset of the broader genre covered by this collection of terms. Quality and safety research puts the cart before the horse by defining the intended objective of an intervention, rather than where in the system the intervention impacts. Since interventions at a system level may have many downstream effects, it seems illogical, and indeed potentially harmful, to define research by its objective, an argument made in greater detail elsewhere.[1]

Health Services Research (HSR) can be defined as management research applied to health, and is an acceptable portmanteau term for the construct we seek to define. For those who think the term HSR leaves out the development and evaluation of interventions at service level, the term Health Services and Delivery Research (HS&DR) has been devised. We think this is a fine term to describe management research as applied to the health services, and are pleased that the NIHR has embraced it, and now has two major funding schemes – the HTA programme dealing with clinical research, and the HS&DR dealing with management research. In general, interventions and their related research programmes can be neatly represented in the framework below, a modified Donabedian chain:

[Figure 1]

So what about implementation research then? Wikipedia defines implementation research as “the scientific study of barriers to and methods of promoting the systematic application of research findings in practice, including in public policy.” However, a recent paper in the BMJ states that “considerable confusion persists about its terminology and scope.”[2] Surprised? In what respect does implementation research differ from HS&DR?

Let’s start with the basics:

  1. HS&DR studies interventions at the service level. So does implementation research.
  2. HS&DR aims to improve the outcomes of care (effectiveness / safety / access / efficiency / satisfaction / acceptability / equity). So does implementation research.
  3. HS&DR seeks to improve outcomes / efficiency by making sure that optimum care is implemented. So does implementation research.
  4. HS&DR is concerned with the implementation of knowledge: first, knowledge about what clinical care should be delivered in a given situation; and second, knowledge about how to intervene at the service level. So is implementation research.

This latter concept, concerning the two types of knowledge (clinical and service delivery) that are implemented in HS&DR, is a critical one. It seems poorly understood and causes many researchers in the field to ‘fall over their own feet’. The concept is represented here:

[Figure 2]

HS&DR / implementation research resides in the South East quadrant.

Despite all of this, some people insist on keeping the distinction between HS&DR and Implementation Research alive – as in the recent Standards for Reporting Implementation Studies (StaRI) Statement.[3] The thing being implemented here may be a clinical intervention, in which case the above figure applies. Or it may be a service delivery intervention. Then, they say, once it is proven it must be implemented, and this implementation can be studied – in effect they are arguing here for a third ring:

[Figure 3]

This last, extreme South East, loop is redundant because:

  1. Research methods do not turn on whether the research is HS&DR or so-called Implementation Research (as the authors acknowledge). So we could end up in the odd situation of the HS&DR being a before and after study, and the Implementation Research being a cluster RCT! The so-called Implementation Research is better thought of as more HS&DR – seldom is one study sufficient.
  2. The HS&DR itself requires the tenets of Implementation Science to be in place – following the MRC framework, for example, and identifying barriers and facilitators. There is always implementation in any piece of evaluative research, so all HS&DR is Implementation Research – some is early and some is late.
  3. Replication is a central tenet of science and enables context to be explored. For example, “mother and child groups” is an intervention that was shown to be effective in Nepal. It has now been ‘implemented’ in six further sites under cluster RCT evaluation. Four of the seven studies yielded positive results, and three null results. Comparing and contrasting has yielded a plausible theory, so we have a good idea for whom the intervention works and why.[4] All seven studies are implementations, not just the latter six!

So, logical analysis does not yield any clear distinction between Implementation Research on the one hand and HS&DR on the other. The terms might denote some subtle shift of emphasis, but as a communication tool in a crowded lexicon, we think that Implementation Research is a term liable to sow confusion, rather than generate clarity.

Table 1

Term | Definition | Source
Management research | “…concentrates on the nature and consequences of managerial actions, often taking a critical edge, and covers any kind of organization, both public and private.” | Easterby-Smith M, Thorpe R, Jackson P. Management Research. London: Sage, 2012.
Health Services Research (HSR) | “…examines how people get access to health care, how much care costs, and what happens to patients as a result of this care.” | Agency for Healthcare Research and Quality. What is AHRQ? [Online]. 2002.
HS&DR | “…aims to produce rigorous and relevant evidence on the quality, access and organisation of health services, including costs and outcomes.” | INVOLVE. National Institute for Health Research Health Services and Delivery Research (HS&DR) programme. [Online]. 2017.
Operations research | “…applying advanced analytical methods to help make better decisions.” | Warwick Business School. What is Operational Research? [Online]. 2017.
Patient safety research | “…coordinated efforts to prevent harm, caused by the process of health care itself, from occurring to patients.” | World Health Organization. Patient Safety. [Online]. 2017.
Comparative Effectiveness research | “…designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options.” | Agency for Healthcare Research and Quality. What is Comparative Effectiveness Research. [Online]. 2017.
Implementation research | “…the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).” | Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.

We have ‘audited’ David Peters and colleagues’ BMJ article and found that every attribute they claim for Implementation Research applies equally well to HS&DR, as you can see in Table 2. However, this does not mean that we should abandon ‘Implementation Science’ – a set of ideas useful in designing an intervention. For example, stakeholders of all sorts should be involved in the design; barriers and facilitators should be identified; and so on. By analogy, I think Safety Research is a back-to-front term, but I applaud the tools and insights that ‘safety science’ provides.

Table 2

Attributes claimed for implementation research (Peters et al. [2])
“…attempts to solve a wide range of implementation problems”
“…is the scientific inquiry into questions concerning implementation – the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (…interventions).”
“…can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation.”
“The intent is to understand what, why, and how interventions work in ‘real world’ settings and to test approaches to improve them.”
“…seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects.”
“…is especially concerned with the users of the research and not purely the production of knowledge.”
“…uses [implementation outcome variables] to assess how well implementation has occurred or to provide insights about how this contributes to one’s health status or other important health outcomes.”
…needs to consider “factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources).”
“…takes a pragmatic approach, placing the research question (or implementation problem) as the starting point to inquiry; this then dictates the research methods and assumptions to be used.”
“…questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective.”
“A wide range of qualitative and quantitative research methods can be used…”
“…is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention.”

 — Richard Lilford, CLAHRC WM Director and Peter Chilton, Research Fellow

References:

  1. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  2. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.
  3. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017; 356: i6795.
  4. Prost A, Colbourn T, Seward N, et al. Women’s groups practising participatory learning and action to improve maternal and newborn health in low-resource settings: a systematic review and meta-analysis. Lancet. 2013; 381: 1736-46.

*Naming of Parts by Henry Reed, which Ray Watson alerted us to:

Today we have naming of parts. Yesterday,
We had daily cleaning. And tomorrow morning,
We shall have what to do after firing. But to-day,
Today we have naming of parts. Japonica
Glistens like coral in all of the neighbouring gardens,
And today we have naming of parts.

Measuring Quality of Care

Measuring quality of care is not a straightforward business:

  1. Routinely collected outcome data tend to be misleading because of very poor ratios of signal to noise.[1]
  2. Clinical process (criterion based) measures require case note review and miss important errors of omission, such as diagnostic errors.
  3. Adverse events also require case note review and are prone to measurement error.[2]

Adverse event review is widely practised, usually involving a two-stage process:

  1. A screening process (sometimes to look for warning features [triggers]).
  2. A definitive phase to drill down in more detail and refute or confirm (and classify) the event.

A recent HS&DR report [3] is important for two particular reasons:

  1. It shows that a one-stage process is as sensitive as the two-stage process. So triggers are not needed; just as many adverse events can be identified if notes are sampled at random.
  2. In contrast to (other) triggers, deaths really are associated with a high rate of adverse events (apart, of course, from the death itself). In fact not only are adverse events more common among patients who have died than among patients sampled at random (nearly 30% vs. 10%), but the preventability rates (probability that a detected adverse event was preventable) also appeared slightly higher (about 60% vs. 50%).

This paper has clear implications for policy and practice, because if we want a population ‘enriched’ for high adverse event rates (on the ‘canary in the mineshaft’ principle), then deaths provide that enrichment. The widely used trigger tool, however, serves no useful purpose – it does not identify a higher than average risk population, and it is more resource intensive. It should be consigned to history.
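To put rough numbers on the enrichment argument, a minimal sketch follows, using only the approximate figures quoted above (adverse event rates of about 30% among deaths vs. 10% in randomly sampled notes, with preventability of roughly 60% vs. 50%); these are illustrative, not an independent re-analysis of the report.

```python
# Back-of-envelope illustration of 'enrichment' when reviewing deaths rather
# than randomly sampled case notes. Rates are the approximate figures quoted
# in the text above, used purely for illustration.

def expected_preventable_events(n_notes, ae_rate, preventability):
    """Expected number of preventable adverse events found in n_notes reviews."""
    return n_notes * ae_rate * preventability

deaths = expected_preventable_events(100, ae_rate=0.30, preventability=0.60)
random_notes = expected_preventable_events(100, ae_rate=0.10, preventability=0.50)

print(f"Per 100 death reviews:  ~{deaths:.0f} preventable adverse events")
print(f"Per 100 random reviews: ~{random_notes:.0f} preventable adverse events")
```

On these figures, reviewing deaths yields roughly three to four times as many preventable adverse events per 100 notes reviewed as random sampling does.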

Lastly, England and Wales have mandated a process of death review, and the adverse event rate among such cases is clearly of interest. A word of caution is in order here. The reliability (inter-observer agreement) in this study was quite high (kappa = 0.5), but not high enough for comparisons across institutions to be valid. If cross-institutional comparisons are required, then:

  1. A set of reviewers must review case notes across hospitals.
  2. At least three reviewers should examine each case note.
  3. Adjustment must be made for reviewer effects, as well as prognostic factors.

The statistical basis for these requirements is laid out in detail elsewhere.[4] It is clear that reviewers should not review notes from their own hospitals if any kind of comparison across institutions is required – otherwise the results will reflect the reviewers rather than the hospitals.
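To make the third requirement concrete, a cross-institutional analysis would typically include reviewer as a random effect alongside hospital and case-mix terms, so that systematic differences between reviewers do not masquerade as differences between hospitals. A minimal sketch is below; the column names are hypothetical, it assumes a continuous preventability score with multiple reviewers per hospital, and it is illustrative rather than the analysis used in the cited studies.

```python
# Illustrative mixed model for comparing hospitals on case-note review scores
# while adjusting for reviewer effects and prognostic (case-mix) factors.
# Hypothetical column names; 'preventability' is assumed to be scored on a
# continuous scale, with one row per reviewer-by-case review.
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("case_note_reviews.csv")

model = smf.mixedlm(
    "preventability ~ C(hospital) + age + acuity",  # hospital contrasts plus case-mix
    data=reviews,
    groups=reviews["reviewer"],                     # random intercept per reviewer
)
result = model.fit()
print(result.summary())
```

The hospital coefficients then estimate between-hospital differences net of reviewer leniency or severity; a fuller model might also allow a random effect per case note where several reviewers assess the same record.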

— Richard Lilford, CLAHRC WM Director

References:

  1. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  2. Lilford R, Mohammed M, Braunholtz D, Hofer T. The measurement of active errors: methodological issues. Qual Saf Health Care. 2003; 12(s2): ii8-12.
  3. Mayor S, Baines E, Vincent C, et al. Measuring harm and informing quality improvement in the Welsh NHS: the longitudinal Welsh national adverse events study. Health Serv Deliv Res. 2017; 5(9).
  4. Manaseki-Holland S, Lilford RJ, Bishop JR, Girling AJ, Chen YF, Chilton PJ, Hofer TP; UK Case Note Review Group. Reviewing deaths in British and US hospitals: a study of two scales for assessing preventability. BMJ Qual Saf. 2016. [ePub].

Wrong Medical Theories do Great Harm but Wrong Psychology Theories are More Insidious

Back in the 1950s, when I went from nothing to something, a certain Dr Spock bestrode the world of child rearing like a colossus. Babies, said Spock, should be put down to sleep in the prone position. Only years later did massive studies show that children are much less likely to experience ‘cot death’ or develop joint problems if they are placed supine – on their backs. Although I survived prone nursing to become a CLAHRC director, tens of thousands of children must have died thanks to Dr Spock’s ill-informed theory.

So, I was fascinated by an article in the Guardian newspaper, titled ‘No evidence to back the idea of learning styles’.[1] The article was signed by luminaries from the world of neuroscience, including Colin Blakemore (who I knew, and liked, when he was head of the MRC). I decided to retrieve the article on which the Guardian piece was mainly based – a review in ‘Psychological Science in the Public Interest’.[2]

The core idea is that people have clear preferences for how they receive information (e.g. pictorial vs. verbal) and that teaching is most effective if delivered according to the preferred style. This idea is widely accepted among psychologists and educationalists, and is advocated in many current textbooks. Numerous tests have been devised to diagnose a person’s learning style so that their instruction can be tailored accordingly. Certification programmes are offered, some costing thousands of dollars. A veritable industry has grown up around this theory. The idea belongs to a larger set of ideas, originating with Jung, called ‘type theories’: the notion that people fall into distinct groups or ‘types’, from which predictions can be made. The Myers-Briggs ‘type’ test is still deployed as part of management training, and I have been subjected to this instrument, despite the fact that its validity as the basis for selection or training has not been confirmed in objective studies. People seem to cling to the idea that types are critically important. That types exist is not the issue of contention (males/females; extrovert/introvert); it is what they mean (learn in different ways; perform differently in meetings) that is disputed. In the case of learning styles the hypothesis of interest is that the style (which can be observed ex ante) meshes with a certain type of instruction (the benefit of which can be observed ex post). The meshing hypothesis holds that different modes of instruction are optimal for different types of person “because different modes of presentation exploit the specific perceptual and cognitive strengths of different individuals.” This hypothesis entails the assumption that people with a certain style (based, say, on a diagnostic instrument or ‘tool’) will experience better educational outcomes when taught in one way (say, pictorial) than when taught in another way (say, verbal). It is precisely this (‘meshing’) hypothesis that the authors set out to test.

Note then that finding that people have different preferences does not confirm the hypothesis. Likewise, finding that different ability levels correlate with these preferences would not confirm the hypothesis. The hypothesis would be confirmed by finding that teaching method 1 is more effective than method 2 in type A people, while teaching method 2 is more effective than teaching method 1 in type B people.
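In analytic terms, the meshing hypothesis is a test of a style-by-method interaction: it is the interaction term, not either main effect, that carries the hypothesis. A minimal sketch of such a test is below; the dataset and column names are hypothetical and are not taken from the reviewed studies.

```python
# Sketch of a test of the 'meshing' hypothesis: learning outcome regressed on
# teaching method, learner type, and their interaction. Evidence for meshing
# is a significant crossover interaction, not a main effect of either factor.
# Hypothetical file and column names, for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("learning_styles_trial.csv")  # one row per learner

model = smf.ols(
    "test_score ~ C(teaching_method) * C(learner_type)", data=data
).fit()

# The interaction coefficients (and their p-values) are the terms of interest.
print(model.summary())
```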

The authors find, from the voluminous literature, only four studies that test the above hypothesis. One of these was of weak design. The three stronger studies provide null results. The weak study did find a style-by-treatment interaction, but only after “the outliers were excluded for unspecified reasons.”

Of course, the null results do not exclude the possibility of an effect, particularly a small effect, as the authors point out. To shed further light on the subject they explore related literatures. First they examine aptitude (rather than just learning style preference) to see whether there is an interaction between aptitude and pedagogic method. Here the literature goes right back to Cronbach in 1957. One particular hypothesis was that high-aptitude students fare better in a less structured teaching format, while those with less aptitude fare better where the format is structured and explicit. Here the evidence is mixed, such that in about half of studies less structure suits high-ability students, while more structure suits less able students – one (reasonable) interpretation of the different results is that there may be certain contexts where aptitude/treatment interactions do occur and others where they do not. Another hypothesis concerns an aspect of personality called ‘locus of control’. It was hypothesised that an internal locus of control (people who incline to believe their destiny lies in their own hands) would mesh with an unstructured format of instruction, and vice versa. Here the evidence, taken in the round, tends to confirm the hypothesis.

So, there is evidence (not definitive, but compelling) for an interaction between personality and aptitude on the one hand and teaching method on the other. There is no such evidence for learning style preference. This is not to deny that some students will need an idea to be explained one way while others need it explained in a different way – this is something good teachers sense as they proceed, as emphasised in a previous blog.[3] But tailoring your explanation according to the reaction of students is one thing; determining it according to a pre-test is another. In fact, the learning style hypothesis may impede good teaching by straitjacketing teaching according to a pre-determined format, rather than encouraging teachers to adapt to the needs of students in real time. Receptivity to the expressed needs of the learner seems preferable to following a script to which the learner is supposed to conform.

And why have I chosen this topic for the main News Blog article? Two reasons:

First, it shows how an idea may gain purchase in society with little empirical support, and we should be ever on our guard – the Guardian lived up to its name in this respect!

Second, because health workers are educators; we teach the next generation and we teach our peers. Also, patient communication has an undoubted educational component (see our previous main blog [4]). So we should keep abreast of general educational theory. Many CLAHRC WM projects have a strong educational dimension.

— Richard Lilford, CLAHRC WM Director

References:

  1. Hood B, Howard-Jones P, Laurillard D, et al. No Evidence to Back Idea of Learning Styles. The Guardian. 12 March 2017.
  2. Pashler H, McDaniel M, Rohrer D, Bjork R. Learning Styles: Concepts and Evidence. Psychol Sci Public Interest. 2008; 9(3): 105-19.
  3. Lilford RJ. Education Update. NIHR CLAHRC West Midlands News Blog. 2 September 2016.
  4. Lilford RJ. Doctor-Patient Communication in the NHS. NIHR CLAHRC West Midlands News Blog. 24 March 2017.

Doctor-Patient Communication in the NHS

Andrew McDonald (former Chief Executive of Independent Parliamentary Standards Authority) was recently asked by the Marie Curie charity to examine the quality of doctor-patient communication in the NHS, as discussed on BBC Radio 4’s Today programme on 13 March 2017 (you can listen online). His report concluded that communication was woefully inadequate and that patients were not getting the clear and thorough counselling that they needed in order to understand their condition and make informed choices about options in their care. Patients need to understand what is likely to happen to them, and not all patients with the same condition will want to make the same choice(s). Indeed my own work [1] is part of a large body of research, which shows that better information leads to better knowledge, which in turn affects the choices that patients make. Evidence that the medical and caring professions do not communicate in an informative and compassionate way is therefore a matter of great concern.

However, there is a paradox – feedback from patients, that communication should lie at the heart of their care, has not gone unheard. For instance, current medical training is replete with “communication skills” instruction. Why then do patients still feel dissatisfied; why have matters not improved radically? My diagnosis is that good communication is not mainly a technical matter. Contrary to what many people think, the essence of good communication does not lie in avoiding jargon or following a set of techniques – a point often emphasised by my University of Birmingham colleague John Skelton. These technical matters should not be ignored – but they are not the nub of the problem.

In my view good communication requires effort, and poor communication reflects an unwillingness to make that effort; it is mostly a question of attitude. Good communication is like good teaching. A good communicator has to take time to listen and to tailor their responses to the needs of the individual patient. These needs may be expressed verbally or non-verbally, but either way a good communicator needs to be alive to them, and to respond in the appropriate way. Sometimes this will involve rephrasing an explanation, but in other cases the good communicator will respond to emotional cues. For example, a sensitive doctor will notice if, in the course of a technical explanation, a patient looks upset – the good doctor will not ignore this cue, but will acknowledge the emotion, invite the patient to discuss his or her feelings, and be ready to deal with the flood of emotion that may result. The good doctor has to do emotional work, for example showing sympathy, not just in what is said, but also in how it is said. I am afraid to say that sometimes the busyness of the doctor is simply used as an excuse to avoid interactive engagement at a deeper emotional level. Yes, bringing feelings to the surface can be uncomfortable, but enduring the discomfort is part of professional life. In fact, recent research carried out by Gill Combes in CLAHRC WM showed that doctors are reticent about bringing psychological issues into the open.[2] Deliberately ignoring emotional cues and keeping things at a superficial level is deeply unsatisfying to patients. Glossing over feelings also impedes communication regarding more technical issues, as it is very hard for a person to assimilate medical information when they are feeling emotional, or nursing bruised feelings. In the long run such a technical approach to communication impoverishes a doctor’s professional life.

Doctors sometimes say that they should stick to the technical and that the often lengthy business of counselling should be carried out by other health professions, such as nurses. I have argued before that this is a blatant and unforgivable abrogation of responsibility; it vitiates values that lie (and always will lie) at the heart of good medical practice.[3] The huge responsibilities that doctors carry to make the right diagnosis and prescribe the correct treatment entail a psychological intimacy, which is almost unique to medical practice and which cannot easily be delegated. The purchase that a doctor has on a patient’s psyche should not be squandered. It is a kind of power, and like all power it may be wasted, misused or used to excellent effect.

The concept I have tried to explicate is that good communication is a function of ethical practice, professional behaviour and the medical ethos. It lies at the heart of the craft of medicine. If this point is accepted, it has an important corollary – the onus for teaching communication skills lies with medical practitioners rather than with psychologists or educationalists. Doctors must be the role models for other doctors. I was fortunate in my medical school in Johannesburg to be taught by professors of Oslerian ability who inspired me in the art of practice and the synthesis of technical skill and human compassion. Some people have a particular gift for communication with patients, but the rest of us must learn and copy, be honest with ourselves when we have fallen short, and always try to do better. The most important thing a medical school must do is to nourish and reinforce the attitudes that brought the students into medicine in the first place.

— Richard Lilford, CLAHRC WM Director

References:

  1. Wragg JA, Robinson EJ, Lilford RJ. Information presentation and decisions to enter clinical trials: a hypothetical trial of hormone replacement therapy. Soc Sci Med. 2000; 51(3): 453-62.
  2. Combes G, Allen K, Sein K, Girling A, Lilford R. Taking hospital treatments home: a mixed methods case study looking at the barriers and success factors for home dialysis treatment and the influence of a target on uptake rates. Implement Sci. 2015; 10: 148.
  3. Lilford RJ. Two Ideas of What It Is to be a Doctor. NIHR CLAHRC West Midlands News Blog. August 14, 2015.

Scientists Should Not Be Held Accountable For Ensuring the Impact of Their Research

It has become more and more de rigueur to expect researchers to be the disseminators of their own work. Every grant application requires the applicant to fill in a section on dissemination. We were recently asked to describe our dissemination plans as part of the editorial review process for a paper submitted to the BMJ. Only tact stopped us from responding, “To publish our paper in the BMJ”! Certainly when I started out on my scientific career it was generally accepted that scientists should make discoveries and journals should disseminate them. The current fashion for asking researchers to take responsibility for dissemination of their work emanates, at least in part, from the empirical finding that journal articles by themselves may fail to change practice even when the evidence is strong. Furthermore, it could be argued that researchers are ideal conduits for dissemination. They have a vested interest in uptake of their findings, an intimate understanding of the research topic, and are in touch with networks of relevant practitioners. However, there are dangers in a policy where the producers of knowledge are also held accountable for its dissemination. I can think of three arguments against policies making scientists the vehicle for dissemination and uptake of their own results – scientists may not be good at it; they may be conflicted; and the idea is based on a fallacious understanding of the normative and practical link between research and action.

1. Talent for Communication
There is no good reason to think that researchers are naturally gifted in dissemination, or that this is where their inclination lies. Editors, journalists, and I suppose blog writers, clearly have such an interest. However, an inclination to communicate is not a necessary condition for becoming an excellent researcher. Specialisation is the basis for economic progress, and there is an argument that the benefits of specialisation apply to the production and communication of knowledge.

2. Objectivity
Pressurising researchers to market their own work may create perverse incentives. Researchers may be tempted to overstate their findings, or over-interpret the implications for practice. There is also a fine line to be drawn between dissemination (drawing attention to findings) and advocacy (persuading people to take action based on findings). It is along the slippery slope between dissemination and advocacy that the dangers of auto-dissemination reside. The vested interest that scientists have in the uptake of their results should serve as a word of caution for those who militantly maintain that scientists should be the main promoters of their own work. The climate change scientific fraternity has been stigmatised by overzealous scientific advocacy. Expecting scientists to be the bandleader for their own product, and requiring them to demonstrate impact, has created perverse incentives.

3. Research Findings and Research Implications
With some noble exceptions, it is rare for a single piece of primary research to be sufficiently powerful to drive a change in practice. In fact replication is one of the core tenets of scientific practice. The pathway from research to change of practice should go as follows:

  1. Primary researcher conducts study and publishes results.
  2. Research results replicated.
  3. Secondary researcher conducts systematic review.
  4. Stakeholder committee develops guidelines according to established principles.
  5. Local service providers remove barriers to change in practice.
  6. Clinicians adopt the new method.

The ‘actors’ at these different stages can surely overlap, but this process nevertheless provides a necessary degree of detachment between scientific results and the actions that should follow, and it makes use of different specialisms and perspectives in translating knowledge into practice.

We would be interested to hear contrary views, but note that I am not arguing that scientists should never be involved in the dissemination of their own work, merely that this should not be a requirement or expectation.

— Richard Lilford, CLAHRC WM Director

Evaluating Interventions to Improve the Integration of Care (Among Multiple Providers and Across Multiple Sites)

Typically, healthcare improvement programmes have been institution-specific, examining, for example, hospitals, general practices, or care homes. While such solipsistic quality improvement initiatives obviously have their place, they also have severe limitations for the patient of today, who typically has many complex conditions and whose care is therefore fragmented across many different care providers working in different places. Such patients perceive, and are sometimes the victims of, gaps in the system. Recent attention has therefore turned to approaches to close these gaps, and I am leading an NIHR programme development grant specifically for this purpose (Improving clinical decisions and teamwork for patients with multimorbidity in primary care through multidisciplinary education and facilitation). There are many different approaches to closing these gaps in care: the Nobel Prize winner Elinor Ostrom has featured previously in this News Blog for her seminal work on barriers and facilitators to institutional collaboration,[1] while my colleague, CLAHRC WM Deputy Director Graeme Currie, has approached this issue from a management science perspective.

The problem for a researcher is to measure the effectiveness of initiatives to improve care across centres. This is not natural territory for cluster RCTs since it would be necessary to randomise whole ‘health economies’ rather than just organisations such as hospitals or general practices. Furthermore, many of the outcomes that might be observed in such studies, such as standardised mortality rates, are notoriously insensitive to change.[2] The ESTHER Project in Sweden is famous for closing gaps in care across the hospital/community nexus.[3] The evaluation, however, consists of little more than stakeholder interviews where people seem to recite the perceived wisdom of the day as evidence of effectiveness. While I think it is eminently plausible that the intervention was effective, and while the statements made during the qualitative interviews may have a certain verisimilitude, this all seems very weak evidence of effectiveness. It lacks any quantification, such as could be used in a health economic model. Is there a halfway house between a cluster RCT with hard outputs like mortality on the one hand, and ‘how was it for you?’ research on the other?

While it is not easy to come up with a measurement system, there is one person who perceives the entire pathway, and that is the patient. The patient is really the only person who can provide an assessment of care quality across multiple providers. There are many patient measures. Some relate to outcome, for instance health and social care related quality of life (EQ-5D-5L, ASCOT SCT4 and OPQOL-brief [4]). Such measures should be used in service delivery studies, but may be insensitive to change, as stated above. It is therefore important to measure patient perception of the quality of their care. However, such measurements tend either to be non-specific (e.g. LTC-6 [5]) or to look at only one aspect of care, such as continuity (PPCMC),[6] treatment burden [7] or person-centredness.[8] We propose a single quality of integrated care tool incorporating dimensions that have been shown to be important to patients, and are collaborating with PenCLAHRC, who are working on such a tool. Constructs that should be considered include conflicting information from different caregivers; contradictory forms of treatment (such as one clinician countermanding a prescription from another caregiver); duplication or redundancy of advice and information; and satisfaction with care overall and with the duration of contacts. We suspect that most patients would prefer fewer, more in-depth contacts to a larger number of rushed contacts.

It might also be possible to design more imaginative qualitative research that goes beyond simply asking questions and uses methods to elicit some of patients’ deeper feelings by prompting their memory. One such method is photo-voice, where patients are asked to take photos at various points in their care, and use these as a basis for discussion. We have used such naturalistic methods in our CLAHRC.[9] Such methods could be harnessed in the co-design of services, where patients and carers are not just asked how they perceive services, but are actively involved in designing solutions.

Salient quantitative measurements may also be obtained from NHS data systems. Hospital admission and readmission rates should be measured in studies of system-wide change. An effective intervention would result in more satisfied patients with lower rates of hospital admission. What about quantifying physical health? Adverse events in general, and mortality in particular, have poor sensitivity, such that signal, even after risk adjustment, would only emerge from noise in an extremely large study, or in a very high-risk client group – see ‘More on Integrated Care’ in this News Blog. Adverse events and death can be consolidated into generic health measurements (QALYs/DALYs), but, again, these are insensitive for the reasons given above. Evaluating methods to improve the integration of care may be an ‘inconvenient truth scenario’ [10] where it is necessary to rely on process measures and other proxies for clinical / welfare outcomes. Since our CLAHRC is actively exploring the evaluation of service interventions to improve integration of care, we would be very interested to hear from others and explore approaches to evaluating care across care boundaries.
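To illustrate just how insensitive mortality is as an endpoint, consider a hypothetical worked example using my own illustrative numbers (not figures from any of the cited studies): detecting a 5% relative reduction from a baseline mortality of 5% (i.e. 5.0% falling to 4.75%) at 80% power and two-sided 5% significance requires over a hundred thousand patients per arm. A minimal sketch of the standard two-proportion sample size calculation:

```python
# Approximate sample size per arm to detect a difference between two
# proportions (two-sided alpha = 0.05, power = 0.80), using the standard
# normal-approximation formula. Numbers are illustrative only, chosen to show
# why mortality is an insensitive endpoint for service-level interventions.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)          # critical value for two-sided test
    z_b = norm.ppf(power)                  # value corresponding to desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

# A 5% relative reduction from a baseline mortality of 5%:
print(round(n_per_arm(0.05, 0.0475)))      # roughly 116,000 patients per arm
```

Process measures, by contrast, typically have much higher event rates and larger plausible effect sizes, which is one reason to prefer them as proxies in studies of this kind.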

— Richard Lilford, CLAHRC WM Director

References:

  1. Ostrom E. Beyond Markets and States: Polycentric Governance of Complex Economic Systems. Am Econ Rev. 2010; 100(3): 641-72.
  2. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  3. Institute for Healthcare Improvement. Improving Patient Flow: The Esther Project in Sweden. Boston, MA: Institute for Healthcare Improvement, 2011.
  4. Bowling A, Hankins M, Windle G, Bilotta C, Grant R. A short measure of quality of life in older age: the performance of the brief Older People’s Quality of Life questionnaire (OPQOL-brief). Arch Gerontol Geriatr. 2013; 56: 181-7.
  5. Glasgow RE, Wagner EH, Schaefer J, Mahoney LD, Reid RJ, Greene SM. Development and validation of the Patient Assessment of Chronic Illness Care (PACIC). Med Care. 2005; 43(5): 436-44.
  6. Haggerty JL, Robergr D, Freeman GK, Beaulieu C, Breton M. Validation of a generic measure of continuity of care: When patients encounter several clinicians. Ann Fam Med. 2012; 10: 443-51.
  7. Tran VT, Harrington M, Montori VM, Barnes C, Wicks P, Ravaud P. Adaptation and validation of the Treatment Burden Questionnaire (TBQ) in English using an internet platform. BMC Medicine. 2014; 12: 109.
  8. Mercer SW, Scottish Executive. Care Measure. Scottish Executive, 2004.
  9. Redwood S, Gale N, Greenfield S. ‘You give us rangoli, we give you talk’ – Using an art-based activity to elicit data from a seldom heard group. BMC Med Res Methodol. 2012; 12: 7.
  10. Lilford RJ. Integrated Care. NIHR CLAHRC West Midlands News Blog. 19 June 2015.