Category Archives: Director & Co-Directors’ Blog

Patient and Public Involvement: Direct Involvement of Patient Representatives in Data Collection

It is widely accepted that the public and patient voice should be heard loud and clear in the selection of studies, in the design of those studies, and in the interpretation and dissemination of the findings. But what about the involvement of patients and the public in the collection of data? Before science became professionalised, all scientists could have been considered members of the public. Robert Hooke, for example, could have called himself architect, philosopher, physicist, chemist, or just Hooke. Today, the public are involved in data collection in many scientific enterprises. For example, householders frequently contribute data on bird populations, and Prof Brian Cox involved the public in the detection of new planets in his highly acclaimed television series. In medicine, patients have been involved in collecting data; for example, patients with primary biliary cirrhosis were the data collectors in a randomised trial.[1] However, the topic of public and patient involvement in data collection is deceptively complex. This is because there are numerous procedural safeguards governing access to users of the health service and restricting disbursement of the funds used to pay for research.

Let us consider first the issue of access to patients. It is not permissible to collect research data without undergoing certain procedural checks; in the UK it is necessary to be cleared by the Disclosure and Barring Service (DBS) and to have the necessary permissions from the institutional authorities. You simply cannot walk onto a hospital ward and start handing out questionnaires or collecting blood samples.

Then there is the question of training. Before collecting data from patients it is necessary to be trained in how to do so, covering both salient ethical and scientific principles. Such training is not without its costs, which takes us to the next issue.

Researchers are paid for their work and, irrespective of whether the funds are publicly or privately provided, access to payment is governed by fiduciary and equality/diversity legislation and guidelines. Access to scarce resources is usually governed by some sort of competitive selection process.

None of the above should be taken as an argument against patients and the public taking part in data collection. It does, however, mean that this needs to be a carefully managed process. Of course, things are very much simpler if access to patients is not required. For example, conducting a literature survey would require only that the person doing it was technically competent, and in many cases members of the public would already have all, or some, of the necessary skills. I would be very happy to collaborate with a retired professor of physics (if anyone wants to volunteer!). But that is not the point. The point is that procedural safeguards must be applied, and this entails management structures capable of overseeing the process.

Research may be carried out by accessing members of the public who are not patients, or at least who are not accessed through the health services. As far as I know there are no particular restrictions on doing so, and I guess that such contact is governed by the common law covering issues such as privacy, battery, assault, and so on. The situation becomes different, however, if access is achieved through a health service organisation, or conducted on behalf of an institution, such as a university. Then presumably any member of the public wishing to collect data from other members of the public would fall under the governance arrangements of the relevant institution. The institution would have to ensure not only that the study was ethical, but that the data-collectors had the necessary skills and that funds were disbursed in accordance with the law. Institutions already deploy ‘freelance’ researchers, so I presume that the necessary procedural arrangements are already in place.

This analysis was stimulated by a discussion in the PPI committee of CLAHRC West Midlands, and represents merely my personal reflections based on first principles. It does not represent my final, settled position, let alone that of the CLAHRC WM, or any other institution. Rather it is an invitation for further comment and analysis.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Browning J, Combes B, Mayo MJ. Long-term efficacy of sertraline as a treatment for cholestatic pruritus in patients with primary biliary cirrhosis. Am J Gastroenterol. 2003; 98: 2736-41.

Cognitive Behavioural Therapy vs. Mindfulness Therapy

It is known that mindfulness therapy is effective in improving depression and, in many circumstances, in improving chronic pain (see later in News Blog). What is not so clear is whether it is better than the more standard therapy of cognitive behavioural therapy (CBT).

Cognitive behavioural therapy aims to abolish or reduce painful and harmful thoughts. Mindfulness therapy, on the other hand, does not seek to extirpate the depressing thoughts, but rather to help the person dissociate themselves from the harmful consequences of those thoughts. It often involves an element of meditation.

We have found three recent studies which compare CBT and mindfulness therapy head-to-head for depression.[1-3] In all three RCTs the two therapies were a dead heat. In short, both methods seem equally effective and certainly they are both better than nothing. But does this mean that they are equal; that the choice does not matter one way or the other?

In this article I argue that the fact that the two therapies are equally effective in improving mood does not mean that they are equivalent. This is because they are designed to have different effects – abolition of harmful thoughts in one case, learning to live with them in the other. So it is reasonable to ask which one would prefer: abolishing the painful thoughts, or simply learning not to be affected by them.

Philosophically, the argument behind CBT is that thoughts, at least at a certain level, are a kind of behaviour. They are a behaviour in the sense that they can be changed under conscious control. Mindfulness therapy does not attempt to ‘over-write’ thoughts. This means that the two therapies, in so far as they achieve their objectives, are not philosophically equivalent. Moreover, there are arguments in favour of removing the harmful thoughts, even if this does not result in any greater improvement in mood than the counterfactual. Consider, by analogy, a man whose wife is annoyed by certain movements that he is unable to control: it is surely much better, both from her point of view and from his, that these movements be abolished altogether, rather than just tolerated. Likewise, consider a person who is chronically distressed by a recurring memory of the painful death of a parent. Again, it is surely better that this person trains himself to think of another aspect of the parent’s life whenever the troubling thoughts recur, than to simply continue to remember the death, but not get upset by it.

So, I think that CBT is philosophically preferable to mindfulness therapy, even if it is no more effective in improving mood. From a philosophical point of view, it is important to develop a high rectitude way of thinking. When negative or morally questionable thoughts pop into the brain, as they do from time to time, these should be suppressed. A racist thought, for example, should be replaced with thoughts of higher rectitude. It is the purpose of the examined life to be able to control negative or bigoted thoughts and supplant them with more positive thoughts under conscious control. From this philosophical perspective CBT can be seen as an extension of the human ability to supplant negative or reprehensible thoughts with ones that are more positive or of higher rectitude. I choose CBT over mindfulness; for all that they might be equally effective in elevating mood, psychiatric treatments have implications that go beyond purely clinical outcomes – since they affect the mind there is always a philosophical dimension.

— Richard Lilford, CLAHRC WM Director

References:

  1. Manicavasagar V, Perich T, Parker G. Cognitive Predictors of Change in Cognitive Behaviour Therapy and Mindfulness-Based Cognitive Therapy for Depression. Behav Cogn Psychother. 2012; 40: 227-32.
  2. Omidi A, Mohammadkhani P, Mohammadi A, Zargar F. Comparing Mindfulness Based Cognitive Therapy and Traditional Cognitive Behavior Therapy With Treatments as Usual on Reduction of Major Depressive Symptoms. Iran Red Crescent Med J. 2013; 15(2): 142-6.
  3. Sundquist J, Lilja A, Palmér K, et al. Mindfulness group therapy in primary care patients with depression, anxiety and stress and adjustment disorders: randomised controlled trial. Br J Psychiatry. 2015; 206(2): 128-35.

‘Information is not knowledge’: Communication of Scientific Evidence and how it can help us make the right decisions

Every one of us is required to make many decisions: from small decisions, such as what shoes to wear with an outfit or whether to have a second slice of cake; to larger decisions, such as whether to apply for a new job or what school to send our children to. For decisions where the outcome can have a large impact we don’t want to play a game of ‘blind man’s buff’ and make a decision at random. We do our utmost to ensure that whatever decision we arrive at, it is the right one. We go through a process of getting hold of information from a variety of sources we trust and processing that knowledge to help us make up our minds. And in this digital age, we have access to more information than ever before.

When it comes to our health, we are often invited to be involved in making shared decisions about our own care as patients. Because it’s our health that’s at stake, this can bring pressures of not only making a decision but also making the right decision. Arriving at a wrong decision can have significant consequences, such as over- or under-medication or missing out from advances in medicine. But how do we know how to make those decisions and where do we get our information from? Before we start taking a new course of medication, for example, how can we find out if the drugs are safe and effective, and how can we find out the risks as well as the benefits?

The Academy of Medical Sciences produced a report, ‘Enhancing the use of scientific evidence to judge the potential benefits and harms of medicine’,[1] which examines what changes would be necessary to help patients make better-informed decisions about taking medication. It is often the case that there is robust scientific evidence that can be useful in helping patients and clinicians make the right choices. However, this information can be difficult to find, hard to understand, and cast adrift in a sea of poor-quality or misleading information. With so much information available, some of it conflicting, is it any surprise that in a Medical Information Survey almost two-thirds of British adults said they would trust the experiences of friends and family, while only 37% would trust data from clinical trials?[2]

The report offers recommendations on how scientific evidence can be made available to enable people to weigh up the pros and cons of new medications and arrive at a decision they are comfortable with. These recommendations include: using NHS Choices as a ‘go to’ hub of clear, up-to-date information about medications, with information about benefits and risks that is easy to understand; improving the design, layout and content of patient information leaflets; giving patients longer appointment times so they can have more detailed discussions about medications with their GP; and a traffic-light system to be used by the media to endorse the reliability of scientific evidence.

This is all good news for anyone having to decide whether to start taking a new drug. I would welcome the facility of going to a well-designed website with clear information about the risks and benefits of taking particular drugs rather than my current approach of asking friends and family (most of whom aren’t medically trained), searching online, and reading drug information leaflets that primarily present long lists of side-effects.

Surely this call for clear, accessible information about scientific evidence is just as relevant to all areas of medical research, including applied health. Patients and the public have a right to know how scientific evidence underpinning important decisions in care is generated and to be able to understand that information. Not only do patients and the public also make decisions about aspects of their care, such as whether to give birth at home or in hospital, or whether to take a day off work to attend a health check, but they should also be able to find and understand evidence that explains why care is delivered in a particular way, such as why many GPs now use a telephone triage system before booking in-person appointments. Researchers, clinicians, patients and communicators of research all have a part to play.

In CLAHRC West Midlands, we’re trying to ‘do our bit’. We aim to make accessible a sound body of scientific knowledge through different information channels and our efforts include:

  • Involving patients and the public in writing lay summaries of our research projects on our website, so people can find out about the research we do.
  • Communicating research evidence in accessible formats, such as CLAHRC BITEs, which are reviewed by our Public Advisors.
  • Producing Method Matters, a series aimed at giving members of the public a better understanding of concepts in applied health research.

The recommendations from the Academy of Medical Sciences can provide a useful starting point for further discussions on how we can communicate effectively in applied health research and ensure that scientific evidence, rather than media hype or incomplete or incorrect information, is the basis for decision-making.

— Magdalena Skrybant, CLAHRC WM PPIE Lead

References:

  1. The Academy of Medical Sciences. Enhancing the use of scientific evidence to judge the potential benefits and harms of medicine. London: Academy of Medical Sciences; 2017.
  2. The Academy of Medical Sciences. Academy of Medical Sciences: Medical Information Survey. London: Academy of Medical Sciences; 2016.

The Beneficial Effects of Taking Part in International Research: an Old Chestnut Revisited

Two recent and well-written articles grapple with the question of whether or not clinical trials are beneficial, net of any benefit conferred by the therapeutic modalities evaluated in those trials.[1] [2]

The first study, from the Netherlands, concerns the effect of taking part in clinical trials where controls are made up of people not participating in trials (presumably because they were not offered entry into the trial).[1] This is the topic of a rather extensive literature, including a study to which I contributed.[3] The latter study found that the putative ‘trial effect’ applied only in circumstances where care given to control patients was not protocol-directed. In other words, our results suggested that the ‘trial effect’ was really a ‘protocol effect’. In that case the effect should be ephemeral and disappear as greater proportions of care become protocolised. And that is what appears to have happened – Liu, et al.[1] report no benefit to trial participants versus non-trial patients for the highly protocolised disease Hodgkin lymphoma. They speculate that while participation in trials does not affect individual patient care in the short term, hosting trials does sensitise clinicians at an institutional level, so that they are more likely than clinicians from non-participating hospitals to practise evidence-based care. However, they offer no direct evidence for this assertion. Such evidence is, however, provided by the next study.

The effect of high participation rates in clinical trials at the hospital level is evaluated in an elegant study recently published in the prestigious journal ‘Gut’.[2] The team of authors (which includes prominent civil servants and many distinguished cancer specialists and statisticians) compared outcomes from colon cancer according to the extent to which the hospital providing treatment participated in trials. This ingenious study was accomplished by linking the NIHR’s data on clinical trials participation to cancer registry data and Hospital Episode Statistics. It turned out that survival was significantly better in high-participation hospitals than in lower-participation hospitals, even after substantial risk adjustment. “Residual confounding” do I hear you say? Perhaps, but the authors have two further lines of evidence for the causal explanation. First, they documented a dose-response: the greater the level of participation, the greater the improvement in survival. Of course, an unknown confounder that was correlated with participation rates would produce just such a finding. The second line of evidence is more impressive – the longer the duration over which a hospital had sustained high participation rates, the greater the effect. Again, this argument is not impregnable – duration might not serve as a good instrumental variable. How might the case be further strengthened (or refuted)? By unravelling the theoretical pathway between explanatory and outcome variables.[4] Since this is a database study, the process variables that might mediate the putative effect were not available to the authors. However, separate studies have indeed found an association between improved processes of care and trial participation.[5] Taken in the round, I think that a cause/effect explanation holds (>90% of my probability density favours the causal explanation).

— Richard Lilford, CLAHRC WM Director

References:

  1. Liu L, Giusti F, Schaapveld M, et al. Survival differences between patients with Hodgkin lymphoma treated inside and outside clinical trials. A study based on the EORTC-Netherlands Cancer Registry linked data with 20 years of follow-up. Br J Haematol. 2017; 176: 65-75.
  2. Downing A, Morris EJA, Corrigan N, et al. High hospital research participation and improved colorectal cancer survival outcomes: a population-based study. Gut. 2017; 66: 89-96.
  3. Braunholtz DA, Edwards SJ, Lilford RJ. Are randomized clinical trials good for us (in the short term)? Evidence for a “trial effect”. J Clin Epidemiol. 2001; 54(3): 217-24.
  4. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  5. Selby P. The impact of the process of clinical research on health service outcomes. Ann Oncol. 2011; 22(s7): vii2-4.

Private Consultations More Effective than Public Provision in Rural India

Doing work across high-income countries (CLAHRC WM) and lower income countries (CLAHRC model for Africa) provides interesting opportunities to compare and contrast. For example, our work on user fees in Malawi [1] mirrors that in high-income countries [2] – in both settings, relatively small increments in out-of-pocket expenses result in a large decrease in demand, and do so indiscriminately (the severity of disease among those who access services is not shifted towards more serious cases). However, the effect of private versus public provision of health care is rather more nuanced.

News Blog readers are likely aware of the famous RAND study in the US.[3] People were randomised to receive their health care on a fee-for-service basis (‘privately’) vs. on a block contract basis (as in a public service). The results showed that fee-for-service provision resulted in more services being provided (interpreted as over-servicing), but that patients were more satisfied than those experiencing public provision. Clinical quality was no different. In contrast, a study from rural India [4] found that private provision results in markedly improved quality compared to public provision, albeit with a degree of over-servicing.

The Indian study used ‘standardised patients’ (SPs) to measure the quality of care during consultations covering three clinical scenarios – angina, asthma, and the parent of a child with dysentery. The care SPs received was scored against an ideal standard. First, the researchers compared private providers with public providers: private providers spent 30% more time gathering information essential for making a correct diagnosis, were more likely to give treatment appropriate to the condition, and were more likely to be present when the patient turned up for a consultation. There was a positive correlation between the magnitude of fees charged by private providers and both the time spent eliciting symptoms and signs and the probability that the correct treatment would be provided. However, private providers are often not doctors, so this result could reflect a different professional mix, at least in part. To address this point, a second study was done in which the same set of doctors was presented with the same clinical cases – a ‘dual sample’. The results were even starker, with doctors spending twice as long with each patient when seen privately.

Why were these results from rural India so different from the RAND study? The authors suggest that taking a careful history and examination is part of the culture for US doctors, and that they had reached a kind of asymptote, such that context made little difference to this aspect of their behaviour. Put another way, there was little headroom for an incentive system to drive up quality of care. However, in low-income settings where public provision is poorly motivated and regulated, fee-for-service provision drives up quality. The same seems to apply to education, where private provision was found to be of higher quality than public provision in low-income settings – see previous News Blog.[5]

However, it should be acknowledged that none of the available alternatives in rural India were good ones. For example, the probability of receiving the correct diagnosis varied across private and public providers, but never exceeded 15%, while the rate of correct treatment varied from 21% to about 50%. Doctors were more likely than other providers to provide the correct diagnosis. A great deal of treatment was inappropriate. CLAHRC West Midlands’ partner organisation in global health is conducting a study of service provision in slums with a view to devising affordable models of improving health care.[6]

— Richard Lilford, CLAHRC WM Director

References:

  1. Watson SI, Wroe EB, Dunbar EL, et al. The impact of user fees on health services utilization and infectious disease diagnoses in Neno District, Malawi: a longitudinal, quasi-experimental study. BMC Health Serv Res. 2016; 16: 595.
  2. Carrin G & Hanvoravongchai P. Provider payments and patient charges as policy tools for cost-containment: How successful are they in high-income countries? Hum Resour Health. 2003; 1: 6.
  3. Brook RH, Ware JE, Rogers WH, et al. The effect of coinsurance on the health of adults. Results from the RAND Health Insurance Experiment. Santa Monica, CA: RAND Corporation, 1984.
  4. Das J, Holla A, Mohpal A, Muralidharan K. Quality and Accountability in Healthcare Delivery: Audit-Study Evidence from Primary Care in India. Am Econ Rev. 2016; 106(12): 3765-99.
  5. Lilford RJ. League Tables – Not Always Bad. NIHR CLAHRC West Midlands News Blog. 28 August 2015.
  6. Lilford RJ. Between Policy and Practice – the Importance of Health Service Research in Low- and Middle-Income Countries. NIHR CLAHRC West Midlands News Blog. 27 January 2017.

The New and Growing Interest in Mental Health: Where Should it Be Directed?

Mental health provision and mental health research are undergoing something of a renaissance. The subject has been a priority of successive governments, more people are entering mental health professions, and mental health attracts a financial premium under the Research Excellence Framework, through which universities receive core funding. The biological basis of many mental health diseases has recently been unravelled – see, for instance, past News Blogs on the molecular biology of schizophrenia and Alzheimer’s disease.[1] [2] From a philosophical standpoint the mind is now seen as a function of the brain, just as circulating the blood is a function of the heart. The interaction between the brain and the rest of the body, first discovered through observations on Alexis St. Martin in 1822, and later seen in ‘Tom’ in 1947,[3] is now a major focus of investigation (see another article in this News Blog on a part of the brain called the amygdala).

Much of this renewed attention on mental illness carries the (often implicit) implication that mental health treatment should improve. This is undoubtedly the case for many diseases at the severe end of the psychiatric spectrum. One does, however, have to wonder whether the traditional medical model that serves us well in diseases such as schizophrenia and autism is really the right way to go for other conditions, such as depression and anxiety, especially in their milder forms. Depression, one often reads, affects 30% of the population. But 30% represents a choice of threshold, since the definition of ‘caseness’ turns on where the line is drawn. If it is set at roughly one-third of the population, one has to wonder about the logistics of supplying sufficient treatment. And even if the logistics can be managed, it still seems wrong to make ‘cases’ of fully a third of the human race. To put this another way, common problems, such as depression and obesity, are best tackled at the societal level. Therapeutic services can then deal with the most serious end of the spectrum – people who really should be given a diagnostic label. This would seem to be the way to go for (at least) two reasons. First, many people (especially at the milder end of the spectrum, where normality elides into disease) do not present to health services. Their mental health is important too. Second, the brain is a ‘learning machine’, and it is hard to reverse harmful behaviours, such as eating disorders, once they have been firmly encoded in neural circuits. Mental health practitioners therefore have a preventive / public health responsibility to intervene by encouraging a wider ‘psycho-prophylactic’ approach. And this topic needs research support every bit as much as therapy does. A population-level approach would seem to have two broad components: a supportive environment, and encouraging resilience in the population.

Let us consider a supportive environment. Reducing bullying in schools is an archetypal example of an intervention to create a psychotropic environment. There is clear evidence that the victim (but not the perpetrator) is harmed by bullying, and there is also good evidence that the problem can be prevented.[4] How a psycho-therapeutic environment may look in other respects is less clear-cut. Workplace culture is likely to be important. The Whitehall studies show that a feeling of powerlessness is associated with stress and illness,[5] but putting this right is not a simple matter. For example, it is widely believed that an optimistic, or so-called ‘positive’, outlook is helpful in the workplace, but the experimental evidence actually points the other way: being realistic about difficulties ahead and (often low) chances of success is more helpful than a culture of poorly titrated optimism.[6]

There are many specific groups that are at risk of mental suffering and where environmental modification may help. While the workplace is stressful and a source of anxiety and depression, it has its antithesis in the loneliness that often accompanies old age. There is a fashion to try to keep everyone living independently in their homes for as long as possible. However, such an environment is likely to lead to increasing isolation. I think that communal living should be encouraged in the declining years between retirement and death.[7]

What about resilience in the population? To a degree, the workplace will always be stressful since competing interests and time pressures are inevitable. How can we increase resilience? Taking part in guides and scouts is associated with better mental health outcomes in young people.[8] Exercise has positive benefits on mental health across the age spectrum,[9] and team sports seem particularly beneficial. It is possible that we can encourage ‘mental hygiene’ by talking about it and encouraging healthy mental behaviours. I have a tendency to self-pity and so practice a kind of cognitive behavioural therapy on myself – I think of role models and count my blessings. Others practice ‘mindfulness’. We need to learn more about how to build resilience through experience. Where lies the balance between a bland life devoid of competition, and a ruthless environment creating ingrained winners and losers? I hypothesise that an environment where people are encouraged to have a go, but where coercion is avoided and failure is seen as par for the course, will prepare children for life’s vicissitudes. However, I suspect we are in the foothills of discovery in this regard.

There is always a temptation to screen for illness when it cannot be fully prevented, but screening can often do more harm than good, and this is true in a mental health as well as a physical context. Certainly, routine debriefing after a major incident or difficult childbirth appears to be at best unhelpful. CLAHRC WM collaborator Swaran Singh and colleagues showed that screening for the prodromal symptoms of schizophrenia is also unhelpful, as it produces an extremely high false positive rate.[10] Again, working out when screening is of net benefit is an important task for the future.

In conclusion, none of what I have written should be seen as a criticism of therapeutic research and practice. Rather, I argue for a broadening of scope, not only to find things that are predictive of poor mental health, but to find workable methods to improve mental health at a population level. Public mental health is an enduring topic in CLAHRC WM.

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Psychiatry Comes of Age. NIHR CLAHRC West Midlands News Blog. 11 March 2016.
  2. Lilford RJ. A Fascinating Account of the Opening up of an Area of Scientific Enquiry. NIHR CLAHRC West Midlands News Blog. 11 November 2016.
  3. Wolf S. Stress and the Gut. Gastroenterology. 1967; 52(2): 288-9.
  4. Menesini E & Salmivalli C. Bullying in schools: the state of knowledge and effective interventions. Psychol Health Med. 2017; 22(s1): 240-53.
  5. Bell R, Britton A, Brunner E, et al. Work Stress and Health: the Whitehall II study. London: Council of Civil Service Unions / Cabinet Office; 2004.
  6. Lilford RJ. Managing Staff: A Role for Tough Love? NIHR CLAHRC West Midlands News Blog. 2 September 2016.
  7. Lilford RJ. Encouraging Elderly People to Live Independent Lives: Bad Idea? NIHR CLAHRC West Midlands News Blog. 16 April 2014.
  8. Lilford RJ. Does Being a Guide or Scout as a Child Promote Mental Health in Adulthood?. NIHR CLAHRC West Midlands News Blog. 25 November 2016.
  9. Lilford RJ. On the High Prevalence of Mental Disorders. NIHR CLAHRC West Midlands News Blog. 7 March 2014.
  10. Perry BI, McIntosh G, Welch S, Singh S, Rees K. The association between first-episode psychosis and abnormal glycaemic control: systematic review and meta-analysis. Lancet Psychiatry. 2016; 3(11): 1049-58.

And Today We Have the Naming of Parts*

Management research, health services research, operations research, quality and safety research, implementation research – a crowded landscape of words describing concepts that are, at best, not entirely distinct, and at worst synonyms. Some definitions are given in Table 1. Perhaps the easiest one to deal with is ‘operations research’, which has a rather narrow meaning: it describes mathematical modelling techniques used to derive optimal solutions to complex problems, typically those dealing with the flow of objects (often people) over time. So it is a subset of the broader genre covered by this collection of terms. Quality and safety research puts the cart before the horse by defining the intended objective of an intervention, rather than where in the system the intervention acts. Since interventions at a system level may have many downstream effects, it seems illogical, and indeed potentially harmful, to define research by its objective, an argument made in greater detail elsewhere.[1]
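To make the narrow meaning of ‘operations research’ concrete, here is a minimal sketch of the kind of calculation it deals with: the mean wait in a simple single-server (M/M/1) queue. The clinic figures are invented for illustration.

```python
# Illustrative operations research calculation: mean waiting time in an
# M/M/1 queue (Poisson arrivals, one server, exponential service times).
def mm1_mean_wait(arrival_rate, service_rate):
    """Mean time spent queueing (excluding service itself),
    in the same time units as the rates."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable queue: arrivals outpace service.")
    rho = arrival_rate / service_rate            # server utilisation
    return rho / (service_rate - arrival_rate)   # Wq = rho / (mu - lambda)

# Invented example: 4 patients arrive per hour; the clinic can see 5 per hour.
wait_hours = mm1_mean_wait(4, 5)  # 0.8 hours of queueing on average
```

Even this toy example shows the flavour of the field: given a flow of people over time, derive the consequences of a design choice (here, clinic capacity) analytically.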

Health Services Research (HSR) can be defined as management research applied to health, and is an acceptable portmanteau term for the construct we seek to define. For those who think the term HSR leaves out the development and evaluation of interventions at service level, the term Health Services and Delivery Research (HS&DR) has been devised. We think this is a fine term to describe management research as applied to the health services, and are pleased that the NIHR has embraced it, running two major funding schemes – the HTA programme dealing with clinical research, and the HS&DR dealing with management research. In general, interventions and their related research programmes can be neatly represented in the framework below, a modified Donabedian chain:

078 DCB - Figure 1

So what about implementation research then? Wikipedia defines implementation research as “the scientific study of barriers to and methods of promoting the systematic application of research findings in practice, including in public policy.” However, a recent paper in BMJ states that “considerable confusion persists about its terminology and scope.”[2] Surprised? In what respect does implementation research differ from HS&DR?

Let’s start with the basics:

  1. HS&DR studies interventions at the service level. So does implementation research.
  2. HS&DR aims to improve outcome of care (effectiveness / safety / access / efficiency / satisfaction / acceptability / equity). So does implementation research.
  3. HS&DR seeks to improve outcomes / efficiency by making sure that optimum care is implemented. So does implementation research.
  4. HS&DR is concerned with implementation of knowledge; first knowledge about what clinical care should be delivered in a given situation, and second about how to intervene at the service level. So does implementation research.

This latter concept, concerning the two types of knowledge (clinical and service delivery) that are implemented in HS&DR is a critical one. It seems poorly understood and causes many researchers in the field to ‘fall over their own feet’. The concept is represented here:

078 DCB - Figure 2

HS&DR / implementation research resides in the South East quadrant.

Despite all of this, some people insist on keeping the distinction between HS&DR and Implementation Research alive – as in the recent Standards for Reporting Implementation studies (StaRI) Statement.[3] The thing being implemented here may be a clinical intervention, in which case the above figure applies. Or it may be a service delivery intervention. Then they say that once it is proven, it must be implemented, and this implementation can be studied – in effect they are arguing here for a third ring:

078 DCB - Figure 3

This last, extreme South East, loop is redundant because:

  1. Research methods do not turn on whether the research is HS&DR or so-called Implementation Research (as the authors acknowledge). So we could end up in the odd situation of the HS&DR being a before and after study, and the Implementation Research being a cluster RCT! The so-called Implementation Research is better thought of as more HS&DR – seldom is one study sufficient.
  2. The HS&DR itself requires the tenets of Implementation Science to be in place – following the MRC framework, for example – and identifying barriers and facilitators. There is always implementation in any trial of evaluative research, so all HS&DR is Implementation Research – some is early and some is late.
  3. Replication is a central tenet of science and enables context to be explored. For example, “mother and child groups” is an intervention that was shown to be effective in Nepal. It has now been ‘implemented’ in six further sites under cluster RCT evaluation. Four of the seven studies yielded positive results, and three null results. Comparing and contrasting has yielded a plausible theory, so we have a good idea for whom the intervention works and why.[4] All seven studies are implementations, not just the latter six!

So, logical analysis does not yield any clear distinction between Implementation Research on the one hand and HS&DR on the other. The terms might denote some subtle shift of emphasis, but as a communication tool in a crowded lexicon, we think that Implementation Research is a term liable to sow confusion, rather than generate clarity.

Table 1

Management research: “…concentrates on the nature and consequences of managerial actions, often taking a critical edge, and covers any kind of organization, both public and private.” (Easterby-Smith M, Thorpe R, Jackson P. Management Research. London: Sage, 2012.)

Health Services Research (HSR): “…examines how people get access to health care, how much care costs, and what happens to patients as a result of this care.” (Agency for Healthcare Research and Quality. What is AHRQ? [Online]. 2002.)

HS&DR: “…aims to produce rigorous and relevant evidence on the quality, access and organisation of health services, including costs and outcomes.” (INVOLVE. National Institute for Health Research Health Services and Delivery Research (HS&DR) programme. [Online]. 2017.)

Operations research: “…applying advanced analytical methods to help make better decisions.” (Warwick Business School. What is Operational Research? [Online]. 2017.)

Patient safety research: “…coordinated efforts to prevent harm, caused by the process of health care itself, from occurring to patients.” (World Health Organization. Patient Safety. [Online]. 2017.)

Comparative Effectiveness research: “…designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options.” (Agency for Healthcare Research and Quality. What is Comparative Effectiveness Research. [Online]. 2017.)

Implementation research: “…the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).” (Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.)

We have ‘audited’ David Peters and colleagues’ BMJ article and found that every attribute they claim for Implementation Research applies equally well to HS&DR, as you can see in Table 2. However, this does not mean that we should abandon ‘Implementation Science’ – a set of ideas useful in designing an intervention. For example, stakeholders of all sorts should be involved in the design; barriers and facilitators should be identified; and so on. By analogy, I think Safety Research is a back-to-front term, but I applaud the tools and insights that ‘safety science’ provides.

Table 2

Attributes claimed for Implementation Research
“…attempts to solve a wide range of implementation problems”
“…is the scientific inquiry into questions concerning implementation – the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (…interventions).”
“…can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation.”
“The intent is to understand what, why, and how interventions work in ‘real world’ settings and to test approaches to improve them.”
“…seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects.”
“…is especially concerned with the users of the research and not purely the production of knowledge.”
“…uses [implementation outcome variables] to assess how well implementation has occurred or to provide insights about how this contributes to one’s health status or other important health outcomes.”
…needs to consider “factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources).”
“…takes a pragmatic approach, placing the research question (or implementation problem) as the starting point to inquiry; this then dictates the research methods and assumptions to be used.”
“…questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective.”
“A wide range of qualitative and quantitative research methods can be used…”
“…is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention.”

 — Richard Lilford, CLAHRC WM Director and Peter Chilton, Research Fellow

References:

  1. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  2. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.
  3. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017; 356: i6795.
  4. Prost A, Colbourn T, Seward N, et al. Women’s groups practising participatory learning and action to improve maternal and newborn health in low-resource settings: a systematic review and meta-analysis. Lancet. 2013; 381: 1736-46.

*Naming of Parts by Henry Reed, which Ray Watson alerted us to:

Today we have naming of parts. Yesterday,

We had daily cleaning. And tomorrow morning,

We shall have what to do after firing. But to-day,

Today we have naming of parts. Japonica

Glistens like coral in all of the neighbouring gardens,

And today we have naming of parts.

Measuring Quality of Care

Measuring quality of care is not a straightforward business:

  1. Routinely collected outcome data tend to be misleading because of very poor ratios of signal to noise.[1]
  2. Clinical process (criterion based) measures require case note review and miss important errors of omission, such as diagnostic errors.
  3. Adverse events also require case note review and are prone to measurement error.[2]

Adverse event review is widely practised, usually involving a two-stage process:

  1. A screening process (sometimes to look for warning features [triggers]).
  2. A definitive phase to drill down in more detail and refute or confirm (and classify) the event.

A recent HS&DR report [3] is important for two particular reasons:

  1. It shows that a one-stage process is as sensitive as the two-stage process. So triggers are not needed; just as many adverse events can be identified if notes are sampled at random.
  2. In contrast to (other) triggers, deaths really are associated with a high rate of adverse events (apart, of course, from the death itself). In fact not only are adverse events more common among patients who have died than among patients sampled at random (nearly 30% vs. 10%), but the preventability rates (probability that a detected adverse event was preventable) also appeared slightly higher (about 60% vs. 50%).

This paper has clear implications for policy and practice, because if we want a population ‘enriched’ for high adverse event rates (on the ‘canary in the mineshaft’ principle), then deaths provide that enrichment. The widely used trigger tool, however, serves no useful purpose – it does not identify a higher than average risk population, and it is more resource intensive. It should be consigned to history.
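The ‘enrichment’ argument can be checked with simple arithmetic. The sketch below uses the approximate rates quoted above (around 30% vs. 10% adverse event rates, and 60% vs. 50% preventability); the figures are rounded for illustration, not the study’s exact estimates.

```python
# Expected number of preventable adverse events found per 100 case notes,
# given an adverse event rate and a preventability rate (rounded figures).
def preventable_per_100(ae_rate, preventability):
    return 100 * ae_rate * preventability

random_sample = preventable_per_100(0.10, 0.50)  # ~5 per 100 random notes
death_reviews = preventable_per_100(0.30, 0.60)  # ~18 per 100 death reviews
```

On these rounded figures, reviewing deaths yields roughly three to four times as many preventable events per 100 notes as random sampling.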

Lastly, England and Wales have mandated a process of death review, and the adverse event rate among such cases is clearly of interest. A word of caution is in order here. The reliability (inter-observer agreement) in this study was quite high (Kappa 0.5), but not high enough for comparisons across institutions to be valid. If cross-institutional comparisons are required, then:

  1. A set of reviewers must review case notes across hospitals.
  2. At least three reviewers should examine each case note.
  3. Adjustment must be made for reviewer effects, as well as prognostic factors.

The statistical basis for these requirements is laid out in detail elsewhere.[4] It is clear that reviewers should not review notes from their own hospitals if any kind of comparison across institutions is required – otherwise the results will reflect the reviewers rather than the hospitals.
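For readers unfamiliar with the Kappa statistic quoted above, here is a minimal sketch of Cohen’s kappa for two reviewers making a binary adverse-event judgement on the same set of notes. The reviewer data are invented.

```python
# Cohen's kappa: agreement between two raters, corrected for chance agreement.
from collections import Counter

def cohens_kappa(ratings_1, ratings_2):
    n = len(ratings_1)
    observed = sum(a == b for a, b in zip(ratings_1, ratings_2)) / n
    marg_1, marg_2 = Counter(ratings_1), Counter(ratings_2)
    # Chance agreement: sum over categories of the product of the marginals.
    expected = sum((marg_1[c] / n) * (marg_2[c] / n)
                   for c in set(ratings_1) | set(ratings_2))
    return (observed - expected) / (1 - expected)

# Invented judgements (1 = adverse event present, 0 = absent) on 10 notes:
reviewer_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
reviewer_b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
kappa = cohens_kappa(reviewer_a, reviewer_b)  # ~0.58 here
```

Raw agreement here is 80%, but kappa is lower because two raters who mostly tick ‘absent’ will often agree by chance alone.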

— Richard Lilford, CLAHRC WM Director

References:

  1. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  2. Lilford R, Mohammed M, Braunholtz D, Hofer T. The measurement of active errors: methodological issues. Qual Saf Health Care. 2003; 12(s2): ii8-12.
  3. Mayor S, Baines E, Vincent C, et al. Measuring harm and informing quality improvement in the Welsh NHS: the longitudinal Welsh national adverse events study. Health Serv Deliv Res. 2017; 5(9).
  4. Manaseki-Holland S, Lilford RJ, Bishop JR, Girling AJ, Chen YF, Chilton PJ, Hofer TP; UK Case Note Review Group. Reviewing deaths in British and US hospitals: a study of two scales for assessing preventability. BMJ Qual Saf. 2016. [ePub].

Wrong Medical Theories do Great Harm but Wrong Psychology Theories are More Insidious

Back in the 1950s, when I went from nothing to something, a certain Dr Spock bestrode the world of child rearing like a colossus. Babies, said Spock, should be put down to sleep in the prone position. Only years later did massive studies show that children are much less likely to experience ‘cot death’ or develop joint problems if they are placed supine – on their backs. Although I survived prone nursing to become a CLAHRC director, tens of thousands of children must have died thanks to Dr Spock’s ill-informed theory.

So, I was fascinated by an article in the Guardian newspaper, titled ‘No evidence to back the idea of learning styles’.[1] The article was signed by luminaries from the world of neuroscience, including Colin Blakemore (who I knew, and liked, when he was head of the MRC). I decided to retrieve the article on which the Guardian piece was mainly based – a review in ‘Psychological Science in the Public Interest’.[2]

The core idea is that people have clear preferences for how they receive information (e.g. pictorial vs. verbal) and that teaching is most effective if delivered according to the preferred style. This idea is widely accepted among psychologists and educationalists, and is advocated in many current textbooks. Numerous tests have been devised to diagnose a person’s learning style so that their instruction can be tailored accordingly. Certification programmes are offered, some costing thousands of dollars. A veritable industry has grown up around this theory.

The idea belongs to a larger set of ideas, originating with Jung, called ‘type theories’: the notion that people fall into distinct groups or ‘types’, from which predictions can be made. The Myers-Briggs ‘type’ test is still deployed as part of management training, and I have been subjected to this instrument, despite the fact that its validity as the basis for selection or training has not been confirmed in objective studies. People seem to cling to the idea that types are critically important. That types exist is not the issue of contention (males/females; extrovert/introvert); it is what they mean (learn in different ways; perform differently in meetings) that is disputed. In the case of learning styles the hypothesis of interest is that the style (which can be observed ex ante) meshes with a certain type of instruction (the benefit of which can be observed ex post). The meshing hypothesis holds that different modes of instruction are optimal for different types of person “because different modes of presentation exploit the specific perceptual and cognitive strengths of different individuals.” This hypothesis entails the assumption that people with a certain style (based, say, on a diagnostic instrument or ‘tool’) will experience better educational outcomes when taught in one way (say, pictorial) than when taught in another way (say, verbal). It is precisely this ‘meshing’ hypothesis that the authors set out to test.

Note then that finding that people have different preferences does not confirm the hypothesis. Likewise, finding that different ability levels correlate with these preferences would not confirm the hypothesis. The hypothesis would be confirmed by finding that teaching method 1 is more effective than method 2 in type A people, while teaching method 2 is more effective than teaching method 1 in type B people.
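In statistical terms, the confirmation described above is a crossover (style-by-method) interaction. Here is a minimal sketch, with invented cell means, of the difference-in-differences that would demonstrate it:

```python
# Mean test scores by learning style and teaching method (invented figures).
cell_means = {
    ("visual", "pictorial"): 72, ("visual", "verbal"): 61,
    ("verbal", "pictorial"): 63, ("verbal", "verbal"): 70,
}

def interaction_effect(means):
    # Difference-in-differences: the benefit of pictorial teaching for
    # 'visual' learners minus its benefit for 'verbal' learners.
    visual_gain = means[("visual", "pictorial")] - means[("visual", "verbal")]
    verbal_gain = means[("verbal", "pictorial")] - means[("verbal", "verbal")]
    return visual_gain - verbal_gain

effect = interaction_effect(cell_means)  # 18 here: a crossover interaction
```

A mere main effect (one method better for everyone, or one style scoring higher overall) would leave this quantity near zero; only the meshing pattern makes it large.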

The authors find, from the voluminous literature, only four studies that test the above hypothesis. One of these was of weak design. The three stronger studies provide null results. The weak study did find a style-by-treatment interaction, but only after “the outliers were excluded for unspecified reasons.”

Of course, the null results do not exclude the possibility of an effect, particularly a small effect, as the authors point out. To shed further light on the subject they explore related literatures. First they examine aptitude (rather than just learning style preference) to see whether there is an interaction between aptitude and pedagogic method. Here the literature goes right back to Cronbach in 1957. One particular hypothesis was that high aptitude students fare better in a less structured teaching format, while those with less aptitude fare better where the format is structured and explicit. Here the evidence is mixed, such that in about half of the studies less structure suits high ability students, while more structure suits less able students – one (reasonable) interpretation of the differing results is that there may be certain contexts where aptitude/treatment interactions occur and others where they do not. Another hypothesis concerns an aspect of personality called ‘locus of control’. It was hypothesised that an internal locus of control (people who incline to believe their destiny lies in their own hands) would mesh with an unstructured format of instruction, and vice versa. Here the evidence, taken in the round, tends to confirm the hypothesis.

So, there is evidence (not definitive, but compelling) for an interaction between personality and aptitude on the one hand and teaching method on the other. There is no such evidence for learning style preference. This does not mean that some students will not need an idea explained one way while others need it explained in a different way; sensing this is something good teachers do as they proceed, as emphasised in a previous blog.[3] But tailoring your explanation according to the reaction of students is one thing; determining it according to a pre-test is another. In fact, the learning style hypothesis may impede good teaching by straitjacketing it according to a pre-determined format, rather than encouraging teachers to adapt to the needs of students in real time. Receptivity to the expressed needs of the learner seems preferable to following a script to which the learner is supposed to conform.

And why have I chosen this topic for the main News Blog article? Two reasons:

First, it shows how an idea may gain purchase in society with little empirical support, and we should be ever on our guard – the Guardian lived up to its name in this respect!

Second, because health workers are educators; we teach the next generation and we teach our peers. Also, patient communication has an undoubted educational component (see our previous main blog [4]). So we should keep abreast of general educational theory. Many CLAHRC WM projects have a strong educational dimension.

— Richard Lilford, CLAHRC WM Director

References:

  1. Hood B, Howard-Jones P, Laurillard D, et al. No Evidence to Back Idea of Learning Styles. The Guardian. 12 March 2017.
  2. Pashler H, McDaniel M, Rohrer D, Bjork R. Learning Styles: Concepts and Evidence. Psychol Sci Public Interest. 2008; 9(3): 105-19.
  3. Lilford RJ. Education Update. NIHR CLAHRC West Midlands News Blog. 2 September 2016.
  4. Lilford RJ. Doctor-Patient Communication in the NHS. NIHR CLAHRC West Midlands News Blog. 24 March 2017.

Publishing Health Economic Models

It has increasingly become de rigueur – if not necessary – to publish the primary data collected as part of clinical trials and other research endeavours. In 2015, for example, the British Medical Journal stipulated that a pre-condition of publication of all clinical trials was the guarantee to make anonymised patient-level data available on reasonable request.[1] Data repositories from which data can be requested (such as the Yoda Project) or directly downloaded (such as Data Dryad) provide a critical service for researchers wanting to make their data available and transparent. The UK Data Service also provides access to an extensive range of quantitative and, more recently, qualitative data from studies focusing on matters relating to society, economics and populations. Publishing data enables others to replicate and verify (or otherwise) original findings and, potentially, to answer additional research questions and add to knowledge in a particularly cost-effective manner.

At present, there is no requirement for health economic models to be published. The ISPOR-SMDM Good Research Practices Statement advocates publication of sufficient information to meet its goals of transparency and validation.[2] In terms of transparency, the Statement notes that documentation should be sufficiently detailed “to enable those with the necessary expertise and resources to reproduce the model”. The need to publish the model itself is specifically rejected, with the following justification: “Building a model can require a significant investment in time and money; if those who make such investments had to give their models away without restriction, the incentives and resources to build and maintain complex models could disappear”. This justification is relatively hard to defend for “single-use” models that are not intended to be reused. Although the benefits of doing so are more limited, publishing such models would still be useful if a decision-maker facing a different cost structure wanted to evaluate the cost-effectiveness of a specific intervention in their own context. The publication of any economic model would also allow for external validation, which is likely to be stronger than internal validation (which could be considered marking one’s own homework).
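As a sketch of what publishing even a ‘single-use’ model allows, consider a deliberately simple incremental cost-effectiveness calculation; a decision-maker elsewhere could substitute their own local costs and re-run it. All figures are invented.

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per QALY gained
# when moving from the comparator to the new intervention.
def icer(cost_new, qalys_new, cost_old, qalys_old):
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Published analysis (invented figures):
original = icer(12_000, 6.1, 9_000, 5.8)   # ~10,000 per QALY gained
# The same model re-run with a different health system's cost structure:
local = icer(15_000, 6.1, 9_500, 5.8)      # ~18,300 per QALY gained
```

Real models are vastly more complex (decision trees, Markov states, probabilistic sensitivity analysis), which is precisely why reproducing one from a textual description alone is so difficult.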

The most significant benefits of publication are most likely to arise from the publication of “general” or “multi-application” models, because those seeking to adapt, expand or develop the original model would not have to build it from scratch, saving time and money (a process that would be further facilitated by publication of the original model’s technical documentation). Yet it is for these models that not publishing gives developers a competitive advantage in any further funding bids in which a similar model is required. This confers partial monopoly status in a world where winning grant income is becoming ever more critical. However, I like to believe most researchers also want to maximise the health and wellbeing of society: an aim rarely achieved by monopolies. The argument for publication gets stronger when society has paid (via taxation) for the development of the original model. It is also possible that the development team benefit from publication through increased citations and even the now much-sought-after ‘impact’. For example, the QRISK2 calculator used to predict cardiovascular risk is available online and its companion paper [3] has earned Julia Hippisley-Cox and colleagues almost 700 citations.

Some examples of published economic models exist, such as a costing model for selection processes for speciality training in the UK. While publication of more – if not all – economic models is not an unrealistic aim, it is also necessary to respect intellectual property rights. We welcome your views on whether existing good practice for transparency in health economic modelling should be extended to include the model itself.

— Celia Taylor, Associate Professor

References:

  1. Loder E, & Groves T. The BMJ requires data sharing on request for all trials. BMJ. 2015; 350: h2373.
  2. Eddy DM, Hollingworth W, Caro JJ, et al. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force–7. Med Decis Making. 2012; 32(5): 733-43.
  3. Hippisley-Cox J, Coupland C, Vinogradova Y, et al. Predicting cardiovascular risk in England and Wales: prospective derivation and validation of QRISK2. BMJ. 2008; 336(7659): 1475-82.