
Measuring Quality of Care

Measuring quality of care is not a straightforward business:

  1. Routinely collected outcome data tend to be misleading because of very poor ratios of signal to noise.[1]
  2. Clinical process (criterion based) measures require case note review and miss important errors of omission, such as diagnostic errors.
  3. Adverse events also require case note review and are prone to measurement error.[2]

Adverse event review is widely practised, usually involving a two-stage process:

  1. A screening process (sometimes to look for warning features [triggers]).
  2. A definitive phase to drill down in more detail and refute or confirm (and classify) the event.

A recent HS&DR report [3] is important for two particular reasons:

  1. It shows that a one-stage process is as sensitive as the two-stage process. So triggers are not needed; just as many adverse events can be identified if notes are sampled at random.
  2. In contrast to (other) triggers, deaths really are associated with a high rate of adverse events (apart, of course, from the death itself). In fact not only are adverse events more common among patients who have died than among patients sampled at random (nearly 30% vs. 10%), but the preventability rates (probability that a detected adverse event was preventable) also appeared slightly higher (about 60% vs. 50%).

This paper has clear implications for policy and practice, because if we want a population ‘enriched’ for high adverse event rates (on the ‘canary in the mineshaft’ principle), then deaths provide that enrichment. The widely used trigger tool, however, serves no useful purpose – it does not identify a higher than average risk population, and it is more resource intensive. It should be consigned to history.
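
To see what this ‘enrichment’ means in practice, here is a rough back-of-envelope sketch using the approximate rates quoted above (illustrative only):

```python
# Back-of-envelope illustration of 'enrichment', using the approximate rates
# quoted above (~10% vs ~30% adverse event rates; ~50% vs ~60% preventability).
notes_reviewed = 100

# Randomly sampled case notes
ae_random = notes_reviewed * 0.10          # ~10% contain an adverse event
preventable_random = ae_random * 0.50      # ~50% of those judged preventable

# Case notes of patients who died
ae_deaths = notes_reviewed * 0.30          # ~30% contain an adverse event
preventable_deaths = ae_deaths * 0.60      # ~60% of those judged preventable

print(preventable_random)   # 5.0 preventable adverse events per 100 notes
print(preventable_deaths)   # 18.0 preventable adverse events per 100 notes
```

On these figures, reviewing death records would be expected to surface roughly three to four times as many preventable adverse events per 100 notes reviewed as random sampling.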

Lastly, England and Wales have mandated a process of death review, and the adverse event rate among such cases is clearly of interest. A word of caution is in order here. The reliability (inter-observer agreement) in this study was quite high (Kappa 0.5), but not high enough for comparisons across institutions to be valid. If cross-institutional comparisons are required, then:

  1. A set of reviewers must review case notes across hospitals.
  2. At least three reviewers should examine each case note.
  3. Adjustment must be made for reviewer effects, as well as prognostic factors.

The statistical basis for these requirements is laid out in detail elsewhere.[4] It is clear that reviewers should not review notes from their own hospitals, if any kind of comparison across institutions is required – the results will reflect the reviewers rather than the hospitals.
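
By way of illustration only, here is a minimal sketch of the kind of mixed model implied by point 3, with reviewer treated as a random effect so that hospital contrasts are not confounded by which reviewers happened to read which notes. The data are synthetic and the variable names hypothetical; this is not the model used in reference [4].

```python
# Minimal sketch: adjusting cross-hospital comparisons of preventability
# ratings for reviewer effects and a prognostic covariate. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "hospital": rng.choice(["A", "B", "C"], n),
    "reviewer": rng.choice([f"R{i}" for i in range(10)], n),
    "case_mix": rng.normal(size=n),
})
# Simulated preventability score with a built-in reviewer effect
reviewer_effect = {f"R{i}": rng.normal(scale=0.5) for i in range(10)}
df["preventability"] = (
    0.3 * df["case_mix"] + df["reviewer"].map(reviewer_effect) + rng.normal(size=n)
)

# Hospital is the comparison of interest; case mix is a prognostic covariate;
# reviewer is a random effect (a fuller model would treat reviewers and
# hospitals as crossed factors).
model = smf.mixedlm("preventability ~ C(hospital) + case_mix", df,
                    groups=df["reviewer"])
print(model.fit().summary())
```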

— Richard Lilford, CLAHRC WM Director

References:

  1. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  2. Lilford R, Mohammed M, Braunholtz D, Hofer T. The measurement of active errors: methodological issues. Qual Saf Health Care. 2003; 12(s2): ii8-12.
  3. Mayor S, Baines E, Vincent C, et al. Measuring harm and informing quality improvement in the Welsh NHS: the longitudinal Welsh national adverse events study. Health Serv Deliv Res. 2017; 5(9).
  4. Manaseki-Holland S, Lilford RJ, Bishop JR, Girling AJ, Chen YF, Chilton PJ, Hofer TP; UK Case Note Review Group. Reviewing deaths in British and US hospitals: a study of two scales for assessing preventability. BMJ Qual Saf. 2016. [ePub].

Wrong Medical Theories do Great Harm but Wrong Psychology Theories are More Insidious

Back in the 1950s, when I went from nothing to something, a certain Dr Spock bestrode the world of child rearing like a colossus. Babies, said Spock, should be put down to sleep in the prone position. Only years later did massive studies show that children are much less likely to experience ‘cot death’ or develop joint problems if they are placed supine – on their backs. Although I survived prone nursing to become a CLAHRC director, tens of thousands of children must have died thanks to Dr Spock’s ill-informed theory.

So, I was fascinated by an article in the Guardian newspaper, titled ‘No evidence to back the idea of learning styles’.[1] The article was signed by luminaries from the world of neuroscience, including Colin Blakemore (whom I knew, and liked, when he was head of the MRC). I decided to retrieve the article on which the Guardian piece was mainly based – a review in ‘Psychological Science in the Public Interest’.[2]

The core idea is that people have clear preferences for how they receive information (e.g. pictorial vs. verbal) and that teaching is most effective if delivered according to the preferred style. This idea is widely accepted among psychologists and educationalists, and is advocated in many current textbooks. Numerous tests have been devised to diagnose a person’s learning style so that their instruction can be tailored accordingly. Certification programmes are offered, some costing thousands of dollars. A veritable industry has grown up around this theory. The idea belongs to a larger set of ideas, originating with Jung, called ‘type theories’: the notion that people fall into distinct groups or ‘types’, from which predictions can be made. The Myers-Briggs ‘type’ test is still deployed as part of management training and I have been subjected to this instrument, despite the fact that its validity as the basis for selection or training has not been confirmed in objective studies. People seem to cling to the idea that types are critically important. That types exist is not the issue of contention (males/females; extrovert/introvert); it is what they mean (learn in different ways; perform differently in meetings) that is disputed.

In the case of learning styles the hypothesis of interest is that the style (which can be observed ex ante) meshes with a certain type of instruction (the benefit of which can be observed ex post). The meshing hypothesis holds that different modes of instruction are optimal for different types of person “because different modes of presentation exploit the specific perceptual and cognitive strengths of different individuals.” This hypothesis entails the assumption that people with a certain style (based, say, on a diagnostic instrument or ‘tool’) will experience better educational outcomes when taught in one way (say, pictorial) than when taught in another way (say, verbal). It is precisely this (‘meshing’) hypothesis that the authors set out to test.

Note then that finding that people have different preferences does not confirm the hypothesis. Likewise, finding that different ability levels correlate with these preferences would not confirm the hypothesis. The hypothesis would be confirmed by finding that teaching method 1 is more effective than method 2 in type A people, while teaching method 2 is more effective than teaching method 1 in type B people.
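
To make this concrete, here is a minimal sketch (synthetic data, hypothetical variable names) showing that the meshing hypothesis corresponds to the interaction term in a simple factorial analysis, not to any main effect of learner type or teaching method:

```python
# Minimal sketch of a test of the 'meshing' hypothesis: a crossover
# interaction between diagnosed learner type and (randomly assigned)
# teaching method. Synthetic data simulated under the null (no meshing).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "learner_type": rng.choice(["visual", "verbal"], n),       # observed ex ante
    "teaching_method": rng.choice(["pictorial", "verbal"], n), # randomly assigned
})
df["test_score"] = 50 + rng.normal(scale=10, size=n)  # no true meshing effect

# The meshing hypothesis is supported only if the interaction coefficient is
# non-zero in the predicted direction, i.e. pictorial teaching helps 'visual'
# learners more while verbal teaching helps 'verbal' learners more.
fit = smf.ols("test_score ~ C(learner_type) * C(teaching_method)", df).fit()
print(fit.summary())
```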

The authors find, from the voluminous literature, only four studies that test the above hypothesis. One of these was of weak design. The three stronger studies provide null results. The weak study did find a style-by-treatment interaction, but only after “the outliers were excluded for unspecified reasons.”

Of course, the null results do not exclude the possibility of an effect, particularly a small effect, as the authors point out. To shed further light on the subject they explore related literatures. First they examine aptitude (rather than just learning style preference) to see whether there is an interaction between aptitude and pedagogic method. Here the literature goes right back to Cronbach in 1957. One particular hypothesis was that high aptitude students fare better in a less structured teaching format, while those with less aptitude fare better where the format is structured and explicit. Here the evidence is mixed, such that in about half of studies, less structure suits high ability students, while more structure suits less able students – one (reasonable) interpretation for the different results is that there may be certain contexts where aptitude/treatment interactions do occur and others where they do not. Another hypothesis concerns an aspect of personality called ‘locus of control’. It was hypothesised that an internal locus of control (people who incline to believe their destiny lies in their own hands) would mesh with an unstructured format of instruction and vice versa. Here the evidence, taken in the round, tends to confirm the hypothesis.

So, there is evidence (not definitive, but compelling) for an interaction of teaching method with aptitude and with personality. There is no such evidence for learning style preference. This does not mean that some students will not need an idea to be explained one way while others need it explained in a different way. This is something good teachers sense as they proceed, as emphasised in a previous blog.[3] But tailoring your explanation according to the reaction of students is one thing, determining it according to a pre-test is another. In fact, the learning style hypothesis may impede good teaching by straitjacketing teaching according to a pre-determined format, rather than encouraging teachers to adapt to the needs of students in real time. Receptivity to the expressed needs of the learner seems preferable to following a script to which the learner is supposed to conform.

And why have I chosen this topic for the main News Blog article? Two reasons:

First, it shows how an idea may gain purchase in society with little empirical support, and we should be ever on our guard – the Guardian lived up to its name in this respect!

Second, because health workers are educators; we teach the next generation and we teach our peers. Also, patient communication has an undoubted educational component (see our previous main blog [4]). So we should keep abreast of general educational theory. Many CLAHRC WM projects have a strong educational dimension.

— Richard Lilford, CLAHRC WM Director

References:

  1. Hood B, Howard-Jones P, Laurillard D, et al. No Evidence to Back Idea of Learning Styles. The Guardian. 12 March 2017.
  2. Pashler H, McDaniel M, Rohrer D, Bjork R. Learning Styles: Concepts and Evidence. Psychol Sci Public Interest. 2008; 9(3): 105-19.
  3. Lilford RJ. Education Update. NIHR CLAHRC West Midlands News Blog. 2 September 2016.
  4. Lilford RJ. Doctor-Patient Communication in the NHS. NIHR CLAHRC West Midlands News Blog. 24 March 2017.

Publishing Health Economic Models

It has increasingly become de rigueur – if not necessary – to publish the primary data collected as part of clinical trials and other research endeavours. In 2015, for example, the British Medical Journal stipulated that a pre-condition of publication of all clinical trials was the guarantee to make anonymised patient-level data available on reasonable request.[1] Data repositories from which data can be requested, such as the Yoda Project, or directly downloaded, such as Data Dryad, provide a critical service for researchers wanting to make their data available and transparent. The UK Data Service also provides access to an extensive range of quantitative and, more recently, qualitative data from studies focusing on matters relating to society, economics and populations. Publishing data enables others to replicate and verify (or otherwise) original findings and, potentially, to answer additional research questions and add to knowledge in a particularly cost-effective manner.

At present, there is no requirement for health economic models to be published. The ISPOR-SMDM Good Research Practices Statement advocates publishing sufficient information to meet its goals of transparency and validation.[2] In terms of transparency, the Statement notes that this should include sufficiently detailed documentation “to enable those with the necessary expertise and resources to reproduce the model”. The need to publish the model itself is, however, specifically rejected, using the following justification: “Building a model can require a significant investment in time and money; if those who make such investments had to give their models away without restriction, the incentives and resources to build and maintain complex models could disappear”. This justification may be relatively hard to defend for “single-use” models that are not intended to be reused. Although the benefits of doing so are limited, publishing such models would still be useful if a decision-maker facing a different cost structure wanted to evaluate the cost-effectiveness of a specific intervention in their own context. The publication of any economic model would also allow for external validation, which would likely be stronger than internal validation (which could be considered marking one’s own homework).
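
Publishing even a simple ‘single-use’ model need not be onerous. Here is a purely illustrative sketch (all numbers are invented placeholders) of the kind of thing a decision-maker with a different cost structure could re-run with their own inputs:

```python
# Purely illustrative sketch of a published 'single-use' decision model.
# All inputs are invented placeholders, not results from any real appraisal.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical published base case
base_case = dict(cost_new=12_000, qaly_new=6.2, cost_old=9_000, qaly_old=5.9)
print("Base-case ICER:", icer(**base_case))   # 10,000 per QALY gained

# The same model re-run by a decision-maker facing different local costs
local_case = dict(cost_new=15_000, qaly_new=6.2, cost_old=8_000, qaly_old=5.9)
print("Local ICER:", icer(**local_case))      # ~23,300 per QALY gained
```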

The most significant benefits of publication are most likely to arise from the publication of “general” or “multi-application” models because those seeking to adapt, expand or develop the original model would not have to build it from scratch, saving time and money (recognising this process would be facilitated by the publication of the technical documentation from the original model). Yet it is for these models that not publishing gives developers a competitive advantage in any further funding bids in which a similar model is required. This confers partial monopoly status in a world where winning grant income is becoming ever more critical. However, I like to believe most researchers also want to maximise the health and wellbeing of society: an aim rarely achieved by monopolies. The argument for publication gets stronger when society has paid (via taxation) for the development of the original model. It is also possible that the development team benefit from publication through increased citations and even the now much sought-after impact. For example, the QRISK2 calculator used to predict cardiovascular risk is available online and its companion paper [3] has earned Julia Hippisley-Cox and colleagues almost 700 citations.

Some examples of published economic models exist, such as a costing model for selection processes for speciality training in the UK. While publication of more – if not all – economic models is not an unrealistic aim, it is also necessary to respect intellectual property rights. We welcome your views on whether existing good practice for transparency in health economic modelling should be extended to include the model itself.

— Celia Taylor, Associate Professor

References:

  1. Loder E, Groves T. The BMJ requires data sharing on request for all trials. BMJ. 2015; 350: h2373.
  2. Eddy DM, Hollingworth W, Caro JJ, et al. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force–7. Med Decis Making. 2012; 32(5): 733-43.
  3. Hippisley-Cox J, Coupland C, Vinogradova Y, et al. Predicting cardiovascular risk in England and Wales: prospective derivation and validation of QRISK2. BMJ. 2008; 336(7659): 1475-82.

Doctor-Patient Communication in the NHS

Andrew McDonald (former Chief Executive of the Independent Parliamentary Standards Authority) was recently asked by the Marie Curie charity to examine the quality of doctor-patient communication in the NHS, as discussed on BBC Radio 4’s Today programme on 13 March 2017 (you can listen online). His report concluded that communication was woefully inadequate and that patients were not getting the clear and thorough counselling that they needed in order to understand their condition and make informed choices about options in their care. Patients need to understand what is likely to happen to them, and not all patients with the same condition will want to make the same choice(s). Indeed my own work [1] is part of a large body of research which shows that better information leads to better knowledge, which in turn affects the choices that patients make. Evidence that the medical and caring professions do not communicate in an informative and compassionate way is therefore a matter of great concern.

However, there is a paradox – feedback from patients, that communication should lie at the heart of their care, has not gone unheard. For instance, current medical training is replete with “communication skills” instruction. Why then do patients still feel dissatisfied; why have matters not improved radically? My diagnosis is that good communication is not mainly a technical matter. Contrary to what many people think, the essence of good communication does not lie in avoiding jargon or following a set of techniques – a point often emphasised by my University of Birmingham colleague John Skelton. These technical matters should not be ignored – but they are not the nub of the problem.

In my view good communication requires effort, and poor communication reflects an unwillingness to make that effort; it is mostly a question of attitude. Good communication is like good teaching. A good communicator has to take time to listen and to tailor their responses to the needs of the individual patient. These needs may be expressed verbally or non-verbally, but either way a good communicator needs to be alive to them, and to respond in the appropriate way. Sometimes this will involve rephrasing an explanation, but in other cases the good communicator will respond to emotional cues. For example a sensitive doctor will notice if, in the course of a technical explanation, a patient looks upset – the good doctor will not ignore this cue, but will acknowledge the emotion, invite the patient to discuss his or her feelings, and be ready to deal with the flood of emotion that may result. The good doctor has to do emotional work, for example showing sympathy, not just in what is said, but also in how it is said. I am afraid to say that sometimes the busyness of the doctor is simply used as an excuse to avoid interactive engagements at a deeper emotional level. Yes, bringing feelings to the surface can be uncomfortable, but enduring the discomfort is part of professional life. In fact, recent research carried out by Gill Combes in CLAHRC WM showed that doctors are reticent about bringing psychological issues into the open.[2] Deliberately ignoring emotional cues and keeping things at a superficial level is deeply unsatisfying to patients. Glossing over feelings also impedes communication regarding more technical issues, as it is very hard for a person to assimilate medical information when they are feeling emotional, or nursing bruised feelings. In the long run such a technical approach to communication impoverishes a doctor’s professional life.

Doctors sometimes say that they should stick to the technical and that the often lengthy business of counselling should be carried out by other health professions, such as nurses. I have argued before that this is a blatant and unforgivable abdication of responsibility; it vitiates values that lie (and always will lie) at the heart of good medical practice.[3] The huge responsibilities that doctors carry to make the right diagnosis and prescribe the correct treatment entail a psychological intimacy, which is almost unique to medical practice and which cannot easily be delegated. The purchase that a doctor has on a patient’s psyche should not be squandered. It is a kind of power, and like all power it may be wasted, misused or used to excellent effect.

The concept I have tried to explicate is that good communication is a function of ethical practice, professional behaviour and the medical ethos. It lies at the heart of the craft of medicine. If this point is accepted, it has an important corollary – the onus for teaching communication skills lies with medical practitioners rather than with psychologists or educationalists. Doctors must be the role models for other doctors. I was fortunate in my medical school in Johannesburg to be taught by professors of Oslerian ability who inspired me in the art of practice and the synthesis of technical skill and human compassion. Some people have a particular gift for communication with patients, but the rest of us must learn and copy, be honest with ourselves when we have fallen short, and always try to do better. The most important thing a medical school must do is to nourish and reinforce the attitudes that brought the students into medicine in the first place.

— Richard Lilford, CLAHRC WM Director

References:

  1. Wragg JA, Robinson EJ, Lilford RJ. Information presentation and decisions to enter clinical trials: a hypothetical trial of hormone replacement therapy. Soc Sci Med. 2000; 51(3): 453-62.
  2. Combes G, Allen K, Sein K, Girling A, Lilford R. Taking hospital treatments home: a mixed methods case study looking at the barriers and success factors for home dialysis treatment and the influence of a target on uptake rates. Implement Sci. 2015; 10: 148.
  3. Lilford RJ. Two Ideas of What It Is to be a Doctor. NIHR CLAHRC West Midlands News Blog. August 14, 2015.

Sustainability and Transformation Plans in the English NHS

Sustainability and Transformation Plans (STPs) are the latest in a long line of approaches to strategic health care planning over a large population footprint. These latest iterations were based on populations of one million plus, looked at a five-year timescale, were led by local partners (often acute trusts, but sometimes, as in Birmingham and Solihull, by the Local Authority), and focused inevitably on financial pressures. The plans were published in December 2016 and the challenge to the STP communities is now further refinement of the plans and, of course, implementation.

The Health Service Journal (HSJ) reviewed the content of the STPs in November 2016 and highlighted three common and unsurprising areas of focus: further development of community-based approaches to care (notably aligned to the New Models of Care discussed in the CLAHRC WM News Blog of 27 January; see also https://www.england.nhs.uk/ourwork/new-care-models/); reconfiguration of secondary and tertiary services; and sharing of back office and clinical support functions. More interestingly, the HSJ noted an absence of focus on social care, on patient/clinical/wider stakeholder engagement, and on prevention and wellbeing.

The King’s Fund has produced two reviews, in November 2016 and February 2017, of how STPs have developed. These have been based on interviews with the same subset of leaders, as well as other analyses. Both have reached similar conclusions. Recommendations have included the need to: increase involvement of wider stakeholders; strengthen governance and accountability arrangements and leadership (including full-time teams) to support implementation; support longer term transformation with money, e.g. new models of care, not just short term financial sustainability; stress-test assumptions and timescales to ensure they are credible and deliverable, then communicate honestly with local populations about their implementation; and finally, align national support behind their delivery, e.g. support, regulation, performance management and procurement guidance.

A specific recommendation relates to the need to ensure robust community alternatives are in place before hospital bed numbers are reduced. The service has received strong guidance on this latter point from NHS England in the last few weeks. Various other think tanks, such as Reform, The Centre for Health and Public Interest and the IPPR, have also produced more or less hopeful commentaries on STPs; all agree that the STPs cannot be ignored.

Already, in March 2017, the context is shifting: yet again, ‘winter pressures’ have been high profile and require an NHS response; the scale of the social care crisis has become even more prominent; and there is a national push to accelerate and support change in primary care provision.

Furthermore, the role of CCGs is changing in response: some are merging to create bigger population bases, which may or may not be the same as STP geography; some GP leaders are moving into the new primary care provider organisations; the majority of CCGs will be ‘doing their own’ primary care commissioning for the first time just as the pace of primary care change is increasing; and some commissioning functions may shift to new care models such as accountable care arrangements. It is clear that for some geographies and services the STP approach could work, but more local and more national responses to specific services and in specific places will continue to be needed. All these issues will influence how the STPs play out in the local context.

— Denise McLellan

Clinical Research Stands Out Among Disciplines for Being Largely Atheoretical

A recent paper in the BMJ (see our recent Director’s Choice) described the (null) result of an RCT of physiotherapy for ankle injury.[1] The broader implications of this finding were discussed in neither the discussion section of the paper itself, nor in the accompanying editorial.[2] The focus was confined entirely to the ankle joint, with not a thought given to implications for strains around other joints. The theory by which physiotherapy may produce an effect, and why this might apply to some joints and not others, did not enter the discourse. The ankle joint study is no exception: such an atheoretical approach is de rigueur in medical journals, and it seems to distinguish clinical research from nearly everything else – most scientific endeavours try to find out what results mean – they seek to explain, not just describe. Pick up an economics journal and you will find, in the introduction, an extensive rationale for the study. Only when the theory that the study seeks to explicate has been thoroughly dealt with do the methods and results follow. An article in a physics journal will use data to populate a mathematical model that embodies theory. Clinical medicine’s parent discipline – the life sciences – is also heavily coloured by theory – Watson and Crick famously built their model (theory) entirely on other researchers’ data.

The premise that theory features less prominently in medical journals compared to the journals of other disciplines is based on my informal observations; my evidence is anecdotal. However, the impression is confirmed by colleagues with experience that ranges across academic disciplines. In due course I hope to stimulate work in our CLAHRC, or with a broader constituency of News Blog readers, to further examine the prominence given to theory across disciplines. In the meantime, if the premise is accepted, contingent questions arise – why is theory less prominent in medicine and is this a problem?

Regarding the first point, it was not ever thus. When I was studying medicine in the late 1960s / early 1970s ‘evidence-based medicine’ lay in the future – it was all theory then, even if the theory was rather shallow and often implicit. With the advent of RCTs and increased use of meta-analysis it became apparent that we had often been duped by theory. Many treatments that were supported by theory turned out to be useless (like physiotherapy for sprained ankles), or harmful (like steroids for severe head injury). At this point there was a (collective) choice to be made. Evidence could have been seen as a method to refine theory and thereby influence practice. Alternatively, having been misdirected by theory in the past, its role could have been extirpated (or downgraded) so that the evidence became the direct basis for practice. Bradford Hill, in his famous talk,[3] clearly favoured the former approach, but the profession, perhaps encouraged by some charismatic proponents of evidence-based medicine, seems to have taken the second route. It would be informative to track the evolution of thought and practice through an exegesis of historical documents since what I am suggesting is itself a theory – albeit a theory which might have verisimilitude for many readers.

But does it matter? From a philosophy of science point of view the answer is ‘yes’. Science is inductive, meaning that results from one place and time must be extrapolated to another. Such an extrapolation requires judgement – the informed opinion that the results can be transferred / generalised / particularised across time and place. And what is there to inform such a judgement but theory? So much for philosophy of science, but is there any evidence from practice to support the idea that an atheoretical approach is harmful? This is an inevitably tricky topic to study because the counterfactual cannot be observed directly – would things have turned out differently under an imaginary counterfactual where theory was given more prominence? Perhaps, if theory had been given more weight, we would have extrapolated from previous data and realised earlier that it is better to treat all HIV infected people with antivirals, not just those with suppressed immune systems.[4] Likewise, people have over-interpreted null results of adjuvant chemotherapy in rare tumours when they could have easily ‘borrowed strength’ from positive trials in more common, yet biologically similar, cancers.[5] [6]
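
As a concrete illustration of ‘borrowing strength’ (a sketch only, with invented effect estimates rather than data from the cited reviews), a random-effects meta-analysis lets an imprecise trial in a rare tumour share information with trials in commoner, biologically similar cancers:

```python
# DerSimonian-Laird random-effects pooling across trials in different but
# biologically similar cancers. The log hazard ratios and standard errors
# below are invented for illustration only.
import numpy as np

log_hr = np.array([-0.40, -0.10, -0.35, 0.10])  # last trial: a rare tumour
se     = np.array([ 0.10,  0.10,  0.15, 0.30])  # ...with a wide standard error

v = se ** 2
w = 1 / v
fixed = np.sum(w * log_hr) / np.sum(w)
q = np.sum(w * (log_hr - fixed) ** 2)
k = len(log_hr)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1 / (v + tau2)                      # random-effects weights
pooled = np.sum(w_star * log_hr) / np.sum(w_star)
pooled_se = 1 / np.sqrt(np.sum(w_star))
print(f"Pooled log HR {pooled:.3f} (SE {pooled_se:.3f}), tau^2 = {tau2:.4f}")
# The rare-tumour trial, null on its own, contributes to and benefits from
# the pooled estimate instead of being interpreted in isolation.
```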

In the heady days of evidence-based medicine many clear cut results emerged concerning no treatment versus a proposed new method. Now we have question inflation among a range of possible treatments and diminishing headroom for improvement – not all possible treatments can be tested across all possible conditions – we are going to have to rely more on network meta-analyses, database studies and also on theory.

— Richard Lilford, CLAHRC WM Director

References:

  1. Brison RJ, Day AG, Pelland L, et al. Effect of early supervised physiotherapy on recovery from acute ankle sprain: randomised controlled trial. BMJ. 2016; 355: i5650.
  2. Bleakley C. Supervised physiotherapy for mild or moderate ankle sprain. BMJ. 2016; 355: i5984.
  3. Hill AB. The environment and disease: Association or causation? Proc R Soc Med. 1965; 58(5): 295-300.
  4. Thompson MA, Aberg JA, Hoy JF, et al. Antiretroviral Treatment of Adult HIV Infection. 2012 Recommendations of the International Antiviral Society – USA Panel. JAMA. 2012; 308(4): 387-402.
  5. Chen Y-F, Hemming K, Chilton PJ, Gupta KK, Altman DG, Lilford RJ. Scientific hypotheses can be tested by comparing the effects of one treatment over many diseases in a systematic review. J Clin Epidemiol. 2014; 67: 1309-19.
  6. Bowater RJ, Abdelmalik SM, Lilford RJ. Efficacy of adjuvant chemotherapy after surgery when considered over all cancer types: a synthesis of meta-analyses. Ann Surg Oncol. 2012; 19(11): 3343-50.


Scientists Should Not Be Held Accountable For Ensuring the Impact of Their Research

It has become more and more de rigueur to expect researchers to be the disseminators of their own work. Every grant application requires the applicant to fill in a section on dissemination. We were recently asked to describe our dissemination plans as part of the editorial review process for a paper submitted to the BMJ. Only tact stopped us from responding, “To publish our paper in the BMJ”! Certainly when I started out on my scientific career it was generally accepted that scientists should make discoveries and journals should disseminate them. The current fashion for asking researchers to take responsibility for dissemination of their work emanates, at least in part, from the empirical finding that journal articles by themselves may fail to change practice even when the evidence is strong. Furthermore, it could be argued that researchers are ideal conduits for dissemination. They have a vested interest in uptake of their findings, an intimate understanding of the research topic, and are in touch with networks of relevant practitioners. However, there are dangers in a policy where the producers of knowledge are also held accountable for its dissemination. I can think of three arguments against policies making scientists the vehicle for dissemination and uptake of their own results – scientists may not be good at it; they may be conflicted; and the idea is based on a fallacious understanding of the normative and practical link between research and action.

1. Talent for Communication
There is no good reason to think that researchers are naturally gifted in dissemination, or that this is where their inclination lies. Editors, journalists, and I suppose blog writers, clearly have such an interest. However, an inclination to communicate is not a necessary condition for becoming an excellent researcher. Specialisation is the basis for economic progress, and there is an argument that the benefits of specialisation apply to the production and communication of knowledge.

2. Objectivity
Pressurising researchers to market their own work may create perverse incentives. Researchers may be tempted to overstate their findings, or over interpret the implications for practice. There is also a fine line to be drawn between dissemination (drawing attention to findings) and advocacy (persuading people to take action based on findings). It is along the slippery slope between dissemination and advocacy that the dangers of auto-dissemination reside. The vested interest that scientists have in the uptake of their results should serve as a word of caution for those who militantly maintain that scientists should be the main promotors of their own work. The climate change scientific fraternity has been stigmatised by overzealous scientific advocacy. Expecting scientists to be the bandleader for their own product, and requiring them to demonstrate impact, has created perverse incentives.

3. Research Findings and Research Implications
With some noble exceptions, it is rare for a single piece of primary research to be sufficiently powerful to drive a change in practice. In fact replication is one of the core tenets of scientific practice. The pathway from research to change of practice should go as follows:

  1. Primary researcher conducts study and publishes results.
  2. Research results replicated.
  3. Secondary researcher conducts systematic review.
  4. Stakeholder committee develops guidelines according to established principles.
  5. Local service providers remove barriers to change in practice.
  6. Clinicians adopt the new method.

The ‘actors’ at these different stages can surely overlap, but this process nevertheless provides a necessary degree of detachment between scientific results and the actions that should follow, and it makes use of different specialisms and perspectives in translating knowledge into practice.

We would be interested to hear contrary views, but be careful to note that I am not arguing that a scientist should never be involved in dissemination of their own work, merely that this should not be a requirement or expectation.

— Richard Lilford, CLAHRC WM Director

Evaluating Interventions to Improve the Integration of Care (Among Multiple Providers and Across Multiple Sites)

Typically, healthcare improvement programmes have been institution-specific, examining, for example, hospitals, general practices or care homes. While such solipsistic quality improvement initiatives obviously have their place, they also have severe limitations for the patient of today, who typically has many complex conditions and whose care is therefore fragmented across many different care providers working in different places. Such patients perceive, and are sometimes the victims of, gaps in the system. Recent attention has therefore turned to approaches to close these gaps, and I am leading an NIHR programme development grant specifically for this purpose (Improving clinical decisions and teamwork for patients with multimorbidity in primary care through multidisciplinary education and facilitation). There are many different approaches to closing these gaps in care and the Nobel Prize winner Elinor Ostrom has featured previously in this News Blog for her seminal work on barriers and facilitators to collaboration between institutions [1]; while my colleague, CLAHRC WM Deputy Director Graeme Currie, has approached this issue from a management science perspective.

The problem for a researcher is to measure the effectiveness of initiatives to improve care across centres. This is not natural territory for cluster RCTs since it would be necessary to randomise whole ‘health economies’ rather than just organisations such as hospitals or general practices. Furthermore, many of the outcomes that might be observed in such studies, such as standardised mortality rates, are notoriously insensitive to change.[2] The ESTHER Project in Sweden is famous for closing gaps in care across the hospital/community nexus.[3] The evaluation, however, consists of little more than stakeholder interviews where people seem to recite the perceived wisdom of the day as evidence of effectiveness. While I think it is eminently plausible that the intervention was effective, and while the statements made during the qualitative interviews may have a certain verisimilitude, this all seems very weak evidence of effectiveness. It lacks any quantification, such as could be used in a health economic model. Is there a halfway house between a cluster RCT with hard outputs like mortality on the one hand, and ‘how was it for you?’ research on the other?

While it is not easy to come up with a measurement system, there is one person who perceives the entire pathway and that is the patient. The patient is really the only person who can provide an assessment of care quality across multiple providers. There are many patient measures. Some relate to outcome, for instance health and social care related quality of life (e.g. EQ-5D, ASCOT SCT4 and OPQOL-brief [4]). Such measures should be used in service delivery studies, but may be insensitive to change, as stated above. It is therefore important to measure patient perception of the quality of their care. However, such measurements tend to either be non-specific (e.g. LTC-6 [5]) or look at only one aspect of care, such as continuity (PPCMC),[6] treatment burden [7] or person-centredness.[8] We propose a single quality of integrated care tool incorporating dimensions that have been shown to be important to patients, and are collaborating with PenCLAHRC, who are working on such a tool. Constructs that should be considered include conflicting information from different caregivers; contradicting forms of treatment (such as one clinician countermanding a prescription from another caregiver); duplication or redundancy of advice and information; and satisfaction with care overall and with duration of contacts. We suspect that most patients would prefer fewer, more in-depth, contacts to a larger number of rushed contacts.

It might also be possible to design more imaginative qualitative research that goes beyond simply asking questions and uses methods that elicit some of patients’ deeper feelings by prompting their memory. One such method is photo-voice, where patients are asked to take photos at various points in their care, and to use these as a basis for discussion. We have used such naturalistic settings in our CLAHRC.[9] Such methods could be harnessed in the co-design of services, where patients / carers are not just asked how they perceive services, but are actively involved in designing solutions.

Salient quantitative measurements may also be obtained from NHS data systems. Hospital admission and readmission rates should be measured in studies of system-wide change. An effective intervention would result in more satisfied patients with lower rates of hospital admission. What about quantifying physical health? Adverse events in general and mortality in particular have poor sensitivity, such that signal, even after risk adjustment, would only emerge from noise in an extremely large study, or in a very high-risk client group – see ‘More on Integrated Care’ in this News Blog. Adverse events and death can be consolidated into generic health measurements (QALYs/DALYs), but, again, these are insensitive for reasons given above. Evaluating methods to improve the integration of care may be an ‘inconvenient truth scenario’ [10] where it is necessary to rely on process measures and other proxies for clinical / welfare outcomes. Since our CLAHRC is actively exploring the evaluation of service interventions to improve integration of care, we would be very interested to hear from others and explore approaches to evaluating care across care boundaries.
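
To illustrate why mortality signal is so hard to detect (a rough sketch with invented numbers, ignoring clustering by provider, which would make matters worse), consider the sample size needed to detect the small shift in overall mortality that even an effective system-wide intervention might produce:

```python
# Rough illustration of why mortality is an insensitive endpoint. Suppose
# baseline hospital mortality is 10% and, because only a minority of deaths
# are preventable, an effective intervention shifts it to 9.7%. These figures
# are invented for illustration.
from scipy.stats import norm

p1, p2 = 0.10, 0.097
alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

# Standard two-proportion sample-size formula (per arm, individual
# randomisation assumed; clustering would inflate this further)
n_per_arm = z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
print(round(n_per_arm))   # roughly 150,000 patients per arm
```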

— Richard Lilford, CLAHRC WM Director

References:

  1. Ostrom E. Beyond Markets and States: Polycentric Governance of Complex Economic Systems. Am Econ Rev. 2010; 100(3): 641-72.
  2. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  3. Institute for Healthcare Improvement. Improving Patient Flow: The Esther Project in Sweden. Boston, MA: Institute for Healthcare Improvement, 2011.
  4. Bowling A, Hankins M, Windle G, Bilotta C, Grant R. A short measure of quality of life in older age: the performance of the brief Older People’s Quality of Life questionnaire (OPQOL-brief). Arch Gerontol Geriatr. 2013; 56: 181-7.
  5. Glasgow RE, Wagner EH, Schaefer J, Mahoney LD, Reid RJ, Greene SM. Development and validation of the Patient Assessment of Chronic Illness Care (PACIC). Med Care. 2005; 43(5): 436-44.
  6. Haggerty JL, Roberge D, Freeman GK, Beaulieu C, Breton M. Validation of a generic measure of continuity of care: When patients encounter several clinicians. Ann Fam Med. 2012; 10: 443-51.
  7. Tran VT, Harrington M, Montori VM, Barnes C, Wicks P, Ravaud P. Adaptation and validation of the Treatment Burden Questionnaire (TBQ) in English using an internet platform. BMC Medicine. 2014; 12: 109.
  8. Mercer SW, Scottish Executive. Care Measure. Scottish Executive 2004
  9. Redwood S, Gale N, Greenfield S. ‘You give us rangoli, we give you talk’ – Using an art-based activity to elicit data from a seldom heard group. BMC Med Res Methodol. 2012; 12: 7.
  10. Lilford RJ. Integrated Care. NIHR CLAHRC West Midlands News Blog. 19 June 2015.

The Evolving Role of the CLAHRC in the Use of Evidence to Improve Care and Outcomes in Service Settings

If we are to use public funds to support research, there is an assumption that the outcome of that research will be improvements to the service. This exchange, however, is problematic. CLAHRCs are set up to address this interface in a particular way, namely to evaluate service interventions. As well as generating new knowledge for the system, there is a wider aspiration of building a system-wide ‘habit’ of using evidence to drive service change and evaluating the output.

As part of the consideration of how CLAHRC West Midlands evolves, we would like to hear readers’ views as to how well it has done and what it should do in the future.

The use of evidence to improve practice in service settings has demand and supply side factors. The service has to want to use evidence, be supported to use evidence, and have the capacity to make changes in response. On the research ‘supply’ side, there has to be a suitable body of existing evidence, and researchers have to have the skills and capacity to develop suitable research methods and to convey the outcomes in a usable form.

Even if all these factors co-exist, barriers, such as changed external environments, resistance to change, and timing issues, can thwart the exchange.

CLAHRC WM has tried to address this in a number of ways. It has created new roles:

  • Embedded posts: academic researchers jointly funded by service and research institutions, working on agreed projects within a service setting
  • Diffusion fellows: experienced practitioners supported to undertake research in a service area.

Patients and the public are central to driving the direction of research: their involvement at all stages of the research cycle means that topics are relevant to them and meet their needs. In addition, CLAHRC WM has employed a range of dissemination methods, both traditional and innovative, to share research findings. These include publishing summaries of completed evaluations, running workshops and, indeed, the regular publication of articles in this blog.

Service evaluation is not the only form of research being undertaken within service institutions, nor is CLAHRC WM the only source of evaluation support. With the current focus on integration, there is a question as to how CLAHRC WM could be better integrated within the service’s own research and development strategies. However, one has to be mindful that the budget for CLAHRC WM is tiny compared with the billions spent on health care in the West Midlands each year, and it therefore has to take care to target its resources.

In future blogs we will look more closely at some of these issues, with interviews with those occupying embedded/diffusion roles. Meanwhile, we would welcome your views and thoughts as to how CLAHRC WM should evolve in this regard, so please comment or get in touch; it would be much appreciated.

— Denise McLellan

Between Policy and Practice – the Importance of Health Service Research in Low- and Middle-Income Countries

There is a large and growing literature on disease and its causes in low- and middle-income countries (LMICs) – not only infectious disease, but also non-communicable diseases. Endless studies are published on disease incidence and prevalence, for example. There is also a substantial literature on policy / health systems,[1] much captured in the Health Systems Evidence database.[2] This deals with topics such as general taxation vs. contributory insurance, financial incentives for providers, and use of private providers to extend coverage.

However, how to provide health services given general policy and a certain profile of disease is less well studied. Issues such as skill mix (e.g. who should do what), distribution of services (e.g. hospital vs. clinic vs. home) and coverage (e.g. how many nurses or clinics are needed per head of population) receive far less attention. For example, there have been calls for Africa to increase the capacity of Community Health Workers (CHWs) to one million, but no-one knows the optimal mix of CHWs to nurses to medical officers to doctors. Likewise, the mix of outreach services (e.g. CHWs), clinics, pharmacies, private facilities, and traditional healers that can best serve populations is very unclear according to a recent Lancet commission.[3] The situation in slums is positively chaotic. One could sit in an armchair and propose a service configuration for slum environments of 10,000 people that looks like this:

[Figure: a proposed service configuration for a slum population of 10,000]

The role of CHWs could be narrow (vaccination, child malnutrition), intermediate (vaccination, child malnutrition, sexual and reproductive health), or broad (all of the above, plus hypertension, obesity prevention, adherence to treatment, detection of depression, etc.). HIV and TB screening and treatment maintenance could be separate or included in the above, and so on.

Note that decisions about workforce and how and where the workforce is deployed have to be made irrespective of how care is financed, or whether financial or other incentives are used – decisions are still needed about who is to be incentivised to do what. And people do not appear overnight, so training (and the associated costs) must be included in cost and economic models. Of course, the range of possibilities according to per capita wealth in a country is large, but we do not know what good looks like in countries of approximately equal wealth. Here is the rub – it is much easier to study a disease and its determinants than to study health services. Yet another study to link pollution to illness is easy to write as an applicant and understand as a reviewer. But talk about skill mix and eyes glaze over. Yet there is little point in measuring disease ever more precisely if there is no service to do anything about it.
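
As a purely hypothetical sketch (every ratio and unit cost below is invented), even a few lines of arithmetic show how skill-mix options and their training costs could be placed within a single, comparable cost model for a slum population of 10,000:

```python
# Hypothetical skill-mix costing for a slum population of 10,000.
# All ratios and unit costs are invented placeholders for illustration only.
population = 10_000
NURSE_SALARY = 6_000  # hypothetical annual cost per nurse (USD)

scenarios = {
    # CHWs per 1,000 people; nurses per 10,000; CHW annual salary; one-off CHW training cost
    "narrow CHW role": dict(chw_per_1000=1, nurses=2, chw_salary=1_500, chw_training=300),
    "broad CHW role":  dict(chw_per_1000=3, nurses=1, chw_salary=1_800, chw_training=900),
}

for name, s in scenarios.items():
    n_chw = s["chw_per_1000"] * population / 1000
    annual_cost = n_chw * s["chw_salary"] + s["nurses"] * NURSE_SALARY
    training_outlay = n_chw * s["chw_training"]  # amortise over service life in a fuller model
    print(f"{name}: {n_chw:.0f} CHWs, annual cost {annual_cost:,.0f}, "
          f"training outlay {training_outlay:,.0f}")
```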

— Richard Lilford, CLAHRC WM Director

References:

  1. Mills A. Health Care Systems in Low- and Middle-Income Countries. New Engl J Med. 2014; 370: 552-7.
  2. McMaster University. Health systems evidence. Hamilton, Canada: McMaster University. 2017.
  3. McPake B, Hanson K. Managing the public–private mix to achieve universal health coverage. Lancet. 2016; 388: 622-30.