Biological Underpinnings of Chronic Fatigue?

A recent synopsis in Nature describes a study showing that immune cells in patients with chronic fatigue behave differently in vitro from those of healthy controls.[1] This suggests that the disease is not psychosomatic, argues the synopsis. That is not just out-of-date thinking, it is long out of date – the inter-relationship between mind and body has been recognised for over a century. The article also suggests that gut bacteria may differ in chronic fatigue syndrome. If this hypothesis is confirmed, then maybe the condition originates somatically and affects the brain, not the other way around. We develop this idea further in the next exciting instalment of your news blog.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Maxmen A. Biological underpinnings of chronic fatigue emerge. Nature. 2017; 543: 602.

Infection Sensitisation

A previous News Blog [1] discussed the finding that prior infection with one strain of Dengue fever can sensitise a person so that infection with a second strain will be more severe than would otherwise have been the case. There is new evidence that such antibody-dependent enhancement (ADE) may cross species barriers, such that a person sensitised by one type of flavivirus, say Dengue, is more likely to have a severe illness if infected with another flavivirus, such as the Zika virus.[2] Such cross-species ADE has obvious implications for vaccination programmes. In previous News Blogs [3] we have drawn attention to cross-resistance, whereby vaccines protect against non-target organisms (e.g. smallpox vaccines provide protection against HIV). The above paper shows that the reverse can also occur. This is not the first time that vaccination has been shown to have adverse consequences.[4]

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Three hits hypothesis. NIHR CLAHRC West Midlands News Blog. 7 April 2017.
  2. Cohen J. Dengue may bring out the worst in Zika. Science. 2017; 355(6332): 1362.
  3. Lilford RJ. Two papers try to answer the question – do vaccinations for one communicable disease offer protection against others? NIHR CLAHRC West Midlands News Blog. 27 January 2017.
  4. Guzman MG, Alvarez M, Halstead SB. Secondary infection as a risk factor for dengue hemorrhagic fever/dengue shock syndrome: an historical perspective and role of antibody-dependent enhancement of infection. Arch Virol. 2013; 158(7): 1445-59.

Measuring Quality of Care

Measuring quality of care is not a straightforward business:

  1. Routinely collected outcome data tend to be misleading because of very poor ratios of signal to noise.[1]
  2. Clinical process (criterion based) measures require case note review and miss important errors of omission, such as diagnostic errors.
  3. Adverse events also require case note review and are prone to measurement error.[2]

Adverse event review is widely practised, usually involving a two-stage process:

  1. A screening process (sometimes to look for warning features [triggers]).
  2. A definitive phase to drill down in more detail and refute or confirm (and classify) the event.

A recent HS&DR report [3] is important for two particular reasons:

  1. It shows that a one-stage process is as sensitive as the two-stage process. So triggers are not needed; just as many adverse events can be identified if notes are sampled at random.
  2. In contrast to (other) triggers, deaths really are associated with a high rate of adverse events (apart, of course, from the death itself). In fact not only are adverse events more common among patients who have died than among patients sampled at random (nearly 30% vs. 10%), but the preventability rates (probability that a detected adverse event was preventable) also appeared slightly higher (about 60% vs. 50%).

This paper has clear implications for policy and practice, because if we want a population ‘enriched’ for high adverse event rates (on the ‘canary in the mineshaft’ principle), then deaths provide that enrichment. The widely used trigger tool, however, serves no useful purpose – it does not identify a higher than average risk population, and it is more resource intensive. It should be consigned to history.
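
To make the enrichment argument concrete, here is a minimal back-of-envelope sketch using the approximate figures quoted above (an adverse event rate of nearly 30% with about 60% preventability among deaths, versus roughly 10% and 50% in randomly sampled notes); the yield per 100 notes reviewed is illustrative only.

```python
# Illustrative yield of preventable adverse events per 100 case notes reviewed,
# using the approximate rates quoted above (assumed to apply uniformly).

def preventable_yield(n_notes, adverse_event_rate, preventability_rate):
    """Expected number of preventable adverse events found among n_notes records."""
    return n_notes * adverse_event_rate * preventability_rate

from_deaths = preventable_yield(100, 0.30, 0.60)  # notes sampled from deaths
from_random = preventable_yield(100, 0.10, 0.50)  # notes sampled at random

print(f"Deaths sample: ~{from_deaths:.0f} preventable adverse events per 100 notes")
print(f"Random sample: ~{from_random:.0f} preventable adverse events per 100 notes")
# ~18 vs. ~5: reviewing deaths yields three to four times as many preventable
# adverse events for the same review effort.
```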

Lastly, England and Wales have mandated a process of death review, and the adverse event rate among such cases is clearly of interest. A word of caution is in order here. The reliability (inter-observer agreement) in this study was quite high (Kappa 0.5), but not high enough for comparisons across institutions to be valid. If cross-institutional comparisons are required, then:

  1. A set of reviewers must review case notes across hospitals.
  2. At least three reviewers should examine each case note.
  3. Adjustment must be made for reviewer effects, as well as prognostic factors.

The statistical basis for these requirements is laid out in detail elsewhere.[4] It is clear that reviewers should not review notes from their own hospitals if any kind of comparison across institutions is required – the results will reflect the reviewers rather than the hospitals.
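
The last point can be illustrated with a small simulation. This is a minimal sketch with made-up numbers: three hospitals with identical true adverse event rates are reviewed either by ‘their own’ reviewer only, or by all reviewers in a crossed design; in the first design the apparent differences between hospitals simply reflect reviewer leniency.

```python
# Minimal simulation (made-up numbers) of how reviewer 'leniency' masquerades as
# hospital differences when reviewers only review their own hospital's notes.
import numpy as np

rng = np.random.default_rng(0)

true_rate = {"A": 0.10, "B": 0.10, "C": 0.10}        # identical true quality
leniency  = {"r1": -0.05, "r2": 0.00, "r3": +0.05}   # reviewer-specific shift

def judged_rate(hospital, reviewer, n_notes=500):
    """Proportion of notes judged to contain an adverse event."""
    p = float(np.clip(true_rate[hospital] + leniency[reviewer], 0, 1))
    return rng.binomial(n_notes, p) / n_notes

# Design 1: each hospital reviewed only by 'its own' reviewer (confounded).
own = {h: judged_rate(h, r) for h, r in zip("ABC", ["r1", "r2", "r3"])}

# Design 2: every reviewer reviews every hospital (crossed), then average.
crossed = {h: np.mean([judged_rate(h, r) for r in leniency]) for h in "ABC"}

print("Own-hospital reviewers :", own)      # spurious differences between hospitals
print("Crossed reviewer design:", crossed)  # hospitals now get the same reviewer mix
```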

— Richard Lilford, CLAHRC WM Director

References:

  1. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  2. Lilford R, Mohammed M, Braunholtz D, Hofer T. The measurement of active errors: methodological issues. Qual Saf Health Care. 2003; 12(s2): ii8-12.
  3. Mayor S, Baines E, Vincent C, et al. Measuring harm and informing quality improvement in the Welsh NHS: the longitudinal Welsh national adverse events study. Health Serv Deliv Res. 2017; 5(9).
  4. Manaseki-Holland S, Lilford RJ, Bishop JR, Girling AJ, Chen YF, Chilton PJ, Hofer TP; UK Case Note Review Group. Reviewing deaths in British and US hospitals: a study of two scales for assessing preventability. BMJ Qual Saf. 2016. [ePub].

An Interesting Report of Quality of Care Enhancement Strategies Across England, Germany, Sweden, the Netherlands, and the USA

An interesting paper from the Berlin University of Technology compares the quality enhancement systems of the above countries with respect to measuring, reporting and rewarding quality.[1] This paper is an excellent resource for policy and health service researchers. The US has the most developed system of quality-related payments (P4P) of the five countries. England wisely uses only process measures to reward performance, while the US and Germany include patient outcomes. The latter are unfair because of signal-to-noise issues [2] and the risk-adjustment fallacy.[3] [4] Above all, remember Lilford’s axiom – never base rewards or sanctions on a measurement over which service providers do not feel they have control.[5] It is true, as the paper argues, that rates of adherence to a single process seldom correlate with outcome. But this is a signal-to-noise problem. ‘Proving’ that processes are valid takes huge RCTs, even when the process is applied to 0% (control arm) vs. approaching 100% (intervention arm) of patients. So how could an improvement from, say, 40% to 60% in adherence to a clinical process show up in routinely collected data?[6] I have to keep on saying it – collect outcome data, but when rewarding or penalising institutions on the basis of comparative performance – process, process, process.
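
To illustrate the dilution argument with hedged numbers: suppose (purely for illustration) that full adherence to a process cuts mortality from 10% to 8%. Institutions at 40% and 60% adherence would then differ by only about 0.4 percentage points (roughly 9.2% vs. 8.8%), and the sample needed to detect such a difference dwarfs what routine data comparisons can realistically deliver. A minimal sketch:

```python
# Hedged illustration (assumed effect sizes): patients per arm needed to detect
# the outcome difference implied by a 40% -> 60% improvement in adherence,
# assuming full (0% -> 100%) adherence would cut mortality from 10% to 8%.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

full_effect = 0.10 - 0.08             # 2 percentage points at full adherence
p_at_40 = 0.10 - 0.40 * full_effect   # mortality at 40% adherence: 9.2%
p_at_60 = 0.10 - 0.60 * full_effect   # mortality at 60% adherence: 8.8%

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    h = proportion_effectsize(p1, p2)  # Cohen's h for two proportions
    return NormalIndPower().solve_power(effect_size=h, alpha=alpha, power=power)

print(f"RCT of process (0% vs ~100% adherence): ~{n_per_arm(0.10, 0.08):,.0f} per arm")
print(f"Routine data (40% vs 60% adherence):    ~{n_per_arm(p_at_40, p_at_60):,.0f} per arm")
# The diluted comparison needs roughly 25 times as many patients, which is why
# modest gains in adherence rarely show up in routinely collected outcomes.
```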

— Richard Lilford, CLAHRC WM Director

References:

  1. Pross C, Geissler A, Busse R. Measuring, Reporting, and Rewarding Quality of Care in 5 Nations: 5 Policy Levers to Enhance Hospital Quality Accountability. Milbank Quart. 2017; 95(1): 136-83.
  2. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21: 1052-6.
  3. Mohammed MA, Deeks JJ, Girling A, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ. 2009; 338: b780.
  4. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ. 2010; 340: c2016.
  5. Lilford RJ. Important evidence on pay for performance. NIHR CLAHRC West Midlands News Blog. 20 November 2015.
  6. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.

Computer Interpretation of Foetal Heart Rates Does Not Help Distinguish Babies That Need a Caesarean from Those That Do Not

In an earlier life I was involved in obtaining treatment costs for a pilot trial of computerised foetal heart monitoring versus standard foetal heart monitoring (CTG). The full trial, funded by the NIHR, has now been published in the Lancet,[1] featuring Sara Kenyon from our CLAHRC WM Theme 1. With over 46,000 participants, the trial found no difference in a composite measure of foetal outcome or in intervention rates. Perinatal mortality was only 3 per 10,000 women across both arms and the incidence of hypoxic encephalopathy was less than 1 per 1,000. Of course, the possibility of an educational effect from the computer decision support (‘contamination’) may have reduced the observed effect, but this could only be tested in a cluster trial. However, such a design would create its own set of problems, such as loss of precision and bias through interaction between the method used and baseline risk across intervention and control sites. Also, the control group was not care as usual, but the visual display IT system shorn of its decision support (artificial intelligence) module.[2] Some support for the idea that the control condition affected care in a positive direction, making any marginal effect of decision support hard to detect, comes from the low event rate across both study arms. Meanwhile, the lower-than-expected baseline event rates mean that any improvement in outcome will be hard to detect in future studies. So here is another topic that, like vitamin D given routinely to elderly people,[3] now sits below the “horizon of science” – the combination of low event rates and low plausible effect sizes means that we can move on from this subject – at least in a high-income context. If you want to use the computerised method, and its costs are immaterial, then there is no reason not to; economics aside, there appear to be no trade-offs here, since both benefits and harms were null.
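
To see why this topic now sits below the “horizon of science”, consider a hedged sample size calculation at the event rate reported in the trial. The 20% relative reduction used below is an assumption for illustration, not a figure from the paper.

```python
# Hedged illustration: participants per arm needed to detect an assumed 20%
# relative reduction in perinatal mortality from the trial's baseline of
# 3 per 10,000, using the standard normal-approximation formula.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Two-arm sample size for comparing two proportions."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

p_control = 3 / 10_000           # perinatal mortality observed across both arms
p_target  = 0.8 * p_control      # assumed 20% relative reduction (illustrative)

print(f"Required: ~{n_per_arm(p_control, p_target):,.0f} women per arm")
# Well over a million women per arm - far beyond even this 46,000-woman trial.
```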

— Richard Lilford, CLAHRC WM Director

References:

  1. The INFANT Collaborative Group. Computerised interpretation of fetal heart rate during labour (INFANT): a randomised controlled trial. Lancet. 2017.
  2. Keith R. The INFANT study – a flawed design foreseen. Lancet. 2017.
  3. Lilford RJ. Effects of Vitamin D Supplements. NIHR CLAHRC West Midlands News Blog. 24 March 2017.

Small Pollution Particles May Pass Directly into the Brain through the Snout

Yes, they appear to be able to follow the pathway used by smell neurons and thus pass directly from the olfactory membrane into the brain, i.e. not going via the lung and bloodstream. Experiments in rodents using radio-labelled nanoparticles show that very small particles really can penetrate directly through the roof of the nose and pass into the brain along olfactory neurons.[1] Here these particles set in motion an inflammatory process, which activates microglia (brain-type macrophages), which attack neurons and lead to amyloid deposits – the hallmark of dementia. People who are exposed to such particles have a higher risk of dementia,[2] and animals randomised to be exposed (or not) to pollution particles acquire brain amyloid and manifest cognitive decline. So there you have it – there is growing and quite compelling evidence that pollution particles are bad news for humans and other animals. It is time to act – phase out diesel cars, incentivise car manufacturers to clean up emissions, gradually increase tax on cars/lorries/fuels, incentivise cycling in cities (and make it safer), and build rail lines. But none of this will happen without public support, so proselytise, and increase receptivity to the message by increasing science teaching in schools. In the end, lots of things come back to the intellectual sophistication of the average citizen. In the meantime I suspect that an increasing proportion of people will adopt face masks, although I do not know how effective they are in trapping particles.

— Richard Lilford, CLAHRC WM Director

References:

  1. Underwood E. The Polluted Brain. Science. 2017; 355(6323): 342-5.
  2. Chen H, Kwong JC, Copes R, et al. Living near major roads and the incidence of dementia, Parkinson’s disease, and multiple sclerosis: a population-based cohort study. Lancet. 2017; 389(10070): 718-26.

Three Hits Hypothesis

Quite a lot of diseases are brought about by the conjunction of two factors. Mice infected with certain herpes viruses suffer no ill-effect unless a helminth infestation supervenes. Oral allergy syndrome arises when a certain pollen interacts with certain foods (usually raw fruits, vegetables and nuts). The hygiene hypothesis says that lack of exposure to certain gut bacteria sensitises the body to allergic reactions to a range of environmental allergens. The pathway to disease involves three hits:

Genetically predisposed person → Exposure 1 → Exposure 2 → Disease.

An intriguing example of a three-hit condition is the severe childhood disease Burkitt’s lymphoma. This cancer arises in germinal centres of lymph nodes in the neck. It is known that Epstein-Barr (EB) virus infection is necessary for endemic Burkitt’s lymphoma to develop because it prevents apoptosis (cell death) when certain mutations occur in the cell. But endemic Burkitt’s lymphoma only occurs in the malaria belt, and why this is so had been a mystery until the last few years. Now we know that the malaria parasite Plasmodium falciparum ‘upregulates’ an enzyme that causes mutations in the DNA of lymph cells. These mutations are a normal part of antibody production, since rearrangements of chromosome segments are necessary for antibody specificity. But in people with falciparum malaria, the effect ‘spills over’ to cause mutations of cancer genes. The double hit of EB plus malaria sets the scene for carcinogenesis.[1] Why in the neck? Perhaps because lymph cells in the necks of children work particularly hard eradicating throat and ear infections – in which case there is a ‘four hits’ hypothesis!

— Richard Lilford, CLAHRC WM Director

References:

  1. Thorley-Lawson D, Deitsch KW, Duca KA, Torgbor C. The Link between Plasmodium falciparum Malaria and Endemic Burkitt’s Lymphoma—New Insight into a 50-Year-Old Enigma. PLoS Pathog. 2016; 12(1): e1005331.

Important Notice: A New Online Repository for Research Results

Such a repository has now been launched – the Wellcome-Gates repository, established by the world’s second-largest and largest medical research charities respectively, and run by a firm called F1000.[1] Research funded by Gates can only be published here. This is another big milestone in the gradual shake-up of the scientific publication sector.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. The Economist. The findings of medical research are disseminated too slowly. The Economist. 25 March 2017.

Wrong Medical Theories Do Great Harm but Wrong Psychology Theories Are More Insidious

Back in the 1950s, when I went from nothing to something, a certain Dr Spock bestrode the world of child rearing like a colossus. Babies, said Spock, should be put down to sleep in the prone position. Only years later did massive studies show that children are much less likely to experience ‘cot death’ or develop joint problems if they are placed supine – on their backs. Although I survived prone nursing to become a CLAHRC director, tens of thousands of children must have died thanks to Dr Spock’s ill-informed theory.

So, I was fascinated by an article in the Guardian newspaper, titled ‘No evidence to back the idea of learning styles’.[1] The article was signed by luminaries from the world of neuroscience, including Colin Blakemore (whom I knew, and liked, when he was head of the MRC). I decided to retrieve the article on which the Guardian piece was mainly based – a review in ‘Psychological Science in the Public Interest’.[2]

The core idea is that people have clear preferences for how they receive information (e.g. pictorial vs. verbal) and that teaching is most effective if delivered according to the preferred style. This idea is widely accepted among psychologists and educationalists, and is advocated in many current textbooks. Numerous tests have been devised to diagnose a person’s learning style so that their instruction can be tailored accordingly. Certification programmes are offered, some costing thousands of dollars. A veritable industry has grown up around this theory. The idea belongs to a larger set of ideas, originating with Jung, called ‘type theories’: the notion that people fall into distinct groups or ‘types’, from which predictions can be made. The Myers-Briggs ‘type’ test is still deployed as part of management training, and I have been subjected to this instrument, despite the fact that its validity as a basis for selection or training has not been confirmed in objective studies. People seem to cling to the idea that types are critically important. That types exist is not the issue of contention (males/females; extrovert/introvert); it is what they mean (learn in different ways; perform differently in meetings) that is disputed. In the case of learning styles, the hypothesis of interest is that the style (which can be observed ex ante) meshes with a certain type of instruction (the benefit of which can be observed ex post). The meshing hypothesis holds that different modes of instruction are optimal for different types of person “because different modes of presentation exploit the specific perceptual and cognitive strengths of different individuals.” This hypothesis entails the assumption that people with a certain style (based, say, on a diagnostic instrument or ‘tool’) will experience better educational outcomes when taught in one way (say, pictorial) than when taught in another way (say, verbal). It is precisely this (‘meshing’) hypothesis that the authors set out to test.

Note, then, that finding that people have different preferences does not confirm the hypothesis. Likewise, finding that different ability levels correlate with these preferences would not confirm the hypothesis. The hypothesis would be confirmed by finding that teaching method 1 is more effective than method 2 in type A people, while teaching method 2 is more effective than teaching method 1 in type B people.
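
In statistical terms, the meshing hypothesis predicts a crossover style-by-method interaction, not merely main effects of style or method. The sketch below shows, on entirely made-up data with hypothetical variable names, how such an interaction would be tested; it is illustrative only and is not the analysis used in the reviewed studies.

```python
# Hypothetical data: how a style-by-method ('meshing') interaction would be tested.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

df = pd.DataFrame({
    "style":  rng.choice(["visual", "verbal"], size=n),     # diagnosed ex ante
    "method": rng.choice(["pictorial", "text"], size=n),    # assigned at random
})
# Simulated scores from a 'null world': a small main effect of method, no
# style-by-method interaction.
df["score"] = 60 + 5 * (df["method"] == "pictorial") + rng.normal(0, 10, size=n)

model = smf.ols("score ~ C(style) * C(method)", data=df).fit()
print(model.summary().tables[1])
# Only a substantial crossover interaction term (C(style):C(method)) would
# support the meshing hypothesis; main effects, or mere differences in stated
# preference, are not evidence for it.
```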

The authors found, in the voluminous literature, only four studies that test the above hypothesis. One of these was of weak design. The three stronger studies provided null results. The weak study did find a style-by-treatment interaction, but only after “the outliers were excluded for unspecified reasons.”

Of course, the null results do not exclude the possibility of an effect, particularly a small effect, as the authors point out. To shed further light on the subject they explore related literatures. First they examine aptitude (rather than just learning style preference) to see whether there is an interaction between aptitude and pedagogic method. Here the literature goes right back to Cronbach in 1957. One particular hypothesis was that high-aptitude students fare better in a less structured teaching format, while those with less aptitude fare better where the format is structured and explicit. Here the evidence is mixed, such that in about half of studies less structure suits high-ability students, while more structure suits less able students – one (reasonable) interpretation of the different results is that there may be certain contexts where aptitude/treatment interactions do occur and others where they do not. Another hypothesis concerns an aspect of personality called ‘locus of control’. It was hypothesised that an internal locus of control (people who incline to believe that their destiny lies in their own hands) would mesh with an unstructured format of instruction, and vice versa. Here the evidence, taken in the round, tends to confirm the hypothesis.

So, there is evidence (not definitive, but compelling) for an interaction of aptitude and personality with teaching method. There is no such evidence for learning style preference. This does not mean that no student will ever need an idea to be explained one way while another needs it explained in a different way. This is something good teachers sense as they proceed, as emphasised in a previous blog.[3] But tailoring your explanation according to the reaction of students is one thing; determining it according to a pre-test is another. In fact, the learning style hypothesis may impede good teaching by straitjacketing teaching according to a pre-determined format, rather than encouraging teachers to adapt to the needs of students in real time. Receptivity to the expressed needs of the learner seems preferable to following a script to which the learner is supposed to conform.

And why have I chosen this topic for the main News Blog article? Two reasons:

First, it shows how an idea may gain purchase in society with little empirical support, and we should be ever on our guard – the Guardian lived up to its name in this respect!

Second, because health workers are educators; we teach the next generation and we teach our peers. Also, patient communication has an undoubted educational component (see our previous main blog [4]). So we should keep abreast of general educational theory. Many CLAHRC WM projects have a strong educational dimension.

— Richard Lilford, CLAHRC WM Director

References:

  1. Hood B, Howard-Jones P, Laurillard D, et al. No Evidence to Back Idea of Learning Styles. The Guardian. 12 March 2017.
  2. Pashler H, McDaniel M, Rohrer D, Bjork R. Learning Styles: Concepts and Evidence. Psychol Sci Public Interest. 2008; 9(3): 105-19.
  3. Lilford RJ. Education Update. NIHR CLAHRC West Midlands News Blog. 2 September 2016.
  4. Lilford RJ. Doctor-Patient Communication in the NHS. NIHR CLAHRC West Midlands News Blog. 24 March 2017.

Can Thinking Make It So?

When we think of risk factors for mortality we properly think of behaviours (e.g. smoking/obesity) or genetics (e.g. family history). What about psychological factors – can unhappiness increase your risk of cancer? Well, Batty and colleagues [1] have tackled this problem as follows:

  1. They assembled 16 prospective cohort studies where behaviours and psychological state had been measured and in which participants were followed up to see if cancer developed.
  2. They obtained the raw data and conducted an individual patient data meta-analysis.
  3. They adjusted for the usual things known to increase risk of cancer (obesity, smoking, etc).
  4. They calculated relative risk of cancer according to antecedent psychological state.

They found a positive correlation between psychological distress and risk of cancer. But causality might have run the other way – (occult) cancers may have been the cause of psychological distress, rather than the reverse. So:

  1. They ‘left censored’ the data, thereby widening the gap between the point in time when the psychological state was measured and the point when cancer supervened (a minimal sketch of this step is given below).
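
As a rough illustration of what this step involves (a generic sketch with hypothetical column names, not the authors’ code): cancer deaths occurring within the first few years after the psychological assessment are excluded, so that a surviving association cannot easily be explained by tumours already present, but undiagnosed, at baseline.

```python
# Generic sketch of the 'left censoring' step (hypothetical column names;
# not the authors' code).
import pandas as pd

def left_censor(cohort: pd.DataFrame, min_years: float = 5.0) -> pd.DataFrame:
    """Drop participants whose cancer death occurred within `min_years` of the
    baseline psychological assessment, on the grounds that occult cancers
    already present at baseline are the likeliest explanation for those deaths."""
    early_death = cohort["died_of_cancer"] & (cohort["years_to_death"] < min_years)
    return cohort.loc[~early_death]

# Tiny illustrative cohort: only the second participant is excluded at 5 years.
cohort = pd.DataFrame({
    "distress_score": [2, 9, 7, 1],
    "died_of_cancer": [False, True, True, False],
    "years_to_death": [None, 2.0, 11.0, None],
})
print(left_censor(cohort, min_years=5))
```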

The association between psychological state and cancer death persisted, even when the two were separated by many years. What is the explanation?

  1. Failure to fully control for all behaviours (although behaviour could be the mechanism through which the cancer risk is increased in people with depression, in which case they ‘over-controlled’).
  2. Reduced natural killer cell function.
  3. Increased steroid levels, which can apparently affect DNA repair in some way.
  4. Some mechanism yet to be discovered.

In any event, the findings are intriguing, even if the practical implications may be limited.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Batty GD, Russ TC, Stamatakis E, Kivimäki M. Psychological distress in relation to site specific cancer mortality: pooling of unpublished data from 16 prospective cohort studies. BMJ. 2017; 356: j108.