All posts by clahrcwm

Gluten Sensitivity but no Antibodies?

Consider the case of my good friend who developed gluten sensitivity in midlife. Subsequently he went on a gluten-free diet – his wife found this a terrible nuisance. So she surreptitiously re-introduced wheat to his diet. Within no time my friend complained that he must have been wrong: his symptoms had recurred despite no apparent exposure to wheat. He was disappointed with his wife when she had to confess to her clandestine challenge to his physiology. But I think she behaved like a true scientist!

The single case represented by my friend has been repeated on a larger scale many times, with the same result: many people with gluten sensitivity manifest symptoms when challenged in blind studies.[1] Furthermore, unlike many types of putative psychosomatic illness, people with gluten sensitivity do not differ from the general population on psychological tests of depression or anxiety.

So what is the cause of this somatopsychic condition? It turns out that there are two main theories, each with some evidence in its favour.[2] The theory that I prefer is the FODMAP theory, based on the idea that wheat is a potent source of fermentable, short-chain carbohydrates. These carbohydrates are poorly absorbed and thus ferment in the gut, causing the typical symptoms of bloating, distension and discomfort. The alternative theory is that wheat, perhaps in the presence of certain alterations in the microbiome, causes an inflammatory reaction in the liver that is associated with symptoms.

It will be important to discern the cause, since treatment of excessive fermentation would consist of a more general reduction of foods containing large proportions of fermentable carbohydrates, rather than avoidance of gluten per se.

— Richard Lilford, CLAHRC WM Director

References:

  1. Skodje GI, Sarna VK, Minelle IH, Rolfsen KL, Muir JG, Gibson PR, Veierød MB, Henriksen C, Lundin KEA. Fructan, Rather Than Gluten, Induces Symptoms in Patients With Self-Reported Non-Celiac Gluten Sensitivity. Gastroenterology. 2018; 154: 529-39.
  2. Servick K. The war on gluten. Science. 2018; 360: 848-51.

Vaccinating Against Mosquitoes

Getting bitten by a mosquito could potentially lead to a wide variety of infections – dengue, yellow fever, Zika, malaria, etc. The usual method of trying to prevent the spread of these diseases is vaccination, but for most of them this is hindered by the various sub-types and strain variations. But what if there was another way? That is what Jessica Manning and colleagues are looking into – developing a vaccine against mosquito saliva.[1] When a mosquito bites a person, it first injects its saliva into the bloodstream before drinking blood, which triggers the person’s innate immune response. This immune response can then inadvertently help spread any pathogens through the lymphatic system. However, the authors hypothesise that, by vaccinating a person against the saliva itself, the body will mount a different, targeted immune response that can hopefully destroy the pathogens before they spread and cause infection. A proof of principle has already been shown in animals vaccinated against sand fly saliva, which prevents infection by Leishmania.

Although there is still a long way to go, it is an interesting approach that should be closely monitored.

— Peter Chilton, Research Fellow

Reference:

  1. Manning JE, Morens DM, Kamhawi S, Valenzuela JG, Memoli M. Mosquito Saliva: The Hope for a Universal Arbovirus Vaccine? J Infect Dis. 2018; 218 (1): 7-15.

Interim Guidelines for Studies of the Uptake of New Knowledge Based on Routinely Collected Data

CLAHRC West Midlands and CLAHRC East Midlands use Hospital Episode Statistics (HES) to track the effect of new knowledge from effectiveness studies on implementation of the findings from those studies. Acting on behalf of the CLAHRCs we have studied uptake of findings from the HTA programme over a five-year period (2011-15). We use the HES database to track uptake of study treatments where the use of that treatment is recorded in HES – most often these are studies of surgical procedures. We conduct time series analyses to examine the relationship between publication of apparently clear-cut findings and the implementation (or not) of those findings. We have encountered some bear traps in this apparently simple task, which must be carried out with an eye for detail. Our work is ongoing, but here we alert practitioners to some things to look out for, based on the literature and our experience. First, note that the use of time series to study clinical practice based on routine data is both similar to and different from the use of control charts in statistical process control. For the latter purpose, News Blog readers are referred to the American National Standard (2018).[1] Here are some bear traps/issues to consider when using databases for the former purpose – namely to scrutinise databases for changes in treatment for a given condition:

  1. Codes. By a long way, the biggest problem you will encounter is the selection of codes. The HTA RCT on treatment of ankle fractures [2] described the type of fracture in completely different language to that used in the HES data. We did the best we could, seeking expert help from an orthopaedic surgeon specialising in the lower limb. Some thoughts:
    1. State the codes or code combinations used. In a recent paper, Costa and colleagues did not state all the codes used in the denominator for their statistics on uptake of treatment for fractures of the distal radius.[3] This makes it impossible to replicate their findings.
    2. Give the reader a comprehensive list of relevant codes highlighting those that you selected. This increases transparency and comparability, and can be included as an appendix.
    3. When uncertain, start with a narrow set of codes that seem to correspond most closely to indications for treatment in the research studies, but also provide results for a wider range – these may reflect ‘spill-over’ effects of study findings or miscoding. Again, the wider search can be included as an appendix, and serves as a kind of sensitivity analysis.
    4. If possible, examine coding practice using local databases that may contain detailed clinical information alongside the routine codes generated by that same institution. This provides empirical information on coding accuracy. We did this with respect to the use of tight-fitting casts to treat unstable ankle fractures (found to be non-inferior to more invasive surgical plates [4]) and found that the procedure was coded in three different ways. We combined these codes in our study, although, to the extent that the codes are not specific, this increases measurement error (diluting the signal).
  2. Denominators.
    1. In some cases denominators cannot be ascertained. We encountered this problem in our analysis of surgery for oesophageal reflux, where surgery was found more effective than medical treatment.[5] The counterfactual here is medical therapy that can be delivered in various settings and that is not specific for the index condition. Here we simply had to examine the effects of the trial results on the number of operations carried out country-wide. Seasonal effects are a potential problem with denominator-free data.
    2. For surgical procedures, the index procedure should be combined with the counterfactual procedure from the trial to create a denominator. The denominator can also be expanded to include other procedures for the same condition if this makes sense clinically.
  3. Data-interval. The more frequent the index procedure, the shorter the appropriate interval. If the number of observations in an interval falls below a certain threshold, the data cannot be reported (to protect patient privacy) and a wider interval must be used. A six-month interval seemed suitable for many surgical procedures.
  4. Of protocols and hypotheses. We have found that the detailed protocol must emerge through an iterative process, including discussion with clinical experts. But we think there should be a ‘general’ prior hypothesis for this kind of work. So we specified the date of publication of the relevant HTA report as our pre-set time point – the equivalent of the primary hypothesis. We applied this date line to all of the procedures examined. However, solipsistic focus on this date line would obviously lead to an impoverished understanding, so we follow a three-phase process inspired by Fichte’s thesis-antithesis-synthesis model [6]:
    1. We test the hypothesis that a linear model fits the data using a CUSUM (cumulative sum) test. The null hypothesis is that the cumulative sum of recursive residuals has an expected value of 0. If it wanders outside the 95% confidence band at any point in time, this indicates that the coefficients have changed and a single linear model does not fit the data.
    2. If the above test indicates a change in the coefficients, we use a Wald test to identify the point at which the model has a break. We then estimate separate models before and after the break date and compare their slopes and intercepts (a minimal code sketch of these two steps follows this list).
    3. Last, we carry out a form of ‘member checking’ and discuss with experts who can tell us when guidelines emerged and when other trials may have been published – ideally a literature review would complement this process.
  5. Interpretation. In the absence of contemporaneous controls, cause and effect inference must be cautious.
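
To make step 4 concrete, here is a minimal sketch in Python (using statsmodels) of the two statistical steps: the CUSUM test on recursive residuals, followed by a Chow/Wald-type search for the break date and separate fits either side of it. The series, variable names and interval structure are invented for illustration; this is not our analysis code.

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import recursive_olsresiduals

    # Illustrative data: uptake proportion of the index procedure per six-month
    # interval, with a change in slope half-way through the series.
    rng = np.random.default_rng(42)
    periods = np.arange(20, dtype=float)              # e.g. 2011-H1 onwards
    rate = np.r_[0.30 + 0.005 * periods[:10],
                 0.35 + 0.020 * (periods[10:] - 10)] + rng.normal(0, 0.01, 20)

    X = sm.add_constant(periods)
    pooled = sm.OLS(rate, X).fit()

    # Step 1: CUSUM of recursive residuals. Under the null hypothesis the cumulative
    # sum has expected value 0; if the plotted path strays outside the 95% band
    # (cusum_band) at any point, a single linear model does not fit the series.
    rres = recursive_olsresiduals(pooled, alpha=0.95)
    cusum, cusum_band = rres[-2], rres[-1]

    # Step 2: locate the break with a Chow/Wald-type search over candidate dates,
    # then fit separate models before and after the estimated break.
    def chow_f(y, X, k):
        """F statistic comparing one pooled fit against separate fits split at k."""
        p = X.shape[1]
        rss_pooled = sm.OLS(y, X).fit().ssr
        rss_split = sm.OLS(y[:k], X[:k]).fit().ssr + sm.OLS(y[k:], X[k:]).fit().ssr
        return ((rss_pooled - rss_split) / p) / (rss_split / (len(y) - 2 * p))

    candidates = range(4, len(rate) - 4)              # trim the ends of the series
    k_hat = max(candidates, key=lambda k: chow_f(rate, X, k))

    before = sm.OLS(rate[:k_hat], X[:k_hat]).fit()
    after = sm.OLS(rate[k_hat:], X[k_hat:]).fit()
    print(f"Estimated break at interval {k_hat}")
    print(f"Slope before: {before.params[1]:.4f}, slope after: {after.params[1]:.4f}")

In practice the series would be the uptake proportion derived from HES for each interval, and the estimated break date would be interpreted against the pre-specified publication date of the HTA report.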

This is an initial iteration of our thoughts on this topic. However, increasing amounts of data are being captured in routine systems, and databases are increasingly constructed in real time since they are used primarily as clinical tools. So we thought it would be helpful to start laying down some procedural rules for retrospective use of data to determine long-term trends. We invite readers to comment on, enhance and extend this analysis.

— Richard Lilford, CLAHRC WM Director

— Katherine Reeves, Statistical Intelligence Analyst at UHBFT Health Informatics Centre

References:

  1. ASTM International. Standard Practice for Use of Control Charts in Statistical Process Control. Active Standard ASTM E2587. West Conshohocken, PA: ASTM International; 2018.
  2. Keene DJ, Mistry D, Nam J, et al. The Ankle Injury Management (AIM) trial: a pragmatic, multicentre, equivalence randomised controlled trial and economic evaluation comparing close contact casting with open surgical reduction and internal fixation in the treatment of unstable ankle fractures in patients aged over 60 years. Health Technol Assess. 2016; 20(75): 1-158.
  3. Costa ML, Jameson SS, Reed MR. Do large pragmatic randomised trials change clinical practice? Assessing the impact of the Distal Radius Acute Fracture Fixation Trial (DRAFFT). Bone Joint J. 2016; 98-B: 410-3.
  4. Willett K, Keene DJ, Mistry D, et al. Close Contact Casting vs Surgery for Initial Treatment of Unstable Ankle Fractures in Older Adults. A Randomized Clinical Trial. JAMA. 2016; 316(14): 1455-63.
  5. Grant A, Wileman S, Ramsay C, et al. The effectiveness and cost-effectiveness of minimal access surgery amongst people with gastro-oesophageal reflux disease – a UK collaborative study. The REFLUX trial. Health Technol Assess. 2008; 12(31): 1–214.
  6. Fichte J. Early Philosophical Writings. Trans. and ed. Breazeale D. Ithaca, NY: Cornell University Press; 1988.

A Randomised Trial of the Effect of Theological Training on Health and Welfare Outcomes: Whatever Next

The CLAHRC WM Director’s heroes, Adam Smith and Max Weber, argued that religiosity promotes diligence and wealth.[1] [2] But how to separate the effect of religion from the effect of being the kind of person who is religious? Only a randomised trial could do this. And, yes, it has been done.[3]

One hundred and sixty pastors were recruited for this study, which was based in the Philippines. The pastors each provided 15 weekly meetings to a total of 6,276 poor Filipino households that were randomised to either receive the programme or not. The intervention group had increased religiosity and income, but they were no more satisfied with life. The study suggested that intervention households had improved their levels of hygiene but discord within the family also seemed to increase. What does the CLAHRC WM Director make of this? Firstly, human beings are primed to be receptive to religious messages – they affect us and it was ever thus. However, the effects are not all necessarily beneficial. And, of course, religious instruction introduced later in life is not the same as growing up in a religious family.

— Richard Lilford, CLAHRC WM Director

References:

  1. Smith A. An Inquiry into the Nature and Causes of the Wealth of Nations. London, UK: W Strahan & T Cadell; 1776.
  2. Weber M. The Protestant Ethic and the Spirit of Capitalism. London, UK: Unwin Hyman; 1930.
  3. Bryan GT, Choi JJ, Karlan D. Randomizing Religion: The Impact of Protestant Evangelism on Economic Outcomes. NBER Working Paper Series. Working Paper No. 24278. 2018.

Intravenous Fluids – Use with Care

So many of the things that were taken for granted when I was training in medicine have been overturned during my subsequent career. In my student days, academic doctors thought they could largely work things out from patho-physiological principles – this was before the rise of clinical epidemiology and ‘evidence-based medicine’. So we gave steroids for head injury, shaved patients before surgery, administered enemas before childbirth – the list goes on. Fluid management has changed as much as anything. I remember my (outstanding) professor of surgery, Prof DuPlessis, saying that blood transfusion pre-surgery should be given until blood pressure is fully restored. Wrong – just give enough to keep vital organs perfused, otherwise you will provoke more bleeding. We were told to give colloid to maintain intra-vascular volume in shocked patients. Wrong – albumin and starch substitutes leak across damaged capillary membranes and impede organ (brain, kidney, lung) function. The main treatment protocol for children with diabetic ketoacidosis has been slow rehydration with isotonic fluids, as rapid administration was feared to lead to brain injury. Potentially wrong – a recent RCT in the New England Journal of Medicine found no significant differences between various rates of administration.[1] Current guidelines for patients following major abdominal surgery are to administer a restrictive intravenous-fluid strategy. Seems wrong – a recent trial found no difference in disability-free survival between patients who underwent a restrictive or a liberal fluid regimen, and the restrictive regimen was associated with a higher rate of acute kidney injury.[2] Use balanced crystalloids rather than saline to avoid salt overload? Probably wrong.[3] Rapidly restore blood volume in shocked children with septicaemia – spectacularly wrong, as discussed in a recent News Blog.[4] So what should you do? How much of what should be used for which patient? I am honoured to chair the steering committee for a large factorial trial of treatment of severe pneumonia in East Africa in which two fluid ‘replacement’ regimens will be compared (nasogastric feeds of breast milk / formula milk / cow’s milk vs. intravenous fluid infusion). In the meantime the lesson for doctors may be the same as that for actors – ‘less is more.’

— Richard Lilford, CLAHRC WM Director

References:

  1. Kuppermann N, Ghetti S, Schunk JE, et al. Clinical Trial of Fluid Infusion Rates for Pediatric Diabetic Ketoacidosis. New Engl J Med. 2018; 378: 2275-87.
  2. Myles PS, Bellomo R, Corcoran T, et al. Restrictive versus Liberal Fluid Therapy for Major Abdominal Surgery. New Engl J Med. 2018; 378: 2263-74.
  3. Myburgh J. Patient-Centered Outcomes and Resuscitation Fluids. New Engl J Med. 2018; 378: 862-3.
  4. Lilford RJ. Raising Blood Pressure in Sepsis Patients. NIHR CLAHRC West Midlands News Blog. 13 October 2017.

A JAMA Article that Spectacularly Misses the Point

A recent article in JAMA Surgery examined complication rates from bariatric surgery across a large number of hospitals in the US.[1] They found implausibly large differences, ranging from 0.6% to 10.3% – a 17-fold difference. There was no real effect of surgical volume on these outcome rates. The authors go on at some length about risk adjustment and sampling variation, thereby spectacularly missing the point: different observers determined the adverse events in different centres, and the inter-observer reliability of such judgements is generally low. High inter-observer variation has been demonstrated for wound infections and anastomotic leak in numerous studies. If you want to compare hospitals then, unless you have a very firm outcome such as death, you must have many observers, and each observer must examine cases across different institutions so that observer effects are not confounded with institutional effects. Those who try to drive up quality and safety need a much more sophisticated understanding of measurement theory.
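
The point can be illustrated with a toy simulation (entirely synthetic; none of the numbers below come from the JAMA study). Give every hospital the same true complication rate, but let each hospital’s complications be judged by its own local observer with their own sensitivity and threshold for calling a complication, and large apparent differences between centres emerge from the measurement process alone.

    import numpy as np

    rng = np.random.default_rng(1)
    n_hospitals, cases_per_hospital = 25, 500
    true_rate = 0.03                          # identical underlying rate everywhere

    # Each hospital has one local observer; observers differ in sensitivity and in
    # their tendency to label borderline events as complications (false positives).
    sensitivity = rng.uniform(0.4, 0.9, n_hospitals)
    false_positive = rng.uniform(0.0, 0.05, n_hospitals)

    observed_rates = []
    for s, fp in zip(sensitivity, false_positive):
        events = rng.random(cases_per_hospital) < true_rate
        recorded = (events & (rng.random(cases_per_hospital) < s)) | \
                   (~events & (rng.random(cases_per_hospital) < fp))
        observed_rates.append(recorded.mean())
    observed_rates = np.array(observed_rates)

    print(f"True rate everywhere: {true_rate:.1%}")
    print(f"Observed range: {observed_rates.min():.1%} to {observed_rates.max():.1%}")
    # Despite identical true performance, the recorded rates typically span a
    # many-fold difference - an artefact of who did the measuring, not of care quality.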

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Ibrahim AM, Ghaferi AA, Thumma JR, Dimick JB. Variation in Outcomes at Bariatric Surgery Centers of Excellence. JAMA Surg. 2017; 152(7): 629-36.

More on why AI Cannot Displace Your Doctor Anytime Soon

News Blog readers will be familiar with my profound scepticism about the role of artificial intelligence (AI) in medicine.[1] I have consistently made the point that much of medical practice has no clear-cut outcome. This is quite different from a game of Go where, in the end, you either win or lose. Moreover, AI can simply reproduce human error by learning the faulty parts of human processes – I have previously cited racial bias in police work as an example.[2] Also, when you take a history, the questions you ask are informed by medical logic or intuition, and eliciting the correct answer is partly a matter of a good empathetic approach, as pointed out beautifully in a recent article by Alastair Denniston and colleagues.[3] So what looks like a comparison of AI with a physician is really a comparison of a physician plus AI with a physician alone.

A further important article on the limitations of AI has recently appeared in the journal Science.[4] The article explains how AI can outperform human operators at a game of Space Invaders; but if the game is suddenly altered so that all but one alien is removed, the AI’s performance deteriorates. A human player can immediately spot the change, whereas the AI system is flummoxed for many iterations. The article explains how AI is coming full circle. First, computer scientists tried to mimic expert performance at a task. Then, AI completely bypassed the expert by means of a self-learning neural network, culminating in ‘AlphaGo’ beating Go champion Ke Jie. That was the high-water mark for AI, and although a few enthusiasts declared victory,[5] serious AI scientists have turned back to human intelligence to inform their algorithms. They are even starting to study how children learn and to use this knowledge in AI systems.

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Update on AI. NIHR CLAHRC West Midlands News Blog. 1 June 2018.
  2. Lilford RJ. How Accurate Are Computer Algorithms Really? NIHR CLAHRC West Midlands News Blog. 26 January 2018.
  3. Liu X, Keane PA, Denniston AK. Time to regenerate: the doctor in the age of artificial intelligence. J Roy Soc Med. 2018; 111(4): 113-6.
  4. Hutson M. How researchers are teaching AI to learn like a child. Science. 24 May 2018.
  5. Lilford RJ. Computer Beats Champion Player at Go – What Does This Mean for Medical Diagnosis? NIHR CLAHRC West Midlands News Blog. 8 April 2016.

Giving Feedback to Patient and Public Advisors: New Guidance for Researchers

Whenever we are asked for our opinion we expect to be thanked, and we also like to know whether what we have contributed has been useful. If a statistician, qualitative researcher or health economist has contributed to a project, they would (rightfully) expect some acknowledgement and to be told whether their input had been incorporated. As patient and public contributors are key members of the research team, providing valuable insights that shape research design and delivery, it is right to assume that they should also receive feedback on their contributions. But a recent study led by Dr Elspeth Mathie (CLAHRC East of England) found that routine feedback to PPI contributors is the exception rather than the rule. The mixed-methods study (questionnaire and semi-structured interviews) found that feedback was given in a variety of formats, with variable satisfaction among contributors. A key finding was that nearly 1 in 5 patient and public contributors (19%) reported never having received feedback on their involvement.[1]

How should feedback be given to public contributors?

There should be no ‘one size fits all’ approach to providing feedback to public contributors. The study recommends early conversations between researchers and public contributors to determine what kind of feedback should be given to contributors and when. The role of a Public and Patient Lead can help to facilitate these discussions and ensure feedback is given and received throughout a research project. Three main categories of feedback were identified:

  • Acknowledgement of contributions – confirming that input was received and saying ‘thanks’;
  • Information about the impact of contributions – whether input was useful and how it was incorporated into the project;
  • Study success and progress – information on whether a project was successful (e.g. securing grant funding / gaining ethical approval) and detail about how the project is progressing.


What are the benefits to providing feedback for public contributors?

The study also explored benefits of giving feedback to contributors. Feedback can:

  • Increase motivation for public contributors to be involved in future research projects;
  • Help improve a contributor’s input into future projects (if they know what has been useful, they can provide more of the same);
  • Build the public contributor’s confidence;
  • Help the researcher reflect on public involvement and the impact it has on research.


What does good feedback look like?

Researchers, PPI Leads and public contributors involved in the feedback study have co-produced Guidance for Researchers on providing feedback for public contributors to research.[2] The guidance explores the following:

  • Who gives feedback?
  • Why is PPI feedback important?
  • When to include PPI feedback in the research cycle?
  • What type of feedback?
  • How to give feedback?

Many patient and public contributors get involved in research to ‘make a difference’. This Guidance will hopefully help ensure that all contributors learn how their contributions have made a difference and will also inspire them to continue to provide input to future research projects.

— Magdalena Skrybant, PPIE Lead

References:

  1. Mathie E, Wythe H, Munday D, et al. Reciprocal relationships and the importance of feedback in patient and public involvement: A mixed methods study. Health Expect. 2018.
  2. Centre for Research in Public Health and Community Care. Guidance for Researchers: Feedback. 2018

Update on AI

A recent article in Science [1] pointed out that scientists have to tweak their AI systems to get them to give the correct answer. But I have a different problem with AI – how do you know that the supposed right answer is actually right? In a game of Go this issue does not arise: you either win or you lose. But medicine is not like that. The machine may diagnose thyroid cancer; you take a biopsy and find thyroid cancer. But this is not necessarily the same thing as the thyroid cancer that presents in clinical practice – the machine may be unmasking indolent cases that would never have come to light.[2] And, as pointed out in a previous News Blog,[3] machine learning can replicate human bias – for instance, if police are more likely to charge black male youths than equally offending elderly white women, then the machine will learn precisely the wrong lesson.
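
As a toy illustration of that mechanism (entirely synthetic data, not a real policing dataset, and a gross simplification), the sketch below trains a model on who was charged rather than on who actually offended; the model dutifully reproduces the bias baked into the labels.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000
    group = rng.integers(0, 2, n)                 # two demographic groups, 0 and 1
    offended = rng.random(n) < 0.10               # true offending rate identical in both

    # Historical labels: group 1 is charged far more often for the same behaviour.
    charge_prob = np.where(group == 1, 0.8, 0.2)
    charged = offended & (rng.random(n) < charge_prob)

    model = LogisticRegression().fit(group.reshape(-1, 1), charged)
    print(model.predict_proba([[0], [1]])[:, 1])
    # The predicted 'risk' differs several-fold between the groups, even though the
    # true offending rate is identical - the model has learnt the biased labels.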

— Richard Lilford, CLAHRC WM Director

References:

  1. Hutson M. Has artificial intelligence become alchemy? Science. 2018; 360: 478.
  2. Lilford RJ. Thyroid Cancer: Another Indolent Tumour Prone to Massive Over Diagnosis. NIHR CLAHRC West Midlands News Blog. 24 March 2017.
  3. Lilford RJ. How Accurate are Computer Algorithms Really? NIHR CLAHRC West Midlands News Blog. 26 January 2018.

Are You Getting Enough?

Most people are aware of the importance of getting a good night’s sleep, but for many, actually achieving this amidst work, household chores, children and the need to binge the latest television series is difficult. How dangerous is a lack of sleep, though? A recent study [1] looked at data from over 43,000 Swedish people, followed up over 13 years, and found that adults (under the age of 65) who slept for fewer than five hours a night all week had a higher mortality risk than those who slept for six or seven hours (hazard ratio 1.65, 95% confidence interval 1.22-2.23). However, this could be counteracted by getting longer sleep at the weekend – people who had no more than five hours during the week, but were able to get at least eight hours at the weekend, had no increased mortality. At the other end of the scale, the research also found that those who regularly slept for more than eight hours had a higher rate of mortality than those sleeping six or seven hours (hazard ratio 1.25, 95% CI 1.05-1.50). After the age of 65 there did not appear to be any difference. Of course, the causal relationship between sleep and mortality is unknown, and the authors suggest that underlying health problems could be the cause of both extreme sleep patterns and increased mortality.

— Peter Chilton, Research Fellow

References:

  1. Åkerstedt T, Ghilotti F, Grotta A, Zhao H, Adami HO, Trolle-Lagerros Y, Bellocco R. Sleep duration and mortality – does weekend sleep matter? J Sleep Res. 2018.