Is Fertility Control a Demand or Supply-Side Issue?

In China, population control has been achieved by coercive means, but elsewhere fertility must be reduced by increasing access to contraceptives or by stimulating demand through non-coercive means. Where does the main barrier lie in low- and middle-income countries? Miller and Babiarz (2016) tackle this old chestnut by means of a systematic review restricted to experimental and instrumental variable-based studies.[1]

They find few RCTs. The most famous is the Matlab study in Bangladesh,[2] in which 141 areas were randomised in a cluster trial. The intervention group received fortnightly visits from community reproductive health workers who provided information and access to contraception. This intense, and arguably unscalable, intervention yielded a drop in lifetime fertility of about 20%. A slightly smaller effect was found in a further trial, this time of 37 clusters in Ghana. However, the two remaining RCTs consisted of family planning services grafted onto an existing programme (HIV services in Kenya and micro-credit in Ethiopia), and both yielded null results.

Instrumental variable studies depend on a sudden increase or decrease in supply that cannot be attributed to a change in demand. Mostly these take the form of a stepwise roll-out of services (Iran after 1989; Colombia in 1965) and find reductions in fertility of around 20%. Likewise, fertility increased and then decreased in Romania when a ban on abortion, the main method of birth control in that country, was imposed and then lifted.[3] I guess the best one can say is that, absent China-style enforcement, contraceptive provision is a necessary, but not sufficient, condition for fertility control.

— Richard Lilford, CLAHRC WM Director


  1. Miller G & Babiarz KS. Family Planning Program Effects: Evidence from Microdata. Popul Dev Rev. 2016; 42(1): 7-26.
  2. Schultz TP. Population policies, fertility, women’s human capital and child quality. In: Handbook of Development Economics Volume 4. 2007, pp. 3249–303.
  3. Pop-Eleches C. The Impact of an Abortion Ban on Socioeconomic Outcomes of Children: Evidence from Romania. J Polit Econ. 2006; 114(4): 744-73.

The Benefits of Expanding Health Insurance in the US

The Oregon experiment, which we have cited previously,[1] was of limited scale, and follow-up was for two years only. It is the only RCT of expanded insurance in the US, and it showed improvements in financial security and perceived health, increased use of services, and less depression. Other health outcomes were not affected. Was this because of the limited sample size and short follow-up? Probably yes, given the results of a number of natural experiments involving other plans, such as the Massachusetts insurance plan,[2] various studies of the Affordable Care Act,[3] and a study of Medicare when it was introduced nationwide back in the 1960s.[4] The resulting evidence is neatly summarised in a recent article in the New England Journal of Medicine.[5] The evidence is not conclusive, by the very nature of the topic. However, taken in the round, it suggests that public payment for health services yields real health benefits, but these benefits are likely to take many years to materialise.

— Richard Lilford, CLAHRC WM Director


  1. Lilford RJ. Oregon Experiment. NIHR CLAHRC West Midlands News Blog. 6 March 2015.
  2. Sommers BD, Long SK, Baicker K. Changes in mortality after Massachusetts health care reform: a quasi-experimental study. Ann Intern Med. 2014; 160: 585-93.
  3. Sommers BD, Blendon RJ, Orav EJ, Epstein AM. Changes in utilization and health among low-income adults after Medicaid expansion or expanded private insurance. JAMA Intern Med. 2016; 176: 1501-9.
  4. Medicare
  5. Sommers BD & Kesselheim AS. Massachusetts’ Proposed Medicaid Reforms — Cheaper Drugs and Better Coverage? N Engl J Med. 2018; 378: 109-11.

How Accurate are Computer Algorithms Really?

The use of computers to perform tasks previously done by hand continues to grow, from machine learning analyses of database studies,[1] to algorithms that recommend whether someone should receive a bank loan or be shortlisted for a job interview. Another area that uses such predictive algorithms is the criminal justice system, where they are often used to predict criminal behaviour, such as the locations of crime ‘hotspots’, the likelihood that defendants will attend their court hearing, and/or whether someone will reoffend. However, there is concern about the accuracy and fairness of these systems.[2]

In an article in Science Advances,[3] Dressel and Farid compared a commercially available criminal risk assessment tool against untrained participants on accuracy in predicting whether a defendant would reoffend within two years. The participants were recruited via an online system and paid $1, with a $5 bonus if their predictions were highly accurate (to incentivise them to treat the task seriously). The computer algorithm assessed 137 features of 1,000 defendants and their past criminal records, while the volunteers were given a statement containing seven features (such as sex, age, and criminal history) for a subset of 50 defendants. Comparing the results showed no significant difference (p=0.045) between the accuracy of the algorithm (65.2%) and the participants (62.8%). Pooling the participant responses (‘wisdom of the crowd’) showed similar accuracy (67.0%) (p=0.85). Further analysis showed that the participants’ predictions were slightly more sensitive and less biased than the algorithm’s, while the two were similar in terms of fairness regarding the defendant’s race. Perhaps participants who are well versed in criminal justice, or who are well trained, could achieve higher accuracy than the computer?
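The ‘wisdom of the crowd’ pooling can be illustrated with a minimal sketch: each defendant’s pooled prediction is simply the majority vote across raters. The toy data below are invented for illustration; the study pooled many more responses per defendant.

```python
from collections import Counter

def majority_vote(predictions):
    """Pooled 'wisdom of the crowd' prediction: the most common individual
    prediction (1 = will reoffend, 0 = will not)."""
    return Counter(predictions).most_common(1)[0][0]

def accuracy(predicted, actual):
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Invented toy data: three raters, five defendants; each rater makes one error.
actual = [1, 0, 1, 1, 0]
raters = [
    [0, 0, 1, 1, 0],   # wrong on defendant 1
    [1, 1, 1, 1, 0],   # wrong on defendant 2
    [1, 0, 0, 1, 0],   # wrong on defendant 3
]

pooled = [majority_vote(votes) for votes in zip(*raters)]
print(pooled)                    # [1, 0, 1, 1, 0] — the errors cancel out
print(accuracy(pooled, actual))  # 1.0, vs. 0.8 for each individual rater
```

Pooling helps only to the extent that raters’ errors are independent; if every rater shares the same bias, the crowd inherits it.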

The authors then went on to replicate the accuracy of the commercial computer algorithm using a simpler standard linear predictor, and found that inputting only two features (age and total number of previous convictions) gave results as accurate as the algorithm using 137 features.
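The flavour of such a two-feature linear predictor can be sketched as follows. The coefficients here are invented for illustration – the paper fitted its own weights to the defendant data – but the structure (a weighted sum of age and prior convictions passed through a threshold) is the same.

```python
def recidivism_score(age, prior_convictions):
    """Two-feature linear predictor: younger defendants and those with more
    prior convictions score higher. Coefficients are illustrative only."""
    return 1.0 - 0.05 * age + 0.35 * prior_convictions

def predict_reoffend(age, prior_convictions, threshold=0.0):
    """Classify as 'will reoffend' if the linear score exceeds the threshold."""
    return recidivism_score(age, prior_convictions) > threshold

print(predict_reoffend(20, 3))  # True  (score = 1.0 - 1.0 + 1.05 = 1.05)
print(predict_reoffend(50, 0))  # False (score = 1.0 - 2.5 + 0.0 = -1.5)
```

That a model this simple can match a 137-feature commercial tool is the paper’s striking point: most of the predictive signal sits in a couple of variables.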

— Peter Chilton, Research Fellow


  1. Lilford RJ. Machine Learning and the Demise of the Standard Clinical Trial! NIHR CLAHRC West Midlands News Blog. 10 November 2017.
  2. Lilford RJ. Machine Learning. NIHR CLAHRC West Midlands News Blog. 11 November 2016.
  3. Dressel J & Farid H. The accuracy, fairness, and limits of predicting recidivism. Sci Adv. 2018; 4(1): eaao5580.

Intraperitoneal Chemotherapy for Ovarian Cancer

The CLAHRC WM Director hates ovarian cancer – it spreads throughout the abdominal cavity and is horrible to behold at surgery. He has often wondered if topical chemotherapy could help control this dreaded disease. In the UK one in 52 women will be diagnosed with ovarian cancer within their lifetime, with around 7,400 new cases and around 4,100 deaths in 2014.[1] Standard treatment is surgery to excise the tumour, followed by intravenously administered chemotherapy, or vice versa. Can topical (intraperitoneal) chemotherapy improve outcomes compared to the standard intravenous method? Previous research on combined intravenous and intraperitoneal chemotherapy has shown an increase in overall survival in patients with ovarian cancer, but a number of limitations have hindered widespread adoption. Researchers in the Netherlands conducted a study to see whether delivering intraperitoneal chemotherapy immediately after surgery could be similarly effective while overcoming these limitations.[2]

This was a randomised trial of 245 patients with ovarian cancer who had already undergone three cycles of chemotherapy. Patients were randomised to surgery with or without hyperthermic intraperitoneal chemotherapy (HIPEC) administered at the end of the procedure, followed by another three cycles of chemotherapy. In HIPEC the chemotherapy solution is heated and circulated in the abdominal cavity. The hyperthermia triggers a number of cellular reactions, including increased penetration of the chemotherapy drugs into tissue, impaired DNA repair in cancer cells (thus increasing their sensitivity), and induction of apoptosis.

Results showed significantly less death and disease recurrence in patients who underwent HIPEC during surgery than in those who did not (hazard ratio 0.66, 95% CI 0.50-0.87; p=0.003). Further, patients in the HIPEC group had a median recurrence-free survival of 14.2 months, compared to 10.7 months. At follow-up (median 4.7 years), 62% of patients who had undergone surgery without HIPEC had died, compared to 50% of those who had received HIPEC (p=0.02). Median overall survival was 45.7 months with HIPEC, compared to 33.9 months without. Adverse events were similar in both groups.
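As a rough sanity check on how the hazard ratio relates to the reported medians: under the simplifying assumption that survival curves are exponential, median survival scales inversely with the hazard, so the two figures can be compared directly. (Real survival curves are rarely exponential, so exact agreement is not expected.)

```python
# Back-of-envelope check: under an exponential survival model the median is
# ln(2) / hazard rate, so medians scale as 1 / hazard ratio.
# Figures below are taken from the trial report.
hr = 0.66                 # hazard ratio, HIPEC vs. surgery alone
median_control = 10.7     # months, recurrence-free survival without HIPEC

implied_median_hipec = median_control / hr
print(round(implied_median_hipec, 1))  # 16.2 months, vs. 14.2 observed
```

The implied and observed medians are in the same ballpark, which is about all this simplification can promise.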

— Peter Chilton, Research Fellow


  1. Cancer Research UK. Ovarian Cancer Statistics. 2018.
  2. van Driel WJ, Koole SN, Sikorska K, et al. Hyperthermic Intraperitoneal Chemotherapy in Ovarian Cancer. N Engl J Med. 2018; 378: 230-40.

So Where Are We up to with Alcohol and Health?

First, let me come clean – I am a moderate drinker. No doubt about it. Five nights a week at a mean of two glasses, and two nights at a mean of three glasses. These are average-sized glasses, so let’s say 24 units (16 glasses × 1.5 units) per week. I love wine and seek good news…
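For anyone checking the arithmetic (assuming, as above, 1.5 units in an average glass of wine):

```python
UNITS_PER_GLASS = 1.5     # assumed units in an average glass of wine

glasses_per_week = 5 * 2 + 2 * 3   # five nights at two glasses, two at three
units_per_week = glasses_per_week * UNITS_PER_GLASS

print(glasses_per_week)  # 16
print(units_per_week)    # 24.0 — above the UK guideline of 14 units a week
```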

The story so far:

  1. There is a ‘J-shaped’ curve of the association between alcohol and many diseases.[1]
    * Cancer does not follow this pattern. Cancers of mouth, throat and gullet are almost certainly increased, and probably breast too.[2]
  2. But Mendelian randomisation (using inherited genes that predispose to alcohol consumption) does not show a J-shaped curve – risk rises incrementally with dose.[3]
  3. Longitudinal studies show that, on one dimension of cognition, decline is faster in linear relationship to alcohol dose, and this finding ‘triangulates’ with a dose-related drop in right-sided hippocampal volume (detected by MRI).[4]

Conclusion: the J-shaped curve is an artefact of selection bias.

So what’s new? First, a meta-analysis of longitudinal studies [5] shows a substantial protective effect of low to moderate alcohol intake against dementia (RR 0.63, 95% CI 0.53-0.75) and Alzheimer’s disease (RR 0.57, 95% CI 0.44-0.74). Second, there is some evidence from these studies that chronic drinking is protective against cognitive decline, while episodic drinking is harmful at the same total intake. Third, a new longitudinal study suggests that chronic (i.e. non-binge) drinking is indeed protective against cognitive impairment in older people.[6]

This new study (the Rancho Bernardo study) is based on a cohort of 6,339 middle-class residents of a suburb of San Diego. Of the surviving residents, 2,479 attended a research clinic in 1985 where detailed alcohol histories were elicited. The participants were followed up every four years with cognitive tests. Co-variates were collected and added sequentially to a logistic regression model, starting with those (e.g. sex and age) least likely to be on the causal pathway linking alcohol to outcome. The APOE genotype was examined as an interaction term, and potential confounding effects of diet were also examined. Various sensitivity analyses were conducted. Drinking up to three units per day after age 65, and four units per day at a younger age, significantly increased the chance of healthy survival, with an odds ratio exceeding 2. The J curve is there in the data: the probability of healthy longevity increased through no, low, moderate and even heavy drinking, only to decline again when drinking was ‘excessive’ (meaning over four drinks per day in men under 65 and over three per day in men over 65, and three or two drinks per day, respectively, in women). And, yes, more frequent drinking was better than episodic drinking at a given intake – the odds of cognitively healthy longevity increased three-fold with daily drinking vs. not drinking at all, but only two-fold if drinking was ‘infrequent’. Conclusions were robust to the various sensitivity analyses.

What is the truth? No person knoweth it! But the idea that regular, moderate drinking offers some protective effects to trade off against cancer risk has empirical support. I wonder whether different genes predispose to binge vs. steady drinking? I hypothesise that genes associated with poor impulse control lead to binge drinking, and I hope this hypothesis will now be put to an empirical test. Another question, of course, concerns the type of drink: the middle-class people in the Rancho Bernardo study may have favoured wine over other drinks – I hope so!

— Richard Lilford, CLAHRC WM Director


  1. Di Castelnuovo A, Costanzo S, Bagnardi V, Donati MB, Iacoviello L, de Gaetano G. Alcohol dosing and total mortality in men and women: an updated meta-analysis of 34 prospective studies. Arch Intern Med. 2006; 166(22): 2437-45.
  2. Lilford RJ. Oh Dear – Evidence Against Alcohol Accumulates. NIHR CLAHRC West Midlands News Blog. 7 December, 2017.
  3. Holmes MV, Dale CE, Zuccolo L, et al. Association between alcohol and cardiovascular disease: Mendelian randomisation analysis based on individual participant data. BMJ. 2014; 349: g4164.
  4. Lilford RJ. Alcohol and its Effects. NIHR CLAHRC West Midlands News Blog. 18 August, 2017.
  5. Peters R, Peters J, Warner J, Beckett N, Bulpitt C. Alcohol, dementia and cognitive decline in the elderly: a systematic review. Age Ageing. 2008; 37(5): 505-12.
  6. Richard EL, Kritz-Silverstein D, Laughlin GA, Fung TT, Barrett-Connor E, McEvoy LK. Alcohol Intake and Cognitively Healthy Longevity in Community-Dwelling Adults: The Rancho Bernardo Study. J Alzheimer’s Dis. 2017; 59: 803-14.

A Calming Scent

In a previous News Blog we looked at a study investigating associations between body odour and attractiveness to strangers.[1] But what about the smell of someone we already love? A recent study randomly assigned 96 women to smell the scent of their partner, a stranger, or a neutral unworn shirt, before exposing them to stress through a standardised mock job interview and an unanticipated mental arithmetic task.[2] Women exposed to their partner’s scent perceived lower levels of stress both before and after the stressor task (though not during). Further, women exposed to a stranger’s scent had higher levels of cortisol, a hormone released in response to stress, throughout the study.

Perhaps providing worn clothing from a loved one could be a useful coping strategy for people who have been separated from them – for example, elderly patients in care homes.

— Peter Chilton, Research Fellow


  1. Lilford RJ. The Scent of a Woman – Not as Important as Once Thought. NIHR CLAHRC West Midlands News Blog. 24 November 2017.
  2. Hofer MK, Collins HK, Whillans AV, Chen FS. Olfactory Cues From Romantic Partners and Strangers Influence Women’s Responses to Stress. J Person Soc Psychol. 2018; 114(1): 1-9.

New Framework to Guide the Evaluation of Technology-Supported Services

Health and care providers are looking to digital technologies to enhance care provision and fill gaps where resources are limited. There is a very large body of research on their use, brought together in reviews that, among many other things, establish their effectiveness in behaviour change for smoking cessation and in encouraging adherence to ART,[1] demonstrate improved utilisation of maternal and child health services in low- and middle-income countries,[2] and delineate their potential to improve access to health care for marginalised groups.[3] Frameworks to guide health and care providers considering the use of digital technologies are also numerous. Mehl and Labrique’s framework aims to help low- and middle-income countries consider how they can use digital mobile health innovation to help achieve universal health coverage.[4] The framework tells us what is somewhat obvious, but by bringing it together it provides a powerful tool for thinking, planning, and countering pressure from interest groups with other ambitions. The ARCHIE framework developed by Greenhalgh, et al.[5] is a similar tool, but for those seeking to use telehealth and telecare to improve the daily lives of individuals living with health problems. It sets out principles for people developing, implementing, and supporting telehealth and telecare systems so they are more likely to work. It is a framework that, again, can be used to counter pressure from interest groups more interested in the product than in its impact on people and the health and care service. Greenhalgh and team have now produced a further framework that is very timely, as it provides a tool for thinking through the potential for scale-up and sustainability of health and care technologies.[6]

Greenhalgh, et al. reviewed 28 previously published technology implementation frameworks in order to develop their own, and used their studies of digital assistive technologies to test it. Like the other frameworks, this provides health and care providers with a powerful tool for thinking, planning, and resisting. The domains in the framework include, among others, the health condition, the technology, the adopter system (staff, patients, carers), the organisation, and the domain of time – how the technology embeds and is adapted over time. For each domain the question is asked whether it is simple, complicated, or complex in relation to scale-up and sustainability of the technology. For example, the nature of the condition: is it well understood and predictable (simple), or poorly understood and unpredictable (complex)? Asking this question for each domain allows us to avoid the pitfall of thinking something is simple when it is in reality complex. For example, there may be a lot of variability in the health condition between patients, but the technology may have been designed with a simplified, textbook notion of the condition in mind. I suggest that even where clinicians are involved in the design of interventions, it is easy for them to forget how often they see patients who are not like the textbook, as they, almost without thinking, deploy their skills to adapt treatment and management to the particular patient. Greenhalgh, et al. cautiously conclude that “it is complexity in multiple domains that poses the greatest challenge to scale-up, spread and sustainability”. They provide examples where unrecognised complexity stops the use of a technology in its tracks.

— Frances Griffiths, Professor of Medicine in Society


  1. Free C, Phillips G, Galli L. The effectiveness of mobile-health technology-based health behaviour change or disease management interventions for health care consumers: a systematic review. PLoS Med. 2013;10:e1001362.
  2. Sondaal SFV, Browne JL, Amoakoh-Coleman M, Borgstein A, Miltenburg AS, Verwijs M, et al. Assessing the Effect of mHealth Interventions in Improving Maternal and Neonatal Care in Low- and Middle-Income Countries: A Systematic Review. PLoS One. 2016;11(5):e0154664.
  3. Huxley CJ, Atherton H, Watkins JA, Griffiths F. Digital communication between clinician and patient and the impact on marginalised groups: a realist review in general practice. Br J Gen Pract. 2015;65(641):e813-21.
  4. Mehl G, Labrique A. Prioritising integrated mHealth strategies for universal health coverage. Science. 2014;345:1284.
  5. Greenhalgh T, Procter R, Wherton J, Sugarhood P, Hinder S, Rouncefield M. What is quality in assisted living technology? The ARCHIE framework for effective telehealth and telecare services. BMC Medicine. 2015;13(1):91.
  6. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A’Court C, et al. Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies. J Med Internet Res. 2017;19(11):e367.

How ‘Tight’ is Tight Enough for Control of Type 2 Diabetes?

Two recent papers touch on the important subject of drug treatment of type 2 diabetes.[1] [2] The first deals with the risks of ‘tight’ control, and the second examines the effect of ‘tight’ control on the microvascular complications of diabetes. So what is meant by ‘control’ vs. ‘tight control’? ‘Control’ brings HbA1c levels into the range 7-8%, while ‘tight control’ brings them under 7%. Both papers cast doubt on the value of ‘tight control’ over mere ‘control’ achieved by pharmacological means. The first paper points out that ‘tight’ pharmacological control is associated with an increased risk of sudden death compared with ‘control’.[1] This is thought to result from the increased incidence and severity of severe hypoglycaemic episodes when insulin doses are ramped up to achieve ‘tight’ control. The second paper, based on a review of RCT evidence,[2] finds that microvascular disease (causing blindness, renal failure, and leg ulcers) is not measurably reduced by ‘tight control’ vs. ‘control’. So there we have it – going from ‘control’ to ‘tight control’ introduces the hazard of sudden death for little, if any, compensatory advantage.
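The thresholds described above can be captured in a small sketch (band boundaries as given in the text; the handling of values falling exactly on 7% or 8% is my assumption):

```python
def glycaemic_band(hba1c_percent):
    """Classify an HbA1c value using the bands in the text:
    under 7% = 'tight control', 7-8% = 'control', above 8% = 'uncontrolled'."""
    if hba1c_percent < 7.0:
        return "tight control"
    if hba1c_percent <= 8.0:
        return "control"
    return "uncontrolled"

print(glycaemic_band(6.5))  # tight control — the band linked to hypoglycaemia risk
print(glycaemic_band(7.5))  # control
print(glycaemic_band(8.6))  # uncontrolled
```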

If there are limits to what can be achieved by ramping up pharmacological treatment, then what about dieting to the point that diabetes goes into remission? The evidence suggests that three-quarters of people with type 2 diabetes will achieve remission if they lose at least 15 kg in weight. Bariatric surgery is highly effective in producing sustained weight loss.[3] Up to 10% of people can achieve a 15 kg drop in weight by dieting alone, but about one-third of them revert each year. Nevertheless, it is worth trying hard to achieve weight loss because the societal and personal gains are immense. And we have argued before for an inexpensive model to increase access to bariatric surgery.[4] [5]

Thank you to Ewan Hamnett for drawing my attention to this paper.

— Richard Lilford, CLAHRC WM Director


  1. McCombie L, Leslie W, Taylor R, Kennon B, Sattar N, Lean MEJ. Beating type 2 diabetes into remission. BMJ. 2017; 358: j4030.
  2. Rodriguez-Gutierrez R & Montori VM. Glycemic Control for Patients with Type 2 Diabetes: Our Evolving Faith in the Face of Evidence. Circ Cardiovasc Qual Outcomes. 2016; 9(5): 504-12.
  3. Schauer PR, Bhatt DL, Kirwan JP, et al. Bariatric Surgery versus Intensive Medical Therapy for Diabetes – 5-Year Outcomes. N Engl J Med. 2017; 376: 641-51.
  4. Lilford RJ. Bariatric Surgery – Improve Five-Year Outcomes. NIHR CLAHRC West Midlands News Blog. 23 June, 2017.
  5. Lilford RJ. Is It Safe for One Surgeon to Oversee Two Operations Concurrently? NIHR CLAHRC West Midlands News Blog. 27 October, 2017.

Patients’ Experience of Hospital Care at Weekends

The “weekend effect”, whereby patients admitted to hospital at weekends appear to experience higher mortality than patients admitted on weekdays, has received substantial attention from the health service community and the general public alike.[1] Evidence of the weekend effect was used to support the NHS’s introduction of the ‘7-day Services’ policy and associated changes to junior doctors’ contracts,[2-4] which have further propelled debates surrounding the nature and causes of the weekend effect.

Members of the CLAHRC West Midlands are closely involved in the HiSLAC project,[5] an NIHR HS&DR Programme-funded project led by Professor Julian Bion (University of Birmingham) to evaluate the impact of introducing 7-day consultant-led acute medical services. We are undertaking a systematic review of the weekend effect as part of the project,[6] and one of our challenges is keeping up with the rapidly growing literature fuelled by public and political attention. Although hundreds of papers on this topic have been published, there has been a distinct gap in the academic literature – most focus on comparing hospital mortality rates between weekends and weekdays, but virtually no study has compared quantitatively the experience and satisfaction of patients admitted at weekends with those admitted on weekdays. This was the case until we found a study recently published by Chris Graham of the Picker Institute, who had unique access to data not in the public domain, namely the dates of admission to hospital given by the respondents.[7]

This interesting study examined data from two nationwide surveys of acute hospitals in England in 2014: the A&E department patient survey (39,320 respondents; 34% response rate) and the adult inpatient survey (59,083 respondents; 47% response rate). Patients admitted at weekends were less likely to respond than those admitted during weekdays, but this was accounted for by patient and admission characteristics (e.g. age group). Contrary to the inference about care quality that would be drawn from hospital mortality rates, respondents attending a hospital A&E department at weekends actually reported better experiences with regard to ‘doctors and nurses’ and ‘care and treatment’ than those attending on weekdays. Patients admitted to hospital through A&E at weekends also rated the information given to them in A&E more favourably. No other significant differences in reported patient experience were observed between weekend and weekday A&E visits and hospital admissions.[7]

As always, some caution is needed when interpreting these intriguing findings. First, as the author acknowledged, patients who died following their A&E visits/admissions were excluded from the surveys, and therefore their experiences were not captured. Second, although potential differences in case mix – including age, sex, urgency of admission (elective or not), the need for a proxy to complete the survey, and presence of long-term conditions – were taken into account, the statistical adjustment did not include important factors such as main diagnosis and disease severity, which could confound patient experience. Readers may doubt whether these factors could overturn the findings; even so, the mechanism by which weekend admission might lead to improved satisfaction is unclear. It is possible that patients have different expectations of hospital care by day of the week, and consequently may rate the same level of care differently. The findings from this study are certainly a valuable addition to the growing literature that is starting to unfold the complexity behind the weekend effect, and are further testament that measuring care quality on mortality rates alone is unreliable and certainly insufficient – a point long highlighted by the Director of CLAHRC West Midlands and other colleagues.[8] [9] Our HiSLAC project continues to collect and examine qualitative,[10] quantitative,[5] [6] and economic [11] evidence related to this topic, so watch this space!

— Yen-Fu Chen, Principal Research Fellow


  1. Lilford RJ, Chen YF. The ubiquitous weekend effect: moving past proving it exists to clarifying what causes it. BMJ Qual Saf 2015;24(8):480-2.
  2. House of Commons. Oral answers to questions: Health. 2015. House of Commons, London.
  3. McKee M. The weekend effect: now you see it, now you don’t. BMJ 2016;353:i2750.
  4. NHS England. Seven day hospital services: the clinical case. 2017.
  5. Bion J, Aldridge CP, Girling A, et al. Two-epoch cross-sectional case record review protocol comparing quality of care of hospital emergency admissions at weekends versus weekdays. BMJ Open 2017;7:e018747.
  6. Chen YF, Boyal A, Sutton E, et al. The magnitude and mechanisms of the weekend effect in hospital admissions: A protocol for a mixed methods review incorporating a systematic review and framework synthesis. Syst Rev. 2016;5:84.
  7. Graham C. People’s experiences of hospital care on the weekend: secondary analysis of data from two national patient surveys. BMJ Qual Saf 2017;29:29.
  8. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf 2012;21(12):1052-56.
  9. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ 2010;340:c2016.
  10. Tarrant C, Sutton E, Angell E, Aldridge CP, Boyal A, Bion J. The ‘weekend effect’ in acute medicine: a protocol for a team-based ethnography of weekend care for medical patients in acute hospital settings. BMJ Open 2017;7: e016755.
  11. Watson SI, Chen YF, Bion JF, Aldridge CP, Girling A, Lilford RJ. Protocol for the health economic evaluation of increasing the weekend specialist to patient ratio in hospitals in England. BMJ Open 2018:In press.

Worms – Not Just Useful in the Garden

It is known that worms are ‘old infections’, and that old infections tend to manipulate their host’s immune system to their advantage – they use the immune system to hide from attack by the immune system. It is not altogether surprising, then, that worms can affect non-infective diseases. Previous research has shown such infections protecting people from atopy.[1] Now it turns out that worm infestation might also offer protection against inflammatory bowel disease.[2] One possibility is that worms do this by altering the intestinal flora and reducing the load of bacteria that promote inflammation.[2] [3] Certain people who are predisposed to inflammatory bowel disease might thus gain protection from worm infestation.

— Richard Lilford, CLAHRC WM Director


  1. Smits HH, Everts B, Hartgers FC, Yazdanbakhsh M. Chronic Helminth Infections Protect Against Allergic Diseases by Active Regulatory Processes. Curr Allergy Asthma Rep. 2010; 10(1): 3-12.
  2. Ramanan D, Bowcutt R, Lee SC, et al. Helminth infection promotes colonization resistance via type 2 immunity. Science. 2016; 352(6285): 608-12.
  3. Leslie M. Parasitic worms may prevent Crohn’s disease by altering bacterial balance. Science. 24 April 2016.