
Vitamin D and Schizophrenia

A number of studies have suggested an association between an increased risk of schizophrenia and being born in the winter months, though it is not clear why. There are several possible explanations, such as an increase in maternal infections during the winter, the seasonal availability of fruits and vegetables during pregnancy, or an increase in environmental toxins during the summer. Another explanation put forward is prenatal exposure to, and levels of, vitamin D. A research team from Australia and Denmark [1] examined archived dried blood spots taken at the birth of 1,301 Danes who later received a schizophrenia diagnosis, and compared them with spots taken from matched controls. They found that those in the lowest quintile for vitamin D concentration had a significantly increased risk of schizophrenia (incidence rate ratio = 1.44, 95% CI 1.12-1.85, p=0.004); vitamin D concentrations in this quintile were consistent with the standard definition of vitamin D deficiency. Further analyses also showed seasonal fluctuations in vitamin D levels, with the lowest found in people born in winter or spring. The authors suggest that the next step should be an RCT of vitamin D supplementation in pregnant women, though the CLAHRC WM Director does not think the sample size calculation would ‘stand up’.
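For readers curious how a confidence interval of this kind is derived, the sketch below computes an incidence rate ratio with a standard Wald interval on the log scale. The function name and all counts are ours, invented purely for illustration (chosen only so the ratio roughly matches the reported 1.44); they are not the study's data.

```python
import math

def incidence_rate_ratio(cases_exposed, persontime_exposed,
                         cases_ref, persontime_ref, z=1.96):
    """Incidence rate ratio with a Wald 95% CI computed on the log scale."""
    irr = (cases_exposed / persontime_exposed) / (cases_ref / persontime_ref)
    # Approximate standard error of log(IRR) depends only on the case counts.
    se_log = math.sqrt(1 / cases_exposed + 1 / cases_ref)
    lo = math.exp(math.log(irr) - z * se_log)
    hi = math.exp(math.log(irr) + z * se_log)
    return irr, lo, hi

# Hypothetical counts: 320 cases over 10,000 person-years in the lowest
# vitamin D quintile vs. 222 cases over 10,000 person-years in the reference.
irr, lo, hi = incidence_rate_ratio(320, 10_000, 222, 10_000)
print(f"IRR = {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note that the width of the interval is driven almost entirely by the number of cases, which is why rare outcomes such as schizophrenia need very large cohorts.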

— Peter Chilton, Research Fellow


  1. Eyles DW, Trzaskowski M, Vinkhuyzen AAE, et al. The association between neonatal vitamin D status and risk of schizophrenia. Sci Rep. 2018; 8: 17692.

Childhood IQ and Mortality

Many studies have shown an association between childhood intelligence and mortality. However, most studies have been conducted with male participants, and potential mechanisms for the putative association are poorly understood. A recent paper looked at a large sample of Swedish people in an attempt to clarify these issues.[1]

The authors looked at IQ data from 19,919 Swedes who were tested at age 13 (9,817 of them women), along with socioeconomic data from their childhood and middle age, over the following 53 years. The analysis found an association between lower IQ and increased all-cause mortality: a one standard deviation decrease in IQ was associated with an increased risk of all-cause mortality in both men (hazard ratio 1.31, 95% CI 1.23-1.39) and women (HR 1.16, 95% CI 1.08-1.25). Most causes of death were associated with lower IQ in men, while in women lower IQ was associated with an increased risk of death from cancer and cardiovascular disease. When the authors adjusted for childhood socioeconomic factors the associations were slightly attenuated, but they were further attenuated when adjusting for adulthood factors – considerably in men (overall mortality HR 1.17, 95% CI 1.08-1.26), and almost completely in women (HR 1.02, 95% CI 0.93-1.12). These results suggest that social and socioeconomic circumstances in adulthood contribute to the association between IQ and mortality, particularly in women, though the authors state that more research is needed to clarify the pathways linking childhood IQ and mortality across genders.
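One point worth bearing in mind when reading per-SD hazard ratios: because such models are log-linear, the quoted ratio scales multiplicatively with the size of the IQ difference. The snippet below illustrates this arithmetic using the paper's reported per-SD ratios; the 2-SD figures are derived by us as an illustration, not reported in the study.

```python
# Reported hazard ratios per 1 SD decrease in childhood IQ.
hr_per_sd_men = 1.31
hr_per_sd_women = 1.16

# A k-SD decrease corresponds to HR ** k under a log-linear hazard model.
for k in (1, 2):
    print(f"{k} SD decrease: men HR = {hr_per_sd_men ** k:.2f}, "
          f"women HR = {hr_per_sd_women ** k:.2f}")
```

So, for example, a 2-SD decrease in IQ would correspond to a hazard ratio of about 1.31² ≈ 1.72 in men, under the model's assumptions.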

— Peter Chilton, Research Fellow


  1. Wallin AS, Allebeck P, Gustafsson J-E, Hemmingsson T. Childhood IQ and mortality during 53 years’ follow-up of Swedish men and women. J Epidemiol Community Health. 2018; 72(10): 926-32.

Can Diet Help Maintain Brain Health?

A recent study in the journal Neurology looked at the long-term effects of high fruit and vegetable intake on cognitive function.[1] The authors were able to follow up 27,842 US men over a 26-year period. These men were middle-aged (mean age 51 years) and were or had been health professionals.

Every four years, from 1986 to 2002, the participants completed questionnaires on their eating habits, and then completed subjective cognitive function questionnaires in 2008 and 2012. Logistic regression found significant individual associations between higher intakes of vegetables (around six servings a day compared to two), fruits (around three servings a day compared to half a serving) and fruit juice (once a day compared to less than once a month) and lower odds of moderate or poor subjective cognitive function. These associations remained significant after adjusting for non-dietary factors and total energy intake, though adjusting for dietary factors weakened the association with fruit intake. Daily consumption of orange juice (compared to less than one serving per month) was associated with much lower odds of poor subjective cognitive function, with an adjusted odds ratio of 0.53 (95% CI 0.43-0.67). Meanwhile, the adjusted odds ratios for vegetables were 0.83 (95% CI 0.76-0.92) for moderate, and 0.66 (95% CI 0.55-0.80) for poor subjective cognitive function. The authors also found that high intake of fruit and vegetables at the start of the study period was associated with a lower risk of poor subjective cognitive function at the end of the study. Although the study does not prove a causal link, the fact that the association lasted the length of the study supports the idea that vegetable and fruit consumption may help avert memory loss.
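For readers unfamiliar with how odds ratios of this kind are computed, here is a minimal sketch using a 2×2 table and the Woolf (log-scale) confidence interval. The function name and all counts are ours and entirely hypothetical; they are not taken from the study, which also adjusted for covariates in a logistic regression rather than using a raw table.

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf (log-scale) 95% CI.

    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical: poor cognitive function in 60 of 1,000 daily juice drinkers
# vs. 120 of 1,000 rare drinkers.
or_, lo, hi = odds_ratio(60, 940, 120, 880)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An odds ratio below 1, with a confidence interval that excludes 1, is what the paper reports for orange juice (0.53, 95% CI 0.43-0.67).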

— Peter Chilton, Research Fellow


  1. Yuan C, Fondell E, Bhushan A, Ascherio A, Okereke OI, Grodstein F, Willett WC. Long-term intake of vegetables and fruits and subjective cognitive function in US men. Neurology. 2018.

Ethical Dilemmas for Cars

Imagine a scenario in the not-too-distant future where self-driving cars have become commonplace. You and your family are being driven along by such a car, splitting your attention between checking your emails, talking to your passengers, and occasionally looking at the road ahead of you. As you are passing a lorry parked on the other side of the road, a person runs out into the road in front of you. What would you expect the artificial intelligence controlling your car to do? The obvious choice is to try to avoid hitting the person – immediately applying the brakes and swerving to the side. But what if the brakes fail? What is the likely risk of harm to yourself and your passengers if you hit the lorry? If avoiding a collision is impossible, what should the AI choose to do?

A large team of psychologists, anthropologists and economists from various countries created an online quiz posing various ethical dilemmas to the public.[1] Respondents were given 13 scenarios involving an unavoidable fatal collision, and asked to decide whether to swerve or do nothing, and thus who to spare – for example, humans (vs. pets), young people (vs. old), people of higher status (vs. lower), females (vs. males), healthier people (vs. unhealthier), pedestrians (vs. passengers), etc. After 18 months they had received responses from 233 countries and territories, amounting to more than 40 million decisions. Overall, the strongest preferences were for sparing a human over a pet, a group of people over an individual, and a younger person over an older person. The most spared characters were the baby, girl, boy and pregnant woman; the least spared were the cat, criminal and dog.

However, analysis of further questions revealed three distinct groups of countries, aligned by shared morals – predominantly Western countries (e.g. North America, Europe, Australia); predominantly Eastern countries (e.g. Japan, Indonesia, Pakistan); and predominantly Southern countries (e.g. Central and South America), along with France. The first group were more likely than the other two to choose inaction over swerving; the second group were more likely to spare pedestrians or the lawful; while the third group were more likely to spare females, the fit, the young, and those of higher status.

Of course, in real-life situations an artificial intelligence would need to identify different people with certainty, and would need to factor in the probability of death, harm, etc., but this is an important first step. Before self-driving cars can be allowed onto real roads there need to be in-depth global discussions about the ethical dilemmas artificial intelligences will face, and assurance that all car manufacturers take heed of the resulting principles.

— Peter Chilton, Research Fellow


  1. Awad E, Dsouza S, Kim R, et al. The Moral Machine experiment. Nature. 2018.

Counterintuitive Findings in Cervical Cancer Surgery

In recent years there has been an increase in the use of minimally invasive surgeries for a number of cancers, with many, such as uterine, colorectal, or gastric cancers, showing similar survival rates to traditional open surgery. Although there hasn't been much specific evidence for the use of minimally invasive hysterectomy in patients with cervical cancer, it has steadily been adopted in a number of countries. Traditional open surgery for hysterectomy has been associated with considerable perioperative and long-term complications, while minimally invasive hysterectomy has been shown to reduce the risk of infection and improve recovery times.

The New England Journal of Medicine has recently published the results of two separate studies looking at differences in survival rates following minimally invasive surgery (laparoscopy) compared to open surgery (laparotomy) for radical hysterectomy in cervical cancer patients.[1][2] One study, by Ramirez et al., was a randomised controlled trial conducted in 33 centres across the globe,[1] while the other, by Melamed et al., was an observational study using a US dataset.[2] Both looked at a similar subset of patients with a similar period of follow-up.

In the RCT 563 patients underwent one of the two types of hysterectomy, and follow-up at four and a half years showed a significant difference in disease-free survival – 86.0% of those who had undergone minimally invasive surgery compared to 96.5% who had undergone open surgery (difference of -10.6 percentage points, 95% CI -16.4 to -4.7). Further, the minimally invasive surgery was associated with a lower rate of overall survival at three years (93.8% vs. 99.0%) with a hazard ratio for death of 6.00 (95% CI 1.77-20.30).
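The reported difference of -10.6 percentage points can be sanity-checked with simple two-proportion arithmetic. The sketch below uses a naive Wald interval and assumed arm sizes of roughly 280 per arm (563 were randomised in total); the trial itself used survival-analysis methods, so its published interval differs somewhat. The function name and arm sizes are our assumptions.

```python
import math

def risk_difference(p1, n1, p0, n0, z=1.96):
    """Difference in two proportions with a simple Wald 95% CI.

    This is only a back-of-envelope check: the trial reported
    survival-analysis estimates, not raw proportions.
    """
    diff = p1 - p0
    se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return diff, diff - z * se, diff + z * se

# 86.0% disease-free survival (minimally invasive) vs. 96.5% (open),
# assuming ~280 patients per arm.
diff, lo, hi = risk_difference(0.860, 280, 0.965, 280)
print(f"{diff * 100:.1f} pp (95% CI {lo * 100:.1f} to {hi * 100:.1f})")
```

The rough interval lands in the same territory as the trial's reported -16.4 to -4.7 percentage points, which is reassuring given the simplification.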

In the observational study, the authors looked at 2,461 women who underwent a hysterectomy and found that after four years 90.9% of those who had minimally invasive surgery survived, compared to 94.7% of those who had undergone open surgery (hazard ratio 1.65, 95% CI 1.22-2.22). Looking at a longer time period of data, the widespread adoption of minimally invasive surgery in 2006 coincided with a decline in the four-year relative survival rate of 0.8% per year (p=0.01).

So, here we have another two studies in which the results of a randomised trial broadly agree with those of an observational study,[3] and with a large and significant effect. Looking at the methods used, this counterintuitive effect is not accounted for by a more complex excision being performed during the open surgery. Instead, it may be something to do with the technique – could manipulation of the cervix during laparoscopy, or exposure of the tumour to circulating CO2, lead to the release of cancerous cells into the patient's bloodstream?

What we would like to know from News Blog readers is whether they know of any studies where someone has counted (using PCR or cell separation) to see if cancer cells are released into circulation when a tumour is manipulated. Please let us know.

— Peter Chilton, Research Fellow


  1. Ramirez PT, Frumovitz M, Pareja R, et al. Minimally Invasive versus Abdominal Radical Hysterectomy for Cervical Cancer. N Engl J Med. 2018.
  2. Melamed A, Margul DJ, Chen L, Keating NL. Survival after Minimally Invasive Radical Hysterectomy for Early-Stage Cervical Cancer. N Engl J Med. 2018.
  3. Lilford RJ. RCTs versus Observational Studies: This Time in the Advertisement Industry. NIHR CLAHRC West Midlands News Blog. 29 June 2018.

Lead Exposure and DALYs

It is well known that exposure to lead can cause a number of health problems, such as cognitive impairment, cardiovascular problems and low birth weight. Exposure is also associated with decreased life expectancy and economic output. While many countries have banned the use of lead in products such as petrol and paints, leading to significant declines in the levels of lead recorded in people's blood (termed blood lead levels – BLLs), there are still numerous other sources of exposure. In India, for example, studies found elevated BLLs in the population more than ten years after leaded petrol was phased out; sources include lead smelting sites, some ayurvedic medicines, cosmetics, contaminated food, and contaminated tube wells, rivers and soil. In order to assess the extent of elevated BLLs in India, Ericson and colleagues conducted a meta-analysis of 31 studies totalling 67 samples.[1] Overall, they found a mean BLL of 6.86 μg/dL (95% CI 4.38-9.35) in children, and 7.52 μg/dL (95% CI 5.28-9.76) in adults who did not work with lead. As a reference point, the CDC deems a BLL of 5 μg/dL as requiring prompt medical investigation, “based on the 97.5% of BLL distribution among children… in the United States”.[2] From these figures the authors estimated that such high levels of exposure resulted in a loss of 4.9 million DALYs (95% CI 3.9-5.6 million) in 2012. Further, data from other studies suggest that a BLL of 0.1-1.0 μg/dL contributes to the loss of a single IQ point, meaning the levels of lead seen in these children would result in an average loss of four IQ points (95% CI 2.5-4.7).

The authors fear that a significant amount of the lead exposure stems from used lead-acid batteries from motor vehicles, which are often processed informally, and thus call for better regulation and larger studies.

— Peter Chilton, Research Fellow


  1. Ericson B, Dowling R, Dey S, et al. A meta-analysis of blood lead levels in India and the attributable burden of disease. Environ Int. 2018; 121(1): 461-70.
  2. Centers for Disease Control and Prevention. CDC Response to Advisory Committee on Childhood Lead Poisoning Prevention Recommendations in “Low Level Lead Exposure Harms Children: A Renewed Call for Primary Prevention”. 2012.

Food Allergies and Childbirth

In a previous News Blog we looked at the practice of swabbing babies delivered via Caesarean section with vaginal fluid in an attempt to reduce the incidence of allergies in the child.[1] Another study has now been reported that could potentially strengthen this argument.[2] This nationwide cohort study, conducted in Sweden, looked at over 1 million children, their route of delivery, and the incidence of food allergies. Overall, 2.5% of children were diagnosed with a food allergy, and diagnosis was positively associated with delivery via C-section (hazard ratio 1.21, 95% CI 1.18-1.25) – both elective and emergency. Analysis of the data suggests that an extra 5 in 1,000 children delivered via C-section would develop a food allergy (compared to the reference group).

Interestingly, there was also a negative association for those born very prematurely (earlier than 32 weeks) (HR 0.74, 95% CI 0.56-0.98). The authors suggest this may be due to the postnatal care preterm infants receive, or to an immature gastrointestinal tract.

— Peter Chilton, Research Fellow


  1. Lilford RJ. Exposure of the Baby to a Rich Mixture of Coliform Organisms from the Birth Canal. NIHR CLAHRC West Midlands News Blog. 22 April 2016.
  2. Mitselou N, Hallberg J, Stephansson O, Almqvist C, Melén E, Ludvigsson JF. Cesarean delivery, preterm birth, and risk of food allergy: Nationwide Swedish cohort study of more than 1 million children. J Allergy Clin Immunol. 2018.

Low-Tech Solution to a Devastating Infection

What do you do when you finish a bottle of shampoo? Throw it straight in the recycling bin? Turn it into a child's space rocket? Well, for Dr Mohamad Chisti, one such discarded bottle became the inspiration for a treatment for pneumonia.

Globally, more than 920,000 children died from pneumonia in 2015, accounting for around 16% of all deaths in under-fives.[1] This rate is far higher in low-income countries, such as Bangladesh, where the figure is 28%.[2] This is partly due to the greater prevalence of malnutrition: pneumonia causes inflammation of the alveoli in the lungs, resulting in breathlessness and difficulty breathing, and malnourished children do not have the energy required to breathe in enough oxygen. The standard treatment listed in World Health Organization guidelines is to deliver 'low-flow' oxygen through a face mask or tubes near the nostrils, but breathing still requires a lot of effort. While visiting Australia, Dr Chisti was shown a bubble-CPAP ventilator for premature babies. This type of ventilator passes exhaled breath through water; the resulting bubbles maintain a gentle pressure that helps keep the lungs open and makes breathing easier. However, the device is prohibitively expensive for many hospitals in low-income countries. When Dr Chisti spotted a discarded shampoo bottle he realised it would be possible to recreate such a ventilator at a fraction of the cost.[3] Results of a trial assessing the efficacy of bubble-CPAP for children with pneumonia were published in the Lancet in 2015, with positive results,[4] and since then the mortality rate at the Dhaka Hospital where the device is used routinely has significantly decreased, as have associated costs.[5] Further trials are now being carried out in other hospitals.

— Peter Chilton, Research Fellow


  1. UNICEF. Pneumonia. 2018.
  2. International Centre for Diarrhoeal Disease Research, Bangladesh. Pneumonia and other respiratory diseases. 2018.
  3. Duke T. CPAP: a guide for clinicians in developing countries. Paediatr Int Child Health. 2013; 34(1): 3-11.
  4. Chisti MJ, Salam MA, Smith JH, et al. Bubble continuous positive airway pressure for children with severe pneumonia and hypoxaemia in Bangladesh: an open, randomised controlled trial. Lancet. 2015; 386: 1057-65.
  5. The Economist. How a shampoo bottle is saving young lives. The Economist. 6 September 2018.

Electronic Health Records and Mortality Rate

In a number of our previous blogs we have looked at the impact of electronic health records,[1-3] and now we add another.[4] In a paper recently published in Health Affairs, the authors found that adoption of electronic health records was associated with an improvement in thirty-day mortality rates. Although mortality rates increased when hospitals initially introduced the system (by 0.11 percentage points per function adopted [such as radiology reports, laboratory reports, radiology images, medication prescribing, etc.]), this improved over time (presumably as staff became more familiar with the system and were able to integrate it into their work), and eventually mortality rates decreased by 0.09 percentage points per year, per function adopted. Adding new functions during the study period brought further improvements, with a decrease of 0.21 percentage points per year, per function. These improvements were greatest in smaller hospitals and those that were not teaching hospitals – perhaps because such hospitals have more opportunity for improvement, as they are less likely to have engaged in other initiatives, or may have lacked the resources to implement them.

— Peter Chilton, Research Fellow


  1. Lilford RJ. Going Digital – The Electronic Patient Record. NIHR CLAHRC West Midlands News Blog. 6 May 2016.
  2. Lilford RJ. Electronic Health Record System and Adverse Outcomes. NIHR CLAHRC West Midlands News Blog. 28 October 2016.
  3. Lilford RJ. If You Have Time for Only One Article. NIHR CLAHRC West Midlands News Blog. 24 August 2018.
  4. Lin SC, Jha AK, Adler-Milstein J. Electronic Health Records Associated With Lower Hospital Mortality After Systems Have Time To Mature. Health Aff. 2018; 37(7).

Changes in Mealtimes Leading to Eating Less

People have long looked for a method of dieting that is effective and easy to follow. A recent pilot study in the Journal of Nutritional Science may offer a new alternative.[1] For ten weeks, participants were required both to delay their usual breakfast time and to bring forward their evening meal time by an hour and a half; there were no other restrictions on what food they could consume or what exercise they needed to do. Compared with a control group, participants in the intervention group reduced their daily energy intake (p=0.019), with an associated reduction in adiposity (p=0.047). Further, there was a significant difference in fasting glucose levels, though the authors note this was mainly due to an increase among control participants relative to baseline. Questionnaire results suggest that the reduction in energy intake may have been due to less time for snacking, and/or still feeling full from the previous meal. Unfortunately, the majority of participants found the restrictions too severe, impacting on their social and family life, and did not believe they could continue past the end of the study.

Although this was only a very small study of 13 participants, it points to a potential opportunity for future research.

— Peter Chilton, Research Fellow


  1. Antoni R, Robertson TM, Robertson MD, Johnston JD. A pilot feasibility study exploring the effects of a moderate time-restricted feeding intervention on energy intake, adiposity and metabolic physiology in free-living human subjects. J Nutr Sci. 2018; 7: e22.