Ever Increasing Life Expectancies Come to an Abrupt End Among American Whites

Big discontinuities are fascinating. Just when we think we understand something, the trend line changes radically. Examples of unexpected discontinuities include the massive decline in smoking among African-Americans in the 1980s [1]; the drop in crime in high-income cities over the last decade or so [2]; and the recent drop in teenage pregnancy rates.[3] These are favourable trends, in contrast to the sudden end of the year-on-year decline in mortality among the majority population of one large country – white people in the US.[4] Anne Case and Angus Deaton drill down into the numbers in their recent paper:

  1. Is this trend confined to white people? Yes, black and Hispanic people continue to experience declining mortality rates.
  2. Is this trend seen in other high-income countries? No – in France, Sweden, Japan and the UK, age-specific mortality continues to decline across the populations.
  3. How does it differ among whites by economic class? Using education as a proxy, a decline in life expectancy is confined to those with no education beyond high-school.
  4. What diseases are driving it? ‘Deaths of despair’ (suicide, alcoholic cirrhosis, drug overdose) are rising among white people in the US in absolute terms, and in comparison with non-white groups and with other countries. Cardiovascular deaths are no longer declining among whites in the US, even as they continue to do so in other countries. The rise in ‘deaths of despair’, combined with the stalled decline in cardiovascular deaths, is enough to extinguish the overall downward trend in mortality.
  5. Is the phenomenon localised geographically? No, the ‘epidemic’ in ‘deaths of despair’ among white people covers rural and urban areas, and has pretty much become country-wide.
  6. Is the problem gender specific? No, the rise in ‘deaths of despair’ among the less-educated group affects both women and men.
  7. What are the long-term trends? While differences in mortality between better-educated and less-well-educated groups are narrowing in Europe, the gap is widening among whites in the US. This widening gap is also reflected in changes in self-assessed health.

So is all this really just a reflection of widening economic disparities? No:

  1. Disparities are widening within the black community and between black people and white people. However, mortality is converging between rich and poor black and Hispanic people, and ‘deaths of despair’ are not increasing in these ethnic groups.
  2. Widening disparities are seen in all comparator countries – in Spain, ‘deaths of despair’ actually declined through a vicious economic downturn between 2007 and 2011, for example.
  3. The difference in outcome correlates much more strongly with change in education than change in income.
  4. Historically there are many instances when mortality and inequality have moved in different directions, and selective reporting can be used by unscrupulous ideologues to buttress either side of this argument.

So why has it happened? Here we need to turn to sociology (in some desperation). A novel called ‘Fishtown’ (by Neal Goldstein) captures some of the sociology: a tale of a rising feeling of purposelessness as workers overseas and machines at home combine to force less-educated people (men especially) out of jobs. Such people rely on welfare, while immigrants take over the lowest-paid jobs. Another explanation turns on the idea of differentials – this time between whites and non-whites, and on loss of status rather than failure to achieve it – “if you have always been privileged, equality begins to look like oppression.” Case and Deaton are careful to point out that the above explanations are not strongly supported by the data. But there is something ‘out there’ – a ‘latent variable’ with a long memory (i.e. operating over the life course of various ‘cohorts’ of people). Many commentators pretend they have understood these latent variables, but I think we are going to have to look a lot harder and resist the beguiling but facile explanations offered up by journalists, political commentators, and academics alike (a point pursued in the next exciting instalment of your News Blog).

— Richard Lilford, CLAHRC WM Director

References:

  1. Oredein T & Foulds J. Causes of the Decline in Cigarette Smoking Among African American Youths From the 1970s to the 1990s. Am J Public Health. 2011; 101(10): e4-14.
  2. The Economist. Falling crime. Where have all the burglars gone? The Economist. 20 July 2013.
  3. Wellings K, Palmer MJ, Geary RS, et al. Changes in Conceptions in Women Younger Than 18 Years and the Circumstances of Young Mothers in England in 2000-12: an Observational Study. Lancet. 2016; 388: 586-95.
  4. Case A, & Deaton A. Mortality and morbidity in the 21st century. Brookings Papers on Economic Activity. BPEA Conference Drafts. March 23-24, 2017.

Reducing Class Size

News Blog readers know that from time to time I make a diversion into the territory of evidence-based education. On the way home from work recently, I listened to a discussion about the merits of reduced class size. One of the protagonists argued that reducing class size was very beneficial to learning outcomes. The other said that educational outcomes were hardly affected by class size. So I turned again to Hattie’s monumental work.[1] There was support for both positions from this well-studied intervention; the debate concerns the magnitude of the effect. The total number of students across the studies is about one million, and the effect of reducing class size from about 25 to about 15 is about 0.15 of a standard deviation. This might sound like a nugatory effect (as argued by one of the debaters). However, an effect of this magnitude represents about half a year of learning achievement. Remember, an effect size of only 0.3 standard deviations represents a whole year, and an effect size of 1.0 represents going on for three years of achievement, on average. Reducing class size is much less effective than many other interventions, but it still seems highly desirable. There is also an argument that teacher satisfaction and retention might be improved by smaller class sizes. However, when all is said and done, class size is not nearly as important as teacher ability (which in turn is not nearly as important as student ability, but that is a given for any particular class).
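To make the arithmetic explicit, here is a minimal sketch (in Python) that translates an effect size in standard-deviation units into approximate years of learning, taking 0.3 of a standard deviation as roughly one year of progress, as above. The conversion factor and the function name are my own illustrative assumptions, not part of Hattie’s method.

```python
# Rough conversion of an effect size (in standard deviations) into approximate
# years of learning, using the rule of thumb quoted above.
# Assumption: ~0.3 SD corresponds to about one year of progress.

SD_PER_YEAR = 0.3  # assumed conversion factor taken from the text above


def effect_size_to_years(effect_size_sd: float) -> float:
    """Translate an effect size in SD units into approximate years of learning."""
    return effect_size_sd / SD_PER_YEAR


if __name__ == "__main__":
    class_size_effect = 0.15  # reducing class size from ~25 to ~15 pupils
    years = effect_size_to_years(class_size_effect)
    print(f"{class_size_effect} SD is roughly {years:.1f} years of learning")
    # Prints ~0.5 years, i.e. about half a year, consistent with the text.
```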

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Hattie J. Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Oxon, UK: Routledge, 2009.

The Second Machine Age

I must thank Dr Sebastiaan Mastenbroek (AMC, Amsterdam) for giving me a copy of The Second Machine Age by Brynjolfsson and McAfee.[1] At first I thought it was just another of those books describing how computers were going to take over the world.[2] Indeed, the first part of the book is repetitive and not particularly insightful when it comes to the marvels of modern computers – I recently debated this subject live on the BBC World Service with another author, Daniel Susskind. However, the economic consequences of the second machine age are much more adroitly handled. The authors make a case that the wide disparities in wealth that have arisen over the last few decades are not entirely a function of globalization. The coming of computers has also had a large effect, increasing demand for jobs with a high cognitive content while reducing demand at the other end of the intellectual scale. Fortunately the book does not fall into the Luddite error of trying to hold back the progress of technology. That would be like the Ottoman Empire, which tried to ban printing. No, progress must continue, but it must be managed. The authors consider a universal basic income, but argue that it is too early for this. I agree. They also argue for a negative income tax. Such a tax does not act as a disincentive to work and has a lot going for it. All in all, this is one of the more sure-footed accounts of the economic consequences of the second machine age.
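To see why a negative income tax need not discourage work, here is a minimal sketch with entirely hypothetical parameters (a guaranteed income floor and a single flat withdrawal rate – my own illustrative numbers, not figures proposed by Brynjolfsson and McAfee). Because the subsidy is withdrawn at less than 100%, every extra pound earned still raises net income.

```python
# Minimal sketch of a negative income tax (NIT) schedule.
# All parameters are hypothetical, chosen only to illustrate the incentive argument.

GUARANTEED_INCOME = 10_000  # payment to someone with no earnings (assumed figure)
WITHDRAWAL_RATE = 0.40      # flat rate at which earnings are taxed / the subsidy withdrawn (assumed)


def net_income(gross_earnings: float) -> float:
    """Net income under a simple NIT: the guaranteed floor plus earnings kept after tax.

    Below the break-even point the person receives a net transfer (a 'negative tax');
    above it they pay tax in the ordinary way - one schedule covers both cases.
    """
    return GUARANTEED_INCOME + (1 - WITHDRAWAL_RATE) * gross_earnings


if __name__ == "__main__":
    for gross in (0, 5_000, 15_000, 25_000, 40_000):
        print(f"gross {gross:>6,} -> net {net_income(gross):>8,.0f}")
    # Net income rises with every extra unit earned, so - unlike a benefit withdrawn
    # pound-for-pound - the scheme never makes additional work unprofitable.
```

The same schedule covers both the ‘negative tax’ region (a net transfer to low earners) and ordinary taxation above the break-even point, which is part of the scheme’s appeal.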

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Brynjolfsson E & McAfee A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York, NY: W. W. Norton & Company; 2014.
  2. Lilford RJ. A Book for a Change. NIHR CLAHRC West Midlands News Blog. 29 January 2016.

Childhood Decline in Physical Activity

Previously we have seen evidence from a cohort of children and adolescents in Norfolk that the decline in physical activity among modern young people takes place after childhood, during adolescence.[1] However, in the majority of studies, including the one we looked at, the estimates of activity are largely based on self-report. Now comes a new paper from Farooq and colleagues using objective measurements from accelerometers.[2] This new study suggests that the previous conclusion was probably wrong: there is a decline in moderate-to-vigorous physical activity throughout childhood and adolescence. A further interesting finding is that this study, based on objective measurements of activity, did not replicate the prevailing view that energy expenditure declines more rapidly in girls than in boys. This paper has considerable implications for policy. I would like to thank Professor Jeremy Dale for bringing this important paper to my attention.

— Richard Lilford, CLAHRC WM Director

References:

  1. Corder K, van Sluijs EMF, Ekelund U, Jones AP, Griffin SJ. Change in children’s physical activity over 12 months; longitudinal results from the SPEEDY study. Pediatrics. 2010; 126(4): e926-35.
  2. Farooq MA, Parkinson KN, Adamson AJ, et al. Timing of the decline in physical activity in childhood and adolescence: Gateshead Millennium Cohort Study. Br J Sports Med. 2017.

And Today We Have the Naming of Parts*

Management research, health services research, operations research, quality and safety research, implementation research – a crowded landscape of words describing concepts that are, at best, not entirely distinct and, at worst, synonyms. Some definitions are given in Table 1. Perhaps the easiest to deal with is ‘operations research’, which has a rather narrow meaning: it describes mathematical modelling techniques used to derive optimal solutions to complex problems, typically dealing with the flow of objects (or people) over time. So it is a subset of the broader genre covered by this collection of terms. Quality and safety research puts the cart before the horse by defining the intended objective of an intervention, rather than where in the system the intervention acts. Since interventions at the system level may have many downstream effects, it seems illogical, and indeed potentially harmful, to define research by its objective – an argument made in greater detail elsewhere.[1]
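To give a flavour of that narrower, ‘flow of objects over time’ sense of operations research, here is a minimal sketch using the standard single-server (M/M/1) queueing formula. The clinic arrival and service rates are invented for illustration and are not drawn from any of the programmes discussed here.

```python
# A flavour of operations research: expected waiting time in a simple M/M/1 queue
# (single server, random arrivals and service times). Standard textbook formula;
# the clinic parameters below are invented for illustration.

def mm1_mean_wait(arrival_rate: float, service_rate: float) -> float:
    """Mean time spent queueing (excluding service) in an M/M/1 queue: W_q = rho / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrivals must be slower than service")
    utilisation = arrival_rate / service_rate
    return utilisation / (service_rate - arrival_rate)


if __name__ == "__main__":
    # Hypothetical clinic: 5 patients arrive per hour; the clinician can see 6 per hour.
    print(f"Mean wait: {60 * mm1_mean_wait(arrival_rate=5, service_rate=6):.0f} minutes")
    # Small changes in capacity have large effects on waits:
    print(f"Mean wait with capacity of 7/hour: {60 * mm1_mean_wait(arrival_rate=5, service_rate=7):.0f} minutes")
```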

Health Services Research (HSR) can be defined as management research applied to health, and is an acceptable portmanteau term for the construct we seek to define. For those who think the term HSR leaves out the development and evaluation of interventions at service level, the term Health Services and Delivery Research (HS&DR) has been devised. We think this is a fine term to describe management research as applied to the health services, and are pleased that the NIHR has embraced it and now has two major funding schemes – the HTA programme dealing with clinical research, and the HS&DR programme dealing with management research. In general, interventions and their related research programmes can be neatly represented in a modified Donabedian chain, as shown in the framework below:

[Figure 1: interventions and research programmes represented as a modified Donabedian chain]

So what about implementation research then? Wikipedia defines implementation research as “the scientific study of barriers to and methods of promoting the systematic application of research findings in practice, including in public policy.” However, a recent paper in BMJ states that “considerable confusion persists about its terminology and scope.”[2] Surprised? In what respect does implementation research differ from HS&DR?

Let’s start with the basics:

  1. HS&DR studies interventions at the service level. So does implementation research.
  2. HS&DR aims to improve outcome of care (effectiveness / safety / access / efficiency / satisfaction / acceptability / equity). So does implementation research.
  3. HS&DR seeks to improve outcomes / efficiency by making sure that optimum care is implemented. So does implementation research.
  4. HS&DR is concerned with the implementation of knowledge: first, knowledge about what clinical care should be delivered in a given situation; and second, knowledge about how to intervene at the service level. So is implementation research.

This latter concept – that two types of knowledge (clinical and service delivery) are implemented in HS&DR – is a critical one. It seems poorly understood and causes many researchers in the field to ‘fall over their own feet’. The concept is represented here:

[Figure 2] HS&DR / implementation research resides in the South East quadrant.

Despite all of this, some people insist on keeping the distinction between HS&DR and Implementation Research alive – as in the recent Standards for Reporting Implementation Studies (StaRI) Statement.[3] The thing being implemented may be a clinical intervention, in which case the above figure applies. Or it may be a service delivery intervention. Then, they say, once it is proven it must be implemented, and this implementation can be studied – in effect they are arguing for a third ring:

[Figure 3]

This last, extreme South East, loop is redundant because:

  1. Research methods do not turn on whether the research is HS&DR or so-called Implementation Research (as the authors acknowledge). So we could end up in the odd situation of the HS&DR being a before-and-after study, and the Implementation Research being a cluster RCT! The so-called Implementation Research is better thought of as more HS&DR – seldom is one study sufficient.
  2. The HS&DR itself requires the tenets of Implementation Science to be in place – following the MRC framework, for example, and identifying barriers and facilitators. There is always implementation in any piece of evaluative research, so all HS&DR is Implementation Research – some is early and some is late.
  3. Replication is a central tenet of science and enables context to be explored. For example, “mother and child groups” is an intervention that was shown to be effective in Nepal. It has now been ‘implemented’ in six further sites under cluster RCT evaluation. Four of the seven studies yielded positive results, and three null results. Comparing and contrasting has yielded a plausible theory, so we have a good idea for whom the intervention works and why.[4] All seven studies are implementations, not just the latter six!

So, logical analysis does not yield any clear distinction between Implementation Research on the one hand and HS&DR on the other. The terms might denote some subtle shift of emphasis, but as a communication tool in a crowded lexicon, we think that Implementation Research is a term liable to sow confusion, rather than generate clarity.

Table 1

Term | Definition | Source
Management research | “…concentrates on the nature and consequences of managerial actions, often taking a critical edge, and covers any kind of organization, both public and private.” | Easterby-Smith M, Thorpe R, Jackson P. Management Research. London: Sage, 2012.
Health Services Research (HSR) | “…examines how people get access to health care, how much care costs, and what happens to patients as a result of this care.” | Agency for Healthcare Research and Quality. What is AHRQ? [Online]. 2002.
HS&DR | “…aims to produce rigorous and relevant evidence on the quality, access and organisation of health services, including costs and outcomes.” | INVOLVE. National Institute for Health Research Health Services and Delivery Research (HS&DR) programme. [Online]. 2017.
Operations research | “…applying advanced analytical methods to help make better decisions.” | Warwick Business School. What is Operational Research? [Online]. 2017.
Patient safety research | “…coordinated efforts to prevent harm, caused by the process of health care itself, from occurring to patients.” | World Health Organization. Patient Safety. [Online]. 2017.
Comparative effectiveness research | “…designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options.” | Agency for Healthcare Research and Quality. What is Comparative Effectiveness Research. [Online]. 2017.
Implementation research | “…the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).” | Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.

We have ‘audited’ David Peters and colleagues’ BMJ article and found that every attribute they claim for Implementation Research applies equally well to HS&DR, as you can see in Table 2. However, this does not mean we should abandon ‘Implementation Science’ – a set of ideas useful in designing an intervention. For example, stakeholders of all sorts should be involved in the design; barriers and facilitators should be identified; and so on. By analogy, I think Safety Research is a back-to-front term, but I applaud the tools and insights that ‘safety science’ provides.

Table 2

Attributes claimed for Implementation Research by Peters et al. [2] (each applies equally to HS&DR)
“…attempts to solve a wide range of implementation problems”
“…is the scientific inquiry into questions concerning implementation – the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (…interventions).”
“…can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation.”
“The intent is to understand what, why, and how interventions work in ‘real world’ settings and to test approaches to improve them.”
“…seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects.”
“…is especially concerned with the users of the research and not purely the production of knowledge.”
“…uses [implementation outcome variables] to assess how well implementation has occurred or to provide insights about how this contributes to one’s health status or other important health outcomes.”
…needs to consider “factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources).”
“…takes a pragmatic approach, placing the research question (or implementation problem) as the starting point to inquiry; this then dictates the research methods and assumptions to be used.”
“…questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective.”
“A wide range of qualitative and quantitative research methods can be used…”
“…is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention.”

 — Richard Lilford, CLAHRC WM Director and Peter Chilton, Research Fellow

References:

  1. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  2. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.
  3. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017; 356: i6795.
  4. Prost A, Colbourn T, Seward N, et al. Women’s groups practising participatory learning and action to improve maternal and newborn health in low-resource settings: a systematic review and meta-analysis. Lancet. 2013; 381: 1736-46.

*Naming of Parts by Henry Reed, which Ray Watson alerted us to:

Today we have naming of parts. Yesterday,
We had daily cleaning. And tomorrow morning,
We shall have what to do after firing. But today,
Today we have naming of parts. Japonica
Glistens like coral in all of the neighbouring gardens,
And today we have naming of parts.

The Brain Speaketh Unto the Gut and the Gut Answereth Back

In the previous News Blog I mentioned the hypothesis that an altered gut microbiome may trigger chronic fatigue syndrome.[1] I promised more on the topic. Many years ago I chaired the Scientific Advisory Committee for the MRC ORACLE study, a trial of antibiotics versus no antibiotics for women with preterm rupture of membranes or in spontaneous preterm labour.[2] There were no differences in short-term outcomes between children of the antibiotic and control mothers. But CLAHRC WM associate Sara Kenyon and her colleagues followed the children up to the age of seven. The results showed markedly higher rates of cerebral palsy in the intervention (antibiotic) group than in the control (no antibiotics) group, and for one of the two antibiotics used the risk of other functional impairments was also increased.[3] I was inclined at the time to pass this off as a chance finding – a type 1 error. Now I am not so sure: recent evidence in Nature Communications [4] shows that antibiotics given to infant mice cause changes in their frontal cortices, affect the blood-brain barrier, and alter behaviour. These changes are partially preventable by probiotic administration. If maternal antibiotics are bad for the baby brain, then presumably so is neonatal antibiotic administration. It would be interesting to follow up neonates of a given gestational age, birth weight and clinical condition, comparing outcomes in those given antibiotics with those not. Yes, I know the comparison will be confounded by indication – sicker babies are more likely to receive antibiotics, so any bias would run against the antibiotic group – which is why a null result would be more informative than a positive result.

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Biological Underpinnings of Chronic Fatigue? NIHR CLAHRC West Midlands News Blog. 21 April 2017.
  2. Kenyon S, Taylor DJ, Tarnow-Mordi W, for the ORACLE Collaborative Group. Broad-spectrum antibiotics for preterm, prelabour rupture of fetal membranes: the ORACLE I randomised trial. Lancet. 2001; 357: 979-88.
  3. Kenyon S, Pike K, Jones DR, Brocklehurst P, Marlow N, Salt A, Taylor DJ. Childhood outcomes after prescription of antibiotics to pregnant women with spontaneous preterm labour: 7-year follow-up of the ORACLE II trial. Lancet. 2008; 372: 1319-27.
  4. Leclercq S, Mian FM, Stanisz AM, et al. Low-dose penicillin in early life induces long-term changes in murine gut microbiota, brain cytokines and behavior. Nat Commun. 2017; 8: 15062.

Yet Another Null Result on Vitamin D and Calcium Supplementation in Older Women

Hard on the heels of the systematic review covered in a recent blog [1] comes yet another RCT of calcium and vitamin D supplementation in healthy people.[2] This time the end-point is cancer, and again the result is null. The authors call for yet more research but, again, one wonders whether this topic should not simply be put to bed. It is true, of course, that exposure to sunlight is associated with a lower risk of cancer, but this might not be a causal relationship; and even if it is, sunlight and oral vitamin D are not the same thing, just as oral and ovarian oestrogen are not equivalent.

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Effects of Vitamin D Supplementation. NIHR CLAHRC West Midlands News Blog. 24 March 2017.
  2. Lappe J, Watson P, Travers-Gustafson D, et al. Effect of Vitamin D and Calcium Supplementation on Cancer Incidence in Older Women. A Randomized Clinical Trial. JAMA. 2017; 317(12): 1234-43.

An Intriguing Suggestion to Link Trial Data to Routine Data

When extrapolating from trial data to a particular context, it is important to compare the trial population with the target population. Given sufficient data, it is possible to examine the treatment effect across important subgroups of patients; the trial results can then be related to a specific subgroup, say one with less severe disease than the trial average. One problem is that trial data are collected with greater diligence than routine data. Hence the suggestion to link trial data to routine data collected on the same patients: that way one can compare subgroups of trial and non-trial patients recorded in a broadly similar (i.e. routine) way.[1] This strikes me as a half-way house to the day when (most) trial data are collected by routine systems, and trials are essentially nested within routine data-collection systems.
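As a simplified illustration of the idea (not the specific method of Najafzadeh and Schneeweiss), the sketch below reweights subgroup-specific trial effect estimates by the subgroup mix found in a routine-data target population – a form of direct standardisation. The subgroup labels and all numbers are invented for illustration.

```python
# Toy illustration of calibrating a trial result to a target population:
# reweight subgroup-specific treatment effects from the trial by the subgroup
# mix observed in routine data. All labels and numbers are invented.

# Subgroup-specific treatment effects estimated in the trial (e.g. risk differences).
trial_effects = {"mild": -0.02, "moderate": -0.05, "severe": -0.10}

# Subgroup shares in the trial versus in the routine-data target population.
trial_mix = {"mild": 0.20, "moderate": 0.40, "severe": 0.40}
target_mix = {"mild": 0.60, "moderate": 0.30, "severe": 0.10}


def average_effect(effects: dict, mix: dict) -> float:
    """Weighted average effect for a population with the given subgroup mix."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "subgroup shares must sum to 1"
    return sum(effects[group] * share for group, share in mix.items())


if __name__ == "__main__":
    print(f"Average effect in the trial population:       {average_effect(trial_effects, trial_mix):+.3f}")
    print(f"Effect standardised to the target population: {average_effect(trial_effects, target_mix):+.3f}")
    # The trial, which recruited relatively severe cases, overstates the benefit
    # that the milder routine-care population could expect.
```

In practice, linking trial and routine records would also allow a check that subgroup characteristics are measured comparably in both sources, which is the point of recording them in a broadly similar way.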

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Najafzadeh M, Schneeweiss S. From Trial to Target Populations – Calibrating Real-World Data. N Engl J Med. 2017; 376: 1203-4.

Crying Infants – the Epidemiology of ‘Colic’

The period following childbirth is stressful for parents and uncontrollable crying is an important cause of this stress. Wolke and colleagues [1] have consolidated the results of studies across the world in a meta-analysis and show that:

  1. Crying peaks at around six weeks of age, and then declines sharply over the next three months.
  2. Bottle or mixed-fed babies cry less than those that are purely breastfed.
  3. Crying is much more common in some countries (Canada and UK) than others (Denmark and Japan), and this is a robust finding (i.e. replicated across many studies). I don’t suppose that this is the result of lower breastfeeding rates in Denmark and Japan than in Canada or the UK?

What the study does not show is how crying varies within families or by birth order. Nor does there seem to be an effective remedy for the problem. Pilgrim was right: it is not easy being a human – not at the beginning, not in the middle, and certainly not at the end.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Wolke D, Bilgin A, Samara M. Systematic Review and Meta-Analysis: Fussing and Crying Durations and Prevalence of Colic in Infants. J Pediatr. 2017.


More on Medical School Admission

I thank Celia Taylor for drawing my attention to an important paper on the relationship between personality test results and cognitive and non-cognitive outcomes at medical school.[1] Everyone accepts that being a good doctor is about much more than cognitive excellence. That isn’t the question. The question is how to select for the salient non-cognitive attributes. The paper is a hard read because one must first learn the acronyms for all the explanatory and outcome tests. So let the News Blog take the strain!

The study uses a database containing entry-level personality scores, which were not used in selection, and outcomes following medical training. To cut a long story short, “none of the non-cognitive tests evaluated in this study has been shown to have sufficient utility to be used in medical student selection.” And, of course, even if a better test is found in the future, it may perform differently when used as part of a selection process than when used for scientific purposes. I stick by the conclusions that Celia and I published in the BMJ many years ago:[2] until a test is devised that predicts non-cognitive medical skills, and assuming that cognitive ability is not negatively associated with non-cognitive attributes, we should select purely on academic ability. I await your vituperative comments! In the meantime may I suggest a research idea – correlate cognitive performance with the desirable compassionate skills we would like to see in our doctors. Maybe the correlation is positive, such that the more intelligent the person, the more likely they are to demonstrate compassion and patience in their dealings with patients.

— Richard Lilford, CLAHRC WM Director

References:

  1. MacKenzie RK, Dowell J, Ayansina D, Cleland JA. Do personality traits assessed on medical school admission predict exit performance? A UK-wide longitudinal cohort study. Adv Health Sci Educ Theory Pract. 2017; 22(2): 365-85.
  2. Brown CA, & Lilford RJ. Selecting medical students. BMJ. 2008; 336: 786.