Category Archives: Snippet

Update on Scientists Being Held Accountable for Impact of Research

I recently wrote a News Blog entry on the dangers of researchers acting as advocates for their own work. Readers may be interested in an article from an authoritative source that I chanced upon recently, published in BMC Medicine (Whitty CJM. What makes an academic paper useful for health policy? BMC Med. 2015; 13: 301).

— Richard Lilford, CLAHRC WM Director

Raison d’être for the CLAHRC News Blog

People sometimes ask the CLAHRC WM Director why he writes his fortnightly News Blog. “Isn’t it an awful lot of work?”, “People already have enough to read,” and, from his spouse, “Anyway, who wants to know what you think?” So here are some of the reasons for doing it:

  • It is a dissemination vehicle for the CLAHRC WM. CLAHRCs have to engage with people and get their message out. This is one way of doing so. It is a way of reaching our patient representatives and the management community who may not access the formal academic literature. People are interested to read what our CLAHRC is doing and how that relates to “issues of the day.”
  • It is a way of keeping the score – the News Blog keeps account of the intellectual and practical achievements chronologically – it is a sort of living history of our CLAHRC.
  • Writing for the News Blog makes me organise my thoughts, form a view and reflect on the implications of articles I summarise for my work. In other words it is good Continuing Professional Development.
  • It is actually quite fun to write.
  • I find it acts as a useful repository of references when I am writing research papers and grant applications.
  • It is a place where I can publish ideas so outlandish, controversial, perspicuous or ineffable that, while they may entertain, amuse, and even inform, they would never make it into a ‘serious’ academic journal.

— Richard Lilford, CLAHRC WM Director

The Origins of Systematic Reviews

Circa AD 150, Ptolemy of Alexandria produced his Geographia, a gazetteer, atlas and treatise on cartography, which compiled the geographical knowledge of the Roman Empire. How? By conducting what could be considered a systematic review. First, he searched a database to find all of the material on the topic: “the first step in a proceeding of this kind is systematic research, assembling the maximum of knowledge from the reports of people with scientific training who have toured the individual countries…”[1] This he did by consulting the Pinakes, the first known library catalogue, housed at the Library of Alexandria and created by Callimachus c.250 BC.[1] He then synthesised the data, which were ultimately used to inform policy and practice; for example, Christopher Columbus consulted a copy before he set out across the Atlantic Ocean. However, there was systematic bias in the data, leading to a major miscalculation of the distance to the East Indies.

— Prof Martin Underwood, Warwick Medical School

Reference:

  1. Brotton J. A History of the World in Twelve Maps. London: Penguin Books. 2013.

Medically Qualified Chief Executives

Two medically qualified NHS hospital chief executives have recently bitten the dust: first Mark Newbold from Heart of England Hospitals, and now Keith McNeil at Addenbrooke’s. Two swallows do not make a summer, and some medical chiefs have had very successful careers at the top – Sir Jonathan Michael, for example, is now on his third such appointment. Yet Newbold and McNeil had had successful earlier management careers at senior levels; they are impressive characters. Maybe it is harder for a medic than for a generic administrator to pull off the chief executive role. In many hospitals the chief executive and the medical director form a “duality”, with a close personal relationship and high levels of trust, such that the medical director can sample the mood of consultants and act as a “lightning conductor”. What do others think?

— Richard Lilford, CLAHRC WM Director

Bring Back the Ward Round

Diagnosis, diagnosis, diagnosis. Both this and a previous post have made the argument that diagnostic errors should receive more attention. An important and elegant paper from previous CLAHRC WM collaborator, Wolfgang Gaissmaier,[1] shows that diagnostic accuracy is improved when medical students work in pairs. Of course, paired working is not possible most of the time, but it does suggest that opportunities for doctors to ‘put their heads together’ should be created whenever possible. The old-fashioned ward round had much to commend it.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Hautz WE, Kämmer JE, Schauber SK, Spies CD, Gaissmaier W. Diagnostic Performance by Medical Students Working Individually or in Teams. JAMA. 2015; 313(3): 303-4.

Medicine for the soul…

This was the inscription above the door of the library in the ancient Egyptian city of Thebes. The Egyptians clearly recognised the therapeutic benefits of reading. Bibliotherapy is therefore not a new concept, but it has recently been re-packaged and rolled out in a national scheme called ‘Information/Books on Prescription’ (I/BOP).

A collaboration between health services, The Reading Agency and local libraries, the I/BOP project offers an alternative approach to dealing with mental health issues by enabling patients to ‘self-help’. The project was introduced nationally in June 2013. Put simply, a GP diagnosing early-stage mental health issues can write an ‘information prescription’, which recommends specific or general self-help books from a list compiled with input from health specialists. The patient takes the prescription to a local participating library, which provides the books.

A report by The Centre for Economic Performance’s Mental Health Policy Group states that “mental illness accounts for nearly 40% of morbidity, compared with, for example, 2% due to diabetes”.[1] The annual expenditure on healthcare for mental illness amounts to some £14 billion. Interventions such as I/BOP work to reduce both the financial and the human costs of mental illness.

The self-help approach through bibliotherapy has a ‘wealth of evidence’ supporting its use for illnesses such as depression, anxiety and self-harm.[2] Using book-based cognitive behavioural therapy, the I/BOP scheme reached 275,000 people during its first year, and saw a 113% increase in loans of titles on the list.[3] Patients are not required to have library membership, although evidence shows that those participating in the scheme are more likely to join and to borrow books in addition to those prescribed.

Furthermore, although the focus of I/BOP is on self-help books, The Reading Agency runs a parallel scheme, ‘Reading Well – Mood-Boosting Books’, which urges users of I/BOP (indeed everyone) to read the uplifting novels, non-fiction and poetry recommended on its reading list as a means of maintaining well-being. A study carried out by cognitive neurophysiologist Dr David Lewis suggests that reading for as little as six minutes can reduce stress levels by 68%, compared with listening to music (61%), having a cup of tea or coffee (54%), or taking a walk (42%).[4] Other studies have shown that the very act of reading literary fiction improves Theory of Mind (ToM), the ability to understand others’ emotions.[5] Although it may be argued that the I/BOP mood-boosting list comprises more popular than literary fiction, such findings give further insight into the positive cognitive effects of reading literature.

On 26 January 2015 an I/BOP scheme specifically aimed at sufferers of dementia and their carers was launched nationally, and a scheme for children and young people with mental health issues is expected to launch in 2016. To discover more about the scheme and the reading lists, follow the link to The Reading Agency website.[3] I would be interested to hear your suggestions of ‘mood-boosting books’.

— Michelle Brown, Administrative Assistant

References:

  1. The Centre for Economic Performance’s Mental Health Policy Group. How mental health loses out in the NHS. London: The London School of Economics and Political Science. 2012.
  2. Chamberlain D, Heaps D, Robert I. Bibliotherapy and information prescriptions: a summary of the published evidence-base and recommendations from past and ongoing Books on Prescription projects. J Psychiatr Mental Health Nurs. 2008; 15: 24-36.
  3. The Reading Agency. Reading Well Books on Prescription Evaluation Report 2013/14. London: The Reading Agency. 2013.
  4. Telegraph Health News. Reading ‘can help reduce stress’. The Telegraph. 30 March 2009.
  5. Kidd DC, Castano E. Reading Literary Fiction Improves Theory of Mind. Science. 2013; 342: 377-80.

Extraordinary claims require extraordinary evidence

The subject of last issue’s quiz was a study from the Tufts Center for the Study of Drug Development giving new estimates of the cost of developing a drug. As rightly stated, the estimate was $2.6 billion. This study is an update of the original study by DiMasi and colleagues,[1] whose finding that the cost (in 2000 USD) of drug development was close to $1 billion has achieved near canonical status. However, considerable doubt has been cast on these claims, and the criticisms of the original study apply equally to the new research. Light and Warburton’s critique [2] [3] made a number of points, including:

  • the original survey data lacked comparability, reliability and transparency (the data were not made publicly available);
  • pharmaceutical companies had a clear interest in overstating their costs in survey responses;
  • neither the firms nor the drugs considered were random samples;
  • the only drugs considered were “self-originated new chemical entities” (NCEs), whose development costs are many times higher than those of acquired or licensed-in NCEs, new formulations, combinations, or administrations of existing drugs, and which comprise only around 22% of new drug approvals;
  • government subsidies were not deducted; and
  • there was no adjustment for tax deductions and credits.

Articles in major journals based on industry-sponsored research are three to four times more likely to report results favourable to the sponsors than articles with independent funding.[4] [5] Considerable variation therefore exists in estimates of the cost of drug development. Light and Warburton have estimated the median figure to be roughly a tenth of the original DiMasi estimate.[6] While this may seem (perhaps implausibly) low, it certainly suggests we should take industry-sponsored research that affects health policy with a healthy dose of scepticism.

— Sam Watson, University of Warwick

References:

  1. DiMasi JA, Hansen RW, Grabowski HG. The price of innovation: new estimates of drug development costs. J Health Econ. 2003; 22(2): 151-85.
  2. Light DW & Warburton RN. Extraordinary claims require extraordinary evidence. J Health Econ. 2005; 24(5): 1030-3.
  3. Light DW & Warburton RN. Setting the record straight in the reply by DiMasi, Hansen and Grabowski. J Health Econ. 2005; 24(5): 1045-8.
  4. Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA. 2003; 289(4): 454-65.
  5. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ. 2003; 326: 1167.
  6. Light DW & Warburton R. Demythologizing the high costs of pharmaceutical research. BioSocieties. 2011; 6: 34-50.

Evaluating Service Interventions

CLAHRCs were invented to align research with service change. As a result of such alignment, here in the West Midlands we have been able to evaluate service interventions within the scope and timescale of the service imperative, including:

  • Peer support for mothers at high social risk to see whether there is an improvement in mother/child bonding – individual person RCT.[1]
  • Rearranging mental health services to reduce delay in treatment for schizophrenia in adolescents – multi-centre before and after study.[2]
  • Case-finding for cardiovascular risk in primary care to assess uptake of services – stepped-wedge cluster RCT.[3]
  • Educational intervention to improve attitudes to mental health in schools – cluster parallel RCT.[4]

A brilliant example of an evaluation driven by service change is the Oregon Health Insurance experiment. Expansion of medical coverage was implemented by lottery, inadvertently generating an RCT. This was used to evaluate the effects of improving access to health care. The political and logistical issues behind the trial are discussed by Allen et al.[5] and the result will be summarised in a forthcoming blog.

— Richard Lilford, CLAHRC WM Director

References:

  1. Kenyon S, Jolly K, Hemming K, Ingram L, Gale N, Dann S-A, Chambers J, MacArthur C. Evaluation of Lay Support in Pregnant women with Social risk (ELSIPS): a randomised controlled trial. BMC Pregnancy Childbirth. 2012; 12: 11.
  2. Birchwood M, Bryan S, Jones-Morris N, Kaambwa B, Lester H, Richards J, Rogers H, Sirvastava N, Tzemou E. EDEN: Evaluating the development and impact of Early Intervention Services (EISs) in the West Midlands. NIHR Service Delivery & Organisation. HS&DR 08/1304/042. 2007.
  3. Marshall T, Caley M, Hemming K, Gill P, Gale N, Jolly K. Mixed methods evaluation of targeted case finding for cardiovascular disease prevention using a stepped wedge cluster RCT. BMC Public Health. 2012; 12: 908.
  4. Chisholm KE, Patterson P, Torgerson C, Turner E, Birchwood M. A randomised controlled feasibility trial for an educational school-based mental health intervention: study protocol. BMC Psychiatry. 2012; 12: 23.
  5. Allen H, Baicker K, Finkelstein A, Taubman S, Wright BJ, Oregon Health Study Group. What the Oregon health study can tell us about expanding Medicaid. Health Aff. 2010; 29(8): 1498-1506.

Behaviour Change – Special Issue of Psychology and Health

Readers of this News Blog know that CLAHRCs are interested in behaviour change – CLAHRCs not interested in this subject should send the money back! So a recent special issue of Psychology and Health on the risk of bias in RCTs of behaviour change interventions should pique our interest. Unsurprisingly, much of the material is old hat to clinical and service delivery researchers, and the issues discussed are not specific to behaviour change interventions. Drug trials are the exception in not having to cope with difficulties such as blinding therapists (leading to co-intervention or contamination), blinding patients and observers (leading to detection bias for subjective outcomes), and isolating or standardising the active ingredient of the intervention. These problems are shared with trials of most types of intervention: surgery, physiotherapy, targeted service change, generic service change, and so on. One author conflates randomisation (a procedure to guard against selection bias) with other procedures, such as double blinding (which guards against performance and detection bias).[1] In fact, these are separate sources of bias and it is possible to have one without the other.

If you have time for only one article, I recommend the paper by Jim McCambridge [2] on the social psychology of research participation. This covers question-behaviour effects, where consent procedures or outcome questionnaires (applied to both control and intervention groups) interact with the intervention to attenuate or amplify its effects. To deal with this, he recommends the Solomon four-group design, in which participants are randomised both to intervention and to (enhanced) questionnaires in a 2×2 factorial design. A real example is given in which filling in a lengthy questionnaire interacted synergistically with an intervention. McCambridge makes the excellent point that these problems do not go away just because a study is not randomised. The article, however, also deals with randomisation itself. Being assigned to a control group might be associated with ‘resentful demoralisation’. Here Zelen randomisation (no consent sought from the control group) is one possibility. Another, oft recommended by the CLAHRC WM Director, is ensuring that only patients in equipoise [3] enter a trial, as originally recommended by Brewin and Bradley.[4]

— Richard Lilford, CLAHRC WM Director

References:

  1. Tarquinio C, Kivits J, Minary L, Coste J, Alla F. Evaluating complex interventions: Perspectives and issues for health behaviour change interventions. Psychol Health. 2015; 30(1): 35-51.
  2. McCambridge J. From question-behaviour effects in trials to the social psychology of research participation. Psychol Health. 2015; 30(1): 72-84.
  3. Lilford RJ, Jackson J. Equipoise and the ethics of randomization. J R Soc Med. 1995; 88(10): 552-9.
  4. Brewin CR, Bradley C. Patient preferences and randomised clinical trials. BMJ. 1989; 299(6694): 313-5.

Simpson’s Paradox and Discrimination

Readers of the News Blog will have encountered an example of Simpson’s paradox in a previous blog, applied first to baseball batting averages and then to the beguilingly appealing issue of Standardised Mortality Rates. Prof. Tony Belli, director of the NIHR Surgical Reconstruction and Microbiology Research Centre in Birmingham, recently drew the CLAHRC WM Director’s attention to another fascinating example, this time arising from discrimination cases in American courts.[1] [2] You will remember that Simpson’s paradox can arise by aggregating data across strata where the strata vary in size and where outcome rates differ across strata. In the university admissions example, the departments of English and History attract large numbers of applicants, a high proportion of whom are women, and rejection rates are high. Mathematics and Physics, by contrast, attract fewer applicants, a high proportion of whom are male, and rejection rates are low. Simple aggregation, ignoring the interaction between acceptance rates and applicant numbers, leads to the mistaken conclusion that there is discrimination against women, rather than the correct conclusion that women disproportionately apply to popular subjects with high rejection rates. To avoid this problem it is necessary to aggregate using weighted averages of the stratum-specific estimates.
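The arithmetic behind the paradox can be reproduced in a few lines of Python. The numbers below are purely illustrative (not taken from the Berkeley data in reference [2]), but they show how women can be admitted at a higher rate in every department yet a lower rate overall, and how a weighted average of stratum-specific rates avoids the artefact:

```python
# Hypothetical admissions data (illustrative numbers only):
# each department maps sex -> (applicants, admitted).
data = {
    "English": {"women": (800, 200), "men": (100, 20)},   # popular, high rejection rate
    "Maths":   {"women": (100, 70),  "men": (800, 480)},  # fewer applicants, low rejection rate
}

def rate(applied, admitted):
    return admitted / applied

# Within every department, women are admitted at a HIGHER rate than men.
for dept, by_sex in data.items():
    w = rate(*by_sex["women"])
    m = rate(*by_sex["men"])
    print(f"{dept}: women {w:.0%}, men {m:.0%}")
    assert w > m

# Yet naive pooling across departments reverses the conclusion, because
# most women applied to the department with the high rejection rate.
def pooled(sex):
    applied = sum(by_sex[sex][0] for by_sex in data.values())
    admitted = sum(by_sex[sex][1] for by_sex in data.values())
    return admitted / applied

print(f"Pooled: women {pooled('women'):.0%}, men {pooled('men'):.0%}")
assert pooled("women") < pooled("men")  # Simpson's paradox

# Aggregating instead as a weighted average of stratum-specific rates
# (equal weights here, for simplicity) restores the correct conclusion.
adjusted = {
    sex: sum(rate(*by_sex[sex]) for by_sex in data.values()) / len(data)
    for sex in ("women", "men")
}
print(f"Adjusted: women {adjusted['women']:.0%}, men {adjusted['men']:.0%}")
assert adjusted["women"] > adjusted["men"]
```

The choice of weights (equal weights above) is itself a modelling decision; in practice one would weight strata by, for example, total applicant numbers, but any stratified aggregation avoids the sign reversal produced by naive pooling.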

— Richard Lilford, CLAHRC WM Director

References

  1. Borhani H. Bias in Measuring Bias. American Bar Association Labor and Employment Section’s Annual CLE. Washington, D.C. November 4-7, 2009.
  2. Bickel PJ, Hammel EA, O’Connell JW. Sex Bias in Graduate Admissions: Data from Berkeley. Science. 1975; 187(4175): 398-404.