Tag Archives: RCTs

Mandatory Publication and Reporting of Research Findings

Publication bias refers to the phenomenon by which research findings that are statistically significant, or perceived to be interesting or desirable, are more likely to be published, while null or unwelcome findings are not.[1] The bias is a major threat to scientific integrity and can have major implications for patient welfare and resource allocation. Progress has been made over the years in raising awareness and minimising such bias in clinical research: pre-registration of trials has been made compulsory by the editors of leading medical journals [2] and subsequently by regulatory agencies. Evidence of a positive impact on the registration and reporting of findings from trials used to support drug licensing has started to emerge.[3,4] So can this issue be consigned to history? Unfortunately, the clear answer is no.

A recent systematic review showed that, despite gradual improvement over the past two decades, the mean proportion of pre-registered randomised controlled trials (RCTs) included in previous meta-epidemiological studies of trial registration only increased from 25% to 52% between 2005 and 2015.[5] A group of researchers led by Dr Ben Goldacre created the EU Trials Tracker (https://eu.trialstracker.net/), which uses automation to identify trials in the European Union Clinical Trials Register that are due to report their findings but have not done so.[6] Their estimates paint a similar picture: around half of completed trials have not reported their results.

The findings of the EU Trials Tracker are presented in a league table that allows people to see which sponsors have the highest rates of unreported trials. You might suspect that pharmaceutical companies would be the top offenders, given high-profile cases of suppressed drug trial data in the past. In fact the opposite is now true – major pharmaceutical companies are among the most compliant in reporting trials, whereas some universities and hospitals have achieved fairly low reporting rates. While there may be practical issues and legitimate reasons behind the absence or delay of reporting for some studies, the bottom line is that making research findings available is a moral duty for researchers, irrespective of funding source. With improved trial registration and the enhanced power of data science, leaving research findings to perish, forgotten in a file drawer or folder, is neither an acceptable nor a feasible option.

With slow but steady progress in tackling publication bias in clinical research, you might wonder about health services research, a field close to the heart of our CLAHRC. Literature on publication bias in this field is scant, but we have been funded by the NIHR HS&DR Programme to explore the issue over the past two years, and some interesting findings are emerging. Interested readers can access further details, including conference posters reporting our early findings, on our project website (warwick.ac.uk/publicationbias). We will share further results with News Blog readers in the near future, and in due course, publish them all!

— Yen-Fu Chen, Associate Professor

References:

  1. Song F, Parekh S, Hooper L, et al. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010;14(8):1-193.
  2. Laine C, De Angelis C, Delamothe T, et al. Clinical trial registration: looking back and moving ahead. Ann Intern Med. 2007;147(4):275-7.
  3. Zou CX, Becker JE, Phillips AT, et al. Registration, results reporting, and publication bias of clinical trials supporting FDA approval of neuropsychiatric drugs before and after FDAAA: a retrospective cohort study. Trials. 2018;19(1):581.
  4. Phillips AT, Desai NR, Krumholz HM, Zou CX, Miller JE, Ross JS. Association of the FDA Amendment Act with trial registration, publication, and outcome reporting. Trials. 2017;18(1):333.
  5. Trinquart L, Dunn AG, Bourgeois FT. Registration of published randomized trials: a systematic review and meta-analysis. BMC Medicine. 2018;16(1):173.
  6. Goldacre B, DeVito NJ, Heneghan C, et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ. 2018;362:k3218.

Preference Trials: an Old Subject Revisited

The CLAHRC WM Director tries to keep up to date with his literature summaries. However, from time to time, he dips into past literature. Recently he had reason to re-read a paper on preference trials by David Torgerson and Bonnie Sibbald.[1] They point out the shortcomings of the comprehensive cohort design, whereby randomised participants are followed up alongside those whose preference leads them to decline randomisation. The non-randomised cohorts so generated are subject to selection bias. To get around this bias a trial is proposed in which all patients are randomised, but preference is recorded prior to randomisation. Measuring patient preferences within a fully randomised design conserves all the advantages of a randomised study, with the further benefit of allowing the interaction between outcome and preference to be measured. But, of course, there is an ethical issue here, in that people who are not in equipoise are randomised. And why would you accept randomisation if not in equipoise? One reason is if the treatment, if not apparently successful, can be reversed later. For example, a person might be happy to be randomised to a trial of a medicine to reduce the frequency of migraine, sacrificing a small quantum of expected utility to contribute to knowledge, secure in the belief that they can reverse the decision later. But to elicit such altruism in the face of a life-or-death treatment comparison – radiotherapy vs. surgery for prostate cancer, for example – is to privilege knowledge over individual welfare in a non-trivial way. So here is ‘Lilford’s rule’ – do not offer patients ‘preference trials’ when the outcome is severe and irrevocable.
In such circumstances it is fine to offer randomisation, but those who have a preference – either way – should not be subtly coerced to accept randomisation.[2] Further, the patient should be fully informed because patients are more likely to accept randomisation when information is withheld.[3]

— Richard Lilford, CLAHRC WM Director

References:

  1. Torgerson D, Sibbald B. Understanding controlled trials: What is a patient preference trial? BMJ. 1998; 316: 360.
  2. Lilford RJ. Ethics of Clinical Trials from a Bayesian and Decision Analytic Perspective: Whose Equipoise Is It Anyway? BMJ. 2003; 326: 980.
  3. Wragg JA, Robinson EJ, Lilford RJ. Information presentation and decisions to enter clinical trials: a hypothetical trial of hormone replacement therapy. Soc Sci Med. 2000; 51(3): 453-62.

Counter Intuitive Findings in Cervical Cancer Surgery

In recent years there has been an increase in the use of minimally invasive surgery for a number of cancers, with many, such as uterine, colorectal and gastric cancers, showing survival rates similar to those of traditional open surgery. Although there has not been much specific evidence for minimally invasive hysterectomy in patients with cervical cancer, it has steadily been adopted in a number of countries. Traditional open hysterectomy has been associated with considerable perioperative and long-term complications, while minimally invasive hysterectomy has been shown to reduce the risk of infection and improve recovery times.

The New England Journal of Medicine has recently published the results of two separate studies looking at differences in survival rates following minimally invasive surgery (laparoscopy) compared to open surgery (laparotomy) for radical hysterectomy in cervical cancer patients.[1][2] One study, by Ramirez, et al., was a randomised controlled trial conducted in 33 centres across the globe,[1] while the other by Melamed, et al., was an observational study using a US dataset.[2] Both looked at a similar subset of patients with a similar period of follow-up.

In the RCT, 563 patients underwent one of the two types of hysterectomy, and follow-up at four and a half years showed a significant difference in disease-free survival: 86.0% of those who had undergone minimally invasive surgery compared to 96.5% of those who had undergone open surgery (difference of -10.6 percentage points, 95% CI -16.4 to -4.7). Further, minimally invasive surgery was associated with a lower rate of overall survival at three years (93.8% vs. 99.0%), with a hazard ratio for death of 6.00 (95% CI 1.77-20.30).
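For readers who want to see how such a confidence interval is constructed, here is a minimal sketch using the simple Wald (normal-approximation) formula for a difference in proportions. The per-arm sample sizes below are illustrative assumptions, not the trial's actual numbers, and the trial itself used survival-analysis methods, so this crude interval will not exactly match the published one.

```python
from math import sqrt

def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    """Wald (normal-approximation) 95% CI for a difference in two proportions."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Reported disease-free survival at 4.5 years: 86.0% (minimally invasive)
# vs 96.5% (open surgery). Arm sizes of ~280 are illustrative guesses.
diff, lo, hi = risk_difference_ci(0.860, 280, 0.965, 280)
print(f"difference: {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

The key point the sketch illustrates is that the entire interval sits below zero, which is what makes the disadvantage of minimally invasive surgery statistically convincing.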

In the other, observational, study the authors looked at 2,461 women who underwent a hysterectomy and found that after four years 90.9% of those who had minimally invasive surgery had survived, compared to 94.7% of those who had undergone open surgery (hazard ratio 1.65, 95% CI 1.22-2.22). Looking at a longer period of data, the widespread adoption of minimally invasive surgery in 2006 coincided with a decline in the four-year relative survival rate of 0.8% per year (p=0.01).

So here we have another two studies in which the results of a randomised trial broadly agree with those of an observational study,[3] and with a large and significant effect. Looking at the methods used, this counter-intuitive effect is not accounted for by a more complex excision being performed during open surgery. Instead, it may be something to do with the technique itself – could manipulation of the cervix during laparoscopy, or exposure of the tumour to circulating CO2, lead to the release of cancerous cells into the patient's bloodstream?

What we would like to know from News Blog readers is whether they know of any studies in which circulating cancer cells have been counted (using PCR or cell separation) to see whether they are released when a tumour is manipulated. Please let us know.

— Peter Chilton, Research Fellow

References:

  1. Ramirez PT, Frumovitz M, Pareja R, et al. Minimally Invasive versus Abdominal Radical Hysterectomy for Cervical Cancer. New Engl J Med. 2018.
  2. Melamed A, Margul DJ, Chen L, Keating NL. Survival after Minimally Invasive Radical Hysterectomy for Early-Stage Cervical Cancer. New Engl J Med. 2018.
  3. Lilford RJ. RCTs versus Observational Studies: This Time in the Advertisement Industry. NIHR CLAHRC West Midlands News Blog. 29 June 2018.

RCTs versus Observational Studies: This Time in the Advertisement Industry

There is a substantive body of medical methodological research in which the results of RCTs for a given treatment are compared with the results of observational studies of the same treatment. These comparisons show that effect sizes in observational studies are, on average, similar to those in RCTs, but that individual observational estimates are widely scattered around the gold standard (i.e. RCT) estimate.[1] [2]

A similar result was obtained in a study of RCTs versus observational studies in the economics literature back in the 1980s, except that there the RCTs yielded more conservative estimates than the observational studies.[3] Now a similar study has been carried out in the advertising industry, using advertisements carried on Facebook as the basis for a field experiment.[4] The results of 15 RCTs of advertisements on Facebook were compared with the results of observational studies in which standard statistical methods were used to control for identifiable confounders. The findings of this methodological study corroborate those of the earlier studies in economics: the observational studies, even after risk adjustment, showed a wide scatter of results around those of the corresponding RCTs, and tended to produce more strongly positive results. This contradicts the prevailing view in the advertising industry that observational studies produce reliable estimates of the effectiveness of advertisements. Interestingly, any single factor that accounted for all of the difference between the observational studies and the RCTs would have to carry more explanatory power than all the other variables taken together.

The selection bias in the case of advertising likely arises because exposure to an advertisement and the propensity to respond to it are linked: people who are exposed to an advertisement are already pre-disposed to respond to it. In health care we would say that exposure and response are on the same causal chain. Economists would say that they are ‘endogenous’.
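The endogeneity argument can be illustrated with a toy simulation (all numbers are invented for illustration): when a latent predisposition to buy drives both ad exposure and purchase, the naive observational contrast overstates the true effect, while randomised exposure recovers it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
true_effect = 0.02  # hypothetical true lift in purchase probability from the ad

# A latent "predisposition to buy" drives BOTH ad exposure and purchase.
predisposition = rng.normal(size=n)
base_rate = 1 / (1 + np.exp(-(-3 + predisposition)))  # purchase prob. without the ad

# Observational world: the more predisposed you are, the likelier you see the ad.
exposed_obs = rng.random(n) < 1 / (1 + np.exp(-(-1 + predisposition)))
purchased_obs = rng.random(n) < base_rate + true_effect * exposed_obs

# Randomised world: exposure assigned by coin flip, independent of predisposition.
exposed_rct = rng.random(n) < 0.5
purchased_rct = rng.random(n) < base_rate + true_effect * exposed_rct

naive = purchased_obs[exposed_obs].mean() - purchased_obs[~exposed_obs].mean()
rct = purchased_rct[exposed_rct].mean() - purchased_rct[~exposed_rct].mean()
print(f"observational: {naive:.3f}  RCT: {rct:.3f}  truth: {true_effect}")
```

The observational estimate is inflated because the exposed group was already more likely to buy; the randomised estimate lands close to the true effect.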

— Richard Lilford, CLAHRC WM Director

References:

  1. Benson K & Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000; 342(25): 1878-86.
  2. Anglemyer A, Horvath HT, Bero L. Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev. 2014; 4: MR000034.
  3. Banerjee AV, Duflo E, Kremer M. The Influence of Randomized Controlled Trials on Development Economics Research and on Development Policy. 2016
  4. Gordon BR, Zettelmeyer F, Bhargava N, Chapsky D. A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook. 2018.

A Poorly Argued Article on the Results of Cluster RCTs in General Practice

A recent paper in the Journal of Clinical Epidemiology analysed the results of cluster RCTs in which general practices were the unit of randomisation.[1] Effect sizes were reported for 72 outcomes across 29 cluster RCTs. Fifteen of the 72 outcomes were statistically significant, and only one met or exceeded the alternative hypothesis (delta). Disappointingly, the authors do not classify the trials properly, as we have recommended [2] – with or without baseline measurements and, if baseline measurements were used, whether the study was cross-sectional or cohort.[3] The authors seem to favour Bonferroni correction when there is more than one endpoint, but this is unscientific. Where many study endpoints are part of a postulated causal chain, then far from ‘correcting’ for multiple observations, correspondence between the observed endpoints should reinforce a positive conclusion; likewise, lack of correspondence should cast doubt on cause-and-effect conclusions. This process of triangulation between observations lies at the heart of causal thinking.[4] The logic is laid out in more detail elsewhere.[5] [6]
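To make the point concrete, here is a small sketch of what a Bonferroni correction does (the p-values are invented for illustration). With 72 outcomes the per-test threshold falls to roughly 0.0007, and a set of modest but mutually consistent p-values on a single causal chain would all be dismissed individually, even though their joint consistency arguably supports the causal story.

```python
alpha = 0.05
m = 72  # number of outcomes across the trials in the review
bonferroni_threshold = alpha / m
print(f"Bonferroni per-test threshold for {m} outcomes: {bonferroni_threshold:.5f}")

def bonferroni_significant(p_values, alpha=0.05):
    """Flag which p-values survive a Bonferroni correction."""
    cutoff = alpha / len(p_values)
    return [p < cutoff for p in p_values]

# Hypothetical p-values for four endpoints on one postulated causal chain:
# none survives the corrected threshold of 0.05/4 = 0.0125.
print(bonferroni_significant([0.02, 0.03, 0.04, 0.045]))  # [False, False, False, False]
```

The correction guards against chance findings among unrelated endpoints, but treats endpoints as if they were independent lottery tickets rather than linked observations of one mechanism.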

— Richard Lilford, CLAHRC WM Director

References:

  1. Siebenhofer A, Paulitsch MA, Pregartner G, Berghold A, Jeitler K, Muth C, Engler J. Cluster-randomized controlled trials evaluating complex interventions in general practice are mostly ineffective: a systematic review. J Clin Epidemiol. 2018; 94: 85-96.
  2. Lamont T, Barber N, de Pury J, Fulop N, Garfield-Birkbeck S, Lilford R, Mear L, Raine R, Fitzpatrick R. New approaches to evaluating complex health and care systems. BMJ. 2016; 352: i154.
  3. Hemming K, Chilton PJ, Lilford RJ, Avery A, Sheikh A. Bayesian Cohort and Cross-Sectional Analyses of the PINCER Trial: A Pharmacist-Led Intervention to Reduce Medication Errors in Primary Care. PLOS ONE. 2012; 7(6): e38306.
  4. Lilford RJ. Beyond Logic Models. NIHR CLAHRC West Midlands News Blog. 2 September 2016.
  5. Watson SI, & Lilford RJ. Essay 1: Integrating multiple sources of evidence: a Bayesian perspective. In: Raine R, & Fitzpatrick R. (Eds). Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. HS&DR Report No. 4.16. Southampton: NIHR Journals Library. 2016.
  6. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.

Immunisation Against Rotavirus: At What Age Should it be Given?

A three-way RCT [1] from Indonesia shows that rotavirus vaccine is effective in reducing the incidence of diarrhoea in children (which we knew), and that a neonatal schedule is no less effective, and probably more effective, than an infant schedule. Giving the vaccine early may reduce the risk of intussusception, apparently a risk with the infant schedule.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Bines JE, At Thobari J, Satria CD, et al. Human Neonatal Rotavirus Vaccine (RV3-BB) to Target Rotavirus from Birth. New Engl J Med. 2018; 378(8): 719-30.

An Extremely Fascinating Debate in JAMA

You really should read this debate. Steven Goodman, a statistician for whom I have the utmost regard, and colleagues wrote a brilliant paper showing the importance of ‘design thinking’ in observational research.[1] The essence of their argument is that in designing and interpreting observational studies one should think about how the corresponding RCT would look. This helps one to spot survivorship bias, which arises when the intervention group has been depleted of the most susceptible cases, and it encourages a comparison between new users of an intervention and new users of the comparator. Of course, it is not always possible to identify ‘new users’, but at least thinking in this ‘design way’ can alert the reader to the danger of false inference.
One of the examples mentioned concerns hormone replacement therapy (HRT), where the largest RCT (the Women’s Health Initiative trial) gave a very different result from the largest observational study (the Nurses’ Health Study). The latter suggests a protective effect for HRT, while the former suggests the opposite. It looks as though this might not have been a very good example because, as Bhupathiraju and colleagues point out, there is a much simpler and more convincing explanation for the difference in the observed effects of HRT across the two studies.[2] Hormone replacement was given to much younger women in the observational study than in the trial. Subsequent meta-analysis of subgroups across all RCTs confirms that HRT is only protective in younger women (who do not have established coronary artery disease). Thus, HRT is probably effective if started sufficiently early after the menopause.

This does not mean, of course, that Goodman and colleagues are wrong in principle; they may simply have selected a bad example. It is an exchange conducted politely between scholars, and interesting from both the methodological and the substantive points of view.

— Richard Lilford, CLAHRC WM Director

References:

  1. Goodman SN, Schneeweiss S, Baiocchi M. Using design thinking to differentiate useful from misleading evidence in observational research. JAMA. 2017; 317(7): 705-7.
  2. Bhupathiraju SN, Stampfer MJ, Manson JE. Posing Causal Questions When Analyzing Observational Data. JAMA. 2017; 318(2): 201.

An Argument with Michael Marmot

About two decades ago I went head-to-head in an argument with the great Michael Marmot at the Medical Research Council. The topic of conversation was the information that should be routinely collected in randomised trials. Marmot argued that social class and economic information should be collected, making the valid point that these things are correlated with outcomes. I pointed out that although they may be correlated with outcomes, they are not necessarily correlated with treatment effects. Then came Marmot’s killer argument: he asked whether I thought that sex and ethnic group should be collected. When I admitted that they should be, he rounded on me, saying that this proved his point. We met only recently; he remembered the argument and stood by his position. However, it turns out that it is not really important to collect information on sex after all. Wallach and colleagues, writing in the BMJ,[1] cite evidence from meta-analyses of RCTs to show that sex makes no difference to treatment effects when averaged across all studies. So there we have it: a parsimonious data set is optimal for trial purposes, since it increases the likelihood of collecting the essential information needed to measure the parameter of interest.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Wallach JD, Sullivan PG, Trepanowski JF, Steyerberg EW, Ioannidis JPA. Sex based subgroup differences in randomized controlled trials: empirical evidence from Cochrane meta-analyses. BMJ. 2016; 355: i5826.

An Extremely Interesting Three-Way Experiment

News Blog readers know that the CLAHRC WM Director is always on the look-out for interesting randomised trials in health care and elsewhere. He has David Satterthwaite to thank for this one – an RCT carried out among applicants for low-level jobs in five industries in Ethiopia.[1] The applicants (n=1,000), all of whom were qualified for the job on paper, were randomised to three conditions:

  1. Control;
  2. Accepted into the industrial job;
  3. Given training in entrepreneurship and about $1,000 (at purchasing power parity).

Surprisingly, the industrial jobs, while producing more secure incomes, did not yield higher incomes than the control condition, and incomes were highest in the entrepreneur group. On intention-to-treat analysis the industrial jobs resulted in worse mental health than in the entrepreneurial group, and physical health was also slightly worse. Many participants left the firm jobs during the one-year follow-up period; in qualitative interviews many said they had accepted industrial jobs only as a form of security while looking for other opportunities.

The authors, aware that rising minimum wages or increasing regulations have costs to society, are cautious in their conclusions. The paper is interesting nevertheless. The CLAHRC WM Director would like to do an RCT of paying a minimum wage vs. a slightly higher wage threshold to determine effects on productivity and wellbeing, positing an effect like this:

[Figure: hypothesised effect of wage level on productivity and wellbeing]

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Blattman C & Dercon S. Occupational Choice in Early Industrializing Societies: Experimental Evidence on the Income and Health Effects of Industrial and Entrepreneurial Work. SSRN. 2016.

History of Controlled Trials in Medicine

Rankin and Rivest recently published a piece looking at the use of clinical trials more than 400 years ago,[1] while Bothwell and Podolsky have produced a highly readable historical account of controlled trials.[2] Alternate-allocation designs became quite popular in the late nineteenth century, but Austin Bradford Hill was concerned about the risk of ‘cheating’ and carried out an iconic RCT to overcome the problem.[3] But what next for the RCT? It is time to move to a Bayesian approach,[4] automate trials within medical record systems, and widen credible limits to include the risk of bias when follow-up is incomplete, the therapist is not masked, or subjective outcomes are not effectively blinded.

— Richard Lilford, CLAHRC WM Director

References:

  1. Rankin A & Rivest J. Medicine, Monopoly, and the Premodern State – Early Clinical Trials. N Engl J Med. 2016; 375(2): 106-9.
  2. Bothwell LE & Podolsky SH. The Emergence of the Randomized Controlled Trial. N Engl J Med. 2016; 375(6): 501-4.
  3. Hill AB. The environment and disease: Association or causation? Proc R Soc Med. 1965; 58(5): 295-300.
  4. Lilford RJ, & Edwards SJL. Why Underpowered Trials are Not Necessarily Unethical. Lancet. 1997; 350(9080): 804-7.