
An Epidemic of Meta-Analyses – a Veritable Plague?

So says the great John Ioannidis, the world’s leading clinical epidemiologist.[1] He has a point – CLAHRC WM associates Sarah Damery, Sarah Flanagan and Gill Combes recently published an overview of systematic reviews of ‘Integrated Care’.[2] They over-viewed – wait for it – over 70 individual systematic reviews. But even that number is dwarfed by the 185 systematic reviews of anti-depressants. This might not be a problem (save for the waste) if the quality were universally high. Sadly, quality is often poor – many (perhaps most) reviews are junk, and some are used as marketing tools, apparently manipulated in the service of shareholders rather than patients. Chinese meta-analyses of associations between candidate genes and outcomes are particularly unreliable; they are castles built on sand, because the original association studies are so poor. A staggeringly low proportion – fewer than 2% – of the associations reported in ‘first generation’ studies proved valid when tested against multi-centre studies with built-in procedures to preclude selective reporting of data.

The systematic review ‘industry’ seems to be in some disarray. Clearly, primary studies need to be improved, although big steps are already being made in this regard. Systematic reviews should be carried out by people without commercial ties to the companies whose products are being evaluated. Other ideas are welcome.

— Richard Lilford, CLAHRC WM Director

References:

  1. Ioannidis JPA. The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses. Milbank Q. 2016; 94(3): 485-514.
  2. Damery S, Flanagan S, Combes G. Does integrated care reduce hospital activity for patients with chronic diseases? An umbrella review of systematic reviews. BMJ Open. 2016; 6: e011952.

Narrative Syntheses vs. Meta-Analyses: Different Epistemologies?

In our list of recent publications we include one Lancet paper (our fourth this year) and one PLoS Medicine paper. We recommend them thoroughly, but the one that piqued my interest was a paper by GJ Melendez-Torres and colleagues.[1] They considered 85 systematic reviews of health promotion and compared the type (mode) of reasoning used in narrative syntheses vs. meta-analyses. About a quarter of the studies were meta-analyses, and the remainder were narrative syntheses. The narrative syntheses often justified not carrying out a quantitative meta-analysis, and in each case the justification was based on some form of study heterogeneity. The study is deeply philosophical, and I will have to read it a few more times before I really get it. Meta-analyses are more stylised and more clearly separate out the ‘warrant’, or bridge linking data to conclusion, whereas in narrative reviews this linkage is integral to the argument (and not clearly separated within it). Narrative synthesis seems more ‘emergent’ [my word], as the writer tries to make sense of the data – a method typical of history, I think. In any event, the reasoning processes seem different across the two methods. I suspect that much narrative synthesis takes place in both types of research, but that it is often taken as a given in more quantitative studies. Indeed, this paper takes me back to a previous News Blog in which I made a plea for more attention to be paid to the philosophical underpinnings of applied research.[2]

— Richard Lilford, CLAHRC WM Director

References:

  1. Melendez-Torres GJ, O’Mara-Eves A, Thomas J, Brunton G, Caird J, Petticrew M. Interpretive analysis of 85 systematic reviews suggests that narrative syntheses and meta-analyses are incommensurate in argumentation. Res Synth Methods. 2016.
  2. Lilford RJ. Where is the Philosophy of Science in Research Methodology? NIHR CLAHRC WM News Blog. 9 October 2015.

Using Meta-Analysis to Answer Questions That Could Never Be Answered in a Single Trial

An example based on surgical technique for excision of cervical pre-cancer

It is well known that cervical pre-cancer is associated with increased risk of pre-term birth (with all that this entails), and that cervical treatment adds to the risk. A recent extensive meta-analysis of 70 observational studies, each with a control group of some sort, showed that the more radical the procedure (in terms of the amount of tissue destroyed or removed), the greater the risk.[1] This is important information for women, who must balance the putative benefit of deeper excision margins against the documented risks to a subsequent pregnancy. Where possible (i.e. for milder grade lesions) it may be advisable to delay treatment until after childbearing – and, indeed, not to delay having children.
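
No single trial could establish this dose-response relationship; it emerges only when effect sizes are compared across many studies. A minimal sketch of the underlying logic – a weighted meta-regression of study-level log relative risks on excision depth – is given below. The numbers are invented for illustration and are not taken from the Kyrgiou meta-analysis.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: mean excision depth (mm), log relative risk
# of pre-term birth, and within-study variances. Invented numbers for
# illustration only -- not taken from the paper under discussion.
depth  = np.array([8.0, 12.0, 16.0, 20.0])   # mean depth of tissue removed, mm
log_rr = np.array([0.18, 0.34, 0.55, 0.80])  # log relative risk of pre-term birth
var    = np.array([0.010, 0.012, 0.020, 0.030])

# Weighted least squares meta-regression: does risk rise with radicality?
X = sm.add_constant(depth)
fit = sm.WLS(log_rr, X, weights=1 / var).fit()
print(fit.params)  # intercept and slope (increase in log RR per extra mm)
```

A positive slope across studies supports a dose-response relationship even though no individual study randomised women to different cone depths.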

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Kyrgiou M, et al. Adverse obstetric outcomes after local treatment for cervical preinvasive and early invasive disease according to cone depth: systematic review and meta-analysis. BMJ. 2016; 354: i3633.

Immunotherapies for Multiple Sclerosis

Tramacere and colleagues boldly conducted a network meta-analysis of no fewer than 39 individual RCTs of immunotherapy for multiple sclerosis.[1] The treatments as a class both prevent deterioration (measured by the EDSS [Expanded Disability Status Scale] score) and reduce the frequency of relapses. Some of the medicines within the class appear significantly better than others. But the trials are of only moderate quality on the GRADE score, and follow-up is limited, mostly to two years. The really important data will come with the ten-year follow-up results from the English Risk-Sharing Scheme, which is due to report imminently.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Tramacere I, Del Giovane C, Filippini G. Association of Immunotherapies with Outcomes in Relapsing-Remitting Multiple Sclerosis. JAMA. 2016; 315(4): 409-10.

Q. When Can Evidence-Based Care do More Harm than Good?

A. When People Mistake no Evidence of Effect for Evidence of no Effect.

Imagine that you have a malignant melanoma on your forearm. You can select a wide margin excision or a narrow one. The latter is obviously less disfiguring.

Results from six RCTs (n=4,233) have been consolidated in a meta-analysis.[1] In keeping with the individual trials and with previous meta-analyses, the result is null for numerous outcomes. However, the point estimates all favour wider margins, and the confidence limits are close to the (arbitrary) 5% significance level. For example, the hazard ratio for death with narrow versus wide margins is 1.09 (95% CI 0.98-1.22) – i.e. favouring wide margins. The authors state that the study shows “a 33% probability that [overall survival] is more than 10% worse” when a narrow margin excision is used. It should be added that this assumes an uninformative prior. If the prior probability estimate favoured better survival with wider excision margins, then the evidence in favour of a wider margin excision is stronger still. Moreover, the authors quote results showing that patients will not trade even small survival gains for an improved cosmetic outcome. Despite some loose statistical language (conflating the probability of survival given the data with the probability of the data if there were no difference in outcome), the authors have done science and practice a great service. This paper should be quoted in the context of surgical treatment of cancer generally, not just melanoma excision. For example, is sentinel node biopsy really preferable to axillary dissection in breast cancer surgery?
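
For readers who want to see the mechanics: under a flat prior, the posterior distribution of the log hazard ratio is approximately normal, with mean and standard error recoverable from the published confidence interval. The sketch below illustrates the flavour of the authors’ Bayesian evaluation (not their exact calculation, which maps the hazard ratio onto absolute survival) by computing the posterior probability that narrow margins are harmful at all:

```python
import numpy as np
from scipy.stats import norm

# Reported hazard ratio (narrow vs. wide margins) and its 95% CI.
hr, lo, hi = 1.09, 0.98, 1.22

# With a flat (uninformative) prior, the posterior for log(HR) is approximately
# normal: mean = log(hr), SE recovered from the width of the 95% CI.
mu = np.log(hr)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)

# Posterior probability that narrow margins are worse at all (HR > 1).
p_harm = norm.sf(0, loc=mu, scale=se)
print(f"P(HR > 1 | data) = {p_harm:.2f}")  # about 0.94
```

On these numbers the posterior probability that narrow margins increase the hazard of death is about 94% – a rather different impression from that given by a ‘non-significant’ P-value.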

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Wheatley K, Wilson JS, Gaunt P, Marsden JR. Surgical excision margins in primary cutaneous melanoma: A meta-analysis and Bayesian probability evaluation. Cancer Treat Rev. 2015. [ePub].

Read or Perform Systematic Reviews? A Must-Read Paper

People often complain that systematic reviewers use arbitrary quality thresholds to select or reject papers for inclusion in meta-analyses, and make no allowance for bias in the imperfect studies that are included. But how can arbitrary selection be avoided and bias allowed for? The answer is provided in a truly beautiful methodological paper from the MRC Biostatistics Unit in Cambridge.[1] The paper explains how a probability density can be elicited for the bias associated with each study included in a meta-analysis, and goes on to show how these probability distributions are incorporated in the analysis. But should it be assumed that the bias is additive or proportional (i.e. increases with effect size)? This is a judgement to be made in each case, but the paper gives an example under each assumption.

Remarkably, the study incorporated estimates of both internal and external bias, and argued that empirical evidence on bias could supplement expert opinion in eliciting the former. External bias need not be taken into account if the aim of the exercise is to summarise the literature, rather than to answer a policy question for a particular target audience.
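
To make the additive case concrete, here is a minimal sketch with invented numbers (the paper itself embeds the elicited distributions in a full Bayesian model): each study’s estimate is shifted by its expected bias, and the uncertainty about that bias inflates the study’s variance before inverse-variance pooling, so that doubtful studies are down-weighted rather than excluded.

```python
import numpy as np

def bias_adjusted_pool(y, v, bias_mean, bias_var):
    """Inverse-variance pooling after additive bias adjustment.

    Each study estimate is shifted by its elicited expected bias, and the
    elicited uncertainty about that bias inflates the study's variance."""
    y_adj = np.asarray(y) - np.asarray(bias_mean)
    v_adj = np.asarray(v) + np.asarray(bias_var)
    w = 1.0 / v_adj
    pooled = np.sum(w * y_adj) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# Three hypothetical studies on the log odds ratio scale.
y  = [-0.40, -0.10, -0.55]   # observed effects
v  = [0.04, 0.02, 0.09]      # within-study variances
b  = [0.00, -0.15, -0.20]    # elicited mean bias (e.g. unblinded studies exaggerate)
s2 = [0.00, 0.01, 0.04]      # elicited variance of each bias

print(bias_adjusted_pool(y, v, b, s2))
```

The proportional case works analogously, rescaling rather than shifting the estimates.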

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Turner RM, Spiegelhalter DJ, Smith GCS, Thompson SG. Bias modelling in evidence synthesis. J R Stat Soc A. 2009; 172(1): 21-47.

Network Meta-Analysis Can Correct For “Bias” In Head-To-Head Treatment Comparisons

When we talk of bias we tend to mean bias due to factors that affect the “answer” to a given question. But another type of bias can arise when a question is posed in a way that predisposes to a certain result – say, by comparing an optimal dose of medicine A with a sub-optimal dose of medicine B, or by comparing medicine A with medicine C when A is likely to fare less well against medicine B. Conventional tools for assessing the methodological quality of individual trials are adept at picking up the former type of bias, but discerning the latter usually requires a broader view based on medical knowledge. The CLAHRC WM Director co-authored a paper, led by Fujian Song of the University of East Anglia, showing how network meta-analysis can explore this second type of bias.[1]
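
The core device is the adjusted indirect comparison: where trials of A vs. C and of B vs. C exist, an indirect estimate of A vs. B can be formed and set against the direct head-to-head result. Here is a minimal sketch with hypothetical numbers (the Song paper develops the idea far more fully):

```python
import numpy as np

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect comparison of A vs. B via common comparator C
    (Bucher method), on a scale where effects subtract, e.g. log odds ratios."""
    d_ab = d_ac - d_bc
    se_ab = np.sqrt(se_ac**2 + se_bc**2)
    return d_ab, se_ab, (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)

# Hypothetical log odds ratios from trials of A vs. C and of B vs. C.
d_ab, se_ab, ci = bucher_indirect(d_ac=-0.50, se_ac=0.15, d_bc=-0.20, se_bc=0.12)
print(f"indirect OR, A vs. B: {np.exp(d_ab):.2f} "
      f"(95% CI {np.exp(ci[0]):.2f} to {np.exp(ci[1]):.2f})")

# A direct head-to-head A vs. B result falling well outside this interval is a
# signal that the direct comparison may have been framed to favour one drug.
```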

A further example of the use of network meta-analysis to explore this form of bias – bias due to the choice of a sub-optimal comparator – comes from a recent study comparing reductions in LDL cholesterol in industry-funded versus publicly-funded RCTs of various statins.[2] The combined result across 183 RCTs failed to show a difference in end-points between statins funded in these two ways once the playing field was levelled by using network meta-analysis to compare optimal doses. The authors observed no obvious effect of the various attributes of methodological quality on study outcome.

— Richard Lilford, CLAHRC WM Director

References:

  1. Song F, Harvey I, Lilford R. Adjusted indirect comparison may be less biased than direct comparison for evaluating new pharmaceutical interventions. J Clin Epidemiol. 2008; 61(5): 455-63.
  2. Naci H, Dias S, Ades AE. Industry sponsorship bias in research findings: a network meta-analysis of LDL cholesterol reduction in randomised trials of statins. BMJ. 2014; 349: g5741.

Meta-analysis vs. Pivotal Trial

Which is better – a meta-analysis of all the good quality evidence, or the results of the most precise trial contributing to that meta-analysis? Of course, there can be no definitive answer to this question in the absence of a gold standard. However, according to Berlin and Golub, a single large trial produces, on average, more pessimistic evidence of treatment effect than the corresponding meta-analysis.[1] Given the premise that bias tends towards ‘optimistic’ results, the large “definitive” trial is therefore the less biased of the two, on average.
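
A toy simulation (purely illustrative, not taken from the Berlin and Golub paper) makes the logic visible: pool one large unbiased trial with many small trials carrying a modest ‘optimistic’ bias, and the meta-analysis drifts above the truth while the large trial stays near it.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 0.20  # true effect (log odds ratio)

# One large, rigorous (unbiased) trial...
big_y, big_v = rng.normal(truth, 0.03), 0.03**2
# ...plus 20 small trials carrying a modest 'optimistic' bias of +0.10.
small_y = rng.normal(truth + 0.10, 0.15, size=20)
small_v = np.full(20, 0.15**2)

# Conventional fixed-effect (inverse-variance) pooling of all 21 trials.
y = np.append(small_y, big_y)
v = np.append(small_v, big_v)
w = 1 / v
pooled = np.sum(w * y) / np.sum(w)

print(f"truth: {truth:.2f}  large trial: {big_y:.2f}  meta-analysis: {pooled:.2f}")
# The biased small trials drag the pooled estimate above the truth, while the
# single large trial stays close to it.
```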

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Berlin JA, Golub RM. Meta-analysis as Evidence. Building a Better Pyramid. JAMA. 2014; 312(6): 603-5.

International Differences in Trial Results

Many researchers accept that the efficacy of treatments may differ between countries, and that such differences need to be considered when assessing the potential benefits of treatments whose trials were conducted elsewhere. The CLAHRC WM Director has recently published a paper on international differences in the results of cardiovascular trials, examining 59 meta-analyses of RCTs.[1] In most meta-analyses, relative to the control, the intervention was more favoured in trials conducted in Europe than in those conducted in North America for non-fatal endpoints (70% of meta-analyses, P=0.017), while the corresponding difference for fatal endpoints did not reach statistical significance (65% of meta-analyses, P=0.066). It was not possible to determine which types of intervention were more likely to show international differences. The size of the effect, though significant for non-fatal outcomes, was small, and the results of trials travel reasonably well across the North Atlantic.
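
The logic of such intra-meta-analysis comparisons can be illustrated with a sign test: if trial location made no difference, each meta-analysis should favour Europe or North America with equal probability. The sketch below uses hypothetical counts – the paper’s actual denominators and test are not reproduced here, so the printed P-value will not match the published ones:

```python
from scipy.stats import binomtest

# Hypothetical counts: of 59 meta-analyses, suppose 41 (about 70%) favour the
# Europe-based trials for a given endpoint. Under the null of 'no international
# difference', each meta-analysis favours either region with probability 0.5.
n_meta, n_favour_europe = 59, 41

result = binomtest(n_favour_europe, n_meta, p=0.5, alternative="two-sided")
print(f"{n_favour_europe / n_meta:.0%} favour Europe, "
      f"two-sided P = {result.pvalue:.3f}")
```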

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Bowater RJ, Hartley LC, Lilford RJ. Are cardiovascular trial results systematically different between North America and Europe? A study based on intra-meta-analysis comparisons. Arch Cardiovasc Dis. 2014. [ePub].