The Messy End of Science

Sometimes science gives us nice clear-cut answers. Director’s Choice normally focusses on such studies. Today I focus on the opposite – the messy end of science, where standard statistical methods are too clunky to give a clear answer. Last weekend’s BMJ (12 April 2014) was stacked full of articles on anti-flu drugs [1-11] based on the meta-analysis that Tom Jefferson and colleagues carried out [3] when the full dataset was finally wrenched from the hands of the drug company. Tom’s meta-analysis was excellent, but I am less enamoured of the rather self-righteous tone of the extensive commentaries. There is an undisputed moral component to what went on (companies should not be allowed to sequester data obtained from patients), but that is where turpitude ends. Decision makers across the world are taken to task for spending £12 billion on Tamiflu® [1] when evidence on effectiveness could not be regarded as definitive. But since when do policy makers need definitive evidence in order to act? Chief Medical Officers simply have to make a best judgement on the evidence available at the time. Yes, in retrospect, enough money was wasted to procure four aircraft carriers.[12] But the Chief Medical Officers across the globe are paid to make an informed guess based on the information available in real time. The most important message is simple – companies should no longer be allowed to sequester data in their vaults – full transparency is essential.
But then what? Will clarity prevail? I am afraid not. The reason is that science is increasingly giving us messy answers – short-term outcomes when we really need longer-term outcomes; a mixture of statistically positive and null results; treatments that are effective against placebo, but have not yet gone head-to-head; and so on. Take clot-busting medicine for stroke given more than three hours after onset – the pivotal clinical trial was done in academia, not by a drug company. The primary outcome yielded a null result, deaths were increased in the short term, but a secondary outcome, based on a questionnaire, showed improvement.[13] This led to a positive recommendation for treatment followed by criticism that guideline writers were tainted by their industry associations.[14] Again these problems would not have arisen if science had given a more clear-cut result. Similarly, the situation with Tamiflu® remains murky. It does inhibit neuraminidase, the enzyme the virus needs to release new particles from infected cells; it shortens the duration of illness; and when used prophylactically it reduces symptomatic cases by more than half. However, it causes side-effects, and there was no measurable effect on admission rates (95% CI 0.57–1.50) or deaths, although this might have been because any improvements were too small to detect. The real problem is that frequentist statistics are just not up to the job in these ambiguous cases where there are multiple competing objectives and high risks of false null results, especially for the most important outcomes. We will never get out of the mud until we use a statistical method that surfaces the subjective element and that can interface axiomatically with a grounded Decision Analytic framework, whereby probabilities, values and money can be reconciled. Above all, the notion that statistics can obviate the need for judgement is an eidolon that must be scotched along with the idea that drug companies can withhold patient data.
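The closing argument – that probabilities, values and money should be reconciled in a decision-analytic framework – can be sketched in a few lines of code. The toy Bayesian analysis below takes only the quoted 95% CI for admissions (0.57–1.50) from the text; the sceptical prior, the money value of an averted admission, the baseline admission risk, and the treatment cost are entirely hypothetical assumptions of mine, there to show where the subjective inputs sit.

```python
import math
from statistics import NormalDist

# Toy Bayesian decision analysis (hypothetical numbers, for illustration only).
# The treatment effect is summarised as a log relative risk (log RR < 0 = benefit).

# The text quotes a 95% CI of 0.57-1.50 for the effect on admissions.
# Recover an approximate point estimate and standard error on the log scale.
rr_lo, rr_hi = 0.57, 1.50
se = (math.log(rr_hi) - math.log(rr_lo)) / (2 * 1.96)
data_mean = math.log(math.sqrt(rr_lo * rr_hi))  # geometric midpoint of the CI

# Sceptical prior, centred on "no effect" (the spread is a hypothetical choice).
prior_mean, prior_sd = 0.0, 0.3

# Normal-normal conjugate update gives the posterior in closed form.
post_prec = 1 / prior_sd**2 + 1 / se**2
post_mean = (prior_mean / prior_sd**2 + data_mean / se**2) / post_prec
post_sd = post_prec ** -0.5

# The subjective element made explicit: posterior probability the drug helps.
p_benefit = NormalDist().cdf((0 - post_mean) / post_sd)

# Decision step: attach (entirely hypothetical) values and money.  Say an
# averted admission is worth £2,000, baseline admission risk is 1%, and a
# course of treatment costs £15.  Expected net benefit per patient treated:
value_averted, baseline_risk, cost = 2000.0, 0.01, 15.0
expected_rr = math.exp(post_mean + post_sd**2 / 2)  # mean of lognormal posterior
net_benefit = value_averted * baseline_risk * (1 - expected_rr) - cost

print(f"P(treatment reduces admissions) = {p_benefit:.2f}")
print(f"Expected net benefit per patient = £{net_benefit:.2f}")
```

Nothing in this sketch removes the need for judgement: the prior and the money values are precisely the subjective inputs that, the paragraph argues, should be surfaced rather than hidden inside a significance test.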

–Richard Lilford, Director of CLAHRC WM

[1] Abbasi K. The missing data that cost $20bn. BMJ. 2014; 348: g2695.
[2] Torjesen I. Cochrane review questions effectiveness of neuraminidase inhibitors. BMJ. 2014; 348: g2675.
[3] Jefferson T, Jones M, Doshi P, Spencer EA, Onakpoya I, Heneghan CJ. Oseltamivir for influenza in adults and children: systematic review of clinical study reports and summary of regulatory comments. BMJ. 2014; 348: g2545.
[4] Heneghan CJ, Onakpoya I, Thompson M, Spencer EA, Jones M, Jefferson T. Zanamivir for influenza in adults and children: systematic review of clinical study reports and summary of regulatory comments. BMJ. 2014; 348: g2547.
[5] Loder E, Tovey D, Godlee F. The Tamiflu trials. BMJ. 2014; 348: g2630.
[6] Krumholz HM. Neuraminidase inhibitors for influenza. BMJ. 2014; 348: g2548.
[7] Belluz J. Tug of war for antiviral drugs data. BMJ. 2014; 348: g2227.
[8] Jack A. Tamiflu: “a nice little earner”. BMJ. 2014; 348: g2524.
[9] Cohen D. Oseltamivir: another case of regulatory failure? BMJ. 2014; 348: g2591.
[10] Freemantle N, Shallcross LJ, Kyte D, Rader T, Calvert MJ. Oseltamivir: the real world data. BMJ. 2014; 348: g2371.
[11] Jefferson T, Doshi P. Multisystem failure: the story of anti-influenza drugs. BMJ. 2014; 348: g2263.
[12] BBC News. Royal Navy aircraft carrier costs ‘to double’. BBC News Online. 2013 Nov 4. Available online.
[13] The IST-3 collaborative group. Effect of thrombolysis with alteplase within 6 h of acute ischaemic stroke on long-term outcomes (the third International Stroke Trial [IST-3]): 18-month follow-up of a randomised controlled trial. Lancet Neurol. 2013; 12(8): 768-76.
[14] Lenzer J. Why we can’t trust clinical guidelines. BMJ. 2013; 346: f3830.

