Clinical Research Stands Out Among Disciplines for Being Largely Atheoretical

A recent paper in the BMJ (see our recent Director’s Choice) described the (null) result of an RCT of physiotherapy for ankle injury.[1] The broader implications of this finding were discussed neither in the discussion section of the paper itself, nor in the accompanying editorial.[2] The focus was confined entirely to the ankle joint, with not a thought given to implications for strains around other joints. The theory by which physiotherapy might produce an effect, and why this might apply to some joints and not others, did not enter the discourse. The ankle joint study is no exception; such an atheoretical approach is de rigueur in medical journals, and it seems to distinguish clinical research from nearly everything else. Most scientific endeavours try to find out what results mean – they seek to explain, not just describe. Pick up an economics journal and you will find, in the introduction, an extensive rationale for the study. Only when the theory that the study seeks to explicate has been thoroughly dealt with do the methods and results follow. An article in a physics journal will use data to populate a mathematical model that embodies theory. Clinical medicine’s parent disciplines – the life sciences – are also heavily coloured by theory: Watson and Crick famously built their model (theory) entirely on other researchers’ data.

The premise that theory features less prominently in medical journals than in the journals of other disciplines is based on my informal observations; my evidence is anecdotal. However, the impression is corroborated by colleagues whose experience ranges across academic disciplines. In due course I hope to stimulate work in our CLAHRC, or with a broader constituency of News Blog readers, to examine further the prominence given to theory across disciplines. In the meantime, if the premise is accepted, contingent questions arise – why is theory less prominent in medicine, and is this a problem?

Regarding the first point, it was not ever thus. When I was studying medicine in the late 1960s and early 1970s, ‘evidence-based medicine’ lay in the future – it was all theory then, even if the theory was rather shallow and often implicit. With the advent of RCTs and the increased use of meta-analysis, it became apparent that we had often been duped by theory. Many treatments that were supported by theory turned out to be useless (like physiotherapy for sprained ankles) or harmful (like steroids for severe head injury). At this point there was a (collective) choice to be made. Evidence could have been seen as a means to refine theory and thereby influence practice. Alternatively, since theory had misdirected us in the past, its role could have been extirpated (or downgraded) so that evidence became the direct basis for practice. Bradford Hill, in his famous talk,[3] clearly favoured the former approach, but the profession, perhaps encouraged by some charismatic proponents of evidence-based medicine, seems to have taken the second route. It would be informative to track the evolution of thought and practice through an exegesis of historical documents, since what I am suggesting is itself a theory – albeit one that might have verisimilitude for many readers.

But does it matter? From a philosophy of science point of view the answer is ‘yes’. Science is inductive, meaning that results from one place and time must be extrapolated to another. Such an extrapolation requires judgement – the informed opinion that the results can be transferred / generalised / particularised across time and place. And what is there to inform such a judgement but theory? So much for the philosophy of science, but is there any evidence from practice to support the idea that an atheoretical approach is harmful? This is an inevitably tricky topic to study because the counterfactual cannot be observed directly – would things have turned out differently had theory been given more prominence? Perhaps, if theory had carried more weight, we would have extrapolated from previous data and realised earlier that it is better to treat all HIV-infected people with antivirals, not just those with suppressed immune systems.[4] Likewise, people have over-interpreted null results of adjuvant chemotherapy in rare tumours when they could easily have ‘borrowed strength’ from positive trials in more common, yet biologically similar, cancers.[5] [6]
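To make the idea of ‘borrowing strength’ concrete, here is a minimal sketch of random-effects partial pooling in Python. All the numbers (log hazard ratios and standard errors per cancer type) are invented for illustration and are not taken from the cited reviews; the point is simply that the imprecise rare-tumour estimate is shrunk towards the pooled effect across biologically similar cancers, rather than being interpreted in isolation.

```python
import numpy as np

# Hypothetical log hazard ratios (treatment vs. control) and standard
# errors for adjuvant chemotherapy across several cancer types.
# The rare tumour has a very imprecise estimate on its own.
cancers = ["colon", "breast", "gastric", "rare_tumour"]
y  = np.array([-0.35, -0.10, -0.25,  0.10])   # observed log HRs (illustrative)
se = np.array([ 0.06,  0.05,  0.08,  0.35])   # standard errors (illustrative)

# DerSimonian-Laird estimate of between-cancer variance (tau^2)
w = 1.0 / se**2
mu_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fixed) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

# Random-effects pooled mean across cancer types
w_re = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)

# Empirical-Bayes 'shrunken' estimate for each cancer: a precision-weighted
# average of its own result and the pooled mean. The imprecise rare-tumour
# result borrows most of its strength from the other cancers.
b = tau2 / (tau2 + se**2)          # weight on the cancer's own data
eb = b * y + (1 - b) * mu_re

for name, obs, post in zip(cancers, y, eb):
    print(f"{name:12s} observed {obs:+.2f} -> borrowed-strength {post:+.2f}")
```

With these made-up inputs the rare tumour’s apparently unfavourable estimate (+0.10) is pulled most of the way towards the pooled benefit seen in the commoner cancers, which is the statistical sense in which a null result in a rare tumour should not be over-interpreted on its own.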

In the heady days of evidence-based medicine, many clear-cut results emerged concerning no treatment versus a proposed new method. Now we have question inflation among a range of possible treatments and diminishing headroom for improvement. Not all possible treatments can be tested across all possible conditions, so we are going to have to rely more on network meta-analyses, on database studies, and also on theory.
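As a toy illustration of the kind of inference a network meta-analysis permits, the sketch below applies the Bucher indirect comparison: two treatments that have each been trialled against the same comparator can be compared with each other without a head-to-head trial. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical log odds ratios from two trials sharing a common
# comparator C (placebo): treatment A vs C, and treatment B vs C.
d_AC, se_AC = -0.40, 0.10   # A vs placebo (illustrative)
d_BC, se_BC = -0.15, 0.12   # B vs placebo (illustrative)

# Bucher indirect comparison: estimate A vs B without a direct trial.
# The variances add, so the indirect estimate is less precise than
# either direct one - evidence is borrowed, not conjured.
d_AB = d_AC - d_BC
se_AB = np.sqrt(se_AC**2 + se_BC**2)

lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB
print(f"indirect log OR (A vs B): {d_AB:+.2f} (95% CI {lo:+.2f} to {hi:+.2f})")
```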

Richard Lilford, CLAHRC WM Director

References:

  1. Brison RJ, Day AG, Pelland L, et al. Effect of early supervised physiotherapy on recovery from acute ankle sprain: randomised controlled trial. BMJ. 2016; 355: i5650.
  2. Bleakley C. Supervised physiotherapy for mild or moderate ankle sprain. BMJ. 2016; 355: i5984.
  3. Hill AB. The environment and disease: Association or causation? Proc R Soc Med. 1965; 58(5): 295-300.
  4. Thompson MA, Aberg JA, Hoy JF, et al. Antiretroviral treatment of adult HIV infection: 2012 recommendations of the International Antiviral Society–USA Panel. JAMA. 2012; 308(4): 387-402.
  5. Chen Y-F, Hemming K, Chilton PJ, Gupta KK, Altman DG, Lilford RJ. Scientific hypotheses can be tested by comparing the effects of one treatment over many diseases in a systematic review. J Clin Epidemiol. 2014; 67: 1309-19.
  6. Bowater RJ, Abdelmalik SM, Lilford RJ. Efficacy of adjuvant chemotherapy after surgery when considered over all cancer types: a synthesis of meta-analyses. Ann Surg Oncol. 2012; 19(11): 3343-50.

 


2 thoughts on “Clinical Research Stands Out Among Disciplines for Being Largely Atheoretical”

  1. Given that most RCTs in medicine are pharmacological, I would argue that there is a strong theoretical basis, grounded in basic science, for why an intervention may work, e.g. beta-blockers lowering blood pressure. Having said that, it is also true that inductive extrapolation from lab science and animal models has frequently failed when applied in human populations, and has sometimes resulted in harm, e.g. the CAST trial of flecainide. Hence both camps are correct: you need theory to design an RCT that is most likely to show benefit (especially in RCTs of behavioural change), but you still need empirical testing before real benefits can be assumed (EBM).
