Tag Archives: Theory

Clinical Research Stands Out Among Disciplines for Being Largely Atheoretical

A recent paper in the BMJ (see our recent Director’s Choice) described the null result of an RCT of physiotherapy for ankle injury.[1] The broader implications of this finding were discussed neither in the discussion section of the paper itself, nor in the accompanying editorial.[2] The focus was confined entirely to the ankle joint, with not a thought given to implications for strains around other joints. The theory by which physiotherapy may produce an effect, and why this might apply to some joints and not others, did not enter the discourse. The ankle joint study is no exception: such an atheoretical approach is de rigueur in medical journals, and it seems to distinguish clinical research from nearly everything else. Most scientific endeavours try to find out what results mean; they seek to explain, not just describe. Pick up an economics journal and you will find, in the introduction, an extensive rationale for the study. Only when the theory that the study seeks to explicate has been thoroughly dealt with do the methods and results follow. An article in a physics journal will use data to populate a mathematical model that embodies theory. Clinical medicine’s parent disciplines – the life sciences – are also heavily coloured by theory: Watson and Crick famously built their model (theory) entirely on other researchers’ data.

The premise that theory features less prominently in medical journals than in the journals of other disciplines is based on my informal observations; my evidence is anecdotal. However, the impression is corroborated by colleagues whose experience ranges across academic disciplines. In due course I hope to stimulate work in our CLAHRC, or with a broader constituency of News Blog readers, to examine further the prominence given to theory across disciplines. In the meantime, if the premise is accepted, contingent questions arise: why is theory less prominent in medicine, and is this a problem?

Regarding the first point, it was not ever thus. When I was studying medicine in the late 1960s / early 1970s ‘evidence-based medicine’ lay in the future – it was all theory then, even if the theory was rather shallow and often implicit. With the advent of RCTs and the increased use of meta-analysis it became apparent that we had often been duped by theory. Many treatments that were supported by theory turned out to be useless (like physiotherapy for sprained ankles) or harmful (like steroids for severe head injury). At this point there was a (collective) choice to be made. Evidence could have been seen as a method to refine theory and thereby influence practice. Alternatively, having been misdirected by theory in the past, its role could have been extirpated (or downgraded) so that evidence became the direct basis for practice. Bradford Hill, in his famous talk,[3] clearly favoured the former approach, but the profession, perhaps encouraged by some charismatic proponents of evidence-based medicine, seems to have taken the second route. It would be informative to track the evolution of thought and practice through an exegesis of historical documents, since what I am suggesting is itself a theory – albeit one that might have verisimilitude for many readers.

But does it matter? From a philosophy of science point of view the answer is ‘yes’. Science is inductive, meaning that results from one place and time must be extrapolated to another. Such an extrapolation requires judgement – the informed opinion that the results can be transferred / generalised / particularised across time and place. And what is there to inform such a judgement but theory? So much for philosophy of science, but is there any evidence from practice to support the idea that an atheoretical approach is harmful? This is an inevitably tricky topic to study because the counterfactual cannot be observed directly – would things have turned out differently in an imaginary world where theory was given more prominence? Perhaps, if theory had been given more weight, we would have extrapolated from previous data and realised earlier that it is better to treat all HIV-infected people with antivirals, not just those with suppressed immune systems.[4] Likewise, people have over-interpreted null results of adjuvant chemotherapy in rare tumours when they could easily have ‘borrowed strength’ from positive trials in more common, yet biologically similar, cancers.[5] [6]
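A minimal sketch of this ‘borrowing strength’ idea, using a standard DerSimonian-Laird random-effects model. The numbers are invented for illustration (they are not data from the cited meta-analyses): a small, noisy trial in a rare tumour is shrunk toward the pooled effect estimated jointly with larger trials in biologically similar cancers.

```python
# Illustrative only: partial pooling ("borrowing strength") across trials
# in related diseases via a DerSimonian-Laird random-effects meta-analysis.
# All effect sizes and standard errors below are hypothetical.

def random_effects_pool(effects, ses):
    """effects: observed treatment effects (e.g. log hazard ratios)
    ses: their standard errors
    Returns (pooled_mean, tau2, shrunken_estimates)."""
    w = [1 / s**2 for s in ses]                       # fixed-effect weights
    mu_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mu_fe)**2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-trial variance
    w_re = [1 / (s**2 + tau2) for s in ses]
    mu_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    # Each trial's estimate is pulled toward the pooled mean in proportion
    # to its imprecision: the noisiest trial borrows the most strength.
    shrunk = [(tau2 * yi + s**2 * mu_re) / (tau2 + s**2)
              for yi, s in zip(effects, ses)]
    return mu_re, tau2, shrunk

# Three large trials in common cancers, plus one small, noisy trial in a
# rare tumour (last entry) whose raw estimate points the "wrong" way.
effects = [-0.40, -0.10, -0.25, 0.30]
ses     = [ 0.08,  0.10,  0.09, 0.40]
mu, tau2, shrunk = random_effects_pool(effects, ses)
print(f"pooled effect: {mu:.3f}, between-trial variance: {tau2:.4f}")
print(f"rare-tumour estimate shrunk from {effects[-1]:.2f} to {shrunk[-1]:.2f}")
```

The point is not the arithmetic but the inferential stance: pooling is only legitimate if theory (here, biological similarity between cancers) justifies treating the trials as draws from a common distribution.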

In the heady days of evidence-based medicine many clear-cut results emerged concerning no treatment versus a proposed new method. Now we have question inflation among a range of possible treatments and diminishing headroom for improvement. Not all possible treatments can be tested across all possible conditions, so we are going to have to rely more on network meta-analyses, database studies, and also on theory.

— Richard Lilford, CLAHRC WM Director


  1. Brison RJ, Day AG, Pelland L, et al. Effect of early supervised physiotherapy on recovery from acute ankle sprain: randomised controlled trial. BMJ. 2016; 355: i5650.
  2. Bleakley C. Supervised physiotherapy for mild or moderate ankle sprain. BMJ. 2016; 355: i5984.
  3. Hill AB. The environment and disease: Association or causation? Proc R Soc Med. 1965; 58(5): 295-300.
  4. Thompson MA, Aberg JA, Hoy JF, et al. Antiretroviral Treatment of Adult HIV Infection. 2012 Recommendations of the International Antiviral Society – USA Panel. JAMA. 2012; 308(4): 387-402.
  5. Chen Y-F, Hemming K, Chilton PJ, Gupta KK, Altman DG, Lilford RJ. Scientific hypotheses can be tested by comparing the effects of one treatment over many diseases in a systematic review. J Clin Epidemiol. 2014; 67: 1309-19.
  6. Bowater RJ, Abdelmalik SM, Lilford RJ. Efficacy of adjuvant chemotherapy after surgery when considered over all cancer types: a synthesis of meta-analyses. Ann Surg Oncol. 2012; 19(11): 3343-50.


A False Distinction

Lloyd Provost, writing in BMJ Quality and Safety, argues for a distinction between enumerative and analytic studies.[1] Enumerative studies, he says, are suitable for measurements of static samples, like the water properties in a pool, while analytic studies track changes over time, as in sampling water in a river. Analytic studies help unravel cause and effect, says this article from the Institute for Healthcare Improvement. You don’t have to be a professor of philosophy to drive a coach and horses through this distinction. Science is always concerned with causal mechanisms or, at the very least, with predictions – pick up any textbook on the philosophy of science. Philosophers talk of the ‘scandal of induction’, since there is never a fool-proof way to predict the future – think black swans. So theory is always required to help make judgements about past results and their implications across time and place. If you want to test the water in a pool to see if it is safe to swim in, then that is not a scientific exercise. If you want to find out whether a certain chemical kills fish in pools, then that is a scientific exercise, and you had better sample plenty of pools with live and dead fish in them.

— Richard Lilford, CLAHRC WM Director


  1. Provost LP. Analytical Studies: A Framework for Quality Improvement Design and Analysis. BMJ Qual Saf. 2011; 20: i92-6.

Demystifying Theory

A recent article in BMJ Quality and Safety offers a lively and useful account of the role of theory in applied research, with examples taken from service delivery research.[1] The authors explain repeatedly that theory is always present when a service is changed, and that the choice lies in making theory formal and explicit versus leaving it vague and implicit. The article covers grand theories (such as the idea of culture); mid-range theories (such as social behavioural theory, which emphasises the effect of social cues on behaviour); and programme theories (which map out the territory between cause [e.g. more nurses] and outcome [e.g. healthier, happier patients]). A detailed discussion of programme theory was recently published by CLAHRC Northwest London.[2]

One of the problems that provokes the CLAHRC WM Director is the observation that the same theory may go under different names (‘the same wine in new bottles’) or that theories may overlap. For example, Ferlie and Shortell,[3] and Richard Grol [4] have both developed a similar theory (that successful organisational change requires co-ordinated responses from different levels in the organisational hierarchy). Also, in many circumstances, it may be necessary to determine which theories are relevant and which are not. For example, should an intervention be designed according to nudge theory, social behavioural theory, the theory of planned behaviour, or one of the other 64 psychological theories of behavioural change? Here, one turns to a theory of theories – for example, the transtheoretical model.[5] [6]

— Richard Lilford, CLAHRC WM Director


  1. Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Qual Saf. 2015; 24: 228-38.
  2. Reed JE, McNicholas C, Woodcock T, Issen L, Bell D. Designing quality improvement initiatives: the action effect method, a structured approach to identifying and articulating programme theory. BMJ Qual Saf. 2014. [ePub].
  3. Ferlie EB & Shortell SM. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Q. 2001; 79(2): 281-315.
  4. Grol R, Wensing M, Eccles M, Davis D, eds. Improving patient care: the implementation of change in health care. Hoboken, NJ: John Wiley & Sons. 2013.
  5. Prochaska JO, Velicer WF. The transtheoretical model of health behaviour change. Am J Health Promot. 1997; 12(1): 38-48.
  6. Michie S, Johnston M, Francis J, Hardeman W, Eccles M. From theory to intervention: mapping theoretically derived behavioural determinants to behaviour change techniques. Appl Psychol. 2008; 57(4): 660-80.