
More on Medical School Admission

I thank Celia Taylor for drawing my attention to an important paper on the relationship between personality test results and both cognitive and non-cognitive outcomes at medical school.[1] Everyone accepts that being a good doctor is about much more than cognitive excellence. That isn’t the question. The question is how to select for salient non-cognitive attributes. The paper is a hard read because one must first learn the acronyms for all the explanatory and outcome tests. So let the News Blog take the strain!

The study uses a database containing entry-level personality scores, which were not used in selection, and outcomes following medical training. To cut a long story short, “none of the non-cognitive tests evaluated in this study has been shown to have sufficient utility to be used in medical student selection.” And, of course, even if a better test is found in the future, it may perform differently when used as part of a selection process than when used for scientific purposes. I stick by the conclusions that Celia and I published in the BMJ many years ago [2]: until a test is devised that predicts non-cognitive medical skills, and assuming that cognitive ability is not negatively associated with non-cognitive attributes, we should select purely on academic ability. I await your vituperative comments! In the meantime, may I suggest a research idea: correlate cognitive performance with the desirable compassionate skills we would like to see in our doctors. Maybe the correlation is positive, such that the more intelligent the person, the more likely they are to demonstrate compassion and patience in their dealings with patients.

— Richard Lilford, CLAHRC WM Director


  1. MacKenzie RK, Dowell J, Ayansina D, Cleland JA. Do personality traits assessed on medical school admission predict exit performance? A UK-wide longitudinal cohort study. Adv Health Sci Educ Theory Pract. 2017; 22(2): 365-85.
  2. Brown CA, & Lilford RJ. Selecting medical students. BMJ. 2008; 336: 786.

Clinical Research Stands Out Among Disciplines for Being Largely Atheoretical

A recent paper in the BMJ (see our recent Director’s Choice) described the (null) result of an RCT of physiotherapy for ankle injury.[1] The broader implications of this finding were discussed neither in the discussion section of the paper itself, nor in the accompanying editorial.[2] The focus was confined entirely to the ankle joint, with not a thought given to the implications for strains around other joints. The theory by which physiotherapy may produce an effect, and why this might apply to some joints and not others, did not enter the discourse. The ankle joint study is no exception; such an atheoretical approach is de rigueur in medical journals, and it seems to distinguish clinical research from nearly everything else. Most scientific endeavours try to find out what results mean – they seek to explain, not just describe. Pick up an economics journal and you will find, in the introduction, an extensive rationale for the study. Only when the theory that the study seeks to explicate has been thoroughly dealt with do the methods and results follow. An article in a physics journal will use data to populate a mathematical model that embodies theory. Clinical medicine’s parent discipline, the life sciences, is also heavily coloured by theory: Watson and Crick famously built their model (theory) entirely on other researchers’ data.

The premise that theory features less prominently in medical journals compared to the journals of other disciplines is based on my informal observations; my evidence is anecdotal. However, the impression is confirmed by colleagues with experience that ranges across academic disciplines. In due course I hope to stimulate work in our CLAHRC, or with a broader constituency of News Blog readers, to further examine the prominence given to theory across disciplines. In the meantime, if the premise is accepted, contingent questions arise – why is theory less prominent in medicine and is this a problem?

Regarding the first point, it was not ever thus. When I was studying medicine in the late 1960s / early 1970s ‘evidence-based medicine’ lay in the future – it was all theory then, even if the theory was rather shallow and often implicit. With the advent of RCTs and increased use of meta-analysis it became apparent that we had often been duped by theory. Many treatments that were supported by theory turned out to be useless (like physiotherapy for sprained ankles), or harmful (like steroids for severe head injury). At this point there was a (collective) choice to be made. Evidence could have been seen as a method to refine theory and thereby influence practice. Alternatively, having been misdirected by theory in the past, its role could have been extirpated (or downgraded) so that the evidence became the direct basis for practice. Bradford Hill, in his famous talk,[3] clearly favoured the former approach, but the profession, perhaps encouraged by some charismatic proponents of evidence-based medicine, seems to have taken the second route. It would be informative to track the evolution of thought and practice through an exegesis of historical documents since what I am suggesting is itself a theory – albeit a theory which might have verisimilitude for many readers.

But does it matter? From a philosophy of science point of view the answer is ‘yes’. Science is inductive, meaning that results from one place and time must be extrapolated to another. Such an extrapolation requires judgement – the informed opinion that the results can be transferred / generalised / particularised across time and place. And what is there to inform such a judgement but theory? So much for philosophy of science, but is there any evidence from practice to support the idea that an atheoretical approach is harmful? This is an inevitably tricky topic to study because the counterfactual cannot be observed directly – would things have turned out differently under an imaginary counterfactual where theory was given more prominence? Perhaps, if theory had been given more weight, we would have extrapolated from previous data and realised earlier that it is better to treat all HIV-infected people with antivirals, not just those with suppressed immune systems.[4] Likewise, people have over-interpreted null results of adjuvant chemotherapy in rare tumours when they could have easily ‘borrowed strength’ from positive trials in more common, yet biologically similar, cancers.[5] [6]

In the heady days of evidence-based medicine many clear cut results emerged concerning no treatment versus a proposed new method. Now we have question inflation among a range of possible treatments and diminishing headroom for improvement – not all possible treatments can be tested across all possible conditions – we are going to have to rely more on network meta-analyses, database studies and also on theory.

— Richard Lilford, CLAHRC WM Director


  1. Brison RJ, Day AG, Pelland L, et al. Effect of early supervised physiotherapy on recovery from acute ankle sprain: randomised controlled trial. BMJ. 2016; 355: i5650.
  2. Bleakley C. Supervised physiotherapy for mild or moderate ankle sprain. BMJ. 2016; 355: i5984.
  3. Hill AB. The environment and disease: Association or causation? Proc R Soc Med. 1965; 58(5): 295-300.
  4. Thompson MA, Aberg JA, Hoy JF, et al. Antiretroviral Treatment of Adult HIV Infection. 2012 Recommendations of the International Antiviral Society – USA Panel. JAMA. 2012; 308(4): 387-402.
  5. Chen Y-F, Hemming K, Chilton PJ, Gupta KK, Altman DG, Lilford RJ. Scientific hypotheses can be tested by comparing the effects of one treatment over many diseases in a systematic review. J Clin Epidemiol. 2014; 67: 1309-19.
  6. Bowater RJ, Abdelmalik SM, Lilford RJ. Efficacy of adjuvant chemotherapy after surgery when considered over all cancer types: a synthesis of meta-analyses. Ann Surg Oncol. 2012; 19(11): 3343-50.


Medicine or the Law… It is all a Question of Probability

The CLAHRC WM Director was recently sent a transcript of the Richard Davies QC Memorial Lecture 2015, “Standards of Proof in Law and Science: Distinctions without a Difference”. The transcript was dispatched by Dr Martin Quinn, an old friend from his gynaecology days, and the speech was given by prominent High Court Judge, Justice Jay. Such experience as the CLAHRC WM Director has of High Court Judges is that they are a cognitively astute bunch, but not necessarily highly numerate. If he is right about that, then Jay is something of an exception. His theme was the similarities and differences between the scientific and legal intellectual frameworks. He first makes a parody of their different epistemologies, but soon comes round to cogent arguments that they are more united by their similarities than divided by their differences. After all, both have to analyse evidence, work out what it means, and make judgements under uncertainty. Individual cases come down to probabilities in both areas: the balance of probabilities in cases of tort, probabilities sufficient to put the matter ‘beyond reasonable doubt’ in criminal cases, and the relative probabilities of benefit and harm in medical cases.

This all means that both professions need quite sophisticated notions of probability with which to work. Doctors fall over their feet on probability, but Justice Jay has a clear understanding of frequentist and Bayesian notions of probability. As CLAHRC WM News Blog readers know only too well, frequentist statistics cannot tell you the probability that something is true (given the data), but only the probability of the data, given that something (typically the null hypothesis) is true. Given only the latter (i.e. a frequentist calculation of the probability of the data under a null hypothesis), then the probability of some alternative hypothesis can only be calculated given a prior probability. This is obviously a crucial concept in both law and medicine.
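The point that a frequentist result alone cannot yield the probability of a hypothesis can be made concrete with a minimal sketch. The numbers below are purely illustrative (no real test or trial is assumed): the same likelihoods combined with different priors give very different posteriors.

```python
# Sketch: a p-value gives P(data | null), but P(hypothesis | data)
# also requires a prior probability. Illustrative numbers only.

def posterior_prob_alternative(prior_alt, p_data_given_alt, p_data_given_null):
    """Bayes' theorem: P(alternative | data) from a prior and two likelihoods."""
    prior_null = 1.0 - prior_alt
    numerator = prior_alt * p_data_given_alt
    return numerator / (numerator + prior_null * p_data_given_null)

# Same evidence, different priors -> very different conclusions.
sceptical = posterior_prob_alternative(0.01, 0.80, 0.05)   # long-shot hypothesis
optimistic = posterior_prob_alternative(0.50, 0.80, 0.05)  # 50:50 prior

print(round(sceptical, 3))   # 0.139
print(round(optimistic, 3))  # 0.941
```

The evidence (likelihood ratio 0.80 / 0.05 = 16) is identical in both calls; only the prior differs, yet one posterior falls well short of the balance of probabilities while the other comfortably exceeds it.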

Consider two scenarios – a judge deciding a case of homicide, and a doctor considering a diagnosis of Duchenne muscular dystrophy. The judge has blood group evidence and an inconclusive alibi – what is the probability that the accused was at the scene of the crime? The doctor has the result of a blood enzyme test and a family history – what is the probability of the diagnosis? They both use Bayes’ theorem:

Posterior odds = Prior odds × Likelihood ratio
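The odds form of Bayes’ theorem can be applied to both scenarios in a few lines. All the figures below are hypothetical, chosen for the sake of arithmetic rather than drawn from any real case or test:

```python
# Odds form of Bayes' theorem applied to the judge and the doctor.
# All priors and likelihood ratios are hypothetical.

def posterior_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds to a probability where a single figure is wanted."""
    return odds / (1.0 + odds)

# Judge: prior odds of presence at the scene of, say, 1:4; a blood-group
# match assumed 10 times more likely if the accused was there.
judge = posterior_odds(1 / 4, 10)    # posterior odds 2.5:1

# Doctor: prior odds from family history of 1:1; an enzyme result assumed
# 20 times more likely under Duchenne muscular dystrophy.
doctor = posterior_odds(1.0, 20)     # posterior odds 20:1

print(round(odds_to_prob(judge), 3))   # 0.714
print(round(odds_to_prob(doctor), 3))  # 0.952
```

The machinery is identical in the two professions; as the next paragraph argues, the difference lies in what is done with the resulting probability.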

The difference between judge and doctor lies not in the axiomatic method that normatively underpins the requisite probabilities, but in what is done with them. The judge interprets a given probability with reference to a legal framework – reasonable doubt. That might correspond to posterior odds of, say, 99:1. The doctor must make his interpretation with reference to the balance of benefits and harms. Since benefits and harms are not all equivalent, the decision turns on a ‘loss function’. The loss function is derived under expected utility theory and weights probabilities by preferences.[1]
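The expected utility calculation behind such a loss function is simple to sketch. The probabilities and utility values below are invented for illustration; in practice they would come from the evidence and from the patient’s own preferences:

```python
# Sketch of a decision under expected utility theory: outcome
# probabilities are weighted by (hypothetical) patient preferences,
# and the option with the higher expected utility is preferred.

def expected_utility(outcome_probs, utilities):
    """Sum of probability x utility over the possible outcomes."""
    return sum(p * u for p, u in zip(outcome_probs, utilities))

# Illustrative figures only; utilities on a 0 (worst) to 1 (best) scale.
# Treat: 70% cure (1.0), 20% partial relief (0.6), 10% serious harm (0.0).
treat = expected_utility([0.7, 0.2, 0.1], [1.0, 0.6, 0.0])

# Watchful waiting: 40% cure, 50% partial relief, 10% no change (0.4).
wait = expected_utility([0.4, 0.5, 0.1], [1.0, 0.6, 0.4])

print(round(treat, 2))  # 0.82
print(round(wait, 2))   # 0.74
```

Note that with different preference weights (say, a patient who dreads the serious harm far more than losing the chance of cure) the same probabilities could favour watchful waiting; that is precisely the sense in which the decision turns on the loss function, not the probabilities alone.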

Both doctors and lawyers must understand notions of conditional probability. Failure to understand this idea leads to erroneous thinking, for example the famous ‘prosecutor’s fallacy’. This is exemplified in the case of Sally Clark, where an expert, Roy Meadow, argued that guilt was likely on the grounds that two cases of infant death in one family are very rare; one in many thousands. However, that consideration of the frequency of a certain scenario is quite beside the point once the scenario has been observed. In that case, the salient probability is a conditional one – namely the probability of malfeasance versus that of natural causes, given the observed deaths.
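The fallacy is easiest to see with numbers. Those below are made up for illustration and are not the figures from the Clark case; the point is only that the rarity of an event, on its own, says nothing about guilt once the event has occurred:

```python
# Numeric sketch of the prosecutor's fallacy, with made-up figures.
# Both explanations of two infant deaths are rare a priori; what matters
# is their RELATIVE probability, conditional on the deaths having occurred.

p_deaths_and_natural = 1 / 100_000    # P(two deaths from natural causes)
p_deaths_and_homicide = 1 / 1_000_000 # P(double homicide) - also very rare

# Condition on the observed deaths and compare the explanations:
p_natural_given_deaths = p_deaths_and_natural / (
    p_deaths_and_natural + p_deaths_and_homicide
)

print(round(p_natural_given_deaths, 2))  # 0.91
```

On these (invented) figures the innocent explanation is about ten times more likely than malfeasance, even though the prosecutor could truthfully describe the deaths as a one-in-many-thousands event.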

Cases of tort frequently turn on evidence of effectiveness. For example, a meta-analysis of high-quality RCTs may show a statistically ‘significant’ 55% reduction in the relative risk of outcome x when treatment y is administered. Given a particular case of tort where failure to administer y (in the absence of a contra-indication) was followed by x, it might be tempting to argue that causality can be established on the balance of probabilities. But not so fast:

  1. This is to conflate the probability of the effect and the probability of the data, and ‘well-brought-up people do not do that’; the prior must be brought into play.
  2. As in medical care, the particular features of the case must be taken into account – there may be good grounds to argue that the typical effect would be greater or smaller among people resembling the case under consideration.

In the end, ‘evidence-based medicine’ may have relatively little effect on outcomes in cases of tort. This is because most interventions examined by RCTs, the standard tool of evidence-based medicine, are not so powerful as to halve relative risks – relative risk reductions of around 20% are more typical. Furthermore, the magnitude of effect is generally smaller for less serious outcomes (such as admission to hospital with angina) than for more serious outcomes (such as cardiac death) that drive compensation quanta in claims.[2] The situation is different with diagnostic errors, procedural errors, and failure to rescue. The CLAHRC WM Director favours a change in the law, whereby compensation is weighted by the (Bayesian) probability of causality rather than the (illogical?) balance threshold.
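One way to make the ‘probability of causality’ idea concrete is the standard epidemiological attributable fraction: if an omission carries relative risk RR > 1 for the bad outcome, the probability that it caused the outcome in an exposed case is (RR − 1) / RR. This is a sketch of that calculation, not a statement of how any court currently computes damages:

```python
# Probability of causation via the attributable fraction among the
# exposed: PC = (RR - 1) / RR for relative risk RR > 1.

def probability_of_causation(relative_risk):
    """Probability that the exposure caused the outcome in an exposed case."""
    if relative_risk <= 1:
        return 0.0
    return (relative_risk - 1) / relative_risk

# A 55% relative risk reduction from treatment y means omitting y
# multiplies risk by roughly 1 / (1 - 0.55) ~ 2.22.
rr_from_55_pct_rrr = 1 / (1 - 0.55)
print(round(probability_of_causation(rr_from_55_pct_rrr), 2))  # 0.55

# A more typical 20% relative risk reduction gives RR = 1.25.
rr_from_20_pct_rrr = 1 / (1 - 0.20)
print(round(probability_of_causation(rr_from_20_pct_rrr), 2))  # 0.2
```

Under this formula the probability of causation equals the relative risk reduction, which illustrates the point in the text: a 55% reduction would just clear a balance-of-probabilities threshold, while the more typical 20% falls well short – yet under weighted compensation the latter claimant would still receive 20% of the quantum rather than nothing.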

— Richard Lilford, CLAHRC WM Director


  1. Thornton JG, Lilford RJ, Johnson N. Decision analysis in medicine. BMJ. 1992; 304(6834): 1099-103.
  2. Bowater RJ, Hartley LC, Lilford RJ. Are cardiovascular trial results systematically different between North America and Europe? A study based on intra-meta-analysis comparisons. Arch Cardiovasc Dis. 2015; 108(1): 23-38.