Tag Archives: Philosophy

Clinical Research Stands Out Among Disciplines for Being Largely Atheoretical

A recent paper in the BMJ (see our recent Director’s Choice) described the (null) result of an RCT of physiotherapy for ankle injury.[1] The broader implications of this finding were discussed neither in the discussion section of the paper itself, nor in the accompanying editorial.[2] The focus was confined entirely to the ankle joint, with not a thought given to implications for strains around other joints. The theory by which physiotherapy may produce an effect, and why this might apply to some joints and not others, did not enter the discourse. The ankle study is no exception: such an atheoretical approach is de rigueur in medical journals, and it seems to distinguish clinical research from nearly everything else. Most scientific endeavours try to find out what results mean; they seek to explain, not just describe. Pick up an economics journal and you will find, in the introduction, an extensive rationale for the study. Only when the theory that the study seeks to explicate has been thoroughly dealt with do the methods and results follow. An article in a physics journal will use data to populate a mathematical model that embodies theory. Clinical medicine’s parent disciplines – the life sciences – are also heavily coloured by theory: Watson and Crick famously built their model (theory) entirely on other researchers’ data.

The premise that theory features less prominently in medical journals than in the journals of other disciplines is based on my informal observations; my evidence is anecdotal. However, the impression is confirmed by colleagues whose experience ranges across academic disciplines. In due course I hope to stimulate work in our CLAHRC, or with a broader constituency of News Blog readers, to examine further the prominence given to theory across disciplines. In the meantime, if the premise is accepted, two contingent questions arise: why is theory less prominent in medicine, and is this a problem?

Regarding the first point, it was not ever thus. When I was studying medicine in the late 1960s / early 1970s ‘evidence-based medicine’ lay in the future – it was all theory then, even if the theory was rather shallow and often implicit. With the advent of RCTs and the increased use of meta-analysis it became apparent that we had often been duped by theory. Many treatments that were supported by theory turned out to be useless (like physiotherapy for sprained ankles) or harmful (like steroids for severe head injury). At this point there was a (collective) choice to be made. Evidence could have been seen as a method to refine theory and thereby influence practice. Alternatively, having misdirected us in the past, theory could have been extirpated (or downgraded) so that evidence became the direct basis for practice. Bradford Hill, in his famous talk,[3] clearly favoured the former approach, but the profession, perhaps encouraged by some charismatic proponents of evidence-based medicine, seems to have taken the second route. It would be informative to track the evolution of thought and practice through an exegesis of historical documents, since what I am suggesting is itself a theory – albeit one that might have verisimilitude for many readers.

But does it matter? From a philosophy of science point of view the answer is ‘yes’. Science is inductive, meaning that results from one place and time must be extrapolated to another. Such an extrapolation requires judgement – the informed opinion that the results can be transferred / generalised / particularised across time and place. And what is there to inform such a judgement but theory? So much for philosophy of science, but is there any evidence from practice to support the idea that an atheoretical approach is harmful? This is an inevitably tricky topic to study, because the counterfactual cannot be observed directly – would things have turned out differently in an imaginary world where theory was given more prominence? Perhaps, if theory had been given more weight, we would have extrapolated from previous data and realised earlier that it is better to treat all HIV-infected people with antivirals, not just those with suppressed immune systems.[4] Likewise, people have over-interpreted null results of adjuvant chemotherapy in rare tumours when they could easily have ‘borrowed strength’ from positive trials in more common, yet biologically similar, cancers.[5] [6]

In the heady days of evidence-based medicine many clear-cut results emerged concerning no treatment versus a proposed new method. Now we have question inflation among a range of possible treatments and diminishing headroom for improvement. Since not all possible treatments can be tested across all possible conditions, we are going to have to rely more on network meta-analyses, database studies, and theory.

— Richard Lilford, CLAHRC WM Director

References:

  1. Brison RJ, Day AG, Pelland L, et al. Effect of early supervised physiotherapy on recovery from acute ankle sprain: randomised controlled trial. BMJ. 2016; 355: i5650.
  2. Bleakley C. Supervised physiotherapy for mild or moderate ankle sprain. BMJ. 2016; 355: i5984.
  3. Hill AB. The environment and disease: Association or causation? Proc R Soc Med. 1965; 58(5): 295-300.
  4. Thompson MA, Aberg JA, Hoy JF, et al. Antiretroviral Treatment of Adult HIV Infection: 2012 Recommendations of the International Antiviral Society–USA Panel. JAMA. 2012; 308(4): 387-402.
  5. Chen Y-F, Hemming K, Chilton PJ, Gupta KK, Altman DG, Lilford RJ. Scientific hypotheses can be tested by comparing the effects of one treatment over many diseases in a systematic review. J Clin Epidemiol. 2014; 67: 1309-19.
  6. Bowater RJ, Abdelmalik SM, Lilford RJ. Efficacy of adjuvant chemotherapy after surgery when considered over all cancer types: a synthesis of meta-analyses. Ann Surg Oncol. 2012; 19(11): 3343-50.

 


Interdisciplinary Research – Mind the Gap

Prof Terry Young, of Brunel University, recently drew the CLAHRC WM Director’s attention to a series of articles on interdisciplinary research in the journal Nature.[1-5] Terry directed the successful Engineering and Physical Sciences Research Council (EPSRC) Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH), which was based on collaboration between health economics, social science, mathematics and engineering. The aim was to devise methods to improve the selection and exploitation of promising ideas in the device industry. He is therefore interested in fostering interdisciplinary research.

An important facilitator/barrier to interdisciplinary research can be found in the notion of ‘intellectual distance’ between disciplines. The gap between disciplines such as economics and statistics or journalism and literature is small. In addition, certain disciplines underpin others – mathematics is an essential tool across a range of disciplines. We have argued that philosophy should underpin all serious study – a recent blog post refers.[6]

But when the distance widens further (sociology and nanotechnology, say, or poetry and neuroimaging), it becomes more difficult to straddle the intellectual gap. CLAHRCs tackle problems that are inherently multi-disciplinary and so are accustomed to working across boundaries. However, collaborations are most productive when they are genuinely interdisciplinary and not just a collection of freestanding (multidisciplinary) projects. Top researchers are so-called ‘T-shaped’, having depth of knowledge in one area and breadth across many.

However, we must also remember that interdisciplinary work is a means to an end and not an end in itself; it should not be fetishised. At some point in the development of nanotechnology, say, sociological issues may emerge. ‘T-shaped’ researchers will recognise when this moment arises; premature attempts to generate or incentivise collaboration can be annoying and wasteful.

That said, there are times where basic science and art can enrich each other, as we shall explore in the next issue of the News Blog.

— Richard Lilford, CLAHRC WM Director

References:

  1. Viseu A. Integration of Social Science into Research is Crucial. Nature. 2015; 525: 291.
  2. Van Noorden R. Interdisciplinary Research by the Numbers: An Analysis Reveals the Extent and Impact of Research that Bridges Disciplines. Nature. 2015; 525: 306-7.
  3. Ledford H. How to Solve the World’s Biggest Problems. Nature. 2015; 525: 308-11.
  4. Rylance R. Global giving: Global funders to focus on interdisciplinarity. Nature. 2015; 525: 313-5.
  5. Brown RR, Deletic A, Wong THF. Interdisciplinarity: How to catalyse collaboration. Nature. 2015; 525: 315-7.
  6. Lilford R. Where is the Philosophy of Science in Research Methodology? CLAHRC WM News Blog. 9 October 2015.

Chocolate Can Help You Lose Weight – a Hoax

The hoax perpetrated by John Bohannon is now well known.[1] [2] He carried out a small RCT of the effects of a high-chocolate diet versus a standard diet on multiple end-points, including blood pressure, mood, cholesterol level, and so on. In fact there were 18 end-points, so if the null hypothesis were true there was a 0.60 (60%) probability of obtaining at least one positive result: 1 − (1 − 0.05)¹⁸ ≈ 0.60. Then he ‘p-hacked’. Sure enough, one end-point was ‘positive’ – the effect on weight.
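For readers who want to check the arithmetic, here is a minimal sketch in Python (not part of Bohannon’s study; the figures are simply the ones quoted above) that reproduces the 60% false-positive probability both analytically and by simulation, assuming the 18 end-points are independent:

```python
import random

ALPHA = 0.05        # conventional significance threshold
N_ENDPOINTS = 18    # number of end-points in the chocolate trial
N_SIMS = 100_000    # number of simulated 'null' trials

# Analytic answer: P(at least one nominally significant end-point)
analytic = 1 - (1 - ALPHA) ** N_ENDPOINTS

# Monte Carlo check: under the null hypothesis each p-value is
# uniform on (0, 1), so a 'hit' is any trial in which at least one
# of the 18 p-values falls below alpha.
hits = sum(
    any(random.random() < ALPHA for _ in range(N_ENDPOINTS))
    for _ in range(N_SIMS)
)

print(f"Analytic:  {analytic:.3f}")       # ~0.603
print(f"Simulated: {hits / N_SIMS:.3f}")  # ~0.60
```

In reality the end-points would be correlated, which reduces the effective number of independent tests; the 60% figure is therefore the value under the independence assumption.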

He paid £600 to publish the paper in an online journal. It was picked up by Europe’s largest-circulation newspaper (Bild) and went viral – all the way to talk shows in Arizona. He then exposed the hoax. (The paper has now been withdrawn, but is still available online.)[3]

What are the ethics of such a study? The CLAHRC WM Director “is okay” with it, on the grounds that the betrayal of trust inherent in such a hoax is justified by the need to expose the orders-of-magnitude more harmful betrayals inherent in the whole chain of data dredging, scientific publication and journalism.

But what does it mean for science? On the one hand, we have people like Ioannidis saying that much (even most) research is junk because of ‘p-hacking’,[4] and on the other, those like Pawson and Tilley, arguing that it is the pattern in ‘rich’ data that should inform theory and action.[5] The former camp advocates published prior hypotheses, distinguishing between ‘primary’ and ‘secondary’ outcomes, and even correcting for multiple observations (thereby making it harder for results to be ‘positive’). The latter advocates mixed-methods research and ‘triangulation’.

Both have a point. The chocolate story is but one of many showing that the dangers of p-hacking are very real. Yet the realist camp also has a point – indeed a more profound one – since it is a long-standing part of the scientific method that multiple observations reinforce or undermine theories. The famous philosopher of science William Whewell articulated the idea of forming a theory, deciding what observations could confirm or refute it, and then collecting the necessary observations.[6] A multiple-outcome study does just that (even if the observations come from one study rather than a series of studies). Finding that a treatment reduces blood pressure, heart attacks and strokes is more impressive evidence than a reduction in one of these end-points alone. The combination of effects in the expected direction suggests that the underlying theoretical construct is correct and that it would be safe to generalise. This would be the case even with respect to an end-point where the improvement was not quite significant at the usual threshold. Likewise, showing that improving nurse-patient ratios resulted in nurses spending more time with patients, being more diligent in making observations of vital signs, and turning patients more often, as well as improving satisfaction and clinical outcomes, would be more impressive evidence (say in an observational study) than improvement in one end-point alone.

So how to reconcile these two approaches? What we need – the trick to pull off – is to impose a prior discipline (akin to the idea of ‘prior’ hypotheses), while capitalising on the idea of corroboration across different observations, as recommended by Whewell. Here discipline is imposed by first spelling out the hypothesised relationships between end-points. Then observations are made with respect to the hypothesised relationships across the pre-defined causal chain. In the case of nurse-patient ratios, the causal chain may look something like this:

Advertise for more nurses → more nurses are hired → nurse morale improves; nurses spend more time with patients; nurse knowledge improves → patients are turned more often; vital signs are more diligently observed; nurses provide more compassion → fewer pressure ulcers; lower mortality / fewer failed resuscitation attempts; improved satisfaction.

Observations are made across this chain. A borderline improvement in patient satisfaction in an uncontrolled study, in the absence of a change in any other end-point, would not be impressive evidence of effectiveness. However, showing that the intervention was properly implemented (A), and that intervening variables (B), clinical processes (C) and patient outcomes (D) all improved, would support a cause-and-effect relationship.[7] This would hold even in the event that one end-point, say pressure ulcers, improved, but not to the extent that it crossed the usual threshold for statistical significance.

So there we have it – a philosophical basis to reconcile two apparently contradicting movements in research. All that leaves is how to combine the data. CLAHRC WM is actively investigating Bayesian networks for this purpose with practical examples supported by NIHR Programme, HS&DR, and Health Foundation grants.
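As a flavour of how such corroboration might be combined quantitatively, here is a toy sketch in Python. It is not one of the CLAHRC WM models: the node labels follow the A-D chain above, but the prior and the likelihood ratios are invented for illustration, and the observations are treated as independent given the hypothesis (the very assumption a full Bayesian network would relax):

```python
# Toy evidence synthesis across a causal chain:
# A (implementation) -> B (intervening variables)
# -> C (clinical processes) -> D (patient outcomes).
# All numbers are invented for illustration only.

def update(prob, likelihood_ratio):
    """Bayes' rule on the odds scale: multiply prior odds by the
    likelihood ratio, then convert back to a probability."""
    odds = prob / (1 - prob)
    odds *= likelihood_ratio
    return odds / (1 + odds)

prior = 0.30  # hypothetical prior that the intervention works

# Likelihood ratios P(observation | works) / P(observation | does not):
# each piece of evidence is individually modest - none would be
# persuasive (or 'significant') on its own.
chain_evidence = [
    ("A: intervention implemented as planned", 1.5),
    ("B: nurses spend more time with patients", 2.0),
    ("C: vital signs observed more diligently", 2.0),
    ("D: borderline fall in pressure ulcers", 1.8),
]

p = prior
print(f"Prior: P(works) = {p:.2f}")
for label, lr in chain_evidence:
    p = update(p, lr)
    print(f"After {label}: P(works) = {p:.2f}")
# The posterior climbs from 0.30 to roughly 0.82 - corroboration
# across the chain does what no single end-point could.
```

A proper Bayesian network adds structure to this sketch by modelling the dependencies between links in the chain, which is precisely what makes it attractive for the programme of work described above.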

— Richard Lilford, CLAHRC WM Director

References:

  1. Bohannon J. I Fooled Millions into Thinking Chocolate Helps Weight Loss. Here’s How. io9. 27 May 2015.
  2. Kassel M. John Bohannon’s Chocolate Hoax and the Spread of Misinformation. Observer. 6 April 2015.
  3. Bohannon J, Koch D, Homm P, Driehaus A. Chocolate with high cocoa content as a weight-loss accelerator. 2015. [Online]
  4. Ioannidis JPA. Why Most Published Research Findings are False. PLoS Med. 2005; 2: e124.
  5. Pawson R & Tilley N. Realistic Evaluation. London: Sage. 1997.
  6. Whewell W & Butts RE. William Whewell’s Theory of Scientific Method. Pittsburgh: University of Pittsburgh Press. 1968.
  7. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.

Where is the Philosophy of Science in Research Methodology?

I was recently speaking to a (deservedly) famous and prominent UK academic about a meeting on research methodology. We were to invite mathematicians and sociologists; statisticians and psychologists; epidemiologists and economists. All the subjects cognate to applied health research were included – save one. When I suggested that we include the philosophy of science, my interlocutor was dismissive to the point of ridicule. As I said, he was a very senior fellow and I did not wish to annoy him (senior fellows are quite easily annoyed), so I let it drop. But his reaction is not atypical – when have you heard a health economist, epidemiologist or psychologist argue from an explicit epistemological or ontological premise? Granted, Lindley wrote a very famous article about the “Philosophy of Statistics”,[1] and formal Bayes can be considered a philosophical tradition with a distinct epistemology. But that aside, only one of the cognate disciplines listed above uses the “E” and “O” words – sociology.

And here is my problem. Scientific methodology is underpinned by philosophical premises, and this remains the case whether these ideas are made explicit or remain implicit. The result is that disciplines that tend to be explicit about their philosophical assumptions have more influence over methodology than those where the underlying principles are only implicit. In short, if epidemiologists, economists and psychologists remain silent on epistemology, then they abrogate intellectual authority. Likewise, sociologists who do not dwell on these topics delegate ‘authority’ to those who do. And with apologies to my esteemed colleagues in that discipline – the intellectual basis of applied research is too important an issue to leave to a sub-section of sociologists. This would not be so important but for the constructivist flavour of much of the methodological literature in sociology. I am a vociferous, but lonely, critic of this viewpoint.[2] I would invite readers who have not already done so to enjoy the story of a hoax perpetrated by a Professor of Physics at the expense of constructivist sociologists – just type Alan Sokal into Google.[3] [4]

There is, however, one point on which constructivists and sociologists are absolutely right – we should talk about these things. However, philosophy of science does not seem to be prominent in philosophy departments and most disciplines ignore the subject. Why this reluctance to engage?

I do not have the answer and look forward to your views. One theory is that the more successful a discipline, the less it worries over its epistemology. Physicists (Sokal aside) take little notice of the subject. People who find bosons and black holes are content with their methodology, thank you very much. The role of philosophy is back to front in physics – a ‘mopping up’ exercise to describe how the physicists succeeded (so-called ‘naturalism’), rather than a normative prescription for future work. Does this mean that sociologists struggle to come up with riveting discoveries and so become self-conscious about how they should go about their business? I do not believe that; witness the many sociological results we carry in this News Blog – the iconic Miguel & Fisman study,[5] the Good Samaritan study,[6] and the blog on how disrespect towards patients can lead to a downward spiral.[7] Maybe sociology produces great insights, but it is difficult to translate these into action. In that case, it would be natural to search for the link between research findings and decisions. Or could it be that there is some sort of ideological motivation rooted in existentialism? After all, sociology lies very close to politics. So maybe some sociologists are buying into a liberation ideology that paints reductionist science into the ‘bad’ corner, along with capitalists, industrialists and (now) bankers? Or is it connected with the subject matter in some way that chemistry, psychology and medicine are not?

News Blog reader Frances Griffiths suggested a further reason for philosophical pre-occupation in sociology. She made the point that in studying society, sociologists have to be aware of limits on objectivity arising from belonging to that society. It could be supposed that this applies particularly to qualitative work where separation of observer and observed cannot be achieved in the way it can in quantitative work.

As always we seek the views of readers on this point. A further point may be made in passing: while research methodology is not philosophically aware, save in sociology, some scientific findings are so curious or ineffable that they provoke philosophical reflection. This applies in astro- and quantum-physics, and also in consciousness studies, as discussed later in this News Blog.

In the meantime, we must accept that sociologists are the main group making the running in the philosophy of scientific methodology, and many lean towards a constructivist (if not a relativist) point of view, as promoted by Lincoln and Guba.[8] Why don’t the rest of us engage? When applied health researchers – especially those with a biomedical background – are confronted with constructivist arguments, they appear either not to understand them, or to be so incredulous that they cannot take them seriously. I bang on about the enemy at the gate and they look at me pityingly. So there you have it. I am a lonely voice in an epistemological wilderness.

— Richard Lilford, CLAHRC WM Director

References:

  1. Lindley DV. The Philosophy of Statistics. J Roy Stat Soc D-Sta. 2000; 49(3):293-337.
  2. Paley J, Lilford R. Qualitative methods: an alternative view. BMJ. 2011; 342:d424.
  3. Sokal AD. A Physicist Experiments with Cultural Studies. Lingua Franca. 1996. pp. 62-64.
  4. Sokal AD. Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity. Social Text. 1996; 46/47:217-52.
  5. Lilford RJ, Chen Y-F. Challenging the Idea of Hospital Culture. CLAHRC WM News Blog. 9 January 2015.
  6. Lilford RJ. A Culture of Quality: Join the Debate. CLAHRC WM News Blog. June 2014.
  7. Lilford, RJ. Care that is not just unskilled but abusive. CLAHRC WM News Blog. 8 May 2015.
  8. Lincoln YS, & Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage Publications. 1985.

Why is the Science of Consciousness so Intractable? Because it is Deeply Philosophical

The Economist newspaper has recently featured a series of articles on major scientific topics. These articles are beautifully written and accessible to the intelligent – that is, News Blog – reader. They cover topics such as time and space; matter and energy; and the Cambrian explosion in life forms that started about 540m years ago. The September 10th edition grappled with the intriguing, but enigmatic, topic of consciousness.[1] This “science brief” covered “theory of mind” (i.e. the ability to imagine what another person is thinking or feeling), and neurophysiology (the importance of the claustrum and the parietal-temporal area of the brain). But what is consciousness? At this point science and philosophy merge, as adumbrated in this fortnight’s News Blog. The article cites the work of the philosopher Nagel, who asks us to try to imagine being a bat in possession of consciousness. While we are able to imagine what it feels like to hang upside down, the bat would build its consciousness on its predominant sense – echolocation. The form of such consciousness must remain ineffable to humans. The article goes on to point out that other scientific phenomena are equally ineffable – no scientist can really imagine wave-particle duality or light years. But what makes these problems tractable is mathematics. Space-time is hard to imagine, but the speed of light can be converted to distance and included in a four-dimensional analogue of Pythagoras’ theorem, c²t² − x² − y² − z², to identify a point in space-time. No such approach can be used to fully understand what another creature – bat or human – is experiencing.
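For concreteness, the quantity alluded to is the invariant interval of special relativity – a textbook formula rather than anything specific to the Economist piece:

```latex
% Invariant space-time interval between two events (textbook form).
% Every inertial observer computes the same value of s^2, even though
% observers disagree about t, x, y and z individually.
s^2 = c^2 t^2 - x^2 - y^2 - z^2
```

It is this observer-independence that makes the unimaginable mathematically tractable; no analogous invariant exists for subjective experience.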

— Richard Lilford, CLAHRC WM Director

Reference:

  1. The Economist. What is Consciousness? The Hard Problem. 2015. [Online]

Objectivity in Service Delivery Research

The CLAHRC WM Director recently gave two talks about methodology and causal modelling in the evaluation of service delivery / quality improvement initiatives.

In both he received push-back from a section of the audience. The remarks could be divided into two categories:

  1. Objectivity may be useful in physics and biomedical research, but cannot be usefully applied to service delivery research.
  2. Service delivery is too complex to be evaluated by standard scientific tools and the CLAHRC WM Director just doesn’t get it.

So, in this blog I shall tackle the question of objectivity, leaving the issue of complexity for a forthcoming post.

Concerning objectivity, I am a Bayesian and therefore need no convincing that science cannot be shorn of subjectivity. After all, the posterior probability is a function not just of the data, but of the ‘prior’. And the prior is subjective, since it is constructed mentally (except in rare cases, such as genetics, where it can be calculated from Mendel’s laws). I therefore regard subjectivity as an ineluctable part of science. But it does not follow that objectivity must be extirpated from health service evaluations. The fact that science cannot be totally objective does not mean that it is all subjective, any more than its being partly subjective excludes a role for objectivity. No, in forming a subjective view of the world (for example, in calibrating a parameter of interest to a decision-maker) the observations that are made should be as objective as we can make them. Why should they be objective? The answer is simple – to reduce the risk of error. Why is there a risk of error? Again the answer is simple – the human mind is prone to cognitive illusions. We favour observations that fit our preconceptions,[1] as discussed in a previous post. We anchor our minds on evidence encountered early in a chain of evidence, or on more recent experience. We are poorly calibrated over probability estimates, especially contingent probabilities.[2] The list of cognitive biases to which the human mind is prone is extensive and has been the subject of considerable research – try Daniel Kahneman’s “Thinking, Fast and Slow” for a summary.[3] It flies in the face of accumulated evidence to reify ‘lived experience’ at the expense of gathering objective evidence in the search for scientific understanding.
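To make the Bayesian point concrete, here is a minimal sketch (the trial data and both priors are invented) using the standard beta-binomial model, in which the posterior is a function of both the data and the prior:

```python
# Beta-binomial updating: with a Beta(a, b) prior and binomial data,
# the posterior is Beta(a + successes, b + failures).
# Two observers see the SAME data but start from different priors.

def posterior_mean(a, b, successes, failures):
    """Mean of the Beta(a + successes, b + failures) posterior."""
    return (a + successes) / (a + b + successes + failures)

successes, failures = 12, 8  # invented trial: 12 responders out of 20

sceptic = posterior_mean(2, 8, successes, failures)     # prior mean 0.20
enthusiast = posterior_mean(8, 2, successes, failures)  # prior mean 0.80

print(f"Sceptic's posterior mean:    {sceptic:.2f}")     # 0.47
print(f"Enthusiast's posterior mean: {enthusiast:.2f}")  # 0.67
```

The two posteriors differ, because the prior is subjective; but they lie closer together than the priors did, because the objectively collected data pull both observers in the same direction. That is the interplay this post describes.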

It should be understood that neither subjectivism nor objectivism needs to ‘win’ – both are in play. This idea that subjectivity is inherent in science, while objectivity still has an important part to play, is clearly counter-intuitive to many people. So it may help to think metaphorically, regarding science as a journey and objectivity as sign-posts along the way. The journey has to start with a question originating in human creativity and imagination – clearly a subjective process. But creativity yields theories to test and parameters to estimate. It is in collecting and making the initial analysis of such data that objectivity should be sought. The degree to which objectivity can be achieved will, of course, vary from one situation to another. In some cases the observer can distance herself, as when a statistician is blinded to the intervention and control groups when estimating an effectiveness parameter from RCT data. In other cases such separation is not possible, as when an ethnographer makes field notes. But objectivity is still the aim, just as a (good) teacher strives for objectivity in marking a piece of work. Once the analysis is complete, meaning must be ascribed, guidelines formulated, and so on. Here personal and social factors interact, as Bandura so elegantly describes in social cognitive theory,[4] and Bruno Latour equally elegantly explicates in the specific context of scientific understanding.[5] It is the failure to appreciate that science is not just one thing that seems to cause people to trip up in understanding the interplay between subjectivity and objectivity in scientific achievement. To help explain this concept further, I provide the following mind-line:

Conception of the idea (creativity and imagination) → Design study → Collect data → Analyse data → Interpret data → Determine action.

Lastly, I encounter the objection that this may be all very well in physics or the life sciences, but does not apply to the social sciences. That’s cobblers – if there were no general statements we could make about personal and collective behaviour, then there would be no such thing as psychology or sociology. People who argue that human volition vitiates scientific inference confuse heterogeneity (it is hard, maybe impossible, to predict how an individual will behave) with the absence of general tendencies (women will accept a lower return than men in the ultimatum game; demand for health care is elastic with respect to price). For a sure-footed philosophical account of this issue of objectivism and subjectivism in scientific reasoning I recommend John Searle.[6]

— Richard Lilford, CLAHRC WM Director

References:

  1. Lord CG, Ross L, Lepper MR. Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence. J Pers Soc Psychol. 1979; 37(11): 2098-109.
  2. Gigerenzer G, Edwards A. Simple tools for understanding risks: from innumeracy to insight. BMJ. 2003; 327: 741-4.
  3. Kahneman D. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux. 2011.
  4. Bandura A. Social Learning Theory. Englewood Cliffs, NJ: Prentice Hall. 1977.
  5. Latour B. Science in Action: How to Follow Scientists and Engineers Through Society. Milton Keynes: Open University Press. 1987.
  6. Searle JR. The Construction of Social Reality. London: Penguin Books. 1996.