Tag Archives: Research

Scientists Should Not Be Held Accountable For Ensuring the Impact of Their Research

It has become more and more de rigueur to expect researchers to be the disseminators of their own work. Every grant application requires the applicant to fill in a section on dissemination. We were recently asked to describe our dissemination plans as part of the editorial review process for a paper submitted to the BMJ. Only tact stopped us from responding, “To publish our paper in the BMJ”! Certainly when I started out on my scientific career it was generally accepted that scientists should make discoveries and journals should disseminate them. The current fashion for asking researchers to take responsibility for dissemination of their work emanates, at least in part, from the empirical finding that journal articles by themselves may fail to change practice even when the evidence is strong. Furthermore, it could be argued that researchers are ideal conduits for dissemination: they have a vested interest in uptake of their findings, an intimate understanding of the research topic, and are in touch with networks of relevant practitioners. However, there are dangers in a policy where the producers of knowledge are also held accountable for its dissemination. I can think of three arguments against policies making scientists the vehicle for dissemination and uptake of their own results – scientists may not be good at it; they may be conflicted; and the idea is based on a fallacious understanding of the normative and practical link between research and action.

1. Talent for Communication
There is no good reason to think that researchers are naturally gifted in dissemination, or that this is where their inclination lies. Editors, journalists and, I suppose, blog writers clearly have such an interest. However, an inclination to communicate is not a necessary condition for becoming an excellent researcher. Specialisation is the basis for economic progress, and there is an argument that the benefits of specialisation apply equally to the production and to the communication of knowledge.

2. Objectivity
Pressurising researchers to market their own work may create perverse incentives. Researchers may be tempted to overstate their findings, or over-interpret the implications for practice. There is also a fine line to be drawn between dissemination (drawing attention to findings) and advocacy (persuading people to take action based on findings). It is along the slippery slope between dissemination and advocacy that the dangers of auto-dissemination reside. The vested interest that scientists have in the uptake of their results should give pause to those who militantly maintain that scientists should be the main promoters of their own work. The climate change scientific fraternity has been stigmatised by overzealous scientific advocacy. Expecting scientists to be the bandleader for their own product, and requiring them to demonstrate impact, has created perverse incentives.

3. Research Findings and Research Implications
With some noble exceptions, it is rare for a single piece of primary research to be sufficiently powerful to drive a change in practice. In fact, replication is one of the core tenets of scientific practice. The pathway from research to change of practice should go as follows:

  1. Primary researcher conducts study and publishes results.
  2. Research results replicated.
  3. Secondary researcher conducts systematic review.
  4. Stakeholder committee develops guidelines according to established principles.
  5. Local service providers remove barriers to change in practice.
  6. Clinicians adopt the new method.

The ‘actors’ at these different stages can surely overlap, but this process nevertheless provides a necessary degree of detachment between scientific results and the actions that should follow, and it makes use of different specialisms and perspectives in translating knowledge into practice.

We would be interested to hear contrary views, but note that I am not arguing that scientists should never be involved in the dissemination of their own work, merely that this should not be a requirement or expectation.

— Richard Lilford, CLAHRC WM Director

Service Delivery Research: Researcher-Led or Manager-Led?

The implication behind much Service Delivery Research is that it is researcher-led. After all, it is called “research”. But is this the correct way to conceptualise such research when its purpose is to evaluate an intervention?

For a start, the researcher might not have been around when the intervention was promulgated; many, perhaps most, service interventions are evaluated retrospectively. In the case of such ex-post evaluations the researcher has no part in the intervention and cannot be held responsible for it in any way – the responsibilities of the researcher relate solely to the research, such as data security and analysis. The researcher cannot accept responsibility for the intervention itself. For instance, it would be absurd to hold Nagin and Pepper [1] responsible for the death penalty by virtue of their role in evaluating its effect on homicide rates! Responsibility for selection, design, and implementation of interventions must lie elsewhere.

But even when the study is prospective, such as one involving a cluster RCT, it does not follow that the researcher is responsible for the intervention. Take, for instance, the Mexican Universal Health Insurance trial.[2] The Mexican Government promulgated the intervention, and Professor King and his colleagues had to scramble after the fact to ensure that it was introduced within an evaluation framework. CLAHRCs work closely with health service and local authority managers, helping to meet their information needs and evaluating service delivery interventions to improve the quality, efficiency, accountability, and acceptability of health care. The interventions are ‘owned’ by the health service, in the main.

This makes something of a nonsense of the Ottawa Statement on the ethics of cluster trials – for instance, it says that researchers must ensure that the study intervention is “adequately justified”, and that “researchers should protect cluster interests.”[3]

Such statements seem to misplace the responsibility for the intervention. That responsibility must lie with the person who has the statutory duty of care and who is employed by the legal entity charged with protecting client interests. The Chief Executive or her delegate – the ‘Cluster Guardian’ – must bear this responsibility.[4] Of course, that does not let researchers off the hook. For a start, the researcher has responsibility for the research itself: design, data collation, and so on. Also, researchers may advise or even recommend an intervention, in which case they have a vicarious responsibility.

Advice or suggestions offered by researchers must be sound – the researcher should not advocate a course of action that is clearly not in the cluster interest, and should not deliberately misrepresent information or mislead or wrongly tempt the cluster guardian. But the cluster guardian is the primary moral agent with responsibility to serve the cluster interest. The ethics of doing so are the ethics of policy-making and service interventions generally. Policy-makers are often not very good at making policy, as pointed out by King and Crewe in their book “The Blunders of Our Governments”.[5] But that is a separate topic.

— Richard Lilford, CLAHRC WM Director

References:

  1. Nagin DS & Pepper JV. Deterrence and the Death Penalty. Washington, D.C.: The National Academies Press, 2012.
  2. King G, Gakidou E, Imai K, et al. Public policy for the poor? A randomised assessment of the Mexican universal health insurance programme. Lancet. 2009; 373(9673):1447-54.
  3. Weijer C, Grimshaw JM, Eccles MP, et al. The Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials. PLoS Med. 2012; 9(11): e1001346.
  4. Edwards SJL, Lilford RJ, Hewison J. The ethics of randomised controlled trials from the perspectives of patients, the public, and healthcare professionals. BMJ. 1998; 317(7167): 1209-12.
  5. King A & Crewe I. The Blunders of Our Governments. London: Oneworld Publications, 2013.

Qualitative Research

News Blog readers are referred to a BMJ article signed by 76 senior academics from 11 countries.[1] The paper is led by the redoubtable Trisha Greenhalgh. The signatories include three members of the CLAHRC WM Scientific Advisory Committee (Jeffrey Braithwaite, Mary Dixon-Woods, and Paul Shekelle), a further CLAHRC WM collaborator (Russell Mannion), and the Director of a peer CLAHRC (Ruth Boaden). The authors make a strong case for the BMJ to publish more qualitative research, but, in a separate article, the BMJ editors hold their ground.[2] The CLAHRC WM Director thinks qualitative information is extremely important, and that numbers alone seldom provide everything needed to theorise and take decisions. He would be happy to append his name to the signatories of the Greenhalgh letter. The authors provide examples where qualitative research has been influential, and the CLAHRC WM Director has written a reader on important qualitative research.[3] Most of the Director’s personal work involves collecting and synthesising both qualitative and quantitative data. As a Bayesian he spends a lot of time quantifying belief in order to accomplish such synthesis.[4] [5] The BMJ should be prepared to consider qualitative research on its merits and adopt a more welcoming posture.

Qualitative research is great, so what’s the problem? The CLAHRC WM Director does not like the way qualitative research is often carried out. Here are some piquant thoughts on how it might be improved:

  1. Do not determine sample sizes by theoretical saturation without saying what is to count as “saturation”. This should be described quantitatively, as discussed in a recent post;[6] a minimal sketch of one such stopping rule appears after this list.
  2. Drop the argument that qualitative research preserves data in all their original complexity. No. The whole point of science is to represent underlying mechanisms and this means abstracting a theory from the messy observed world.
  3. Drop the irritating statement that qualitative research provides ‘rich data’. What is so rich about qualitative data that cannot also be said of a large, in-depth epidemiological study like BioBank?
  4. Do not say that sampling can be purely purposive and leave it at that. Purposive sampling results in subgroups, and you should say how you will sample within each subgroup (unless there is only one person in the subgroup; the Prime Minister, for example).
  5. Combine qualitative and quantitative data whenever possible. Mixed methods research appears more often in protocols than in research papers. Yet this is a case where the total really can be much more than the sum of its parts.
  6. Drop the idea that qualitative research is not subject to bias because it describes people’s lived experience, which is inviolate. No: there is a shedload of evidence that responses are labile, turning on the context in which discussions occur.[7] Here we start to lap against the real Achilles’ heel of some qualitative research – it is tainted by (sometimes subsumed by) notions of constructivism.
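
To illustrate the first point, here is a minimal sketch of one way a saturation rule could be made explicit and quantitative. The particular rule, window size, and threshold are illustrative assumptions of ours, not a standard drawn from the post cited above.

```python
# Minimal sketch of a quantitative "saturation" stopping rule.
# The window size and threshold are illustrative assumptions,
# not an established standard.

def reached_saturation(codes_per_interview, window=3, threshold=0.05):
    """Declare saturation when the most recent `window` interviews
    contribute fewer new thematic codes, as a proportion of all
    codes seen so far, than `threshold`."""
    if len(codes_per_interview) <= window:
        return False  # too few interviews to judge
    earlier = set().union(*codes_per_interview[:-window])
    recent = set().union(*codes_per_interview[-window:])
    new_codes = recent - earlier
    return len(new_codes) / len(earlier | recent) < threshold

# Example: each interview is summarised as a set of thematic codes.
interviews = [
    {"access", "cost"}, {"cost", "trust"}, {"access", "staff"},
    {"trust", "staff"}, {"cost"}, {"staff"},
]
print(reached_saturation(interviews))  # True: the last 3 interviews add no new codes
```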

The heart of the problem is the idea that qualitative and quantitative research are quite different things that should be governed by distinct sets of laws. This sharp distinction is bogus. For a start, numbers and qualitative data are not that different: they are frequently inter-convertible – witness an informative Bayesian prior (a small worked example appears below). And measurement theory does not depend only on numbers, but on homology. Quantitative researchers should not allow qualitative researchers to bamboozle them with talk of different ‘paradigms’. Qualitative researchers, for their part, should ensure that they grasp abstract, but crucial, epidemiological ideas, such as regression to the mean and the distinction between relative and absolute risk. Classicists learn Latin and Greek, and health researchers should learn both quantitative and qualitative methods, since a narrow understanding of one or the other is just that – a narrow understanding. So shed any constructivist paradigm cloak and consign it to the trash. When John Paley and I submitted our article to the BMJ, arguing that qualitative research need not (indeed should not) be constructivist in its epistemology,[3] the editors wrote back to say that BMJ qualitative researchers were not so silly – or rather that they took a more pragmatic stance. But we replied in stout defence of our submission, quoting a BMJ paper that said, “Most qualitative researchers today share a different belief about knowledge, called ‘constructivism…’.”[8] Our argument was not that knowledge is not socially constructed, but that it is neither necessary nor desirable for qualitative researchers to buy into constructivism as a paradigm. Accepting that knowledge is constructed does not, in our view, mean that it cannot be constructed well or badly, or that all constructions are equivalent, or that there is no reality ‘out there’ that can be at least partly understood by means of knowledge constructed under guidance from the scientific canon. In short, qualitative research is in; the two-world-view idea of qualitative and quantitative research is out.
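
To make the inter-convertibility point concrete, here is a minimal sketch of how an elicited qualitative belief might be expressed as an informative Bayesian prior and combined with quantitative trial data. The Beta-Binomial setup and all numbers are illustrative assumptions, not taken from any of the studies cited here.

```python
# Minimal sketch: an elicited qualitative belief expressed as an
# informative Beta prior, then updated with quantitative trial data.
# All numbers are illustrative assumptions.

# Suppose interviews suggest clinicians believe an intervention
# "usually works": elicited as roughly a 70% success rate, held with
# the weight of about 10 prior observations.
prior_alpha, prior_beta = 7.0, 3.0  # Beta(7, 3) has mean 0.7

# Quantitative data: 12 successes among 20 trial patients.
successes, failures = 12, 8

# Conjugate Beta-Binomial update: add the observed successes and
# failures to the prior pseudo-counts.
post_alpha = prior_alpha + successes
post_beta = prior_beta + failures

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"Posterior mean success rate: {posterior_mean:.2f}")  # 0.63
```

The prior here carries the qualitative finding into the analysis as pseudo-observations, so the posterior blends both sources of evidence in proportion to their weight.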

— Richard Lilford, CLAHRC WM Director

References:

  1. Greenhalgh T, Annandale E, Ashcroft R, et al. An open letter to The BMJ editors on qualitative research. BMJ. 2016; 352: i563.
  2. Loder E, Groves T, Schroter S, Merino JG, Weber W. Qualitative research and The BMJ. BMJ. 2016; 352: i641.
  3. Paley J & Lilford R. Qualitative methods: an alternative view. BMJ. 2011; 342: d424.
  4. Yao GL, Novielli N, Manaseki-Holland S, et al. Evaluation of a predevelopment service delivery intervention: an application to improve clinical handovers. BMJ Qual Saf. 2012; 21(s1):i29-38.
  5. Hemming K, Chilton PJ, Lilford RJ, Avery A, Sheikh A. Bayesian cohort and cross-sectional analyses of the PINCER trial: a pharmacist-led intervention to reduce medication errors in primary care. PLoS One. 2012; 7(6): e38306.
  6. Lilford RJ. Sample size for qualitative studies: two recent approaches. CLAHRC WM News Blog. 11 March 2016.
  7. Lincoln YS, Guba EG. Paradigmatic controversies, contradictions, and emerging influences. In: Denzin NK, Lincoln YS, eds. The landscape of qualitative research: theories and issues. 2nd ed. Sage, 2003:253-91.
  8. Kuper A, Reeves S, Levinson W. An introduction to reading and appraising qualitative research. BMJ. 2008; 337: a288.