Tag Archives: Research

And Today We Have the Naming of Parts*

Management research, health services research, operations research, quality and safety research, implementation research – a crowded landscape of words describing concepts that are, at best, not entirely distinct, and at worst synonyms. Some definitions are given in Table 1. Perhaps the easiest one to deal with is ‘operations research’, which has a rather narrow meaning and is used to describe mathematical modelling techniques to derive optimal solutions to complex problems, typically dealing with the flow of objects (or people) over time. So it is a subset of the broader genre covered by this collection of terms. Quality and safety research puts the cart before the horse by defining the intended objective of an intervention, rather than where in the system the intervention acts. Since interventions at a system level may have many downstream effects, it seems illogical, and indeed potentially harmful, to define research by its objective, an argument made in greater detail elsewhere.[1]
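
By way of illustration only, the sketch below (in Python, with entirely hypothetical arrival rates, service rates and waiting-time targets) uses the standard M/M/1 queueing formula, W = 1/(μ − λ), to find the smallest service rate that keeps the mean time in the system below a target – the kind of flow-over-time optimisation problem that operations research addresses.

```python
# Illustrative operations research sketch: choose the smallest service rate
# (patients seen per hour) that keeps the mean time in an M/M/1 queue below
# a target. All numbers are hypothetical, purely for illustration.

def mean_time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system for an M/M/1 queue, W = 1 / (mu - lambda)."""
    if service_rate <= arrival_rate:
        return float("inf")  # queue is unstable: waits grow without bound
    return 1.0 / (service_rate - arrival_rate)

arrival_rate = 4.0   # patients arriving per hour (hypothetical)
target_wait = 0.5    # target mean time in system, in hours (hypothetical)

# Scan candidate service rates and keep the first that meets the target.
for service_rate in [x / 10 for x in range(41, 101)]:  # 4.1 to 10.0 per hour
    if mean_time_in_system(arrival_rate, service_rate) <= target_wait:
        print(f"Smallest adequate service rate: {service_rate:.1f} patients/hour")
        break
```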

Health Services Research (HSR) can be defined as management research applied to health, and is an acceptable portmanteau term for the construct we seek to define. For those who think the term HSR leaves out the development and evaluation of interventions at service level, the term Health Services and Delivery Research (HS&DR) has been devised. We think this is a fine term to describe management research as applied to the health services, and are pleased that the NIHR has embraced the term, and now has two major funding schemes – the HTA programme dealing with clinical research, and the HS&DR dealing with management research. In general, interventions and their related research programmes can be neatly placed in the framework below, a modified Donabedian chain:

[Figure 1: interventions and their related research programmes in a modified Donabedian chain]

So what about implementation research then? Wikipedia defines implementation research as “the scientific study of barriers to and methods of promoting the systematic application of research findings in practice, including in public policy.” However, a recent paper in the BMJ states that “considerable confusion persists about its terminology and scope.”[2] Surprised? In what respect does implementation research differ from HS&DR?

Let’s start with the basics:

  1. HS&DR studies interventions at the service level. So does implementation research.
  2. HS&DR aims to improve outcome of care (effectiveness / safety / access / efficiency / satisfaction / acceptability / equity). So does implementation research.
  3. HS&DR seeks to improve outcomes / efficiency by making sure that optimum care is implemented. So does implementation research.
  4. HS&DR is concerned with implementation of knowledge; first knowledge about what clinical care should be delivered in a given situation, and second about how to intervene at the service level. So is implementation research.

This latter concept, concerning the two types of knowledge (clinical and service delivery) that are implemented in HS&DR is a critical one. It seems poorly understood and causes many researchers in the field to ‘fall over their own feet’. The concept is represented here:

[Figure 2] HS&DR / implementation research resides in the South East quadrant.

Despite all of this, some people insist on keeping the distinction between HS&DR and Implementation Research alive – as in the recent Standards for Reporting Implementation Studies (StaRI) Statement.[3] The thing being implemented here may be a clinical intervention, in which case the above figure applies. Or it may be a service delivery intervention. Then they say that once it is proven, it must be implemented, and this implementation can be studied – in effect they are arguing here for a third ring:

[Figure 3: the framework with a third, ‘implementation research’ ring added]

This last, extreme South East, loop is redundant because:

  1. Research methods do not turn on whether the research is HS&DR or so-called Implementation Research (as the authors acknowledge). So we could end up in the odd situation of the HS&DR being a before and after study, and the Implementation Research being a cluster RCT! The so-called Implementation Research is better thought of as more HS&DR – seldom is one study sufficient.
  2. The HS&DR itself requires the tenets of Implementation Science to be in place – following the MRC framework, for example – and identifying barriers and facilitators. There is always implementation in any trial of evaluative research, so all HS&DR is Implementation Research – some is early and some is late.
  3. Replication is a central tenet of science and enables context to be explored. For example, “mother and child groups” is an intervention that was shown to be effective in Nepal. It has now been ‘implemented’ in six further sites under cluster RCT evaluation. Four of the seven studies yielded positive results, and three null results. Comparing and contrasting has yielded a plausible theory, so we have a good idea for whom the intervention works and why.[4] All seven studies are implementations, not just the latter six! A hedged numerical sketch of what this comparing and contrasting can look like follows this list.
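
To give a feel for what comparing and contrasting replicated evaluations can look like in numbers, here is a minimal sketch (in Python) that pools seven hypothetical trial results – invented figures, not the actual Prost et al. data – with inverse-variance weights and computes Cochran's Q and I² as prompts for exploring context.

```python
import math

# Hypothetical log odds ratios and standard errors from seven cluster trials
# of the same service-level intervention (illustrative numbers only; these
# are NOT the Prost et al. results).
log_or = [-0.45, -0.60, -0.30, -0.50, 0.05, 0.10, -0.02]
se     = [ 0.20,  0.25,  0.18,  0.22, 0.15, 0.20,  0.17]

weights = [1 / s ** 2 for s in se]                      # inverse-variance weights
pooled  = sum(w * y for w, y in zip(weights, log_or)) / sum(weights)

# Cochran's Q and I^2 quantify how much the replications disagree; large
# heterogeneity is the statistical prompt to look for contextual explanations.
q  = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_or))
df = len(log_or) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled odds ratio: {math.exp(pooled):.2f}")
print(f"Cochran's Q = {q:.1f} on {df} df, I^2 = {i2:.0f}%")
```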

So, logical analysis does not yield any clear distinction between Implementation Research on the one hand and HS&DR on the other. The terms might denote some subtle shift of emphasis, but as a communication tool in a crowded lexicon, we think that Implementation Research is a term liable to sow confusion, rather than generate clarity.

Table 1

Management research: “…concentrates on the nature and consequences of managerial actions, often taking a critical edge, and covers any kind of organization, both public and private.” (Easterby-Smith M, Thorpe R, Jackson P. Management Research. London: Sage, 2012.)

Health Services Research (HSR): “…examines how people get access to health care, how much care costs, and what happens to patients as a result of this care.” (Agency for Healthcare Research and Quality. What is AHRQ? [Online]. 2002.)

HS&DR: “…aims to produce rigorous and relevant evidence on the quality, access and organisation of health services, including costs and outcomes.” (INVOLVE. National Institute for Health Research Health Services and Delivery Research (HS&DR) programme. [Online]. 2017.)

Operations research: “…applying advanced analytical methods to help make better decisions.” (Warwick Business School. What is Operational Research? [Online]. 2017.)

Patient safety research: “…coordinated efforts to prevent harm, caused by the process of health care itself, from occurring to patients.” (World Health Organization. Patient Safety. [Online]. 2017.)

Comparative Effectiveness research: “…designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options.” (Agency for Healthcare Research and Quality. What is Comparative Effectiveness Research. [Online]. 2017.)

Implementation research: “…the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).” (Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.)

We have ‘audited’ David Peters and colleagues’ BMJ article and found that every attribute they claim for Implementation Research applies equally well to HS&DR, as you can see in Table 2. However, this does not mean that we should abandon ‘Implementation Science’ – a set of ideas useful in designing an intervention. For example, stakeholders of all sorts should be involved in the design; barriers and facilitators should be identified; and so on. By analogy, we think Safety Research is a back-to-front term, but we applaud the tools and insights that ‘safety science’ provides.

Table 2

Attributes that Peters et al. claim for implementation research, every one of which applies equally to HS&DR
“…attempts to solve a wide range of implementation problems”
“…is the scientific inquiry into questions concerning implementation – the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (…interventions).”
“…can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation.”
“The intent is to understand what, why, and how interventions work in ‘real world’ settings and to test approaches to improve them.”
“…seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects.”
“…is especially concerned with the users of the research and not purely the production of knowledge.”
“…uses [implementation outcome variables] to assess how well implementation has occurred or to provide insights about how this contributes to one’s health status or other important health outcomes.”
…needs to consider “factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources).”
“…takes a pragmatic approach, placing the research question (or implementation problem) as the starting point to inquiry; this then dictates the research methods and assumptions to be used.”
“…questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective.”
“A wide range of qualitative and quantitative research methods can be used…”
“…is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention.”

 — Richard Lilford, CLAHRC WM Director and Peter Chilton, Research Fellow

References:

  1. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  2. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.
  3. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017; 356: i6795.
  4. Prost A, Colbourn T, Seward N, et al. Women’s groups practising participatory learning and action to improve maternal and newborn health in low-resource settings: a systematic review and meta-analysis. Lancet. 2013; 381: 1736-46.

*Naming of Parts by Henry Reed, which Ray Watson alerted us to:

Today we have naming of parts. Yesterday,
We had daily cleaning. And tomorrow morning,
We shall have what to do after firing. But to-day,
Today we have naming of parts. Japonica
Glistens like coral in all of the neighbouring gardens,
And today we have naming of parts.

Important Notice: A New Online Repository for Research Results

A new online repository has now been launched – the Wellcome-Gates repository, established by the world’s second largest and largest medical research charities respectively, and run by a firm called F1000.[1] Research funded by Gates can only be published there. This is another big milestone in the gradual shake-up of the scientific publication sector.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. The Economist. The findings of medical research are disseminated too slowly. The Economist. 25 March 2017.

Scientists Should Not Be Held Accountable For Ensuring the Impact of Their Research

It has become more and more de rigueur to expect researchers to be the disseminators of their own work. Every grant application requires the applicant to fill in a section on dissemination. We were recently asked to describe our dissemination plans as part of the editorial review process for a paper submitted to the BMJ. Only tact stopped us from responding, “To publish our paper in the BMJ”! Certainly when I started out on my scientific career it was generally accepted that scientists should make discoveries and journals should disseminate them. The current fashion for asking researchers to take responsibility for dissemination of their work emanates, at least in part, from the empirical finding that journal articles by themselves may fail to change practice even when the evidence is strong. Furthermore, it could be argued that researchers are ideal conduits for dissemination. They have a vested interest in uptake of their findings, an intimate understanding of the research topic, and are in touch with networks of relevant practitioners. However, there are dangers in a policy where the producers of knowledge are also held accountable for its dissemination. I can think of three arguments against policies making scientists the vehicle for dissemination and uptake of their own results – scientists may not be good at it; they may be conflicted; and the idea is based on a fallacious understanding of the normative and practical link between research and action.

1. Talent for Communication
There is no good reason to think that researchers are naturally gifted in dissemination, or that this is where their inclination lies. Editors, journalists, and I suppose blog writers, clearly have such an interest. However, an inclination to communicate is not a necessary condition for becoming an excellent researcher. Specialisation is the basis for economic progress, and there is an argument that the benefits of specialisation apply to the production and communication of knowledge.

2. Objectivity
Pressurising researchers to market their own work may create perverse incentives. Researchers may be tempted to overstate their findings, or to over-interpret the implications for practice. There is also a fine line to be drawn between dissemination (drawing attention to findings) and advocacy (persuading people to take action based on findings). It is along the slippery slope between dissemination and advocacy that the dangers of auto-dissemination reside. The vested interest that scientists have in the uptake of their results should serve as a word of caution for those who militantly maintain that scientists should be the main promoters of their own work. The climate change scientific fraternity has been stigmatised by overzealous scientific advocacy. Expecting scientists to be the bandleader for their own product, and requiring them to demonstrate impact, has created perverse incentives.

3. Research Findings and Research Implications
With some noble exceptions, it is rare for a single piece of primary research to be sufficiently powerful to drive a change in practice. In fact, replication is one of the core tenets of scientific practice. The pathway from research to change of practice should go as follows:

  1. Primary researcher conducts study and publishes results.
  2. Research results are replicated.
  3. Secondary researcher conducts systematic review.
  4. Stakeholder committee develops guidelines according to established principles.
  5. Local service providers remove barriers to change in practice.
  6. Clinicians adopt the new method.

The ‘actors’ at these different stages can surely overlap, but this process nevertheless provides a necessary degree of detachment between scientific results and the actions that should follow, and it makes use of different specialisms and perspectives in translating knowledge into practice.

We would be interested to hear contrary views, but note that I am not arguing that scientists should never be involved in disseminating their own work, merely that this should not be a requirement or expectation.

— Richard Lilford, CLAHRC WM Director

Service Delivery Research: Researcher-Led or Manager-Led?

The implication behind much Service Delivery Research is that it is researcher-led. After all, it is called “research”. But is this the correct way to conceptualise such research when its purpose is to evaluate an intervention?

For a start, the researcher might not have been around when the intervention was promulgated; many, perhaps most, service interventions are evaluated retrospectively. In the case of such ex-post evaluations, the researcher has no part in the intervention and cannot be held responsible for it in any way – the responsibilities of the researcher relate solely to the research, such as data security and analysis. The researcher cannot accept responsibility for the intervention itself. For instance, it would be absurd to hold Nagin and Pepper [1] responsible for the death penalty by virtue of their role in evaluating its effect on homicide rates! Responsibility for selection, design, and implementation of interventions must lie elsewhere.

But even when the study is prospective, for instance involving a cluster RCT, it does not follow that the researcher is responsible for the intervention. Take, for instance, the Mexican Universal Health Insurance trial.[2] The Mexican Government promulgated the intervention, and Professor King and his colleagues had to scramble, after the fact, to ensure that it was introduced within an evaluation framework. CLAHRCs work closely with health service and local authority managers, helping to meet their information needs and evaluate service delivery interventions to improve the quality / efficiency / accountability / acceptability of health care. The interventions are ‘owned’ by the health service, in the main.

This makes something of a nonsense of the Ottawa Statement on the ethics of cluster trials – for instance, it says that the researcher must ensure that the study intervention is “adequately justified” and “researchers should protect cluster interests.”[3]

Such statements seem to misplace the responsibility for the intervention. That responsibility must lie with the person who has the statutory duty of care and who is employed by the legal entity charged with protecting client interests. The Chief Executive or her delegate – the ‘Cluster Guardian’ – must bear this responsibility.[4] Of course, that does not let researchers off the hook. For a start, the researcher has responsibility for the research itself: design, data collation, etc. Also, researchers may advise or even recommend an intervention, in which case they have a vicarious responsibility.

Advice or suggestions offered by researchers must be sound – the researcher should not advocate a course of action that is clearly not in the cluster interest and should not deliberately misrepresent information or mislead / wrongly tempt the cluster guardian. But the cluster guardian is the primary moral agent with responsibility to serve the cluster interest. The ethics of doing so are the ethics of policy-making and service interventions generally. Policy-makers are often not very good at making policy, as pointed out by King and Crewe in their book “The Blunders of Our Governments”.[5] But that is a separate topic.

— Richard Lilford, CLAHRC WM Director

References:

  1. Nagin DS & Pepper JV. Deterrence and the Death Penalty. Washington, D.C.: The National Academies Press, 2012.
  2. King G, Gakidou E, Imai K, et al. Public policy for the poor? A randomised assessment of the Mexican universal health insurance programme. Lancet. 2009; 373(9673):1447-54.
  3. Weijer C, Grimshaw JM, Eccles MP, et al. The Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials. PLoS Med. 2012; 9(11): e1001346.
  4. Edwards SJL, Lilford RJ, Hewison J. The ethics of randomised controlled trials from the perspectives of patients, the public, and healthcare professionals. BMJ. 1998; 317(7167): 1209-12.
  5. King A & Crewe I. The Blunders of Our Governments. London: Oneworld Publications, 2013.

Qualitative Research

News Blog readers are referred to a BMJ article signed by 76 senior academics from 11 countries.[1] The paper is led by the redoubtable Trisha Greenhalgh. The signatories include three members of the CLAHRC WM Scientific Advisory Committee (Jeffrey Braithwaite, Mary Dixon-Woods, and Paul Shekelle), a further CLAHRC WM collaborator (Russell Mannion), and the Director of a peer CLAHRC (Ruth Boaden). The authors make a strong case for the BMJ to publish more qualitative research, but, in a separate article, the BMJ editors hold their ground.[2] The CLAHRC WM Director thinks qualitative information is extremely important, and numbers alone seldom provide everything needed to theorise and take decisions. He would be happy to append his name to the signatories of the Greenhalgh letter. The authors provide examples where qualitative research has been influential, and the CLAHRC WM Director has written a reader on important qualitative research.[3] Most of the Director’s personal work involves collecting and synthesising both qualitative and quantitative data. As a Bayesian he spends a lot of time quantifying belief in order to accomplish such synthesis.[4] [5] The BMJ should be prepared to consider qualitative research on its merits and adopt a more welcoming posture.

Qualitative research is great, so what’s the problem? The CLAHRC WM Director does not like the way qualitative research is often carried out. Here are some piquant thoughts on how it might be improved:

  1. Do not determine sample sizes by theoretical saturation without saying what is to count as “saturation”. This should be described quantitatively, as discussed in a recent post.[6] One hedged way of doing so is sketched after this list.
  2. Drop the argument that qualitative research preserves data in all their original complexity. No. The whole point of science is to represent underlying mechanisms and this means abstracting a theory from the messy observed world.
  3. Drop the irritating statement that qualitative research provides ‘rich data’. What is so rich about qualitative data that is not also true of a large, in-depth epidemiological study like BioBank?
  4. Do not say that sampling can be purely purposive and leave it at that. Purposive sampling results in subgroups, and you should say how you will sample within each subgroup (unless there is only one person in the subgroup; the prime minister, for example).
  5. Combine qualitative and quantitative data whenever possible. Mixed methods research appears more often in protocols than in research papers. Yet this is a case where the total really can be much more than the sum of its parts.
  6. Drop the idea that qualitative research is not subject to bias, since it describes people’s lived experience, which is inviolate. No, there is a shedload of evidence that responses are labile, turning on the context in which discussions occur.[7] Here we start to lap against the real Achilles’ heel of some qualitative research – it is tainted by (sometimes subsumed by) notions of constructivism.
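
On point 1, one way to put a number on ‘saturation’ – offered purely as a hedged sketch, not necessarily the approach taken in the post cited there – is a Good-Turing style estimate: the share of theme mentions contributed by themes raised only once so far approximates the chance that the next interview will add something new. The interview data and stopping threshold below are invented for illustration.

```python
from collections import Counter

# Hypothetical coding output: the set of themes raised in each interview so far.
interviews = [
    {"waiting times", "staff attitudes"},
    {"waiting times", "car parking"},
    {"staff attitudes", "information leaflets"},
    {"waiting times", "car parking", "discharge delays"},
    {"staff attitudes"},
]

# Count how many interviews mention each theme.
mentions = Counter(theme for interview in interviews for theme in interview)

# Good-Turing style estimate: themes seen exactly once, divided by total
# theme mentions, approximates the chance the next interview adds a new theme.
singletons = sum(1 for count in mentions.values() if count == 1)
total_mentions = sum(mentions.values())
p_new_theme = singletons / total_mentions

print(f"Estimated probability the next interview yields a new theme: {p_new_theme:.2f}")
print("Stop sampling when this falls below a pre-specified threshold, e.g. 0.05.")
```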

The heart of the problem is the idea that qualitative and quantitative research are quite different things that should be governed by distinct sets of laws. This supposed distinction is bogus. For a start, numbers and qualitative data are not that different. They are frequently inter-convertible – witness an informative Bayesian prior. And measurement theory is not dependent only on numbers, but on homology. Quantitative researchers should not allow qualitative researchers to bamboozle them with talk of different ‘paradigms’. Qualitative researchers should ensure that they grasp abstract, but crucial, epidemiological ideas, such as regression to the mean and the distinction between differences in relative and absolute risk. Classicists learn Latin and Greek, and health researchers should learn quantitative and qualitative methods, since a narrow understanding of one or the other is just that – a narrow understanding. So, shed any constructivist paradigm cloak and consign it to the trash. When John Paley and I submitted our article to the BMJ, arguing that qualitative research need not (indeed should not) be constructivist in its epistemology,[3] the editors wrote back to say that BMJ qualitative researchers were not so silly – or rather that they took a more pragmatic stance. But we replied in stout defence of our submission, quoting a BMJ paper that said, “Most qualitative researchers today share a different belief about knowledge, called ‘constructivism…’.”[8] Our argument was not that knowledge was not socially constructed, but that it was not necessary, indeed it was undesirable, for qualitative researchers to buy into constructivism as a paradigm. Accepting that knowledge is constructed does not, in our view, mean that it cannot be constructed well or badly, or that all constructions are equivalent, or that there is not a reality ‘out there’ that can be at least partly understood by means of knowledge constructed under guidance from the scientific canon. In short, qualitative research is in; the ‘two world views’ idea of qualitative and quantitative research is out.
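
To illustrate the inter-convertibility point with a minimal, hypothetical sketch (not drawn from any of the cited studies): an elicited qualitative judgement that an intervention ‘probably helps most patients’ can be encoded as an informative Beta prior and then updated with trial counts.

```python
# Illustrative conversion of qualitative belief into numbers and back
# (hypothetical figures only). An expert says the intervention "probably
# helps most patients"; we encode that as a Beta(8, 4) prior, i.e. a prior
# mean of about 0.67 for the proportion of patients helped.
prior_alpha, prior_beta = 8.0, 4.0

# Hypothetical study data: 30 of 50 patients judged to have benefited.
successes, n = 30, 50

# Conjugate Bayesian update of the Beta prior with binomial data.
post_alpha = prior_alpha + successes
post_beta = prior_beta + (n - successes)
post_mean = post_alpha / (post_alpha + post_beta)

print(f"Prior mean proportion helped:     {prior_alpha / (prior_alpha + prior_beta):.2f}")
print(f"Posterior mean proportion helped: {post_mean:.2f}")
```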

— Richard Lilford, CLAHRC WM Director

References:

  1. Greenhalgh T, Annandale E, Ashcroft R, et al. An open letter to The BMJ editors on qualitative research. BMJ. 2016; 352: i563.
  2. Loder E, Groves T, Schroter S, Merino JG, Weber W. Qualitative research and The BMJ. BMJ. 2016; 352: i641.
  3. Paley J, & Lilford R. Qualitative methods: an alternative view. BMJ. 2011; 342: d424.
  4. Yao GL, Novielli N, Manaseki-Holland S, et al. Evaluation of a predevelopment service delivery intervention: an application to improve clinical handovers. BMJ Qual Saf. 2012; 21(s1):i29-38.
  5. Hemming K, Chilton PJ, Lilford RJ, Avery A, Sheikh A. Bayesian cohort and cross-sectional analyses of the PINCER trial: a pharmacist-led intervention to reduce medication errors in primary care. PLoS One. 2012;7(6):e38306.
  6. Lilford RJ. Sample size for qualitative studies: two recent approaches. CLAHRC WM News Blog. 11 March 2016.
  7. Lincoln YS, Guba EG. Paradigmatic controversies, contradictions, and emerging influences. In: Denzin NK, Lincoln YS, eds. The landscape of qualitative research: theories and issues. 2nd ed. Sage, 2003:253-91.
  8. Kuper A, Reeves S, Levinson W. An introduction to reading and appraising qualitative research. BMJ. 2008; 337: a288.