Tag Archives: Implementation

And Today We Have the Naming of Parts*

Management research, health services research, operations research, quality and safety research, implementation research – a crowded landscape of words describing concepts that are, at best, not entirely distinct, and at worst synonyms. Some definitions are given in Table 1. Perhaps the easiest one to deal with is ‘operations research’, which has a rather narrow meaning: it describes mathematical modelling techniques used to derive optimal solutions to complex problems, typically concerning the flow of objects (often people) over time. So it is a subset of the broader genre covered by this collection of terms. Quality and safety research puts the cart before the horse by defining the intended objective of an intervention, rather than where in the system the intervention acts. Since interventions at a system level may have many downstream effects, it seems illogical, and indeed potentially harmful, to define research by its objective, an argument made in greater detail elsewhere.[1]
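To make the contrast concrete, here is a minimal sketch (not from the original post, with purely hypothetical numbers) of the kind of calculation operations research involves: modelling patient flow as a simple M/M/1 queue and searching for the smallest service capacity that keeps the average wait below a target.

```python
# A minimal, hypothetical illustration of operations research: model patient
# flow as an M/M/1 queue and find the smallest service capacity that keeps
# the mean wait below a target. All numbers are illustrative only.

ARRIVAL_RATE = 4.0        # patients arriving per hour (hypothetical)
TARGET_WAIT_HOURS = 0.25  # keep the mean wait under 15 minutes (hypothetical)

def mean_wait(arrival_rate: float, service_rate: float) -> float:
    """Mean time a patient waits before being seen in an M/M/1 queue (Wq)."""
    if service_rate <= arrival_rate:
        return float("inf")  # queue is unstable: demand meets or exceeds capacity
    return arrival_rate / (service_rate * (service_rate - arrival_rate))

# Search over service rates (e.g. appointments a clinic can offer per hour)
# for the smallest capacity that meets the waiting-time target.
service_rate = ARRIVAL_RATE
while mean_wait(ARRIVAL_RATE, service_rate) > TARGET_WAIT_HOURS:
    service_rate += 0.1

print(f"Smallest service rate meeting the target: {service_rate:.1f} patients/hour")
print(f"Resulting mean wait: {mean_wait(ARRIVAL_RATE, service_rate) * 60:.1f} minutes")
```

The point of the sketch is simply that operations research produces a quantitative answer to a narrowly posed flow problem, whereas the other terms in Table 1 describe much broader programmes of enquiry.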

Health Services Research (HSR) can be defined as management research applied to health, and is an acceptable portmanteau term for the construct we seek to define. For those who think the term HSR leaves out the development and evaluation of interventions at service level, the term Health Services and Delivery Research (HS&DR) has been devised. We think this is a fine term to describe management research as applied to the health services, and are pleased that the NIHR has embraced the term, and now has two major funding schemes – the HTA programme dealing with clinical research, and the HS&DR programme dealing with management research. In general, interventions and their related research programmes can be neatly represented in the framework below, a modified Donabedian chain:

[Figure 1: interventions and their related research programmes as a modified Donabedian chain]

So what about implementation research then? Wikipedia defines implementation research as “the scientific study of barriers to and methods of promoting the systematic application of research findings in practice, including in public policy.” However, a recent paper in the BMJ states that “considerable confusion persists about its terminology and scope.”[2] Surprised? In what respect does implementation research differ from HS&DR?

Let’s start with the basics:

  1. HS&DR studies interventions at the service level. So does implementation research.
  2. HS&DR aims to improve outcomes of care (effectiveness / safety / access / efficiency / satisfaction / acceptability / equity). So does implementation research.
  3. HS&DR seeks to improve outcomes / efficiency by making sure that optimum care is implemented. So does implementation research.
  4. HS&DR is concerned with the implementation of knowledge: first, knowledge about what clinical care should be delivered in a given situation; and second, knowledge about how to intervene at the service level. So is implementation research.

This latter concept, concerning the two types of knowledge (clinical and service delivery) that are implemented in HS&DR, is a critical one. It seems poorly understood and causes many researchers in the field to ‘fall over their own feet’. The concept is represented here:

[Figure 2] HS&DR / implementation research resides in the South East quadrant.

Despite all of this, some people insist on keeping the distinction between HS&DR and Implementation Research alive – as in the recent Standards for Reporting Implementation Studies (StaRI) Statement.[3] The thing being implemented here may be a clinical intervention, in which case the above figure applies. Or it may be a service delivery intervention. They then say that once it is proven, it must be implemented, and this implementation can be studied – in effect they are arguing for a third ring:

[Figure 3: the proposed third ring]

This last loop, in the extreme South East, is redundant because:

  1. Research methods do not turn on whether the research is HS&DR or so-called Implementation Research (as the authors acknowledge). So we could end up in the odd situation of the HS&DR being a before-and-after study, and the Implementation Research being a cluster RCT! The so-called Implementation Research is better thought of as more HS&DR – seldom is one study sufficient.
  2. The HS&DR itself requires the tenets of Implementation Science to be in place – for example, following the MRC framework and identifying barriers and facilitators. There is always implementation in any piece of evaluative research, so all HS&DR is Implementation Research – some is early and some is late.
  3. Replication is a central tenet of science and enables context to be explored. For example, “mother and child groups” is an intervention that was shown to be effective in Nepal. It has now been ‘implemented’ in six further sites under cluster RCT evaluation. Four of the seven studies yielded positive results, and three null results. Comparing and contrasting has yielded a plausible theory, so we have a good idea for whom the intervention works and why.[4] All seven studies are implementations, not just the latter six!

So, logical analysis does not yield any clear distinction between Implementation Research on the one hand and HS&DR on the other. The terms might denote some subtle shift of emphasis, but as a communication tool in a crowded lexicon, we think that Implementation Research is a term liable to sow confusion, rather than generate clarity.

Table 1

Term | Definition | Source
Management research | “…concentrates on the nature and consequences of managerial actions, often taking a critical edge, and covers any kind of organization, both public and private.” | Easterby-Smith M, Thorpe R, Jackson P. Management Research. London: Sage, 2012.
Health Services Research (HSR) | “…examines how people get access to health care, how much care costs, and what happens to patients as a result of this care.” | Agency for Healthcare Research and Quality. What is AHRQ? [Online]. 2002.
HS&DR | “…aims to produce rigorous and relevant evidence on the quality, access and organisation of health services, including costs and outcomes.” | INVOLVE. National Institute for Health Research Health Services and Delivery Research (HS&DR) programme. [Online]. 2017.
Operations research | “…applying advanced analytical methods to help make better decisions.” | Warwick Business School. What is Operational Research? [Online]. 2017.
Patient safety research | “…coordinated efforts to prevent harm, caused by the process of health care itself, from occurring to patients.” | World Health Organization. Patient Safety. [Online]. 2017.
Comparative Effectiveness research | “…designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options.” | Agency for Healthcare Research and Quality. What is Comparative Effectiveness Research. [Online]. 2017.
Implementation research | “…the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).” | Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.

We have ‘audited’ the BMJ article by David Peters and colleagues and found that every attribute they claim for Implementation Research applies equally well to HS&DR, as you can see in Table 2. However, this does not mean that we should abandon ‘Implementation Science’ – a set of ideas useful in designing an intervention. For example, stakeholders of all sorts should be involved in the design; barriers and facilitators should be identified; and so on. By analogy, I think Safety Research is a back-to-front term, but I applaud the tools and insights that ‘safety science’ provides.

Table 2

Attributes claimed for Implementation Research by Peters et al.[2] – all apply equally to HS&DR:

  • “…attempts to solve a wide range of implementation problems”
  • “…is the scientific inquiry into questions concerning implementation – the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (…interventions).”
  • “…can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation.”
  • “The intent is to understand what, why, and how interventions work in ‘real world’ settings and to test approaches to improve them.”
  • “…seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects.”
  • “…is especially concerned with the users of the research and not purely the production of knowledge.”
  • “…uses [implementation outcome variables] to assess how well implementation has occurred or to provide insights about how this contributes to one’s health status or other important health outcomes.”
  • …needs to consider “factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources).”
  • “…takes a pragmatic approach, placing the research question (or implementation problem) as the starting point to inquiry; this then dictates the research methods and assumptions to be used.”
  • “…questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective.”
  • “A wide range of qualitative and quantitative research methods can be used…”
  • “…is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention.”

 — Richard Lilford, CLAHRC WM Director and Peter Chilton, Research Fellow

References:

  1. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  2. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.
  3. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017; 356: i6795.
  4. Prost A, Colbourn T, Seward N, et al. Women’s groups practising participatory learning and action to improve maternal and newborn health in low-resource settings: a systematic review and meta-analysis. Lancet. 2013; 381: 1736-46.

*Naming of Parts by Henry Reed, which Ray Watson alerted us to:

Today we have naming of parts. Yesterday,
We had daily cleaning. And tomorrow morning,
We shall have what to do after firing. But to-day,
Today we have naming of parts. Japonica
Glistens like coral in all of the neighbouring gardens,
And today we have naming of parts.


Getting Evidence into Practice

In the early days of CLAHRCs, ‘getting evidence into practice’ was an important objective. We set about closing the T2 gap and used implementation science to get doctors to prescribe evidence-based care, dentists to use tooth-protecting resins, and nurses to make regular observations. That is to say, we were concerned with how to make practitioners comply with standards over which they had complete jurisdiction. Theories of individual behaviour change were invoked, and rather than choose a theory on the basis of its impressive-sounding title (e.g. prospect theory, social network theory), a framework was developed to identify barriers and facilitators of change.[1]

But practitioners increasingly follow the evidence when it is compelling and when the evidence-based standard is in their gift.[2] So, the big (and much more interesting) problem now is how to change the service in a generic way rather than simply to increase performance on a specific measure – we are becoming more concerned with draining the swamp than zapping individual mosquitoes.[3]

In our CLAHRC we recently evaluated a compound (multi-component) intervention to improve home dialysis rates, having promulgated a guideline supporting improved access to such a service. We showed that agreement with the proposed change among stakeholders, an agreed implementation plan, managerial support, and product champions all facilitated the success of the intervention in taking the West Midlands from the worst to the best performing region in England. However, the king of all intervention components was a financial incentive.[4] Fulop and colleagues have now published a similar multi-methods evaluation of an arguably even more complex intervention to improve access to acute stroke care.[5] The findings are very similar, save that we found more emphasis on financial incentives and also more problems in communication with patients; something that would perhaps not stand out in the hyperacute stroke context. The Fulop paper is an advance on ours in (at least) two respects. First, they compare and contrast across two regions/CLAHRCs, and I always think controls should be used if possible; even one is better than none. Second, they illustrate the causal model with diagrams that make the theoretical framework they are using clear, a practice that is helpful in communicating the very real distinctions between the intervention as planned, its implementation/adaptation, its upstream effects (e.g. staff knowledge/morale), its downstream effects (at the patient ‘level’), and the context in which all of this takes place.[6] People muddle these concepts and hence fall over their feet, but Fulop and colleagues have shown themselves to be sure-footed!

— Richard Lilford, CLAHRC WM Director

References:

  1. Michie S, van Stralen M, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011; 6: 42.
  2. Johnson N, Sutton J, Thornton JG, Lilford RJ, Johnson VA, Peel KR. Decision analysis for best management of mildly dyskaryotic smear. Lancet. 1993; 342(8863): 91-6.
  3. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  4. Combes G, Allen K, Sein K, Girling A, Lilford R. Taking hospital treatments home: a mixed methods case study looking at the barriers and success factors for home dialysis treatment and the influence of a target on uptake rates. Implement Sci. 2015; 10: 148.
  5. Fulop NJ, Ramsay AIG, Perry C, et al. Explaining outcomes in major system change: a qualitative study of implementing centralised acute stroke services in two large metropolitan regions in England. Implement Sci. 2016; 11: 80.
  6. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.

Evidence-based fly fishing for trout

[Photo: Fly fishing – Red setter lure, by Seriousfun]

In my last blog I discussed the concept of knowledge brokering and how it is operationalised within CLAHRC WM. I apologised for too much ‘management speak’, but at the same time threatened more of it in connection with a concept that underpins our implementation research theme: ‘absorptive capacity’ (ACAP). Since its inception, ACAP has been seen as a core element of increasing critical review capacity in the R&D units of private sector firms,[1] an idea now spilling over into healthcare for the development and implementation of evidence-based service delivery.[2] [3] [4] Two dimensions of the concept are particularly relevant to implementation research in CLAHRC WM. The first is the four stages of knowledge mobilisation crucial for developing ACAP:

  • acquisition of knowledge;
  • assimilation of knowledge;
  • transformation of knowledge;
  • exploitation of knowledge.[5]

The second dimension of the ACAP concept is its antecedents, otherwise known as ‘combinative capabilities’, which encompass systems capability, socialisation capability, and co-ordination capability.[6] In this blog I am going to deal with the four stages of knowledge mobilisation crucial for developing ACAP, and leave combinative capabilities for another day.

Rather than consider ACAP in the R&D units of private sector firms, in healthcare commissioners (the subject of another NIHR HS&DR study I currently lead), or in healthcare providers (the subject of a study I recently completed, also funded by NIHR HS&DR, examining the translation of patient safety knowledge), I am going to indulge myself and illustrate the ACAP stages through evidence-based fly fishing for trout. It’s a great time of year to go fly fishing for trout, but unfortunately I am stuck in the office doing my day job. The next best thing is to write about it, and indeed I am still trying to think of a way to smuggle an analysis of fly fishing into an organisation studies journal. Maybe this blog is the beginning of that.

Let’s start with the first stage of developing ACAP, that of acquisition of knowledge. The type of knowledge content (component knowledge) we might acquire to catch those trout relates to:

  • What fly is best (e.g. hatching currently)?
  • Do I use a sinking, intermediate or floating line?
  • What are the weather conditions, and what is their effect on the trout?
  • At what depth might trout be at a certain time of the day or year?
  • What is the topography of the lake, and where might I best fish?
  • At what speed might I best retrieve the fly?

The sources of this knowledge are many, but broadly, I might consult texts or internet sites, and then, on arrival at the water, seek out those already fishing, or the fishery manager, to elicit local, current knowledge. Of course, I have a store of tacit knowledge about what happened at this water last year, which I reflected upon as I drove towards the venue. In short, acquisition of knowledge is not necessarily the challenge, particularly because the fly fishing fraternity are a friendly bunch, open to sharing what they know. In healthcare, too, there is a plethora of knowledge around, often held as data and information, which is readily accessible. Whilst some of the healthcare fraternity may be more or less open to knowledge sharing, I contend that knowledge acquisition is the least challenging dimension of ACAP in healthcare, compared with assimilation, transformation and exploitation.

Moving on to assimilation of component knowledge, this proves more of a challenge for evidence-based fly fishing. What component knowledge do I privilege? Is it the fly selection that is likely to work? Might I catch on any fly, if I get the depth right? Or is it more about the speed of retrieve? How might I vary my fly fishing in different parts of the water? I can see trout taking flies on the surface, but it is windy, so I lack control over the presentation of my flies with a floating line. Further, whilst the textbook suggests a certain set of tactics, the local fly fishing fraternity have local knowledge at odds with this. What constitutes the best evidence – generic or local? Anyway, I have a gut feeling, based on years of experience, that the trout will chase brightly coloured lures today. Should I follow my instinct? It’s worked before. Maybe I should ask the trout? Drawing parallels with healthcare, there are competing sources of knowledge, which may be at odds with each other, and intuitive clinical judgement may prove correct. Also, the abstracted, more generic evidence doesn’t seem to fit. Maybe I should privilege what the patient thinks is best?

Assuming I do package different sources and components of knowledge in a coherent way, I have a fly fishing intervention, constituting transformation of knowledge. Let’s try it out. I have gone for a team of ‘buzzers’ (in essence a nylon line attached to the floating fly line, from which I hang pupae), which I let drift in the wind in a corner of the lake into which the wind is blowing. Bang, bang: within an hour I have caught four trout, but of a smaller size. In the following hour, the ‘takes’ dry up. So my ‘pilot’ intervention, resulting from acquisition, assimilation and transformation of knowledge, has worked to some extent, but only for a particular size of trout, and I am unsure exactly why it has worked. When not catching fish, the rule in fly fishing is to change tactics and move around. What’s changed? The sun has come out, which might mean the trout are feeding deeper (they have no eyelids!). The wind has dropped a bit, so the buzzers are more static than drifting. It has become warmer. How do I adapt what was previously successful so that it works more generically? How do I scale it up so that it works in all circumstances? What is the panacea for catching big trout? This represents exploitation.

If I could exploit knowledge at the final stage of ACAP development to complement what I have done so far in evidence-based fly fishing, I would catch trout regularly. However, it might then be rather predictable and boring. The variation, including a frustrating ineffectiveness of chosen tactics on some days, is the essence of fly fishing. This is where evidence-based fly fishing and healthcare service delivery diverge. In healthcare, we are seeking to identify and scale up what works for all patients. CLAHRCs are at the heart of such acquisition, assimilation, transformation and exploitation of knowledge to deliver best care for all patients. Maybe my evidence-based fly fishing metaphor doesn’t hold for translational health research after all.

Next time, combinative capabilities, and why their absence explains why Aston Villa is threatened with relegation this year (only joking).

— Graeme Currie, Deputy Director CLAHRC WM, Implementation & Organisation Studies Lead

References:

  1. Lane PJ, Koka BR, Pathak S. The reification of absorptive capacity: a critical review and rejuvenation of the construct. Acad Manage Rev. 2006; 31(4): 833-63.
  2. Berta W, Teare GF, Gilbart E, Ginsburg LS, Lemieux-Charles L, Davis D, Rappolt S. Spanning the know-do gap: Understanding knowledge application and capacity in long-term care homes. Soc Sci Med. 2010; 70(9): 1326-34.
  3. Ferlie E, Crilly T, Jashapara A, Peckham A. Knowledge mobilisation in healthcare: A critical review of health sector and generic management literature. Soc Sci Med. 2012; 74(8): 1297-304.
  4. Harvey G, Skelcher C, Spencer E, Jas P, Walshe K. Absorptive capacity in a non-market environment. Pub Manage Rev. 2010; 12(1): 77-97.
  5. Zahra SA, George G. Absorptive capacity: a review, reconceptualization, and extension. Acad Manage Rev. 2002; 27(2): 185-203.
  6. Van Den Bosch FAJ, Volberda HW, de Boer M. Coevolution of firm absorptive capacity and knowledge environment: Organizational forms and combinative capabilities. Organ Sci. 1999; 10(5): 551-68.