Tag Archives: Research

Effective Collaboration between Academics and Practitioners Facilitates Research Uptake in Practice

New research has been conducted by Eivor Oborn, Professor of Entrepreneurship & Innovation at Warwick Business School, and Michael Barrett, Professor of Information Systems & Innovation Studies at Cambridge Judge Business School, to better understand the contribution of collaboration in bridging the gap between research and its uptake into practice (Inform Organ. 2018; 28[1]: 44-51).

Much has been written on the role of knowledge exchange in bridging the academic-practitioner divide. The common view is that academics ‘talk funny’, using specialised language, which often leads to the practical take-home messages being missed or ‘lost in translation’. The challenge for academics is to learn how to connect ‘theory or evidence driven’ knowledge with practitioners’ knowledge so as to ‘give sense’ and enable new insights to form.

The research examines four strategies by which academics may leverage their expertise in collaborative relationships with practitioners to realise what the authors term ‘Research Impact and Contributions to Knowledge’ (RICK).

  1. Maintain critical distance
    Academics may adopt a strategy of maintaining critical distance in how they engage in academic-practitioner relations for a variety of reasons, for example, to retain control of the subject of investigation.
  2. Prompt deeper engagement
    Academics who are immersed in one domain become fluent in a new language and gain practical expertise in this second (practical) domain. For example, in the Warwick-led NIHR CLAHRC West Midlands, academics are embedded and work closely with their NHS counterparts. This provides academics with knowledge-sharing and knowledge-transfer opportunities, enabling them to better respond to the knowledge requirements of the health service and, in some scenarios, to co-design research studies and capitalise on opportunities to promote the use of evidence from their research activities.
  3. Develop prescience
    Prescience describes a process of anticipating what we need to know – almost akin to ‘horizon-scanning’. A strategy of prescience would aim to anticipate, conceptualise, and influence significant problems that might arise in domains over time. The WBS-led Enterprise Research Centre employs this strategy and seeks to answer one central question: ‘What drives SME growth?’
  4. Achieve hybrid practices
    Engaged scholarship allows academics to expand their networks and collaboration with other domains and in doing so generate an entirely new field of ‘hybrid’ practices.

The research examines how the utility (such as practical or scientific usefulness) of contributions in academic-practitioner collaboration can be maximised. It calls for established journals to support a new genre of articles involving engaged scholarship, produced by multidisciplinary teams of academics, practitioners and policymakers.

The research is published in the journal Information & Organization, together with a collection of articles on Research Impact and Contributions to Knowledge (RICK) – a framework coined by the co-author of the above research, Prof Michael Barrett.

— Nathalie Maillard, WBS Impact Officer


Patient Reported Outcome Measures: A Tool for Individual Patient Management, Not Just for Research

Research and practice are often thought of as totally different types of activity. For instance, research is governed by an extensive set of procedural requirements that do not apply to standard practice. However, many inquiries that would have counted as research in earlier times are now embedded in management and governance practices. Take, for instance, outcomes of treatments and procedures. When the CLAHRC WM Director was a young doctor, a study of the outcomes of, say, 50 Wertheim’s hysterectomies would have been a typical research undertaking. Now, this may be considered a standard audit and may take place as part of a hospital’s prudent monitoring of its work. CLAHRCs have a long tradition of building data analysis into routine practice.

One active area of practice concerns patient-reported outcome measures (PROMs). Though often thought of as endpoints for research, PROMs are increasingly used in care itself: CLAHRC WM collaborator Melanie Calvert and members of the Centre for Patient Reported Outcomes Research are working closely with clinicians at University Hospitals Birmingham NHS Foundation Trust to capture electronic PROMs, which are used for real-time monitoring of patient symptoms and to tailor care to individual patient needs. Prof Calvert notes that these data have the potential to be used for multiple purposes: aggregated data may be used to inform and improve service delivery, while individual patient data may be used alongside remote clinical monitoring to guide the frequency of outpatient appointments. Attendance at the hospital can then be flexible, dictated by patient need and response to therapy.
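As a purely hypothetical sketch of the individual-patient use (the scale, thresholds and function below are invented for illustration, and are not drawn from the Birmingham system), a symptom score might feed back into appointment scheduling like this:

```python
# Hypothetical sketch only: maps a 0-10 patient-reported symptom score
# to a follow-up interval. The scale and thresholds are invented, not
# taken from the University Hospitals Birmingham ePROM system.
def next_review_interval_weeks(symptom_score: int) -> int:
    """Suggest weeks until next review from a 0-10 symptom score."""
    if symptom_score >= 7:   # severe symptoms: prompt review
        return 1
    if symptom_score >= 4:   # moderate symptoms: bring review forward
        return 4
    return 12                # stable: routine follow-up

print(next_review_interval_weeks(8))  # -> 1 (week)
```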

You can hear more about their work and exciting new developments in PROMs research at the forthcoming PROMs conference, sponsored by CLAHRC WM and hosted by the University of Birmingham on Wednesday 20 June 2018. Registration is open online until 11 June.

— Richard Lilford, CLAHRC WM Director

— Melanie Calvert, Professor of Outcomes Methodology

World Bank Report Into Research Productivity in Sub-Saharan Africa

This report [1] shows the following:

  • Over the last decade, the research output from sub-Saharan Africa (SSA) in terms of total citations has risen rapidly, but is still less than a third of a percent of the world output, while the continent houses 12% of the global population.
  • The increase in world share has been greater in the two Asian countries chosen for comparison, Malaysia and Vietnam, than in African countries.
  • With the notable exception of South Africa, citation rates per paper are lower in SSA than in the rest of the world.
  • The most highly cited papers tend to include an author from another continent or South Africa.
  • Health research dominates except in South Africa.
  • Harvard University, the University of Oxford, the University of Liverpool (incorporating LSTM at the time of the study), the London School of Hygiene and Tropical Medicine, the University of Copenhagen, the Institut Pasteur, the Institut de Recherche pour le Développement (IRD), the French Agricultural Research Centre for International Development (CIRAD), and Johns Hopkins University are the top collaborating institutions from high-income countries. But watch this space!
  • Returnees to Africa have much higher citation counts than those who never left. Visiting faculty contribute even more.

The CLAHRC WM Director is proud to be a collaborator in the CARTA (Consortium for Advanced Research Training in Africa) programme. This is an Africa-based, Africa-led initiative to rebuild and strengthen the capacity of African universities to locally produce well-trained and skilled researchers and scholars. The programme has been extremely effective in attracting high calibre applicants who go on to great things. CARTA is well networked across Africa and between Africa and Europe / North America.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. The World Bank & Elsevier. A Decade of Development in Sub-Saharan African Science, Technology, Engineering and Mathematics Research. Working Paper No. 91016. Washington, D.C.: World Bank Group. 2014.

Is Research Productivity on the Decline Internationally?

I have written previously on the so-called ‘golden age of medical research’,[1] which coincides roughly with the first two decades of my life, 1950-1970. The premise of a golden age entails the conclusion that it is followed by a less spectacular age in which marginal returns per unit of input – say, per researcher – are lower. So where does the truth lie – is research becoming ever more efficient, or is the productivity of research declining? This subject has been carefully examined by a number of scholars, most recently by Bloom and colleagues.[2] First they looked at the aggregate supply of researchers and economic output across the US economy, and they found a relationship that looks like this:

[Figure: the number of US researchers rises steadily while productivity per researcher falls, plotted on log scales (after Bloom et al.)]

So productivity per researcher appears to decline over time, and quite rapidly at that – note that the graph uses log scales. The drop in unit productivity has so far been fully compensated for by growth in the number of researchers.
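Bloom and colleagues frame this with a simple accounting identity; the sketch below reproduces its gist, where A_t is the stock of ideas (total factor productivity) and S_t the effective number of researchers:

```latex
% Sketch of the accounting identity in Bloom et al.: growth in ideas equals
% research productivity multiplied by the effective number of researchers.
\[
  \underbrace{\frac{\dot{A}_t}{A_t}}_{\text{TFP growth}}
  \;=\;
  \underbrace{\frac{\dot{A}_t/A_t}{S_t}}_{\text{research productivity}}
  \times
  \underbrace{S_t}_{\text{researchers}}
\]
% If TFP growth holds roughly constant while S_t grows exponentially,
% research productivity must be falling at a matching exponential rate.
```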

Given the obvious problems of studying this phenomenon at the aggregate level, the researchers turned to individual topics, such as the number of transistors packed onto a single chip. It turns out that keeping Moore’s law going takes a rapidly increasing number of researchers. However, diminishing returns are not observed just in electronics; the authors found the same phenomenon in agriculture and medicine. Research productivity in the pharmaceutical industry is one-tenth of what it was in 1970, and mortality gains have peaked in cancer and in heart disease. To some extent one can see this effect in the number of authors of medical papers, such as those in genetic epidemiology, where author lists often run into the hundreds. It would appear that ideas really are getting harder to find, and/or that when found they portend smaller gains.

I have previously made the obvious point that improved care reduces the headroom for future improvements.[3] Of course, economic growth and further improvement in health still turn on new knowledge and technology, without which the supply side of the economy must stagnate. The phenomenal growth of some emerging economies has been possible because of the non-rivalrous nature of previous discoveries made elsewhere. But we need to continue to advance, even though advances are getting harder to make. One such advance concerns making optimal use of existing knowledge, and that is where CLAHRCs come into their own – we trade in knowledge about knowledge.

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Future Trends in NHS. NIHR CLAHRC West Midlands. 25 November 2016.
  2. Bloom N, Jones CI, Van Reenen J, Webb M. Are Ideas Getting Harder to Find? Centre for Economic Performance Discussion Paper No. 1496. 2017.
  3. Lilford RJ. Patient Involvement in Patient Safety: Null Result from a High Quality Study. NIHR CLAHRC West Midlands. 18 August 2017.

The Reliability of Ethical Review Committees

I recently submitted the same application for ethical review for a multi-country study to three ethical review panels, two of which were overseas and one in the UK. The three panels together raised 19 points to be addressed before full approval could be given. Of these 19 points, just one was raised by two committees and none was raised by all three. Given CLAHRC WM’s methodological interest in inter-rater reliability and my own interests in the selection and assessment of health care students and workers, I was left pondering a) whether different ethical review committees consistently have different pass/fail thresholds for different ethical components of a proposed research study; and b) whether others have had similar experiences (we would welcome any examples of either convergent or divergent decisions by different ethical review committees).

Let me explain with two examples. One point raised was the need for formal written client consent during observations of Community Health Workers’ day-to-day activities. We had argued that, because the field worker would only be observing the actions of the Community Health Worker and not the client, formal written client consent was not required; instead, informal verbal consent would be requested and the field worker would withdraw if the client did not wish them to be present. The two overseas committees both required formal written client consent, but the UK committee was happy with our justification. On the other hand, the UK committee did not think we had provided sufficient reassurance of how we would protect the health and safety of the field worker as they conducted the observations, which could involve travelling alone to remote rural communities. The two overseas committees, however, considered our original plans for ensuring field worker health and safety sufficient.
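For the curious, the degree of overlap is easy to quantify. Here is a minimal Python sketch, with hypothetical point labels (the split of points across committees is also invented; only the totals match our experience), that computes pairwise Jaccard indices – shared points as a fraction of all points raised by a pair of committees:

```python
from itertools import combinations

# Hypothetical labels for the 19 points raised. As in our experience, one
# point ("P7") is raised by two committees and none by all three.
committees = {
    "UK":        {"P1", "P2", "P3", "P4", "P5", "P6"},
    "Overseas1": {"P7", "P8", "P9", "P10", "P11", "P12", "P13"},
    "Overseas2": {"P7", "P14", "P15", "P16", "P17", "P18", "P19"},
}

for (name_a, a), (name_b, b) in combinations(committees.items(), 2):
    jaccard = len(a & b) / len(a | b)  # 1.0 = identical concerns; 0.0 = none shared
    print(f"{name_a} vs {name_b}: Jaccard = {jaccard:.2f}")
```

Every pairwise index is at or near zero, which is the point: the committees were not merely setting different thresholds on the same concerns, they were largely raising different concerns altogether.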

What are the potential implications if different ethical review committees have different “passing standards”? As with pass/fail decisions in selection and assessment, there could be false positives or false negatives if studies are reviewed by “dove-ish” or “hawk-ish” committees respectively. As with selection and assessment, a false positive is probably the more concerning of the two: a study is given ethical clearance even though it contains ethical issues that would concern most other committees, and these are never raised or addressed. A false negative is less drastic, since it is probably very rare that a study never gets ethical approval; it would, however, mean that the research team is required to make potentially costly and time-consuming amendments that most other committees would consider excessive. I have no experience on the “other side” of an ethical review committee, but I expect there must be some consideration of balancing the need for the research findings against potential ethical risks to participants and the research team.

Two interesting research questions arise. The first is to examine how ethical review committees make their decisions and set passing standards for research studies. A study of this nature in undergraduate medical education is currently ongoing: Peter Yates at Keele University is qualitatively examining how medical schools set their standards for finals examinations. The second is to explore the extent of the difference in passing standards across ethical review committees, by asking a sample of committees to each review a set of identical applications and to compare their decisions. A similar study in undergraduate medical education investigated differences in passing standards for written finals examinations across UK medical schools.[1] To avoid significant bias due to the Hawthorne effect, the ethical review committees would really need to be unaware that they were the subjects of such research. This, of course, raises a significant ethical dilemma with respect to informed consent and deception. Therefore it is not known whether such a study would be given ethical approval (and if so, by which committees?).

— Celia Taylor, Associate Professor

Reference:

  1. Taylor CA, Gurnell M, Melville CR, Kluth DC, Johnson N, Wass V. Variation in passing standards for graduation-level knowledge items at UK medical schools. Med Educ. 2017; 51(6): 612-20.

Researchers – Beware of Predators

A recent column in Nature draws attention to ‘predatory journals’ – journals that charge open access publication fees without providing the editorial and publishing services (such as peer review) usually seen at legitimate journals.[1] Anecdotally, researchers have found that, after submitting a manuscript, they are presented with a hitherto unmentioned charge for publishing; when they refuse to pay, they find that the paper is ‘published’ anyway, making it much more difficult for it to be published in another, legitimate, journal. Some have then been invoiced for a retraction fee to have the paper removed. Others have found that they have been listed on a journal’s editorial board without their explicit consent.

Although many researchers may feel that they would not fall for a predatory journal, it is still possible, especially for early career researchers, those who have had a string of rejections and feel pressurised to publish, or those who are distracted by other concerns. Fortunately, Shamseer and colleagues conducted a cross-sectional comparison of nearly 300 journals to discern whether any characteristics are more strongly associated with predatory journals.[2] They identified 13 characteristics that are more likely to be seen in predatory journals:

  1. Including biomedical and non-biomedical subjects in their scope of interest, and in particular subjects with little overlap.
  2. Having spelling and grammar errors.
  3. Using unauthorised and/or low-resolution images.
  4. Using language on the website that targets authors as opposed to readers. For example, focusing on inviting submissions, promoting metrics, etc. as opposed to highlighting recent publications.
  5. Promoting the Index Copernicus Value as a metric.
  6. Lacking description of the manuscript handling process.
  7. Requesting that manuscripts are submitted through email, as opposed to through a submission system. This often ignores requirements such as conflicts of interest declarations, funding statements, etc.
  8. Promising rapid publication.
  9. Having no retraction policy.
  10. Having no detail on digital preservation.
  11. Having a low publishing fee (e.g. <$150, as opposed to >$2,000 in legitimate journals).
  12. Claiming to be open access while either retaining copyright or failing to mention copyright at all.
  13. Having a non-professional or non-journal affiliated email address as a point of contact.

Of course, having one or some of these characteristics does not mean a journal is predatory, but their presence should prompt you to take a closer look.
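If it helps to think of the list as a checklist, here is a purely illustrative Python sketch (the flag wordings are paraphrases of the items above; the scoring rule is our invention, not Shamseer and colleagues’):

```python
# Illustrative red-flag checklist based on the 13 characteristics above.
# A count of flags is a prompt for closer scrutiny, not a verdict.
RED_FLAGS = [
    "scope mixes unrelated biomedical and non-biomedical subjects",
    "spelling and grammar errors on the website",
    "unauthorised or low-resolution images",
    "website language targets authors rather than readers",
    "promotes the Index Copernicus Value as a metric",
    "no description of the manuscript handling process",
    "submission by email rather than through a submission system",
    "promises rapid publication",
    "no retraction policy",
    "no detail on digital preservation",
    "very low publishing fee (e.g. under $150)",
    "claims open access but retains or never mentions copyright",
    "non-professional, non-journal-affiliated contact email",
]

def red_flag_count(observed: set) -> int:
    """Count how many of the listed warning signs were observed."""
    return sum(flag in observed for flag in RED_FLAGS)

observed = {"promises rapid publication", "no retraction policy"}
print(f"{red_flag_count(observed)} of {len(RED_FLAGS)} warning signs present")
```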

— Peter Chilton, Research Fellow

References:

  1. Cobey K. Illegitimate journals scam even senior scientists. Nature. 6 September 2017.
  2. Shamseer L, Moher D, Maduekwe O, et al. Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Medicine. 2017; 15: 28.

Patient and Public Involvement: Direct Involvement of Patient Representatives in Data Collection

It is widely accepted that the public and patient voice should be heard loud and clear in the selection of studies, in the design of those studies, and in the interpretation and dissemination of the findings. But what about the involvement of patients and the public in the collection of data? Before science became professionalised, all scientists could have been considered members of the public. Robert Hooke, for example, could have called himself architect, philosopher, physicist, chemist, or just Hooke. Today, the public are involved in data collection in many scientific enterprises. For example, householders frequently contribute data on bird populations, and Prof Brian Cox involved the public in the detection of new planets in his highly acclaimed television series. In medicine, patients have been involved in collecting data; for example, patients with primary biliary cirrhosis were the data collectors in a randomised trial.[1] However, the topic of public and patient involvement in data collection is deceptively complex. This is because there are numerous procedural safeguards governing access to users of the health service and restricting disbursement of the funds that are used to pay for research.

Let us consider first the issue of access to patients. It is not permissible to collect research data without undergoing certain procedural checks; in the UK it is necessary to have clearance from the Disclosure and Barring Service (DBS) and to have the necessary permissions from the institutional authorities. You simply cannot walk onto a hospital ward and start handing out questionnaires or collecting blood samples.

Then there is the question of training. Before collecting data from patients it is necessary to be trained in how to do so, covering both salient ethical and scientific principles. Such training is not without its costs, which takes us to the next issue.

Researchers are paid for their work and, irrespective of whether the funds are publicly or privately provided, access to payment is governed by fiduciary and equality/diversity legislation and guidelines. Access to scarce resources is usually governed by some sort of competitive selection process.

None of the above should be taken as an argument against patients and the public taking part in data collection. It does, however, mean that this needs to be a carefully managed process. Of course things are very much simpler if access to patients is not required. For example, conducting a literature survey would require only that the person doing it was technically competent and in many cases members of the public would already have all, or some, of the necessary skills. I would be very happy to collaborate with a retired professor of physics (if anyone wants to volunteer!). But that is not the point. The point is that procedural safeguards must be applied, and this entails management structures that can manage the process.

Research may be carried out by accessing members of the public who are not patients, or at least who are not accessed through the health services. As far as I know there are no particular restrictions on doing so, and I guess that such contact is governed by the common law covering issues such as privacy, battery, assault, and so on. The situation becomes different, however, if access is achieved through a health service organisation, or conducted on behalf of an institution, such as a university. Then presumably any member of the public wishing to collect data from other members of the public would fall under the governance arrangements of the relevant institution. The institution would have to ensure not only that the study was ethical, but that the data-collectors had the necessary skills and that funds were disbursed in accordance with the law. Institutions already deploy ‘freelance’ researchers, so I presume that the necessary procedural arrangements are already in place.

This analysis was stimulated by a discussion in the PPI committee of CLAHRC West Midlands, and represents merely my personal reflections based on first principles. It does not represent my final, settled position, let alone that of the CLAHRC WM, or any other institution. Rather it is an invitation for further comment and analysis.

— Richard Lilford, CLAHRC WM Director

Reference:

  1. Browning J, Combes B, Mayo MJ. Long-term efficacy of sertraline as a treatment for cholestatic pruritus in patients with primary biliary cirrhosis. Am J Gastroenterol. 2003; 98: 2736-41.

Update on Scientists Being Held Accountable for Impact of Research

I recently wrote a news blog on the dangers of researchers being advocates for their own work. Readers may be interested in an article from an authoritative source that I chanced upon recently, published in BMC Medicine (Whitty CJM. What makes an academic paper useful for health policy? BMC Med. 2015; 13: 301).

— Richard Lilford, CLAHRC WM Director

The Beneficial Effects of Taking Part in International Research: an Old Chestnut Revisited

Two recent and well-written articles grapple with the question of whether taking part in clinical trials is beneficial in itself, net of any benefit conferred by the therapeutic modalities evaluated in those trials.[1] [2]

The first study, from the Netherlands, concerns the effect of taking part in clinical trials where controls are made up of people not participating in trials (presumably because they were not offered entry into the trial).[1] This is the topic of a rather extensive literature, including a study to which I contributed.[3] The latter study found that the putative ‘trial effect’ applied only in circumstances where care given to control patients was not protocol-directed. In other words, our results suggested that the ‘trial effect’ was really a ‘protocol effect’. In that case the effect should be ephemeral and disappear as greater proportions of care become protocolised. And that is what appears to have happened – Liu, et al.[1] report no benefit to trial participants versus non-trial patients for the highly protocolised disease Hodgkin lymphoma. They speculate that while participation in trials does not affect individual patient care in the short term, hosting trials does sensitise clinicians at an institutional level, so that they are more likely than clinicians from non-participating hospitals to practise evidence-based care. However, they offer no direct evidence for this assertion. Such evidence is, however, provided by the next study.

The effect of high participation rates in clinical trials at the hospital level is evaluated in an elegant study recently published in the prestigious journal ‘Gut’.[2] The team of authors (which includes prominent civil servants and many distinguished cancer specialists and statisticians) compared outcomes from colon cancer according to the extent to which the hospital providing treatment participated in trials. This ingenious study was accomplished by linking the NIHR’s data on clinical trial participation to cancer registry data and Hospital Episode Statistics. It turned out that survival was significantly better in high-participation hospitals than in lower-participation hospitals, even after substantial risk adjustment. “Residual confounding” do I hear you say? Perhaps, but the authors have two further lines of evidence for the causal explanation. First, they documented a dose-response: the greater the level of participation, the greater the improvement in survival. Of course, an unknown confounder correlated with participation rates would produce just such a finding. The second line of evidence is more impressive – the longer the duration over which a hospital had sustained high participation rates, the greater the effect. Again, this argument is not impregnable – duration might not serve as a good instrumental variable. How might the case be further strengthened (or refuted)? By unravelling the theoretical pathway between explanatory and outcome variables.[4] Since this is a database study, the process variables that might mediate the putative effect were not available to the authors. However, separate studies have indeed found an association between improved processes of care and trial participation.[5] Taken in the round, I think that a cause/effect explanation holds (>90% of my probability density favours the causal explanation).
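For readers who like to see the dose-response logic made concrete, here is a minimal sketch using synthetic data and the Python lifelines library; the variable names, effect size and data are all invented for illustration, and this is not the analysis performed in the Gut paper:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

rng = np.random.default_rng(0)
n = 5000

# Synthetic patients: a hospital participation band (0 = low, 3 = high),
# one case-mix covariate (age), and survival times generated with a small
# protective participation effect built in purely for illustration.
df = pd.DataFrame({
    "participation": rng.integers(0, 4, n),
    "age": rng.normal(70, 10, n),
})
hazard = 0.1 * np.exp(0.03 * (df["age"] - 70) - 0.1 * df["participation"])
df["T"] = rng.exponential(1 / hazard)  # survival time
df["E"] = 1                            # event indicator (no censoring here)

# Dose-response check: treat participation as an ordinal covariate and look
# for a monotone (here log-linear) trend in the age-adjusted hazard.
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
cph.print_summary()  # participation coefficient ~ -0.1, as built in
```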

— Richard Lilford, CLAHRC WM Director

References:

  1. Liu L, Giusti F, Schaapveld M, et al. Survival differences between patients with Hodgkin lymphoma treated inside and outside clinical trials. A study based on the EORTC-Netherlands Cancer Registry linked data with 20 years of follow-up. Br J Haematol. 2017; 176: 65-75.
  2. Downing A, Morris EJA, Corrigan N, et al. High hospital research participation and improved colorectal cancer survival outcomes: a population-based study. Gut. 2017; 66: 89-96.
  3. Braunholtz DA, Edwards SJ, Lilford RJ. Are randomized clinical trials good for us (in the short term)? Evidence for a “trial effect”. J Clin Epidemiol. 2001; 54(3): 217-24.
  4. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  5. Selby P. The impact of the process of clinical research on health service outcomes. Ann Oncol. 2011; 22(s7): vii2-4.

And Today We Have the Naming of Parts*

Management research, health services research, operations research, quality and safety research, implementation research – a crowded landscape of words describing concepts that are, at best, not entirely distinct and, at worst, synonyms. Some definitions are given in Table 1. Perhaps the easiest to deal with is ‘operations research’, which has a rather narrow meaning: it describes mathematical modelling techniques for deriving optimal solutions to complex problems, typically dealing with the flow of objects (people) over time; it is thus a subset of the broader genre covered by this collection of terms. Quality and safety research puts the cart before the horse by defining the intended objective of an intervention, rather than where in the system the intervention acts. Since interventions at a system level may have many downstream effects, it seems illogical, and indeed potentially harmful, to define research by its objective, an argument made in greater detail elsewhere.[1]
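To make the narrow, ‘flow over time’ sense of operations research concrete, here is a toy example (not drawn from any source cited here): the closed-form results for an M/M/1 queue, such as patients arriving at a single-doctor clinic.

```python
# Toy operations research example: an M/M/1 queue (single server, Poisson
# arrivals, exponential service), e.g. patients arriving at a one-doctor clinic.
arrival_rate = 4.0  # patients per hour (lambda)
service_rate = 5.0  # patients per hour (mu); must exceed arrival_rate

rho = arrival_rate / service_rate                         # utilisation
mean_patients_in_clinic = rho / (1 - rho)                 # L = rho / (1 - rho)
mean_hours_in_clinic = 1 / (service_rate - arrival_rate)  # W = 1 / (mu - lambda)

# Little's law (L = lambda * W) ties the two quantities together.
print(f"Utilisation: {rho:.0%}; mean time in clinic: {mean_hours_in_clinic:.2f} h")
```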

Health Services Research (HSR) can be defined as management research applied to health, and is an acceptable portmanteau term for the construct we seek to define. For those who think the term HSR leaves out the development and evaluation of interventions at service level, the term Health Services and Delivery Research (HS&DR) has been devised. We think this is a fine term to describe management research as applied to the health services, and are pleased that the NIHR has embraced it, with two major funding schemes: the HTA programme dealing with clinical research, and the HS&DR programme dealing with management research. In general, interventions and their related research programmes can be represented in a modified Donabedian chain, as shown in the framework below:

[Figure 1: interventions and their related research programmes arranged along a modified Donabedian chain]

So what about implementation research then? Wikipedia defines implementation research as “the scientific study of barriers to and methods of promoting the systematic application of research findings in practice, including in public policy.” However, a recent paper in BMJ states that “considerable confusion persists about its terminology and scope.”[2] Surprised? In what respect does implementation research differ from HS&DR?

Let’s start with the basics:

  1. HS&DR studies interventions at the service level. So does implementation research.
  2. HS&DR aims to improve outcome of care (effectiveness / safety / access / efficiency / satisfaction / acceptability / equity). So does implementation research.
  3. HS&DR seeks to improve outcomes / efficiency by making sure that optimum care is implemented. So does implementation research.
  4. HS&DR is concerned with the implementation of knowledge: first, knowledge about what clinical care should be delivered in a given situation; and second, knowledge about how to intervene at the service level. So is implementation research.

This latter concept, concerning the two types of knowledge (clinical and service delivery) implemented in HS&DR, is a critical one. It seems poorly understood and causes many researchers in the field to ‘fall over their own feet’. The concept is represented here:

[Figure 2: the two types of knowledge, clinical and service delivery. HS&DR / implementation research resides in the South East quadrant.]

Despite all of this, some people insist on keeping the distinction between HS&DR and Implementation Research alive – as in the recent Standards for Reporting Implementation Studies (StaRI) Statement.[3] The thing being implemented here may be a clinical intervention, in which case the above figure applies. Or it may be a service delivery intervention; then, they say, once it is proven it must be implemented, and this implementation can itself be studied – in effect they are arguing for a third ring:

[Figure 3: the framework extended with a third, ‘implementation’ loop at the extreme South East]

This last, extreme South East, loop is redundant because:

  1. Research methods do not turn on whether the research is HS&DR or so-called Implementation Research (as the authors acknowledge). So we could end up in the odd situation of the HS&DR being a before-and-after study and the Implementation Research being a cluster RCT! The so-called Implementation Research is better thought of as more HS&DR – seldom is one study sufficient.
  2. The HS&DR itself requires the tenets of Implementation Science to be in place – following the MRC framework, for example – and identifying barriers and facilitators. There is always implementation in any trial of evaluative research, so all HS&DR is Implementation Research – some is early and some is late.
  3. Replication is a central tenet of science and enables context to be explored. For example, ‘mother and child groups’ is an intervention that was shown to be effective in Nepal. It has now been ‘implemented’ in six further sites under cluster RCT evaluation. Four of the seven studies yielded positive results, and three null results. Comparing and contrasting has yielded a plausible theory, so we have a good idea of for whom the intervention works, and why.[4] All seven studies are implementations, not just the latter six!

So, logical analysis does not yield any clear distinction between Implementation Research on the one hand and HS&DR on the other. The terms might denote some subtle shift of emphasis, but as a communication tool in a crowded lexicon, we think that Implementation Research is a term liable to sow confusion, rather than generate clarity.

Table 1

  • Management research: “…concentrates on the nature and consequences of managerial actions, often taking a critical edge, and covers any kind of organization, both public and private.” (Easterby-Smith M, Thorpe R, Jackson P. Management Research. London: Sage, 2012.)
  • Health Services Research (HSR): “…examines how people get access to health care, how much care costs, and what happens to patients as a result of this care.” (Agency for Healthcare Research and Quality. What is AHRQ? [Online]. 2002.)
  • HS&DR: “…aims to produce rigorous and relevant evidence on the quality, access and organisation of health services, including costs and outcomes.” (INVOLVE. National Institute for Health Research Health Services and Delivery Research (HS&DR) programme. [Online]. 2017.)
  • Operations research: “…applying advanced analytical methods to help make better decisions.” (Warwick Business School. What is Operational Research? [Online]. 2017.)
  • Patient safety research: “…coordinated efforts to prevent harm, caused by the process of health care itself, from occurring to patients.” (World Health Organization. Patient Safety. [Online]. 2017.)
  • Comparative effectiveness research: “…designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options.” (Agency for Healthcare Research and Quality. What is Comparative Effectiveness Research. [Online]. 2017.)
  • Implementation research: “…the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).” (Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.)

We have ‘audited’ David Peters and colleagues’ BMJ article and found that every attribute they claim for Implementation Research applies equally well to HS&DR, as you can see in Table 2. However, this does not mean that we should abandon ‘Implementation Science’ – a set of ideas useful in designing an intervention. For example, stakeholders of all sorts should be involved in the design; barriers and facilitators should be identified; and so on. By analogy, I think Safety Research is a back-to-front term, but I applaud the tools and insights that ‘safety science’ provides.

Table 2

Attributes that Peters et al. claim for implementation research, each of which applies equally to HS&DR:

  • “…attempts to solve a wide range of implementation problems”
  • “…is the scientific inquiry into questions concerning implementation – the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (…interventions).”
  • “…can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation.”
  • “The intent is to understand what, why, and how interventions work in ‘real world’ settings and to test approaches to improve them.”
  • “…seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects.”
  • “…is especially concerned with the users of the research and not purely the production of knowledge.”
  • “…uses [implementation outcome variables] to assess how well implementation has occurred or to provide insights about how this contributes to one’s health status or other important health outcomes.”
  • …needs to consider “factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources).”
  • “…takes a pragmatic approach, placing the research question (or implementation problem) as the starting point to inquiry; this then dictates the research methods and assumptions to be used.”
  • “…questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective.”
  • “A wide range of qualitative and quantitative research methods can be used…”
  • “…is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention.”

— Richard Lilford, CLAHRC WM Director and Peter Chilton, Research Fellow

References:

  1. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  2. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.
  3. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017; 356: i6795.
  4. Prost A, Colbourn T, Seward N, et al. Women’s groups practising participatory learning and action to improve maternal and newborn health in low-resource settings: a systematic review and meta-analysis. Lancet. 2013; 381: 1736-46.

*Naming of Parts by Henry Reed, which Ray Watson alerted us to:

Today we have naming of parts. Yesterday,
We had daily cleaning. And tomorrow morning,
We shall have what to do after firing. But today,
Today we have naming of parts. Japonica
Glistens like coral in all of the neighbouring gardens,
And today we have naming of parts.