Tag Archives: Management

How Effective Are Management Consultants?

CLAHRC WM collaborator Ian Kirkpatrick (University of Warwick) reports an interesting article on the effectiveness of management consultants in NHS hospitals in England.[1] Across all such hospitals the mean yearly spend is £1.2m. I know of only one RCT of the use of management consultants.[2] This was a study of garment manufacturing companies in India, where the use of management consultants was associated with an upturn in productivity. The retrospective study of Kirkpatrick, et al. reaches the opposite conclusion. Their explanatory variable is deployment of a management consultant, and their outcome variable is the change in efficiency before and after the intervention. They also exploit the fact that different hospitals deployed management consultants at different times, which guards against confounding of the intervention effect by temporal trends. As I understand it, each hospital acts as its own control, and these differences are then amalgamated across all hospitals. This mitigates (but does not eliminate) selection bias.[3] The authors are careful to allow for autocorrelation, that is, lack of independence between observations of the outcome variable within hospitals, and they adjust for all the expected covariates, such as hospital size and teaching status. The efficiency measure was derived from a publicly available database comparing the trust’s average unit cost for providing diagnosis and treatment with the national average.
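The design described above – staggered deployment, with each hospital acting as its own control – is essentially a two-way fixed-effects panel regression with standard errors clustered by hospital. A minimal sketch on simulated data may make the logic concrete; all numbers, variable names, and the assumed “true” effect below are invented for illustration, and the real study’s model will differ:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated panel: 40 hypothetical trusts observed over 8 years.
# Trusts adopt consultants in different (staggered) years, so each
# trust serves as its own control before and after adoption.
rows = []
for trust in range(40):
    adopt_year = rng.integers(2, 7)          # staggered adoption year
    trust_effect = rng.normal(0, 1)          # time-invariant trust traits
    for year in range(8):
        treated = int(year >= adopt_year)
        efficiency = (100 + trust_effect + 0.5 * year   # secular trend
                      - 0.8 * treated                   # assumed "true" effect
                      + rng.normal(0, 1))               # noise
        rows.append(dict(trust=trust, year=year, treated=treated,
                         efficiency=efficiency))
df = pd.DataFrame(rows)

# Two-way fixed effects: trust dummies absorb between-trust differences,
# year dummies absorb the common temporal trend. Standard errors are
# clustered by trust to allow for within-trust autocorrelation.
model = smf.ols("efficiency ~ treated + C(trust) + C(year)", data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["trust"]})
print(round(fit.params["treated"], 2))  # estimate of the deployment effect
```

The point of the sketch is that the `treated` coefficient is identified from within-hospital before/after contrasts, while the year dummies soak up any system-wide trend – which is why staggered adoption matters.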

This is a unique and extremely provocative study. However, we need to be very careful in jumping to a cause-and-effect conclusion. First, large regression-based studies should be interpreted cautiously, since important confounding variables may be omitted, and it is impossible to take into account all interactions (first and higher order). Second, we also need to consider reverse causality; it is possible that deployment of management consultants was prompted by managers’ pre-emptive response to challenges. All of that being said, I have not always been persuaded of the value of management consultants during the various director and non-executive director roles I have occupied. The management consultant model is rather different from the CLAHRC model. CLAHRCs make sure all relevant literature is taken into account, we explicate the causal pathways that may lead to both good and bad outcomes (pre-implementation testing / prospective evaluation), and we conduct proof-of-principle studies as a prelude to evaluation of larger interventions. In short, our approach is more sceptical.

— Richard Lilford, CLAHRC WM Director


  1. Kirkpatrick I, Sturdy AJ, Alvarado N, Blanco-Oliver A, Veronesi G. The impact of management consultants on public service efficiency. Policy & Politics. 2018.
  2. Bloom N, Eifert B, Mahajan A, McKenzie D, Roberts J. Does Management Matter? Evidence from India. Q J Econ. 2013; 128(1):1-51.
  3. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 2. Study design. Qual Saf Health Care. 2008; 17: 162-9.

And Today We Have the Naming of Parts*

Management research, health services research, operations research, quality and safety research, implementation research – a crowded landscape of words describing concepts that are, at best, not entirely distinct, and at worst synonyms. Some definitions are given in Table 1. Perhaps the easiest one to deal with is ‘operations research’, which has a rather narrow meaning: it describes mathematical modelling techniques used to derive optimal solutions to complex problems, typically dealing with the flow of objects (or people) over time. So it is a subset of the broader genre covered by this collection of terms. Quality and safety research puts the cart before the horse by defining the intended objective of an intervention, rather than where in the system the intervention impacts. Since interventions at a system level may have many downstream effects, it seems illogical, and indeed potentially harmful, to define research by its objective, an argument made in greater detail elsewhere.[1]

Health Services Research (HSR) can be defined as management research applied to health, and is an acceptable portmanteau term for the construct we seek to define. For those who think the term HSR leaves out the development and evaluation of interventions at service level, the term Health Services and Delivery Research (HS&DR) has been devised. We think this is a fine term to describe management research as applied to the health services, and are pleased that the NIHR has embraced the term and now has two major funding schemes – the HTA programme dealing with clinical research, and the HS&DR dealing with management research. In general, interventions and their related research programmes can be neatly represented in a modified Donabedian chain, as shown in the framework below:

[Figure 1]

So what about implementation research then? Wikipedia defines implementation research as “the scientific study of barriers to and methods of promoting the systematic application of research findings in practice, including in public policy.” However, a recent paper in BMJ states that “considerable confusion persists about its terminology and scope.”[2] Surprised? In what respect does implementation research differ from HS&DR?

Let’s start with the basics:

  1. HS&DR studies interventions at the service level. So does implementation research.
  2. HS&DR aims to improve outcome of care (effectiveness / safety / access / efficiency / satisfaction / acceptability / equity). So does implementation research.
  3. HS&DR seeks to improve outcomes / efficiency by making sure that optimum care is implemented. So does implementation research.
  4. HS&DR is concerned with implementation of knowledge; first knowledge about what clinical care should be delivered in a given situation, and second about how to intervene at the service level. So does implementation research.

This latter concept, concerning the two types of knowledge (clinical and service delivery) that are implemented in HS&DR, is a critical one. It seems poorly understood and causes many researchers in the field to ‘fall over their own feet’. The concept is represented here:

[Figure 2]

HS&DR / implementation research resides in the South East quadrant.

Despite all of this, some people insist on keeping the distinction between HS&DR and Implementation Research alive – as in the recent Standards for Reporting Implementation studies (StaRI) Statement.[3] The thing being implemented here may be a clinical intervention, in which case the above figure applies. Or it may be a service delivery intervention. Then they say that once it is proven, it must be implemented, and this implementation can be studied – in effect they are arguing here for a third ring:

[Figure 3]

This last, extreme South East, loop is redundant because:

  1. Research methods do not turn on whether the research is HS&DR or so-called Implementation Research (as the authors acknowledge). So we could end up in the odd situation of the HS&DR being a before and after study, and the Implementation Research being a cluster RCT! The so-called Implementation Research is better thought of as more HS&DR – seldom is one study sufficient.
  2. The HS&DR itself requires the tenets of Implementation Science to be in place – following the MRC framework, for example – and identifying barriers and facilitators. There is always implementation in any evaluative study, so all HS&DR is Implementation Research – some is early and some is late.
  3. Replication is a central tenet of science and enables context to be explored. For example, “mother and child groups” is an intervention that was shown to be effective in Nepal. It has now been ‘implemented’ in six further sites under cluster RCT evaluation. Four of the seven studies yielded positive results, and three null results. Comparing and contrasting has yielded a plausible theory, so we have a good idea for whom the intervention works and why.[4] All seven studies are implementations, not just the latter six!

So, logical analysis does not yield any clear distinction between Implementation Research on the one hand and HS&DR on the other. The terms might denote some subtle shift of emphasis, but as a communication tool in a crowded lexicon, we think that Implementation Research is a term liable to sow confusion, rather than generate clarity.

Table 1

Term | Definition | Source
Management research | “…concentrates on the nature and consequences of managerial actions, often taking a critical edge, and covers any kind of organization, both public and private.” | Easterby-Smith M, Thorpe R, Jackson P. Management Research. London: Sage, 2012.
Health Services Research (HSR) | “…examines how people get access to health care, how much care costs, and what happens to patients as a result of this care.” | Agency for Healthcare Research and Quality. What is AHRQ? [Online]. 2002.
HS&DR | “…aims to produce rigorous and relevant evidence on the quality, access and organisation of health services, including costs and outcomes.” | INVOLVE. National Institute for Health Research Health Services and Delivery Research (HS&DR) programme. [Online]. 2017.
Operations research | “…applying advanced analytical methods to help make better decisions.” | Warwick Business School. What is Operational Research? [Online]. 2017.
Patient safety research | “…coordinated efforts to prevent harm, caused by the process of health care itself, from occurring to patients.” | World Health Organization. Patient Safety. [Online]. 2017.
Comparative Effectiveness research | “…designed to inform health-care decisions by providing evidence on the effectiveness, benefits, and harms of different treatment options.” | Agency for Healthcare Research and Quality. What is Comparative Effectiveness Research. [Online]. 2017.
Implementation research | “…the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).” | Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.

We have ‘audited’ David Peters and colleagues’ BMJ article and found that every attribute they claim for Implementation Research applies equally well to HS&DR, as you can see in Table 2. However, this does not mean that we should abandon ‘Implementation Science’ – a set of ideas useful in designing an intervention. For example, stakeholders of all sorts should be involved in the design; barriers and facilitators should be identified; and so on. By analogy, I think Safety Research is a back-to-front term, but I applaud the tools and insights that ‘safety science’ provides.

Table 2

“…attempts to solve a wide range of implementation problems”
“…is the scientific inquiry into questions concerning implementation – the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (…interventions).”
“…can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation.”
“The intent is to understand what, why, and how interventions work in ‘real world’ settings and to test approaches to improve them.”
“…seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects.”
“…is especially concerned with the users of the research and not purely the production of knowledge.”
“…uses [implementation outcome variables] to assess how well implementation has occurred or to provide insights about how this contributes to one’s health status or other important health outcomes.”
…needs to consider “factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources).”
“…takes a pragmatic approach, placing the research question (or implementation problem) as the starting point to inquiry; this then dictates the research methods and assumptions to be used.”
“…questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective.”
“A wide range of qualitative and quantitative research methods can be used…”
“…is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention.”

 — Richard Lilford, CLAHRC WM Director and Peter Chilton, Research Fellow


  1. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  2. Peters DH, Adam T, Alonge O, Agyepong IA, Tran N. Implementation research: what it is and how to do it. BMJ. 2013; 347: f6753.
  3. Pinnock H, Barwick M, Carpenter CR, et al. Standards for Reporting Implementation Studies (StaRI) Statement. BMJ. 2017; 356: i6795.
  4. Prost A, Colbourn T, Seward N, et al. Women’s groups practising participatory learning and action to improve maternal and newborn health in low-resource settings: a systematic review and meta-analysis. Lancet. 2013; 381: 1736-46.

*Naming of Parts by Henry Reed, which Ray Watson alerted us to:

Today we have naming of parts. Yesterday,

We had daily cleaning. And tomorrow morning,

We shall have what to do after firing. But to-day,

Today we have naming of parts. Japonica

Glistens like coral in all of the neighbouring gardens,

And today we have naming of parts.

I Know that Cracks in Care Between Institutions Undermine Patient Safety, but How Can I Rectify the Problem?

Cracks between institutions

It is well known that danger arises when care is fragmented over many organisations (hospital, general practice, community care, social services, care home, etc.). With the rise in the proportion of patients with chronic and multiple diseases, fragmented care may have become the number one safety issue in modern health care. Confusion of responsibility, silo thinking, contradictory instructions, and over- and under-treatment are all heightened risks when care is shared between multiple providers – patients will tell you that. The risk is clearly identified, but how can it be mitigated? There is a limit to what can be achieved by structural change – Accountable Care Organisations featured in a recent blog, for instance.[1] Irrespective of the way care is structured, front line staff need to learn how to function in a multidisciplinary, inter-agency setting so that they can properly care for people with complex needs. Simply studying different ways of organising care, as recommended by NICE,[2] does not get to the heart of the problem in our view. The business aphorism “culture eats strategy for breakfast” applies equally to inter-sectoral working in health and social care. Studying how care givers in different places can better work in teams to provide integrated care is hard, but the need to do so cannot be ignored; we must try. We propose first, a method to enhance performance at the sharp end of care, and second, a system to sustain the improvement.

Improving performance

Improving the performance of clinicians who need to work as a team is a challenge when the members of the team are scattered across different places and patients have different, complex needs. For a start, there is no fixed syllabus based on ‘proverbial knowledge’. Guidelines deal with conditions one at a time.[3] There can be no set of guidelines that reconciles all possible combinations of disease-specific guidelines for patients suffering from many diseases.[4] [5] Everything is a matter of balance – the need to avoid giving patients more medicines than they can cope with is in tension with the need to provide evidence-based medicines for each condition. The greater the number of medicines prescribed, the lower the adherence rate to each prescribed medicine, but it is not possible to pre-specify where the optimal prescribing threshold lies.[6]

The lack of a specifiable syllabus does not mean performance cannot be enhanced – it is not just proverbial knowledge that can be improved through education; tacit knowledge can be too.[7-9] There is an extensive theoretical and empirical literature concerning the teaching of tacit skills; the central idea is for people to work together in solving the kinds of problems they will encounter in the real world.[10] In the process some previously tacit knowledge may be abstracted from deliberations to become proverbial (for an example, see box). Management is a topic that is hard to codify, so (highly remunerated) business schools use case studies as the basis for discussion in the expectation that tacit knowledge will be enhanced. We plan to build on theory and experience to implement learning in facilitated groups to help clinical staff provide better integrated care – we will create opportunities for staff of different types to work through scenarios from real life in facilitated groups. We will use published case studies [11] as a template for further scenario development.
Group deliberations will be informed by published guidelines that aim to enhance care of patients with multi-morbidity (although these have been written to guide individual consultations rather than to assist management across sectors).[11-13] In the process group members will gain tacit knowledge (and perhaps some proverbial knowledge will emerge as in the example in the box). CLAHRC WM is implementing this method in a study funded by an NIHR Programme Development grant.[14] But how can it be made sustainable?

Box: Hypothetical Scenario Where Proverbial Knowledge Emerges from Discussion of a Complex Topic

The topic of conflicting information came up in a facilitated work group. A general practitioner argued that this was a difficult problem to avoid, since a practitioner could not know what a patient may have been told by another of their many care-givers. One of the patient participants observed that contradictory advice was not just confusing, but distressing. A community physiotherapist said that he usually elicited previous advice from patients so that he would not inadvertently contradict, or appear to contradict, previous advice. The group deliberated the point and concluded that finding out what advice a patient had received was a good idea, and should be included as a default tenet of good practice.


Again we turn to management theory – there are many frameworks to choose from, but they embody similar ideas. We will use that of Ferlie and Shortell.[15] To make a method stick, three organisational levels must be synchronised:

  1. Practitioners at the sharp end who must implement change. They will be invited to join multi-disciplinary groupings and participate in the proposed work groups, as above.
  2. The middle level of management, who can facilitate or frustrate a new initiative, must make staff development an on-going priority, for example by scheduling team-building activities in timetables. Our CLAHRC is conducting a project on making care safer in care homes, where much can be done to reduce risk at interfaces in care.
  3. The highest levels of management, who can commit resources and drive culture change by force of personality and the authority of high office, must be engaged. This includes hospitals at board levels and local authorities. Patients have a big role to play – they are the only people who experience the entire care pathway and hence who are experts in it. They can campaign for change and for buy-in from top managers.

CLAHRC WM has deep commitment from major participating hospitals in the West Midlands, from Clinical Commissioning Groups, and local authorities. These organisations are all actively engaged in improving interfaces in care, and the draft Sustainability and Transformation Partnership strategy for Birmingham and Solihull includes plans to better integrate care. We will build on these changes to promote and sustain bottom-up education, supported by the Behavioural Psychology group at Warwick Business School, to drive forward this most challenging but important of all initiatives – improving safety across interfaces in care.

— Richard Lilford, CLAHRC WM Director


  1. Lilford RJ. Accountable Care Organisations. NIHR CLAHRC West Midlands. 11 November 2016.
  2. National Institute for Health and Care Excellence. Multimorbidity: clinical assessment and management. London, UK: NICE, 2016.
  3. Wyatt KD, Stuart LM, Brito JP, et al. Out of Context: Clinical Practice Guidelines and Patients with Multiple Chronic Conditions. A Systematic Review. Med Care. 2014; 52 (3s2): s92-100.
  4. Lilford RJ. Multi-morbidity. NIHR CLAHRC West Midlands. 20 November 2015.
  5. Boyd CM, Darer J, Boult C, et al. Clinical Practice Guidelines and Quality of Care for Older Patients With Multiple Comorbid Diseases: Implications for Pay for Performance. JAMA. 2005; 294(6): 716-24.
  6. Tinetti ME, Bogardus ST, Agostini JV. Potential Pitfalls of Disease-Specific Guidelines for Patients with Multiple Conditions. New Engl J Med. 2014; 351: 2870-4.
  7. Patel V, Arocha J, Kaufman D. Expertise and tacit knowledge in medicine. In: Tact knowledge in professional practice: researcher and practitioner perspectives. Sternberg RJ (ed). Mahwah, NJ: Lawrence Erlbaum Associates, 1999.
  8. Nonaka I, von Krogh G. Tacit knowledge and knowledge conversion: Controversy and advancement in organizational knowledge creation theory. Organ Sci. 2009; 23(3): 635-52.
  9. Eraut M. Non-formal learning and tacit knowledge in professional work. Brit J Ed Psychol. 2000; 70: 113-36.
  10. Lilford RJ. Tacit and Explicit Knowledge in Health Care. NIHR CLAHRC West Midlands. 14 August 2015.
  11. American Geriatrics Society Expert Panel on the Care of Older Adults with Multimorbidity. Guiding Principles for the Care of Older Adults with Multimorbidity: An Approach for Clinicians. J Am Geriatr Soc. 2012; 60(10): E1-25.
  12. Muth C, van den Akker M, Blom JW, et al. The Ariadne principles: how to handle multimorbidity in primary care consultations. BMC Medicine. 2014; 12: 223.
  13. American Geriatrics Society Expert Panel on the Care of Older Adults with Diabetes Mellitus. Guidelines Abstracted from the American Geriatrics Society Guidelines for Improving the Care of Older Adults with Diabetes Mellitus: 2013 Update. J Am Geriatr Soc. 2013; 61(11): 2020-6.
  14. Lilford, Combes, Taylor, Mallan, Mendelsohn. Improving clinical decisions and teamwork for patients with multimorbidity in primary care through multidisciplinary education and facilitation. NIHR Programme Grant. 2016-2017.
  15. Ferlie E, & Shortell S. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Quart. 2001;79(2): 281-315.

Managing Staff: A Role for Tough Love?

Over the years the CLAHRC WM Director has participated in extensive training in HR issues. The training usually starts with feedback from staff on their satisfaction with their work environment and their boss. The idea then is to amend the environment or the behaviour of the boss, with a view to improving staff feedback. It is surely excellent for staff to provide feedback, and for bosses to be humble and to continually strive to be ‘better’ bosses:

[Figure: Managing Staff]

One thing a boss may be asked to do is to reduce stress on staff. But where does this stress come from? Ultimately, the competitive external environment. So what can the boss do about that? Presumably, the worker cannot be shielded from the stress. Academics on research contracts face redundancy if they cannot secure research grants. So, taking the stress out of the job would be self-defeating. Bosses should help staff cope with the real and present threats they face, and do them a disservice if they shield them from those threats. Enter Alia Crum and colleagues, with two wonderful experiments.[1] [2] First they studied interviewees facing stressful interviews, and then bankers facing the financial crisis (poor bankers). In both cases, interventions designed to generate a positive mind-set towards stress bolstered coping mechanisms. They also improved receptivity to critical feedback, which is an essential component of academic life. People receive good salaries for tackling difficult and stressful situations. Do not try to pretend that this is not so, but select resilient staff, make them feel a little heroic,[3] and create a team ethos where stress is to be relished! The CLAHRC WM Director promises his team ‘blood, sweat, and tears’. When our grant applications are turned down it is what we were expecting; when they succeed we get a nice surprise!

— Richard Lilford, CLAHRC WM Director


  1. Crum AJ, Salovey P, Achor S. Rethinking Stress: The Role of Mindsets in Determining the Stress Response. J Person Soc Psychol. 2013; 104(4): 716-33.
  2. Crum AJ, Akinola M, Martin A, Fath S. The Benefits of a Stress-is-enhancing Mindset in Both Challenging and Threatening Contexts. 2015. [Under Review].
  3. Lilford RJ. Can We Do Without Heroism in Health Care? NIHR CLAHRC West Midlands News Blog. 20 March 2015.

Service Delivery Research: Researcher-Led or Manager-Led?

The implication behind much Service Delivery Research is that it is researcher-led. After all, it is called “research”. But is this the correct way to conceptualise such research when its purpose is to evaluate an intervention?

For a start, the researcher might not have been around when the intervention was promulgated; many, perhaps most, service interventions are evaluated retrospectively. In the case of such ex-post evaluations the researcher has no part in the intervention and cannot be held responsible for it in any way – the responsibilities of the researchers relate solely to the research itself, such as data security and analysis. The researcher cannot accept responsibility for the intervention itself. For instance, it would be absurd to hold Nagin and Pepper [1] responsible for the death penalty by virtue of their role in evaluating its effect on homicide rates! Responsibility for selection, design, and implementation of interventions must lie elsewhere.

But even when the study is prospective, for instance, involving a cluster RCT, it does not follow that the researcher is responsible for the intervention. Take, for instance, the Mexican Universal Health Insurance trial.[2] The Mexican Government promulgated the intervention, and Professor King and his colleagues had to scramble after the fact to ensure that it was introduced within an evaluation framework. CLAHRCs work closely with health service and local authority managers, helping to supply their information needs and evaluate service delivery interventions to improve the quality / efficiency / accountability / acceptability of health care. The interventions are ‘owned’ by the health service, in the main.

This makes something of a nonsense of the Ottawa Statement on the ethics of cluster trials – for instance, it says that the researcher must ensure that the study intervention is “adequately justified” and that “researchers should protect cluster interests.”[3]

Such statements seem to misplace the responsibility for the intervention. That responsibility must lie with the person who has the statutory duty of care and who is employed by the legal entity charged with protecting client interests. The Chief Executive or her delegate – the ‘Cluster Guardian’ – must bear this responsibility.[4] Of course, that does not let researchers off the hook. For a start, the researcher has responsibility for the research itself: design, data collation, etc. Also, researchers may advise or even recommend an intervention, in which case they have a vicarious responsibility.

Advice or suggestions offered by researchers must be sound – the researcher should not advocate a course of action that is clearly not in the cluster interest and should not deliberately misrepresent information or mislead / wrongly tempt the cluster guardian. But the cluster guardian is the primary moral agent with responsibility to serve the cluster interest. The ethics of doing so are the ethics of policy-making and service interventions generally. Policy-makers are often not very good at making policy, as pointed out by King and Crewe in their book “The Blunders of Our Governments”.[5] But that is a separate topic.

— Richard Lilford, CLAHRC WM Director


  1. Nagin DS & Pepper JV. Deterrence and the Death Penalty. Washington, D.C.: The National Academies Press, 2012.
  2. King G, Gakidou E, Imai K, et al. Public policy for the poor? A randomised assessment of the Mexican universal health insurance programme. Lancet. 2009; 373(9673):1447-54.
  3. Weijer C, Grimshaw JM, Eccles MP, et al. The Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials. PLoS Med. 2012; 9(11): e1001346.
  4. Edwards SJL, Lilford RJ, Hewison J. The ethics of randomised controlled trials from the perspectives of patients, the public, and healthcare professionals. BMJ. 1998; 317(7167): 1209-12.
  5. King A & Crewe I. The Blunders of Our Governments. London: Oneworld Publications, 2013.

Project Management Eats Strategy for Breakfast

The healthcare sector has been learning and implementing lessons from other sectors for some time. The teaching of W. Edwards Deming influenced systems thinking and largely laid the foundations for quality improvement. Walter Shewhart, who went on to collaborate with Deming, developed tools to measure improvement, including the statistical control chart.
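Shewhart’s idea can be sketched in a few lines: plot a measure over time, set a centre line at the mean, and flag points lying more than three estimated standard deviations either side, with the standard deviation estimated from the average moving range (the classic individuals/moving-range chart). A minimal illustration – the data are invented:

```python
import statistics

def control_limits(values):
    """Return (LCL, centre, UCL) for an individuals control chart."""
    centre = statistics.fmean(values)
    # Estimate sigma from successive differences (moving ranges).
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.fmean(moving_ranges)
    sigma_hat = mr_bar / 1.128          # d2 constant for subgroups of 2
    return centre - 3 * sigma_hat, centre, centre + 3 * sigma_hat

# Hypothetical monthly counts, purely for illustration.
data = [12, 14, 11, 13, 15, 12, 14, 13, 12, 16]
lcl, centre, ucl = control_limits(data)
flagged = [x for x in data if x < lcl or x > ucl]  # special-cause signals
```

Points outside the limits suggest ‘special cause’ variation worth investigating; points inside reflect ‘common cause’ noise that tampering will only make worse – the distinction at the heart of Deming’s and Shewhart’s teaching.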

I recently arranged and undertook a ‘work experience’ placement at Rolls-Royce (Derby), a global company with approximately 54,000 employees that develops, manufactures, and services power systems, and is one of the leading producers of aero engines for large civil aircraft. During my time at the company I was located centrally in the Civil Large Engines (CLE) business project team. The CLE core team has recently defined its vision to “have the best jet engines in the world” within the next ten years, and it has aligned strategic projects with operational projects in order to realise this vision.

My visit was extremely insightful. It was quickly apparent that a robust and standardised approach to project management was the key ingredient driving all business and technical processes. After my first few days at the company, I came home and described the ‘style’ to my husband as ‘project management on steroids’ and we had a giggle, but in truth I was slightly green-eyed about just what had been achieved. People within the company told me that ‘project management’ had improved the efficiency of processes by a staggering 40%. Indeed, I learned that the company had been improving the efficiency of its engines by 1% each year for the past 20 years. Efficiency gains are made both through engine modification to improve fuel efficiency (and, by the way, it costs an airline approximately $1 million to fill up at the fuel pump for the average long-haul flight path) and by reducing the cost of manufacturing or purchasing the parts that make up the engine. Although the standardisation and governance were impressive (and probably necessary for such a high-value product), the balance between working autonomously, making one’s own value judgements, and operating in a highly controlled environment where individual decision making is moderated, felt a little tipped. I wondered if there was any ‘headroom’ or ‘space’ for staff to be creative within the tightly governed processes. Nevertheless, the UK healthcare sector is particularly riddled with ‘silo thinking’ and we continually fail to look at ways of introducing sustainable change. Perhaps, if we introduced a standardised project management approach to our improvement programmes, we would reap the rewards of efficiency gains and cost savings.
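As an aside, a 1% annual gain compounds, so over 20 years it amounts to roughly a 22% cumulative improvement rather than 20% – a quick check, nothing more:

```python
# Cumulative effect of a 1% efficiency gain compounded over 20 years.
years = 20
annual_gain = 0.01
cumulative = (1 + annual_gain) ** years - 1
print(f"{cumulative:.1%}")  # → 22.0%
```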

One of the strategic projects currently running is the implementation of a ‘High Performance Culture’ (HPC) tool, developed and tailored by the ‘culture-shaping’ firm Senn Delaney, to “change the organisational culture” from a manufacturing to a supply chain management mind-set. As you might imagine, I got quite excited about the implementation of a ‘culture tool’ and immediately put on my evaluative hat, recommending that they consider a formative evaluation involving ethnographic work to explore the various cultural and geographical layers within the organisation, alongside a staff survey, and that the data be triangulated to give a clear picture of the intervention. I shared papers authored by Professor Russell Mannion, who has explored culture tools for the measurement of patient safety in the NHS.[1] A familiar debate emerged around the ability to measure culture, and what culture is anyway, since ‘we’ see it as a certain set of behaviours that we want to promote. The psychology of behaviour is also something that we are very interested in, and I mentioned the work that Professor Ivo Vlaev has done using posters with ‘authoritative’ staring eyes placed above hand hygiene stations, and how the eyes appeared to influence whether someone would use the station.[2] As I walked down the corridor on my last day in the CLE office, I overheard a conversation between two people walking in the opposite direction. One said to the other, “So I suppose you have had your HPC training and all that,” and the other said, “Oh yes of course, I am now very much ‘in the zone’.” I really hope that Rolls-Royce takes my advice on board and studies the HPC culture tool in some way, and I plan to keep the lines of communication open with the project manager to find out how this progresses.

During my visit, I learned more about the R&D infrastructure and the relationships Rolls-Royce has with a number of HEIs across the globe, largely termed ‘University Technology Centres’. Each centre is focused on a particular technology, and some centres have Rolls-Royce-employed staff who act very much like our own Leadership and Diffusion Fellows, i.e. as ‘knowledge brokers’ between the company (or, in our case, the service) and the research centres. In turn, the knowledge acquired during the development of the technology (e.g. a new way of using a laser drill) would be shared across the overall network of collaborative industry partners – this appeared similar to the premise of the Academic Health Science Networks, which share better ways of doing things across the patch. The actual product itself, however, would be protected by intellectual property clauses in contractual agreements. This R&D infrastructure has improved both the cost of the component parts that make up the engine (by reducing the time it takes to make something, or by finding or developing cheaper materials) and their efficiency. I guess this would translate downstream into improvements in the overall functionality, efficiency, and cost of the engine.

In summary, my placement was an educational and fascinating experience. I was asked by the Rolls-Royce group to feed back my reflections and observations at their group meeting, and my slides can be viewed online. There were many parallels between the two ‘organisations’ and some useful lessons that we can both draw upon.

— Nathalie Maillard, Head of Programme Delivery, CLAHRC WM


  1. Davies HTO & Mannion R. Will prescriptions for cultural change improve the NHS? BMJ. 2013; 346: f1305.
  2. King D, Vlaev I, Everett-Thomas R, Fitzpatrick M, Darzi A, Birnbach DJ. “Priming” Hand Hygiene Compliance in Clinical Environments. Health Psychol. 2015. [ePub].

Poking Fun at Service Re-organisations

Tim Jones (University Hospital Birmingham) recently drew the CLAHRC WM Director’s attention to a 2005 paper by Oxman et al. published in the Journal of the Royal Society of Medicine,[1] which he feels you might enjoy. Unlike previous Director’s Choices, this paper does not reveal a counter-intuitive result or refute a long-held theory; instead it shows how revealing humour can be, poking fun at evidence-free management theory and the jargon that covers up this “empirical vacuum”. Some examples:

“We discovered that the literature is almost impenetrable due to creative jargon and the meaningless terminology generated by a variety of cults adhering to different beliefs and led by competing gurus.”

“We identified several over-lapping reasons for reorganizations, including money, revenge, money, elections, money… and no apparent reason at all.”

Of course the Director was reminded of the old refrain:

“We trained hard, but it seemed that every time we were beginning to form up into teams, we would be reorganised. Presumably the plans for our employment were being changed. I was to learn later in life that, perhaps because we are so good at organising, we tend as a nation to meet any new situation by reorganising; and a wonderful method it can be for creating the illusion of progress while producing confusion, inefficiency and demoralization.” – Charlton Ogburn (1957)

— Richard Lilford, CLAHRC WM Director


  1. Oxman AD, Sackett DL, Chalmers I, Prescott TE. A surrealistic mega-analysis of redisorganization theories. J R Soc Med. 2005; 98: 563-8.

The middle-management myth in healthcare

The value of middle managers in large organisations has been questioned for decades. When times get tough, the knives inevitably come out for the ‘men in grey suits’. There are few places where this cynicism towards the strategic importance of middle managers has been more evident in recent years than in the NHS. New research, however, suggests that the NHS may have underestimated the importance of a particular breed of managers – those with a clinical or professional background, referred to here as ‘hybrid’ middle managers.[1]

Identifying the ‘hybrid’ middle manager
A hybrid middle manager is anyone whose professional background enables them to act as a ‘two-way window’ [1] – capable not just of assimilating top-down management knowledge, but also of translating and transmitting ideas belonging to clinical practice back up into their organisation. Hybrid middle managers may have various professional backgrounds and may be located at different levels of an organisation – from ward manager to clinical director. Their strategic value does not come by virtue of their role, but rather from the level of influence they are able to exert downwards to their teams and upwards, for example, to the wider clinical governance agenda. A ward manager, for example, may have deputy ward managers and team leaders below them, to whom they can broker knowledge cascaded through internal management channels. At the same time, they may offer a credible voice at departmental or divisional management meetings, a role which they can use to share practical knowledge and experience gained from day-to-day clinical practice. This type of hybrid middle manager has been calculated to represent around a third of all staffing in a traditional hospital, compared to just three per cent of ‘pure’ general managers.[2]

The strategic importance of the hybrid middle manager
Studies into private sector corporations commonly highlight the importance of middle managers as ‘knowledge engineers’ – capable of combining visionary concepts emanating from the top of an organisation with practical knowledge from the shop floor.[3] The same has been found to be true in the healthcare setting, where hybrid middle managers are uniquely placed to translate strategic management initiatives into practical applications in a clinical setting. However, the influence of these hybrids in the NHS is more complex and important than the mere ability to bridge the knowledge gap between the top and bottom layers of an organisation.[4]

Knowledge brokering in service improvement
Studies into organisational behaviour indicate that hybrid middle managers have an almost unrivalled ability to broker knowledge within and between healthcare organisations. These managers operate at the frontline of service delivery and enjoy a credibility and legitimacy within their clinical communities that is not afforded to more generalist managers. They do not just understand the importance of accumulating knowledge, but also what it can be used for. The nature of clinical practice, where knowledge is constantly used alongside individual judgements, means hybrid middle managers are well equipped to act as brokers, connecting the subjective knowledge used in day-to-day clinical decision making with the more specific managerial information used in strategic service planning. In effect, they are able to apply their professional ‘mindlines’ to more explicit organisational ‘guidelines’. This knowledge-brokering role has been identified as a key component in service improvement. In relation to the clinical governance agenda, their fusing of patient safety knowledge from clinical governance systems and the frontline of clinical practice is crucial to ensuring high-quality care for older patients in hospitals.[4] [5]

Contingencies framing the influence of hybrid middle managers
It is important to acknowledge that not all hybrid middle managers are equally important knowledge brokers. The levels of influence they are able to – and, in some cases, are prepared to – exert are dependent on a number of personal and professional circumstances.[4] [5]

Inter-professional standing
The hierarchical nature of healthcare means some professionals have more perceived legitimacy than others. Nurses, for example, have legitimacy with their peers, but this can dissipate when trying to broker knowledge with and between doctors.

Intra-professional standing
Hierarchies also exist within professions, with certain clinical specialities perceived as enjoying a higher status than others; those in lower-status specialities can find their ability to influence adversely affected.

Professional credibility
Concerned at being seen as a manager first and a clinician second, hybrids have been found to use a number of different tactics to try to maintain a level of professional credibility. Some argue that a managerial position allows them to deliver better care. Others position themselves as a ‘representative’ of their profession or take up administrative positions within their professional bodies, which they use as a type of shield against perceived management encroachment.[6] [7]

Personal disposition
The extent to which hybrid middle managers engage with the potential of their role is governed in many cases by their overall perception of general management. Often this view is formulated early in an individual’s career, but the effects, in terms of being reluctant to embrace a knowledge-brokering role, can be long-lasting.[8]

Social capital
Social capital – an individual’s understanding, trust and reciprocity with others [9] – has been identified as a key factor in helping lower-level hybrid middle managers to break down professional boundaries, to broker their unique knowledge and thereby exert strategic influence. Hierarchies are widespread in healthcare, but in organisations where teams had developed a collective identity, there was evidence of effective knowledge brokering that crossed status and inter-disciplinary divides.[10]

— Graeme Currie, Deputy Director CLAHRC WM, Implementation & Organisation Studies Lead


  1. Llewellyn S. ‘Two-way windows’: Clinicians as medical managers. Organ Stud. 2001; 22(4): 593-623.
  2. Walshe K, & Smith L. The NHS management workforce. 2011. London, UK: The King’s Fund.
  3. Nonaka I. Towards middle up/down management: Accelerating information creation. Sloan Manage Rev. 1988; 29: 9-18.
  4. Burgess N, & Currie G. The knowledge brokering role of the hybrid middle manager: The case of healthcare. Br J Manage. 2013; 24(s1): s132-s142.
  5. Currie G, Burgess N, Hayton J. HR practices and knowledge brokering by hybrid middle managers in hospital settings: the influence of professional hierarchy. Hum Resource Manage. 2015. [In Press].
  6. McGivern G, Currie G, Ferlie E, Fitzgerald L, Waring, J. Hybrid manager-professionals’ identity work: The maintenance and hybridization of professionalism in managerial contexts. Public Admin. 2014. [In Press].
  7. Croft C, Currie G, Lockett A. Broken ‘two way windows’? An exploration of professional hybrids. Public Admin. 2014. [In Press].
  8. Croft C, Currie G, Lockett A. The impact of emotionally important social identities on the construction of managerial leader identity: A challenge for nurses in the English NHS. Organ Stud. 2015. [In Press].
  9. Nahapiet J, & Ghoshal S. Social capital, intellectual capital, and the organizational advantage. Acad Manage Rev. 1998; 23(2): 242-66.
  10. Currie G, & White L. Inter-professional barriers and knowledge brokering in an organizational context: the case of healthcare. Organ Stud. 2012; 33(9): 1333-61.

Financial Incentives for Providers of Health Care: The Baggage Handler and the Intensive Care Physician

There is a substantial amount of evidence on financial incentives for providers. As a result, a number of evidence-based theories can be propounded that might help inform where they can be expected to do more good than harm in health care. They can be exemplified by comparing baggage handlers with an intensive care physician.

  1. Incentives can produce a beneficial effect where the agent (the person at whom the incentive is targeted) can reach the objective under their own volition.[1] Where the agent does not have a solution, there is a high risk of perverse behaviour. Clinical process measures have an advantage over outcomes in this respect.[2] The ITU physician does not know what to do with a high SMR (standardised mortality ratio) – it is neither sensitive nor specific and does not point to where any problem might lie.[3] But baggage handlers know exactly what to do to improve timely loading and avoid delays in aeroplane departure – take shorter breaks and work faster.
  2. Team incentives are better than individual rewards when outcomes depend on team performance.[4] Baggage handlers should be rewarded in teams – the social forces within teams can be relied upon to improve individual performance. Rewarding ITU physicians individually would be invidious and demotivating to other members of the team if predicated on the performance of the ITU. Rewarding the physician for extramural work is, of course, a different matter altogether.
  3. Financial incentives are ineffective when the task is heavily cognitively loaded or dramatic. The drama and intellectual challenge of medical care in the ITU saturate the motivation centre. Without wishing to denigrate their work, this is unlikely to be the case for baggage handlers.
  4. The effect of financial incentives appears ephemeral in an environment where there are other pressures for compliance or improved performance. In modern health care, performance measures, especially if shared among collaborating colleagues, seem to be important motivating factors.[5] Like financial incentives, they may induce gaming if used for punishment and reward.

Readers are asked to contribute other examples, or to disagree with the above conclusions.

–Richard Lilford, CLAHRC WM Director


  1. Gupta N, Shaw JD. Let the Evidence Speak: Financial Incentives are Effective! Compensat Benefit Rev. 1998; 30(2): 26-32.
  2. Lilford R, Mohammed MA, Spiegelhalter D, Thomson R. Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. Lancet. 2004; 363(9415): 1147-54.
  3. Lilford RJ, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ. 2010; 340: c2016.
  4. DeMatteo JS, Eby LT, Sundstrom E. Team-Based Rewards: Current Empirical Evidence and Directions for Future Research. Res Organ Behav. 1998; 20:141-83.
  5. Weaver SJ, Lubomksi LH, Wilson RF, Pfoh ER, Martinez KA, Dy SM. Promoting a Culture of Safety as a Patient Safety Strategy: A Systematic Review. Ann Intern Med. 2013; 158 (5 p2): 369-74.

Business management

We are familiar with randomised controlled trials (RCTs) in healthcare, education, criminology, and social policy, but what about business management?

Many management interventions are generic,[1] covering the whole of an organisation. I came across a really interesting RCT of such an intervention recently, in which whole factories were randomised.[2] The factories were textile organisations in India, and the intervention was to have, or not have, management consultant support. I have always had a rather nihilistic impression of the consultancy industry, but it looks as though I might have been wrong, as the intervention sites performed better than the control sites in terms of an increase in productivity in the first year (17%), and the opening of more production plants (0.259 on average) within three years.

–Richard Lilford, Director of CLAHRC WM


  1. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  2. Bloom N, Eifert B, Mahajan A, McKenzie D, Roberts J. Does Management Matter? Evidence from India. Q J Econ. 2013; 128(1): 1-51.