Category Archives: Director & Co-Directors’ Blog

Health Economics and Access to Care: Are We Using the Wrong Model?

I woke one morning, many years ago, to the voice of a famous economist sounding off on my bedside radio. He spoiled the equanimity of my morning with his argument that the value of primary care should be evaluated by comparing the costs of the service with the health gain achieved by the service (in terms of quality-adjusted life years [QALYs]). That is cobblers! Quite apart from the facile idea that the health gain from primary care can be calibrated with any kind of accuracy, the economist’s health economic model bypasses much of the purpose of healthcare. In this model, health care is simply an instrument to improve health status. But a little thought will immediately show that health gain is a very incomplete understanding of the reasons that people consult doctors. Health care serves a deep psychological need; human beings have turned to healers from the time that we became human beings. Health practitioners are valuable not only for the health gain they can now achieve, but also because they provide human warmth and support. The needs for comfort, information, magic, and cure are all entangled. Not only do we need someone to turn to at times of mental or physical distress, but crucially, we also need to know that someone will be there for us when our time comes. And we need this assurance even when we are perfectly healthy. We could perhaps wrap in the avoidance of catastrophic loss and call this the ‘insurance value’. Nor should the value of information – news about your own body – be underestimated. Berwick & Weinstein found that half of the benefit of an antenatal scan lay simply in getting a picture of the baby and had nothing to do with its medical purpose.[1]

The classical health economic model of cost utility analysis is well adapted to rationing demand once the patient’s condition has been defined. At that point calculating the relative value of different treatment options is a relatively straightforward issue (Figure 1). However, calculating the return-on-investment from simply providing access to healthcare is a different matter altogether. First, there is the extraordinarily difficult instrumental question of how to hypothecate the treatment effect over the full range of health conditions (Figure 2). Second, there is a need to factor in the value of:

  1. Information.
  2. Solace, comfort, support.
  3. Knowing that access will be available when required – the insurance value.

At the very least, we should recognise that cost-utility analysis based on the calculation of QALYs or DALYs is not up to this task. The topic of access is one area where health economics raises many unsolved problems. In a recent news blog we discussed another issue that exposes some of the deep philosophical conundrums at the heart of health economics – the thorny issue of infertility.[2]
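To make the contrast concrete, the ‘straightforward’ cost-utility calculation for a defined condition (Figure 1) can be sketched in a few lines. All of the figures below are invented for illustration; the £20,000 per QALY threshold is simply the lower end of NICE’s conventional benchmark range, not a value from any study discussed here.

```python
# Illustrative incremental cost-effectiveness ratio (ICER) calculation
# for two treatment options for a *defined* condition.
# All numbers are hypothetical.

def icer(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost per QALY gained by the new treatment."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# Hypothetical figures: a new treatment vs the standard comparator.
ratio = icer(cost_new=12_000, cost_old=4_000, qalys_new=6.2, qalys_old=5.8)
print(f"ICER: £{ratio:,.0f} per QALY")

# Compare against a willingness-to-pay threshold (NICE conventionally
# uses £20,000-£30,000 per QALY).
cost_effective = ratio <= 20_000
print(cost_effective)
```

Note that nothing in this calculation captures the information, solace, or insurance value discussed above; that is precisely the blog’s point.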

Fig 1. Management of a Specific Condition: a task for standard health economics


Fig 2. Providing Access to Healthcare: Health benefits are diffuse and hence hard to capture


— Richard Lilford, CLAHRC WM Director


  1. Berwick DM, Weinstein MC. What do patients value? Willingness to pay for ultrasound in normal pregnancy. Med Care. 1985; 23(7): 881-93.
  2. Lilford RJ. The Health Economics of Infertility Treatment. NIHR CLAHRC West Midlands News Blog. 9 March 2018.

A Framework to Improve Access to Acute Care in Low- and Middle-Income Countries

Universal healthcare is an important goal in global health, as described in previous News Blogs.[1] Key to the concept of universal healthcare is the question of access to care. I lead one of three work packages in the NIHR Global Health Unit for Surgery, directed by Dion Morton. Along with Dr Dmitri Nepogodiev, I have recently returned from a series of meetings with influential doctors, policy-makers and community leaders in Oyo and Osun states in Nigeria. Our host was Dr Wally Adisa of Ife University, to whom we extend our sincere thanks.

Dmitri has reviewed the literature on how barriers may be overcome and access to healthcare facilitated. In Figure 1 we have attempted to synthesise the barriers from the literature and from our meetings in Nigeria. We use the famous Grossman model,[2] which recognises four phases on the pathway linking symptoms to effective treatment: recognition of the need for help; seeking help; transport to a place where appropriate care can be delivered; and then obtaining care in the healthcare institution. Many of the barriers we have identified could have been discerned through intuition: lack of money; poor understanding of disease and how it can be remedied; reliance on traditional healers; etc. However, our investigations have identified certain factors we had not anticipated. For instance, many people are reluctant to call an ambulance, even when available, because of superstitions about entering such a vehicle.[3]

While poverty is an important factor limiting access, people who need acute care usually make it to hospital eventually. When we probe the reasons for delay among people who did eventually make it to hospital, we find that delay occurs because resources are not in the right place at the right time. In countries with a low tax base, more use should be made of existing networks and community ‘assets’ to short-cut the barriers.[4] [5]

Figure 1. Barriers and Facilitators on the Pathway to Acute Care


During our meetings, and in the literature, there is much agreement about what the barriers are: poverty, superstitious beliefs, perceptions (not always erroneous) of poor care in hospitals, lack of facilities with need for further transfer, and so on. Where there is much less agreement, between stakeholders and within the literature, is on the relative importance of the various factors and how they may vary by clinical scenario – obstetric emergency, acute abdomen, trauma, childhood illness, and so on. While Desmond (personal communication) found severe constraints in access due to inadequate transport or inability to pay for transport (phase 3 in Grossman’s model) in a paediatric context in Malawi, Orji found that all the problems were in phases 1, 2 or 4 in an obstetric context in Nigeria.[6] It is also known that no emergency transport systems are available in 33 (61%) of the 54 African countries that answered a recent survey; many of those that exist covered only trauma or obstetric emergencies, and few were country-wide. Overall, only 8.7% of the population need could be met.[7] Nor should it be assumed that transport costs are negligible compared to health costs.[8] It is lack of facilities for transport that is the most important problem, not poor roads or hospitals too widely dispersed. Sheer distance can be a problem in some countries, such as Sudan, and lack of roads in other places, such as Ethiopia, but over two-thirds of Africa’s population live within two hours of a hospital.[9]

We have produced a list of possible measures to improve access, classified according to whether they stimulate demand or supply. None of these interventions are easy to implement or evaluate. For this reason we plan to engage stakeholders to see what might be feasible, review the literature on what has been tried before, and then develop a health economic model to evaluate the cost-effectiveness of different potential solutions. In a future News Blog we will describe our approach to health economic modelling of this complex, but important, topic.

Factors that may be Tackled by Interventions to Improve Access

Demand for transfer:

  • Knowledge of treatable illness, such as meningitis, typhoid and snakebite, tackled through education. For example, messages targeted at misconceptions were massively influential in eliminating the Ebola epidemic. Many superstitions and beliefs are cultural, so different messages will be needed in different places. People can be influenced through local community and religious leaders, as well as through feedback from people who have experienced services.[10]
  • Use of eHealth in general, and eConsulting in particular, to help translate awareness of symptoms into the intention to seek help.
  • Interaction with traditional healers to recognise illnesses responsive to ‘modern medicine’.

Supply of the means for transfer:

  • Since most people do reach services, promotion of risk-sharing community schemes or ‘electronic’ wallets to provide resources when and where needed. Women’s participatory groups can also encourage autonomy, making women less reliant on husbands for money or permission.
  • Public / NGO provision of inexpensive motorcycle taxis, successfully used for labour care in Sierra Leone [11] and Malawi.[12]
  • Encouraging / investing in small enterprises to promote transport, e.g. Uber-taxi style ambulances deployed in Nairobi.[13]

— Richard Lilford, CLAHRC WM Director & Dmitri Nepogodiev, Doctoral Research Fellow in Public Health.


  1. Lilford RJ. A Heretical Suggestion! NIHR CLAHRC West Midlands News Blog. 9 February 2018.
  2. Grossman M. The demand for health: A theoretical and empirical investigation. Cambridge, MA: NBER Books; 1972.
  3. Wilson A, Hillman S, Rosato M, Skelton J, Costello A, Hussein J, MacArthur C, Coomarasamy A. A systematic review and thematic synthesis of qualitative studies on maternal emergency transport in low- and middle-income countries. Int J Gynecol Obstet. 2013; 122: 192-201.
  4. Nielson K, Mock C, Joshipura M, Rubiano AM, Rivara F. Assessment of the Status of Prehospital Care in 13 Low- and Middle-Income Countries. Prehosp Emerg Care. 2012; 16(3): 381-9.
  5. Lilford RJ. Pre-Payment Systems for Access to Healthcare. NIHR CLAHRC West Midlands News Blog. 18 May 2018.
  6. Orji EO, Ogunlola IO, Onwudiegwu U. Brought-in maternal deaths in south-west Nigeria. J Obstet Gynaecol. 2002; 22:4:385-8.
  7. Mould-Millman N-K, Dixon JM, Sefa N, Yancey A, Hollong BG, Hagahmend M, Ginde AA, Wallis LA. The State of Emergency Medical Services (EMS) Systems in AfricaPrehosp Disaster Med. 2017; 32(3):273-83.
  8. Jan S, Laba T-L, Essue BM, Gheorghe A, Muhunthan J, Engelgau M, Mahal A, Griffiths U, McIntyre D, Meng Q, Nugent R, Atun R. Action to address the household economic burden of non-communicable diseases. Lancet. 2018; 391:2047-58.
  9. Ouma PO, Maina J, Thuranira PN, Macharia PM, Alegana VA, English M, Okiro EA, Snow RW. Access to emergency hospital care provided by the public sector in sub-Saharan Africa in 2015: a geocoded inventory and spatial analysis. Lancet Glob Health. 2018; 6:e342-50.
  10. Mould-Millman N-K, Rominski SD, Bogus J, Ginde AA, Zalariah AN, Boatemaah CA, Yancey AH, Akoriyea SK, Campbell TB. Barriers to Accessing Emergency Medical Services in Accra, Ghana: Development of a Survey Instrument and Initial Application in Ghana. Glob Health. 2015; 3(4):577-90.
  11. Bhopal SS, Halpin SJ, Gerein N. Emergency Obstetric Referral in Rural Sierra Leone: What Can Motorbike Ambulances Contribute? A Mixed-Methods Study. Matern Child Health J. 2013; 17: 1038-43.
  12. Hofman JJ, Dzimadzi C, Lungu K, Ratsma EY, Hussein J. Motorcycle ambulances for referral of obstetric emergencies in rural Malawi: Do they reduce delay and what do they cost? Int J Gynecol Obstet. 2008; 102: 191-7.
  13. Moh C. How a speedy emergency services app is saving lives. BBC News. 24 November 2017.

Interim Guidelines for Studies of the Uptake of New Knowledge Based on Routinely Collected Data

CLAHRC West Midlands and CLAHRC East Midlands use Hospital Episode Statistics (HES) to track the effect of new knowledge from effectiveness studies on implementation of the findings from those studies. Acting on behalf of CLAHRCs, we have studied uptake of findings from the HTA programme over a five-year period (2011-15). We use the HES database to track uptake of study treatments where the use of that treatment is recorded in HES – most often these are studies of surgical procedures. We conduct time series analyses to examine the relationship between publication of apparently clear-cut findings and the implementation (or not) of those findings. We have encountered some bear traps in this apparently simple task, which must be carried out with an eye for detail. Our work is ongoing, but here we alert practitioners to some things to look out for, based on the literature and our experience. First, note that the use of time series to study clinical practice based on routine data is both similar to and different from the use of control charts in statistical process control. For the latter purpose, News Blog readers are referred to the American National Standard (2018).[1] Here are some bear traps/issues to consider when using databases for the former purpose – namely to scrutinise databases for changes in treatment for a given condition:

  1. Codes. By a long way, the biggest problem you will encounter is the selection of codes. The HTA RCT on treatment of ankle fractures [2] described the type of fracture in completely different language to that used in the HES data. We did the best we could, seeking expert help from an orthopaedic surgeon specialising in the lower limb. Some thoughts:
    1. State the codes or code combinations used. In a recent paper, Costa and colleagues did not state all the codes used in the denominator for their statistics on uptake of treatment for fractures of the distal radius.[3] This makes it impossible to replicate their findings.
    2. Give the reader a comprehensive list of relevant codes highlighting those that you selected. This increases transparency and comparability, and can be included as an appendix.
    3. When uncertain, start with a narrow set of codes that seem to correspond most closely to indications for treatment in the research studies, but also provide results for a wider range – these may reflect ‘spill-over’ effects of study findings or miscoding. Again, the wider search can be included as an appendix, and serves as a kind of sensitivity analysis.
    4. If possible, examine coding practice by comparing local databases that may contain detailed clinical information with the routine codes generated by the same institution. This provides empirical information on coding accuracy. We did this with respect to the use of tight-fitting casts to treat unstable ankle fracture (found to be non-inferior to more invasive surgical plates [4]) and found that the procedure was coded in three different ways. We combined these three codes in our study, although this may increase measurement error (diluting the signal) if some of the codes are not specific.
  2. Denominators.
    1. In some cases denominators cannot be ascertained. We encountered this problem in our analysis of surgery for oesophageal reflux, where surgery was found more effective than medical treatment.[5] The counterfactual here is medical therapy that can be delivered in various settings and that is not specific for the index condition. Here we simply had to examine the effects of the trial results on the number of operations carried out country-wide. Seasonal effects are a potential problem with denominator-free data.
    2. For surgical procedures, the index procedure should be combined with the counterfactual procedure from the trial to create a denominator. The denominator can also be expanded to include other procedures for the same indication if this makes sense clinically.
  3. Data-interval. The more frequent the index procedure, the shorter the appropriate interval. If the number of observations falls below a certain threshold, the data cannot be reported (to protect patient privacy) and a wider interval must be used. A six-month interval seemed suitable for many surgical procedures.
  4. Of protocols and hypotheses. We have found that the detailed protocol must emerge as an iterative process, including discussion with clinical experts. But we think there should be a ‘general’ prior hypothesis for this kind of work. So we specified the date of publication of each HTA report as our pre-set time point – the equivalent of the primary hypothesis. We applied this date line for all of the procedures examined. However, solipsistic focus on this date line would obviously lead to an impoverished understanding, so we follow a three-phase process inspired by Fichte’s thesis-antithesis-synthesis model [6]:
    1. We test the hypothesis that a linear model fits the data using a CUSUM (cumulative sum) test. The null hypothesis is that the cumulative sum of recursive residuals has an expected value of 0. If it wanders outside the 95% confidence band at any point in time, this indicates that the coefficients have changed and a single linear model does not fit the data.
    2. If the above test indicates a change in the coefficients, we use a Wald test to identify the point at which the model has a break. We estimate two separate models before and after the break date and compare the slopes/intercepts.
    3. Last, we perform a ‘member check’, discussing with experts who can fill us in on when guidelines emerged and when other trials may have been published – ideally a literature review would complement this process.
  5. Interpretation. In the absence of contemporaneous controls, cause and effect inference must be cautious.
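The break-testing steps in point 4 can be illustrated with a minimal sketch. This is not our actual analysis code: the uptake data are simulated, and a plain Chow test (fit one line vs two lines split at the pre-specified publication date, compare residual sums of squares) stands in for the CUSUM and Wald procedures described above.

```python
# Sketch of testing for a structural break at a pre-specified date
# in a treatment-uptake time series. Data are simulated; a Chow test
# substitutes for the CUSUM/Wald machinery described in the text.
import numpy as np

rng = np.random.default_rng(0)

# Simulated six-monthly uptake (%) of a procedure: flat before a
# hypothetical HTA report is published, declining afterwards.
t = np.arange(20)                      # 20 six-month intervals
break_at = 10                          # pre-specified publication date
y = np.where(t < break_at, 60.0, 60.0 - 2.5 * (t - break_at))
y = y + rng.normal(0, 1.0, size=t.size)

def rss(x, yv):
    """Residual sum of squares and coefficients of an OLS line fit."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
    return float(np.sum((yv - X @ beta) ** 2)), beta

rss_pooled, _ = rss(t, y)
rss1, b1 = rss(t[:break_at], y[:break_at])
rss2, b2 = rss(t[break_at:], y[break_at:])

# Chow F-statistic: does allowing separate lines before/after the
# pre-specified date significantly reduce the residual sum of squares?
k, n = 2, t.size
F = ((rss_pooled - (rss1 + rss2)) / k) / ((rss1 + rss2) / (n - 2 * k))
print(f"F = {F:.1f}; slope before = {b1[1]:.2f}, after = {b2[1]:.2f}")
```

A large F with a clearly changed slope corresponds to the situation where a single linear model does not fit the data and the coefficients have shifted around the publication date.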

This is an initial iteration of our thoughts on this topic. However, increasing amounts of data are being captured in routine systems, and databases are increasingly constructed in real time since they are used primarily as a clinical tool. So we thought it would be helpful to start laying down some procedural rules for the retrospective use of data to determine long-term trends. We invite readers to comment on, enhance and extend this analysis.

— Richard Lilford, CLAHRC WM Director

— Katherine Reeves, Statistical Intelligence Analyst at UHBFT Health Informatics Centre


  1. ASTM International. Standard Practice for Use of Control Charts in Statistical Process Control. Active Standard ASTM E2587. West Conshohocken, PA: ASTM International; 2018.
  2. Keene DJ, Mistry D, Nam J, et al. The Ankle Injury Management (AIM) trial: a pragmatic, multicentre, equivalence randomised controlled trial and economic evaluation comparing close contact casting with open surgical reduction and internal fixation in the treatment of unstable ankle fractures in patients aged over 60 years. Health Technol Assess. 20(75): 1-158.
  3. Costa ML, Jameson SS, Reed MR. Do large pragmatic randomised trials change clinical practice? Assessing the impact of the Distal Radius Acute Fracture Fixation Trial (DRAFFT). Bone Joint J. 2016; 98-B: 410-3.
  4. Willett K, Keene DJ, Mistry D, et al. Close Contact Casting vs Surgery for Initial Treatment of Unstable Ankle Fractures in Older Adults. A Randomized Clinical Trial. JAMA. 2016; 316(14): 1455-63.
  5. Grant A, Wileman S, Ramsay C, et al. The effectiveness and cost-effectiveness of minimal access surgery amongst people with gastro-oesophageal reflux disease – a UK collaborative study. The REFLUX trial. Health Technol Assess. 2008; 12(31): 1–214.
  6. Fichte J. Early Philosophical Writings. Trans. and ed. Breazeale D. Ithaca, NY: Cornell University Press, 1988

Giving Feedback to Patient and Public Advisors: New Guidance for Researchers

Whenever we are asked for our opinion we expect to be thanked, and we also like to know whether what we have contributed has been useful. If a statistician, qualitative researcher or health economist has contributed to a project, they would (rightfully) expect some acknowledgement and to be told whether their input had been incorporated. As patient and public contributors are key members of the research team, providing valuable insights that shape research design and delivery, it is right that they should also receive feedback on their contributions. But a recent study led by Dr Elspeth Mathie (CLAHRC East of England) found that routine feedback to PPI contributors is the exception rather than the rule. The mixed-methods study (questionnaire and semi-structured interviews) found that feedback was given in a variety of formats, with variable satisfaction. A key finding was that nearly 1 in 5 patient and public contributors (19%) reported never having received feedback for their involvement.[1]

How should feedback be given to public contributors?

There should be no ‘one size fits all’ approach to providing feedback to public contributors. The study recommends early conversations between researchers and public contributors to determine what kind of feedback should be given to contributors and when. The role of a Public and Patient Lead can help to facilitate these discussions and ensure feedback is given and received throughout a research project. Three main categories of feedback were identified:

  • Acknowledgement of contributions – confirming that input was received and saying ‘thanks’;
  • Information about the impact of contributions – Whether input was useful and how it was incorporated into the project;
  • Study success and progress – Information on whether a project was successful (e.g. securing grant funding/gaining ethical approval) and detail about how the project is progressing.


What are the benefits to providing feedback for public contributors?

The study also explored benefits of giving feedback to contributors. Feedback can:

  • Increase motivation for public contributors to be involved in future research projects;
  • Help improve a contributor’s input into future projects (if they know what has been useful, they can provide more of the same);
  • Build the public contributor’s confidence;
  • Help the researcher reflect on public involvement and the impact it has on research.


What does good feedback look like?

Researchers, PPI Leads and public contributors involved in the feedback study have co-produced Guidance for Researchers on providing feedback for public contributors to research.[2] The guidance explores the following:

  • Who gives feedback?
  • Why is PPI feedback important?
  • When to include PPI feedback in the research cycle?
  • What type of feedback?
  • How to give feedback?

Many patient and public contributors get involved in research to ‘make a difference’. This Guidance will hopefully help ensure that all contributors learn how their contributions have made a difference and will also inspire them to continue to provide input to future research projects.

— Magdalena Skrybant, PPIE Lead


  1. Mathie E, Wythe H, Munday D, et al. Reciprocal relationships and the importance of feedback in patient and public involvement: A mixed methods study. Health Expect. 2018.
  2. Centre for Research in Public Health and Community Care. Guidance for Researchers: Feedback. 2018

On Integrated Care

Integrated care is a big issue, promoted in the NHS and throughout the world. This push to integrate care extends to low- and middle-income countries (LMICs), despite the demonstrated and often spectacular success of vertical programmes to tackle diseases, such as HIV and malnutrition, in those countries.[1] Yet there are compelling reasons to integrate care:

  1. A greater proportion of people now survive to suffer multiple chronic diseases affecting multiple organ systems.
  2. Solipsistic focus on specific (vertical) programmes can lead to neglect and poor quality of the generality of care.
  3. Vertical programmes imply that the diagnosis has been made, yet health services need to cater for people with undiagnosed symptoms.

But what about empirical evidence for integrated programmes? One set of programmes commonly integrated are HIV programmes and programmes targeting maternal and neonatal health. A review by the Cochrane HIV/AIDS group,[2] commissioned by USAID, tackled this particular question back in 2011 – a time when HIV care was still a critical issue. The results were generally positive. For example, pregnancy rates declined when HIV and family planning services were integrated, and recovery rates from malnutrition improved in studies that examined this outcome. The review also identified factors associated with more or less successful migration from vertical to integrated programmes. Better results can be achieved by upfront investment in the integration process itself, focusing on staff education, preparation of appropriate case-notes and community engagement. In my opinion these are generic success factors for any programme of change.

Integrated care is a pervasive theme in NIHR CLAHRC West Midlands. We have recently completed an authoritative overview of 80 systematic reviews on this topic.[3] [4] Our work has stressed the importance of human resources in effecting service change, most particularly the importance of committed middle managers with high emotional intelligence,[5] and the role of ‘expectancy’, by which we mean that targets or incentives should only be used when the people at whom the target is aimed believe that they know how to achieve the target.[6] Our sister centre, the NIHR Global Health Unit on Improving Health in Slums, is also examining optimal health service configurations in slum areas of Africa and Asia,[7] where we will be studying integrated services using tools developed in CLAHRC WM.

— Richard Lilford, CLAHRC WM Director


  1. Lilford RJ. A Heretical Suggestion! NIHR CLAHRC West Midlands News Blog. 9 February 2018.
  2. Kennedy G, Kennedy C, Lindegren ML, Brickley D. Systematic review of integration of maternal, neonatal and child health and nutrition, family planning and HIV. Report No. 11-01-303-02. Washington, D.C.: Global Health Technical Assistance Project; 2011.
  3. Damery S, Flanagan S, Combes G. Does integrated care reduce hospital activity for patients with chronic diseases? An umbrella review of systematic reviews. BMJ Open. 2016; 6: e011952.
  4. Lilford RJ. Future Trends in NHS. NIHR CLAHRC West Midlands News Blog. 25 November 2016.
  5. Burgess N & Currie G. The Knowledge Brokering Role of the Hybrid Middle Level Manager: the Case of Healthcare. Br J Manage. 2013; 24(s1): s132-42.
  6. Lilford RJ. Financial Incentives for Providers of Health Care: The Baggage Handler and the Intensive Care Physician. NIHR CLAHRC West Midlands News Blog. 25 July 2014.
  7. Lilford RJ. Measuring the Quality of Health Care in Low-Income Settings. NIHR CLAHRC West Midlands News Blog. 18 August 2017.

Effective Collaboration between Academics and Practitioners Facilitates Research Uptake in Practice

New research has been conducted by Eivor Oborn, Professor of Entrepreneurship & Innovation at Warwick Business School, and Michael Barrett, Professor of Information Systems & Innovation Studies at Cambridge Judge Business School, to better understand the contribution of collaboration in bridging the gap between research and its uptake into practice (Inform Organ. 2018; 28[1]: 44-51).

Much has been written on the role of knowledge exchange in bridging the academic-practitioner divide. The common view is that academics ‘talk funny’, using specialised language, which often leads to the practical take-home messages being missed or ‘lost in translation’. The challenge for academics is to learn how to connect ‘theory- or evidence-driven’ knowledge with practitioners’ knowledge to ‘give sense’ and enable new insights to form.

The research examines four strategies by which academics may leverage their expertise in collaborative relationships with practitioners to realise, what the authors term: ‘Research Impact and Contributions to Knowledge’ (RICK).

  1. Maintain critical distance
    Academics may adopt a strategy of maintaining critical distance in how they engage in academic-practitioner relations for a variety of reasons, for example, to retain control of the subject of investigation.
  2. Prompt deeper engagement
    Academics who immerse themselves in a second (practical) domain become fluent in its language and gain practical expertise. For example, in the Warwick-led NIHR CLAHRC West Midlands, academics are embedded with, and work closely alongside, their NHS counterparts. This provides academics with knowledge-sharing and knowledge-transfer opportunities, enabling them to better respond to the knowledge requirements of the health service and, in some scenarios, to co-design research studies and capitalise on opportunities to promote the use of evidence from their research activities.
  3. Develop prescience
    Prescience describes a process of anticipating what we need to know – almost akin to ‘horizon-scanning’. A strategy of prescience would aim to anticipate, conceptualize, and influence significant problems that might arise in domains over time. The WBS-led Enterprise Research Centre employs this strategy and seeks to answer one central question: ‘what drives SME growth?’
  4. Achieve hybrid practices
    Engaged scholarship allows academics to expand their networks and collaboration with other domains and in doing so generate an entirely new field of ‘hybrid’ practices.

The research examines how the utility (such as practical or scientific usefulness) of contributions in academic-practitioner collaboration can be maximised. It calls for established journals to support a new genre of articles that involve engaged scholarship, produced by multidisciplinary teams of academic, practitioners and policymakers.

The research is published in Information & Organization journal, together with a collection of articles on Research Impact and Contributions to Knowledge (RICK) – a framework coined by co-author on the above research, Prof Michael Barrett.

— Nathalie Maillard, WBS Impact Officer

Patient Reported Outcome Measures: A Tool for Individual Patient Management, Not Just for Research

Research and practice are often thought of as totally different types of activity. For instance, research is governed by an extensive set of procedural requirements that do not apply to standard practice. However, many inquiries that would have counted as research in earlier times are now embedded in management and governance practices. Take, for instance, outcomes of treatments and procedures. When the CLAHRC WM Director was a young doctor, a study of the outcomes of, say, 50 Wertheim’s hysterectomies would have been a typical research undertaking. Now, this may be considered a standard audit and may take place as part of a hospital’s prudent monitoring of its work. CLAHRCs have a long tradition of building data analysis into routine practice.

One active area of practice concerns patient-reported outcome measures (PROMs). Though often thought of as endpoints for research, PROMs are also a tool for routine care: CLAHRC WM collaborator Melanie Calvert and members of the Centre for Patient Reported Outcomes Research are working closely with clinicians at University Hospital Birmingham NHS Foundation Trust to capture electronic PROMs, which are used for real-time monitoring of patient symptoms and to tailor care to individual patient needs. Prof Calvert notes that these data have the potential to be used for multiple purposes: aggregated data may be used to inform and improve service delivery, whilst individual patient data may be used alongside remote clinical monitoring to guide the frequency of outpatient appointments. Attendance at the hospital can be flexible, dictated by patient need and response to therapy.

You can hear more about their work and exciting new developments in PROMs research at the forthcoming PROMs conference, sponsored by CLAHRC WM and hosted by the University of Birmingham on Wednesday 20 June 2018. You can register online; registration is open until 11 June.

— Richard Lilford, CLAHRC WM Director

— Melanie Calvert, Professor of Outcomes Methodology

Evaluation of High vs. Low Cost Service Interventions

Generic service interventions vary considerably in their costs. Human resource interventions, such as improving the nurse-to-patient ratio or making more specialists available over the weekend, tend to be expensive. Other service interventions, such as an educational intervention to improve team working in multi-disciplinary clinical teams, are less expensive. The effect size at which an intervention becomes cost-effective is smaller for lower-cost interventions than for those which are more expensive. This axiom – that the cost of an intervention determines the effectiveness threshold at which it becomes cost-effective – has profound implications for the design and analysis of evaluative studies. The nub of the argument is that, when the cost of an intervention is low, the size of the health effect that would justify its deployment may be too small to be detected by affordable or logistically feasible studies. Before developing this argument further, let me be clear that by cost I mean net cost (not just the cost of the intervention itself), and that costs must be compared with respect to a common denominator – e.g. cost per patient, cost per 1,000 patients, etc.

Let us imagine that we wish to improve consultant cover at weekends. This is a very expensive intervention (whether measured in terms of the cost of hiring new consultants or the opportunity cost of re-allocating consultant time).[1] Such an intervention would need to provide considerable health gain to justify its substantial cost. In such a case it is reasonable to expect – indeed require – that any evaluative study should be able to detect patient benefit, say in terms of lives saved and adverse events avoided. If no improvement in health is detected, then we must conclude either that the study was ‘underpowered’ or that any effects are too small to justify the intervention costs. If the study was not underpowered – that is, if the sample size was large enough to detect a health benefit that would justify the cost of the intervention – then we conclude that the intervention does not promise good value for money. We leave aside the issue of exactly how the threshold effect (the effect that justifies a given intervention cost) can be determined, save to point out that methods to do so exist and that we have advocated the use of such methods (prospective health economic modelling) for some time.[1-3]

Take, as an opposite extreme, an intervention to promote hand-washing – perhaps one based on ‘nudge theory’, such as a sticker with an illustration of a ‘watching eye’ placed over hospital sinks.[4] Harms are unlikely and the cost of the intervention is nugatory. It follows that there is not much downside to intervening: even if the intervention were totally ineffective, no real harm would result. A massive trial with an endpoint such as hospital-acquired infection rates would be overkill in such a scenario, because the threshold effect needed to justify the intervention is much smaller than the minimal difference detectable in any affordable or logistically feasible study. Using ‘upstream’ endpoints, such as “was the intervention deployed?” and “did it increase use of hand-washing materials?” (necessary but not sufficient conditions for effectiveness), would suffice in an evaluation. Many interventions are rather more expensive than the promotion of hand-washing, but much less expensive than large HR initiatives – the above-mentioned educational intervention to promote team-work, for example. Here it might be too much to expect, or require, quality of life or mortality to change sufficiently for any change to be statistically detectable in an affordable trial. However, one might expect to pick up a broad range of other signals that an intervention effect was likely – for example, that team working and patient satisfaction had improved, and that the intervention was adopted and supported by staff. That one might have to rely on such proxies for quality of life and life years has been referred to as “an inconvenient truth in service delivery research.”[5] It is important that grant awarding panels should not follow a one-size-fits-all approach to service delivery research, but rather tailor their requirements according to the cost of the intervention concerned. Likewise, they should be prepared to integrate many sources of evidence in their assessment of health benefit parameters, as argued elsewhere,[1] [6-8] and in the report of a recent study later in this issue of your News Blog.
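The trade-off between intervention cost and detectable effect size can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative – the costs, value per death averted, baseline risk and trial size are all invented for the example – but it shows how the cost-effectiveness threshold for a cheap intervention can fall far below the smallest effect even a very large trial could detect, whereas for an expensive intervention the threshold effect is plausibly detectable.

```python
import math

# Two-sided 5% significance, 80% power (standard normal quantiles)
Z_ALPHA, Z_BETA = 1.96, 0.84


def threshold_arr(net_cost_per_patient: float, value_per_death_averted: float) -> float:
    """Absolute risk reduction (ARR) in mortality needed to justify the net cost."""
    return net_cost_per_patient / value_per_death_averted


def minimal_detectable_arr(n_per_arm: int, baseline_risk: float) -> float:
    """Smallest ARR detectable in a two-arm trial (normal approximation)."""
    return (Z_ALPHA + Z_BETA) * math.sqrt(
        2 * baseline_risk * (1 - baseline_risk)
    ) / math.sqrt(n_per_arm)


# Illustrative assumption: a death averted is worth ~10 QALYs at £20,000/QALY.
VALUE_PER_DEATH = 10 * 20_000

# A large (50,000 per arm) trial with 5% baseline mortality
mdd = minimal_detectable_arr(n_per_arm=50_000, baseline_risk=0.05)

for name, cost in [("weekend consultant cover", 1_000), ("hand-washing nudge", 5)]:
    thr = threshold_arr(cost, VALUE_PER_DEATH)
    verdict = "detectable" if thr >= mdd else "below the detectable minimum"
    print(f"{name}: threshold ARR {thr:.6f} vs detectable {mdd:.6f} -> {verdict}")
```

Under these invented numbers the expensive intervention must (and can) show roughly a 0.5 percentage-point mortality reduction, while the nudge is justified by an effect some 200 times smaller than anything the trial could distinguish from noise – hence the case for upstream endpoints.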

— Richard Lilford, CLAHRC WM Director


  1. Sutton M, Birbeck SG, Martin G, Meacock R, Morris S, Sculpher M, Street A, Watson SI, Lilford RJ. Economic analysis of service and delivery interventions in health care. Health Serv Deliv Res. 2018; 6(5).
  2. Girling A, Lilford R, Cole A, Young T. Headroom Approach to Device Development: Current and Future Directions. Int J Technol Assess Health Care. 2015; 31(5): 331-8.
  3. Yao GL, Novielli N, Manaseki-Holland S, Chen YF, van der Klink M, Barach P, Chilton PJ, Lilford RJ, European HANDOVER Research Collaborative. Evaluation of a predevelopment service delivery intervention: an application to improve clinical handovers. BMJ Qual Saf. 2012; 21(s1):i29-i38.
  4. King D, Vlaev I, Everett-Thomas R, Fitzpatrick M, Darzi A, Birnbach DJ. “Priming” Hand Hygiene Compliance in Clinical Environments. Health Psychol. 2016; 35(1): 96-101.
  5. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  6. Watson SI & Lilford RJ. Essay 1: Integrating multiple sources of evidence: a Bayesian perspective. In: Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Southampton (UK): NIHR Journals Library, 2016.
  7. Watson SI, Chen YF, Bion JF, Aldridge CP, Girling A, Lilford RJ; HiSLAC Collaboration. Protocol for the health economic evaluation of increasing the weekend specialist to patient ratio in hospitals in England. BMJ Open. 2018; 8(2): e015561.
  8. Lilford RJ, Girling AJ, Sheikh A, Coleman JJ, Chilton PJ, Burn SL, Jenkinson DJ, Blake L, Hemming K. Protocol for evaluation of the cost-effectiveness of ePrescribing systems and candidate prototype for other related health information technologies. BMC Health Serv Res. 2014; 14: 314.

Private Providers are Consulted More Often than Public Providers in Slums

This finding comes from a number of studies across many parts of the world, including:

  1. India [1] – where private providers were both preferred over public providers and consulted more often. Private providers were more accessible in terms of distance from residence.
  2. Kenyan maternity care [2] – women preferred private over public providers, even though the private providers were rated as ‘inappropriate’ by government.
  3. Dhaka slums [3] – this is an important study because it divides health facilities according to Ahmed’s classification.[4] Most commonly consulted were pharmacies (43%), followed by government hospitals (14%), then private hospitals (4%), independent medical practitioners (3%), informal providers (3%), and traditional healers (1%). Dissatisfaction was highest with government hospitals (25%) and lowest with informal providers and pharmacists.
  4. Accra’s Sodom and Gomorrah slum [5] – the facilities accessed were similar to those in Dhaka; 61% pharmacies and 33% hospitals. In this study lack of insurance was a major factor limiting access, while distance from facilities was not.
  5. Mumbai slums [6] – this study did not look at pharmacies specifically, but overall local private providers were the most widely used facilities. The use of public providers rose in proportion to the seriousness of the disorder, from 15% for minor conditions to 42% for serious illness, and 60% for maternal health.

One important conclusion from the above literature is that facilities should be classified according to whether they lie inside or outside the slum, and that pharmacies / drug stores should form their own stratum rather than being conflated with informal or private providers. Private allopathic providers should be classified as medical, other registered health professional (nurse / medical officer), community health worker (with formal links to the public service), and informal non-qualified provider. In studies that cross slum boundaries, multi-level modelling should be used to allow for correlations within clusters and to avoid an ecological fallacy / Simpson’s paradox.[7]
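A toy example, with invented satisfaction counts, illustrates why naive pooling across slums can mislead (Simpson's paradox): public providers can score better within every slum yet worse in the pooled data, simply because the mix of providers consulted differs sharply between slums.

```python
def rate(pair):
    """Satisfaction rate from a (satisfied, total) count pair."""
    satisfied, total = pair
    return satisfied / total


# Hypothetical counts: (satisfied users, total users) by provider type
data = {
    "slum_A": {"public": (90, 100), "private": (700, 800)},
    "slum_B": {"public": (30, 400), "private": (5, 100)},
}

# Within each slum, public providers score higher...
for slum, providers in data.items():
    print(slum, {p: round(rate(v), 3) for p, v in providers.items()})

# ...but pooling across slums reverses the comparison, because most
# private consultations happen in slum A (high satisfaction overall)
# and most public consultations in slum B (low satisfaction overall).
pooled = {}
for p in ("public", "private"):
    satisfied = sum(data[s][p][0] for s in data)
    total = sum(data[s][p][1] for s in data)
    pooled[p] = satisfied / total
print("pooled", {p: round(r, 3) for p, r in pooled.items()})
```

A multi-level model, by estimating the provider effect within slums, avoids this reversal; the pooled comparison conflates provider type with slum.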

The above studies are all based on population/household questionnaires. Another Dhaka-based study takes a different approach [8] – instead of asking people who live in slums where they go for their health care, Adams and colleagues mapped health facilities across six urban slums. They found that 80% of the 1,041 facilities identified in their spatial survey were privately operated. Unlike NGO- and government-funded clinics, private clinics operate in the evenings. Only a third of staff in these private clinics have a medical qualification. Overall, the density of health delivery points across the six slums was 1.5 per 10,000 population. The average distance to a major government hospital offering outpatient services was 3 km.

In our NIHR Global Health Research Unit on Improving Health in Slums we will be combining supply-side surveys of facilities with demand-side household surveys of use and satisfaction. We plan to go further by examining the socio-political structures that have determined patterns of provision and that may facilitate or impede the future development of a more accessible and high-quality service. We will then model the costs and benefits of alternative logistically and politically viable options using an iterative approach. In developing these models we will work closely with residents of slums and with those who control the purse strings.

— Richard Lilford, CLAHRC WM Director


  1. Banerjee A, Bhawalkar JS, Jadhav SL, Rathod H, Khedkar DT. Access to Health Services Among Slum Dwellers in an Industrial Township and Surrounding Rural Areas: A Rapid Epidemiological Assessment. J Family Med Prim Care. 2012; 1(1): 20-6.
  2. Fotso JC & Mukiira C. Perceived quality of and access to care among poor urban women in Kenya and their utilization of delivery care: harnessing the potential of private clinics. Health Policy Plan. 2012; 27: 505-15.
  3. Khan MMH, Grübner O, Krämer A. Frequently used healthcare services in urban slums of Dhaka and adjacent rural areas and their determinants. J Public Health. 2012; 34(2): 261-71.
  4. Ahmed SM, Tomson G, Petzold M, Kabir ZN. Socioeconomic status overrides age and gender in determining health-seeking behaviour in rural Bangladesh. Bull World Health Organ. 2005; 83: 109-17.
  5. Owusu-Ansah F, Tagbor H, Afi Togbe M. Access to health in city slum dwellers: The case of Sodom and Gomorrah in Accra, Ghana. Afr J Prim Health Care Fam Med. 2016; 8(1): a822.
  6. Naydenova E, Raghu A, Ernst J, Sahariah SA, Gandhi M, Murphy G. Healthcare choices in Mumbai slums: A cross-sectional study. Wellcome Open Research 2017; 2: 115.
  7. Lilford RJ. Simpson’s Paradox and Discrimination. NIHR CLAHRC West Midlands News Blog. 28 November 2014.
  8. Adams AM, Islam R, Ahmed T. Who serves the urban poor? A geospatial and descriptive analysis of health services in slum settlements in Dhaka, Bangladesh. Health Policy Plan. 2015; 30: i32-45.

The Affordability of Care – Hard to Measure but Increasingly Important

Traditionally, epidemiologists who worked on the relationship between wealth and disease were concerned with the effect of the first on the second. But, of course, disease can also affect wealth, and economists such as Jeffrey Sachs spotted the resulting circularity: poverty -> disease -> more poverty -> more disease. Increasingly, clinicians have started to worry about the catastrophic costs of disease; my colleague Bertie Squire from the Liverpool School of Tropical Medicine is searching for treatment pathways to mitigate the financial consequences of recurrent tuberculosis. The Oregon experiment, reported in your News Blog,[1] shows that the most obvious benefit from extending insurance coverage to the uninsured lies in reducing the incidence of catastrophic loss.

Catastrophic loss: “Events whose consequences are extremely harsh in their severity, relating to one or more losses such as bankruptcy, total loss of assets, or loss of life.” (The Law Dictionary, 2017).

An important question, then, is how generous publicly financed insurance can be. Or, to put the question another way, how can the affordability of health care be measured? This is a rather different question from that of the affordability of a particular new technology – a question of its incremental cost-effectiveness ratio. This is because health technology assessment (HTA) is designed to determine the upper bound on the ‘affordability’ of an individual technology, while the fiscal question of affordability as a whole is concerned with total expenditure.

A paper in a recent issue of JAMA proposes an approach based on total health costs divided by median household income.[2] This might be a useful rule of thumb, but it is beset by problems, as pointed out in two leading articles.[3] [4] One such problem arises from the observation that some of the costs of health care / insurance premiums likely come out of household incomes – companies would probably pay employees more if it were not for the insurance premiums – so there is some double counting going on. More fundamentally, affordability cannot be inferred simply from the proportion of expenditure going on health care. One could argue, for instance, that the richer the country (the higher its per capita GDP), the greater should be its expenditure on health. One way to get at the affordability construct would be to examine the cost of health care as a proportion of the money left over after subtracting the ‘essentials’ of housing, food, clothing and transport to and from school / work. Another would be to calculate the effect of health care costs on how many families tip over into bankruptcy, or teeter on the edge thereof. Unaffordability would still vary by type of family and type of insurance system, especially in a variegated health system like that of the USA.

A simple number, like the proportion of GDP spent on health, can give only a very coarse-grained idea of the consequences of increasing or decreasing the proportion of resources dedicated to health care. It is also important to consider the effect of high health care costs on the broader economy. There is always a danger that, absent price signals, the allocation to health will exceed what can be justified in terms of the benefit realised. That is to say that, given information asymmetries, health care will be driven more by provider than by consumer needs.
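The difference between the income-share measure and the residual-income measure can be shown with a single hypothetical household (all figures invented for illustration):

```python
def affordability_index(health_costs: float, household_income: float) -> float:
    """JAMA-style index: health costs as a share of household income."""
    return health_costs / household_income


def residual_income_share(
    health_costs: float, household_income: float, essentials: float
) -> float:
    """Health costs as a share of income left after 'essentials'
    (housing, food, clothing, transport)."""
    return health_costs / (household_income - essentials)


# A hypothetical household: $60,000 income, $45,000 on essentials,
# $9,000 on health care / insurance premiums
income, essentials, health = 60_000, 45_000, 9_000

print(f"share of income:          {affordability_index(health, income):.0%}")
print(f"share of residual income: {residual_income_share(health, income, essentials):.0%}")
```

The same household spends 15% of its income on health care but 60% of its discretionary income, so the residual-income measure paints a far starker picture of financial stress for families whose essentials consume most of their budget.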

— Richard Lilford, CLAHRC WM Director


  1. Baicker K, Taubman SL, Allen HL, et al. The Oregon Experiment – Effects of Medicaid on Clinical Outcomes. N Engl J Med. 2013; 368: 1713-22.
  2. Emanuel EJ, Glickman A, Johnson D. Measuring the burden of health care costs on US families: the Affordability Index. JAMA. 2017; 318(19): 1863-4.
  3. Antos J, Capretta JC. Challenges in Measuring the Affordability of US Health Care. JAMA. 2017; 318(19): 1871-2.
  4. Reinhardt U. What Level of Health Spending Is “Affordable?” JAMA. 2017; 318(19): 1869-70.