Tag Archives: Patients

Update on Ratios of Patients to Qualified Nurses

News Blog readers may know that there is a considerable literature on nursing skill mix and patient outcomes in hospital. One of the most important studies is Paul Shekelle’s masterful systematic review from 2013.[1] Taken in the round, the literature shows a consistent association between the ratio of skilled nurses to patients and improved outcomes. A recent large cross-sectional study from a number of European countries reaches similar conclusions;[2] many outcomes of hospital care (including death rates) were improved in association with high levels of qualified nurses. Mortality in hospitals with a favourable ratio of qualified nurses to patients was about 10% lower than in those with a less favourable ratio. An interesting question is what nurses do that could make such a large difference. An obvious mediating factor would be vigilance in recording vital signs and responding appropriately to signs of deteriorating physiology. Managing new technology, such as infusion equipment, may also be important. Getting the right medicine into the right patient at the right time is a further way in which good nursing could improve outcomes. Improved ratios are also strongly associated with patient satisfaction. Reassurance and tender care may mediate better physical outcomes, given the close interplay between the nervous and immune systems.[3] These, and other, causal pathways are represented in the figure.

[Figure: Update on Patient to Qualified Nurse Ratios – causal pathways by which nurse staffing may influence patient outcomes]

The above study did not look at process variables that might mediate the beneficial impact of nurse staffing. However, given plausible mechanisms by which nurses may improve outcomes, and consistent (albeit non-experimental) evidence, it is not unreasonable to conclude that improving the ratio of qualified nurses to patients will improve care. Saving money by skill substitution is therefore likely to be a false economy, since health economic models are sensitive to quite modest reductions in adverse events.[4]

 — Richard Lilford, CLAHRC WM Director


  1. Shekelle PG. Nurse-patient ratios as a patient safety strategy: a systematic review. Ann Intern Med. 2013; 158(5 Pt 2): 404-9.
  2. Aiken LH, Sloane D, Griffiths P, et al. Nursing skill mix in European hospitals: cross-sectional study of the association with mortality, patient ratings, and quality of care. BMJ Qual Saf. 2017; 26(7): 559-68.
  3. Lilford RJ. Brain Activity and Heart Disease – a New Mechanism. NIHR CLAHRC West Midlands News Blog. 9 June 2017.
  4. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.

Measuring the Quality of Health Care in Low-Income Settings

Measuring the quality of health care in High-Income Countries (HICs) is deceptively difficult, as shown by work carried out by many research groups, including CLAHRC WM.[1-5] However, a large amount of information is collected routinely by health care facilities in HICs. These data include outcome data, such as Standardised Mortality Ratios (SMRs), death rates from ‘causes amenable to health care’, readmission rates, morbidity rates (such as pressure damage), and patient satisfaction, along with process data, such as waiting times, prescribing errors, and antibiotic use. There is controversy over many of these endpoints, and some are much better barometers of safety than others. While incident reporting systems provide a very poor basis for epidemiological studies (that is not their purpose), case-note review provides arguably the best and most widely used method for formal study of care quality – at least in hospitals.[3] [6] [7] Measuring safety in primary care is hampered by case-notes that are less comprehensive than those kept in hospitals. Nevertheless, an increasing amount of process information is now available from general practices, particularly in countries (such as the UK) that collect this information routinely in electronic systems. It is possible, for example, to measure rates of statin prescriptions for people at high cardiovascular risk, and of anticoagulants for people with atrial fibrillation, as our CLAHRC has shown.[8] [9] HICs also conduct frequent audits of specific aspects of care – essentially by asking clinicians to fill in detailed pro formas for patients in various categories. For instance, National Audits in the UK have covered all patients experiencing a myocardial infarction.[10] Direct observation of care has been used most often to understand barriers and facilitators to good practice, rather than to measure quality / safety in a quantitative way.
However, routine data collection systems do provide a measure of patient satisfaction with care – in the UK, people who have been admitted to hospital are surveyed on a regular basis,[11] and general practices are required to arrange for anonymous patient feedback every year.[12] Mystery shoppers (simulated patients) have also been used from time to time, albeit not as a comparative epidemiological tool.[13]

The picture is very different in Low- and Middle-Income Countries (LMICs) and, again, it is even more difficult to assess the quality of out-of-hospital care than of hospital care.[14] Even in hospitals, routine mortality data may not be available, let alone process data. An exception is the network of paediatric centres established in Kenya by Prof Michael English.[15] Occasionally, large-scale bespoke studies are carried out in LMICs – for example, a recent study in which CLAHRC WM participated measured 30-day post-operative mortality rates in over 60 hospitals across low-, middle- and high-income countries.[16]

The quality and outcomes of care in community settings in LMICs are woefully understudied. We are attempting to correct this dearth of information in a study of nine slums spread across four African and Asian countries. One of the largest obstacles to such a study is the very fragmented nature of health care provision in community settings in LMICs – a finding confirmed by a recent Lancet commission.[17] There are no routine data collection systems, and even deaths are not registered routinely. Where to start?

In this blog post I lay out a framework for measuring the quality of care delivered by largely isolated providers, many of whom are unregulated, in a system with no routine data collection and no archive of case-notes. In such a constrained situation I can think of three (non-exclusive) types of study:

  1. Direct observation of the facilities where care is provided, without actually observing care or its effects. Such observation is limited to some of the basic building blocks of a health care system – what services are present (e.g. number of pharmacies per 1,000 population) and their availability (how often the pharmacy is open; how often a doctor / nurse / medical officer is available for consultation in a clinic). Such a ‘mapping’ exercise does not capture all care provided – e.g. it will miss hospital care and municipal / hospital-based outreach care, such as vaccination provided by Community Health Workers. It will also miss any IT-based care using apps or online consultations.
  2. Direct observation of the care process by external observers. Researchers can observe care at close quarters, for example during consultations. Such observations can cover the humanity of care (which could be scored) and/or technical quality (which again could be scored against explicit standards and/or on a holistic (implicit) basis).[6] [7] An explicit standard would have to be based mainly on ‘if-then’ rules – e.g. if a patient complained of weight loss, excessive thirst, or recurrent boils, did the clinician test their urine for sugar; if the patient complained of a persistent productive cough and night sweats, was a test for TB arranged? Implicit standards suffer from low reliability (high inter-observer variation).[18] Moreover, community providers in LMICs are arguably likely to resist what they might perceive as an intrusive or even threatening form of observation, and those who permitted such scrutiny would be unlikely to constitute a random sample. More indirect observations – say of the length of consultations – would have some value, but might still be seen as intrusive. Where providers did permit direct observation, the results may represent an ‘upper bound’ on performance.
  3. Quality as assessed through the eyes of the patient / members of the public. Given the limitations of independent observation, the lack of anamnestic records of clinical encounters in the form of case-notes, the absence of routine data, and likely limits on access for independent direct observers, most information may need to be collected from patients themselves or, as discussed below, from people masquerading as patients (simulated patients / mystery shoppers). The following types of data collection method can be considered:
    1. Questions directed at members of the public regarding preventive services. So, households could be asked about vaccinations, surveillance (say for malnutrition), and their knowledge of screening services offered on a routine basis. This is likely to provide a fairly accurate measure of the quality of preventive services (provided the sampling strategy was carefully designed to yield a representative sample). This method could also provide information on advice and care provided through IT resources. This is a situation where some anamnestic data collection would be possible (with the permission of the respondent) since it would be possible to scroll back through the electronic ‘record’.
    2. Opinion surveys / debriefing following consultations. This method offers a viable alternative to observation of consultations and would be less expensive (though still not inexpensive). Information on the kindness / humanity of services could easily be obtained and quantified, along with ease of access to ambulatory and emergency care.[19] Measuring clinical quality would again rely on observations against a gold standard,[20] but given the large number of possible clinical scenarios, standardising quality assessment would be tricky. However, a coarse-grained assessment would be possible and, given the low quality levels reported anecdotally, failure to achieve a high degree of standardisation might not vitiate collection of important information. Such a method might provide insights into the relative merits and demerits of traditional vs. modern health care, private vs. public provision, etc., provided that these differences were large.
    3. Simulated patients offering standardised clinical scenarios. This is arguably the optimal method of technical quality assessment in settings where case-notes are perfunctory or not available. Again, consultations could be scored for humanity of care and clinical / technical competence, and again explicit and/or implicit standards could be used. However, we do not believe it would be ethical to use this method without obtaining assent from providers. There are some examples of successful use of this method in LMICs.[21] [22] However, if my premise that providers must assent to the use of simulated patients is accepted, then it is necessary first to establish trust between providers and academic teams, and this takes time. Again, there is a high probability that only the better providers would give assent, in which case observations would likely represent ‘upper bounds’ on quality.
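The explicit ‘if-then’ standards described in option 2 lend themselves to simple, transparent scoring. The Python sketch below shows one way such rules could be encoded and applied to a debriefed or simulated consultation; the rule names, triggering complaints, and expected actions are illustrative assumptions, not items from any validated instrument:

```python
# Sketch of explicit 'if-then' quality scoring for a consultation,
# as might be applied to debriefing or simulated-patient data.
# Rule content below is illustrative, not a validated instrument.

RULES = [
    # (rule name, triggering complaints, expected clinical action)
    ("diabetes_screen",
     {"weight loss", "excessive thirst", "recurrent boils"},
     "urine_glucose_test"),
    ("tb_workup",
     {"persistent productive cough", "night sweats"},
     "tb_test"),
]

def score_consultation(complaints, actions):
    """Return (rules_met, rules_applicable) for one consultation.

    complaints: set of presenting complaints reported by the patient.
    actions: set of actions the provider was observed/reported to take.
    """
    applicable = met = 0
    for _name, triggers, expected in RULES:
        if triggers & complaints:        # rule applies if any trigger is present
            applicable += 1
            if expected in actions:
                met += 1
    return met, applicable

met, applicable = score_consultation(
    {"excessive thirst", "headache"}, {"urine_glucose_test"})
print(f"{met}/{applicable} applicable rules met")   # prints "1/1 applicable rules met"
```

Because each rule is explicit, inter-observer disagreement is confined to deciding whether a trigger or an action occurred, which is where explicit standards gain their reliability advantage over implicit (holistic) judgement.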

In conclusion, I think that the basic tools of quality assessment, in the current situation where direct observation and/or simulated patients are not acceptable, are a combination of:

  1. Direct observation of facilities that exist, along with ease of access to them, and
  2. Debriefing of people who have recently used the health facilities, or who might have received preventive services that are not based in these facilities.

We do not think that the above-mentioned shortcomings of these methods are a reason to eschew assessment of service quality in community settings (such as slums) in LMICs – after all, one of the most powerful levers for improvement is quantitative evidence of current care quality.[23] [24] The perfect should not be the enemy of the good. Moreover, if the anecdotes I have heard regarding care quality (providers who hand out only three types of pill – red, yellow and blue; doctors and nurses who do not turn up for work; prescription of antibiotics for clearly non-infectious conditions) are even partly true, then these methods would be more than sufficient to document standards and compare them across types of provider and different settings.

— Richard Lilford, CLAHRC WM Director


  1. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 1. Conceptualising and developing interventions. Qual Saf Health Care. 2008; 17(3): 158-62.
  2. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 2. Study design. Qual Saf Health Care. 2008; 17(3): 163-9.
  3. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 3. End points and measurement. Qual Saf Health Care. 2008; 17(3): 170-7.
  4. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 4. One size does not fit all. Qual Saf Health Care. 2008; 17(3): 178-81.
  5. Brown C, Lilford R. Evaluating service delivery interventions to enhance patient safety. BMJ. 2008; 337: a2764.
  6. Benning A, Ghaleb M, Suokas A, Dixon-Woods M, Dawson J, Barber N, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ. 2011; 342: d195.
  7. Benning A, Dixon-Woods M, Nwulu U, Ghaleb M, Dawson J, Barber N, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ. 2011; 342: d199.
  8. Finnikin S, Ryan R, Marshall T. Cohort study investigating the relationship between cholesterol, cardiovascular risk score and the prescribing of statins in UK primary care: study protocol. BMJ Open. 2016; 6(11): e013120.
  9. Adderley N, Ryan R, Marshall T. The role of contraindications in prescribing anticoagulants to patients with atrial fibrillation: a cross-sectional analysis of primary care data in the UK. Br J Gen Pract. 2017. [ePub].
  10. Herrett E, Smeeth L, Walker L, Weston C, on behalf of the MINAP Academic Group. The Myocardial Ischaemia National Audit Project (MINAP). Heart. 2010; 96: 1264-7.
  11. Care Quality Commission. Adult inpatient survey 2016. Newcastle-upon-Tyne, UK: Care Quality Commission, 2017.
  12. Ipsos MORI. GP Patient Survey. National Report. July 2017 Publication. London: NHS England, 2017.
  13. Grant C, Nicholas R, Moore L, Sailsbury C. An observational study comparing quality of care in walk-in centres with general practice and NHS Direct using standardised patients. BMJ. 2002; 324: 1556.
  14. Nolte E & McKee M. Measuring and evaluating performance. In: Smith RD & Hanson K (eds). Health Systems in Low- and Middle-Income Countries: An economic and policy perspective. Oxford: Oxford University Press; 2011.
  15. Tuti T, Bitok M, Malla L, Paton C, Muinga N, Gathara D, et al. Improving documentation of clinical care within a clinical information network: an essential initial step in efforts to understand and improve care in Kenyan hospitals. BMJ Global Health. 2016; 1(1): e000028.
  16. Global Surg Collaborative. Mortality of emergency abdominal surgery in high-, middle- and low-income countries. Br J Surg. 2016; 103(8): 971-88.
  17. McPake B, Hanson K. Managing the public-private mix to achieve universal health coverage. Lancet. 2016; 388: 622-30.
  18. Lilford R, Edwards A, Girling A, Hofer T, Di Tanna GL, Petty J, Nicholl J. Inter-rater reliability of case-note audit: a systematic review. J Health Serv Res Policy. 2007; 12(3): 173-80.
  19. Schoen C, Osborn R, Huynh PT, Doty M, Davis K, Zapert K, Peugh J. Primary Care and Health System Performance: Adults’ Experiences in Five Countries. Health Aff. 2004.
  20. Kruk ME & Freedman LP. Assessing health system performance in developing countries: A review of the literature. Health Policy. 2008; 85: 263-76.
  21. Smith F. Private local pharmacies in low- and middle-income countries: a review of interventions to enhance their role in public health. Trop Med Int Health. 2009; 14(3): 362-72.
  22. Satyanarayana S, Kwan A, Daniels B, Subbaramn R, McDowell A, Bergkvist S, et al. Use of standardised patients to assess antibiotic dispensing for tuberculosis by pharmacies in urban India: a cross-sectional study. Lancet Infect Dis. 2016; 16(11): 1261-8.
  23. Kudzma E C. Florence Nightingale and healthcare reform. Nurs Sci Q. 2006; 19(1): 61-4.
  24. Donabedian A. The end results of health care: Ernest Codman’s contribution to quality assessment and beyond. Milbank Q. 1989; 67(2): 233-56.

Patient Involvement in Patient Safety: Null Result from a High Quality Study

Most patient safety evaluations are simple before-and-after / time series improvement studies. So it is always refreshing to find a study with contemporaneous controls. Lawton and her colleagues report a nice cluster randomised trial covering 33 hospital wards in five hospitals.[1] They evaluate a well-known patient safety intervention based on the idea of giving patients a more active role in monitoring safety on their ward.

The trial produced a null result, but some safety measures moved in the right direction, and there was a correlation between the enthusiasm / fidelity with which the intervention was implemented and those measures.

Safety is hard to measure (as the authors state), and improvement often builds on a number of small incremental changes. So, it would be very nice to see this intervention replicated, possibly with measures to generate greater commitment from ward staff.
Here is the problem with patient safety research: on the one hand, the subject is full of hubristic claims made on the basis of insufficient (weak) evidence; on the other, high quality studies, such as the one reported here, often fail to find an effect. In many cases, as in the study reported here, there are reasons to suspect a type II error (a false negative result). Beware also the rising tide – the phenomenon that arises when a trial occurs in the context of a strong secular trend, so that the trend ‘swallows up’ the headroom for a marginal intervention effect.[2]

What is to be done? First, do not declare defeat too early. Second, be prepared to carry out larger studies, or replication studies that can be combined in a meta-analysis. Third, make multiple measurements across a causal chain [3] and synthesise these disparate data using Bayesian networks.[4] Fourth, further to the Bayesian approach, do not dichotomise results into null and positive according to the standard frequentist convention. It is stupid to classify a p-value of 0.06 as null if other evidence supports an effect, or a p-value of 0.04 as positive if other data point the opposite way. Knowledge of complex areas, such as service interventions to improve safety, should take account of patterns in the data and of information external to the index study. Bayesian networks provide a framework for such an analysis.[4] [5]
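The point about not letting p = 0.06 vs. p = 0.04 flip a conclusion can be illustrated with the simplest possible Bayesian synthesis: a normal prior (representing external evidence) combined with a normal likelihood (the index study). All numbers below are illustrative assumptions, not estimates from the PRASE trial:

```python
# Sketch of a normal-normal Bayesian update: external evidence (prior)
# combined with an index study (likelihood) on, say, a log odds ratio,
# where negative values mean fewer adverse events. Illustrative numbers.
from math import erf, sqrt

def posterior_prob_benefit(prior_mean, prior_sd, est, se):
    """Posterior P(effect < 0) under a conjugate normal-normal model."""
    w_prior, w_data = 1 / prior_sd ** 2, 1 / se ** 2
    post_mean = (w_prior * prior_mean + w_data * est) / (w_prior + w_data)
    post_sd = sqrt(1 / (w_prior + w_data))
    z = (0 - post_mean) / post_sd
    return 0.5 * (1 + erf(z / sqrt(2)))          # standard normal CDF at z

# Two hypothetical trials with the same SE: one 'significant' (p ~ 0.04),
# one 'null' (p ~ 0.06) under the two-sided frequentist convention.
prior_mean, prior_sd = -0.1, 0.2                 # external evidence mildly favours benefit
for est in (-0.205, -0.188):                     # est/se of -2.05 vs. -1.88
    p = posterior_prob_benefit(prior_mean, prior_sd, est, 0.10)
    print(f"estimate {est}: posterior P(benefit) = {p:.3f}")
```

The two hypothetical trials sit on opposite sides of the conventional significance threshold, yet their posterior probabilities of benefit are nearly identical (both roughly 0.97, differing by under 0.01); this is the quantitative sense in which dichotomising at p = 0.05 discards information.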

— Richard Lilford, CLAHRC WM Director


  1. Lawton R, O’Hara JK, Sheard L, et al. Can patient involvement improve patient safety? A cluster randomised control trial of the Patient Reporting and Action for a Safe Environment (PRASE) intervention. BMJ Qual Saf. 2017; 26: 622-31.
  2. Chen YF, Hemming K, Stevens AJ, Lilford RJ. Secular trends and evaluation of complex interventions: the rising tide phenomenon. BMJ Qual Saf. 2016; 25: 303-10.
  3. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  4. Watson SI & Lilford RJ. Essay 1: Integrating multiple sources of evidence: a Bayesian perspective. In: Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Southampton (UK): NIHR Journals Library, 2016.
  5. Lilford RJ, Girling AJ, Sheikh, et al. Protocol for evaluation of the cost-effectiveness of ePrescribing systems and candidate prototype for other related health information technologies. BMC Health Serv Res. 2014; 14: 314.

Patient and Public Involvement in Data Collection

Further to last fortnight’s News Blog article,[1] I have found another study in which patients participated in data collection.[2] This paper, by and large, corroborates the procedural requirements for public and patient involvement in data collection that I had specified. For example, it was necessary for lay observers to undergo DBS checks; the ethics approval form had to include lay observers; and training had to be arranged for them. Recruitment of lay observers proved more difficult than anticipated. According to feedback, the lay observers had a positive experience and brought a different perspective to the research. The extent to which observer perspective is a good thing is, however, contestable: generally, I think the role of the observer is to collect data for analysis, not to colour it with a ‘perspective’. The professional researchers on the project felt that having lay researchers involved increased their workloads. The thorny issues of payment and selection do not seem to have been fully discussed in this paper. Also not discussed was the idea that, in qualitative research, respondents may be less inhibited about disclosing information to a lay observer. Let the debate continue!

— Richard Lilford, CLAHRC WM Director


  1. Lilford RJ. Patient and Public Involvement: Direct Involvement of Patient Representatives in Data Collection. NIHR CLAHRC West Midlands News Blog. 4 August 2017.
  2. Garfield S, Jheeta S, Jacklin A, Bischler A, Norton C, Franklin BD. Patient and public involvement in data collection for health services research: a descriptive study. Res Involve Engage. 2015; 1: 8.

Patient and Public Involvement: Direct Involvement of Patient Representatives in Data Collection

It is widely accepted that the public and patient voice should be heard loud and clear in the selection of studies, in the design of those studies, and in the interpretation and dissemination of the findings. But what about the involvement of patients and the public in the collection of data? Before science became professionalised, all scientists could have been considered members of the public. Robert Hooke, for example, could have called himself an architect, philosopher, physicist, chemist, or just Hooke. Today, the public are involved in data collection in many scientific enterprises: householders frequently contribute data on bird populations, for example, and Prof Brian Cox involved the public in the detection of new planets in his highly acclaimed television series. In medicine, patients have been involved in collecting data; for example, patients with primary biliary cirrhosis were the data collectors in a randomised trial.[1] However, the topic of public and patient involvement in data collection is deceptively complex, because there are numerous procedural safeguards governing access to users of the health service and restricting disbursement of the funds used to pay for research.

Let us consider first the issue of access to patients. It is not permissible to collect research data without undergoing certain procedural checks; in the UK it is necessary to be cleared by the Disclosure and Barring Service (DBS) and to have the necessary permissions from the institutional authorities. You simply cannot walk onto a hospital ward and start handing out questionnaires or collecting blood samples.

Then there is the question of training. Before collecting data from patients it is necessary to be trained in how to do so, covering both salient ethical and scientific principles. Such training is not without its costs, which takes us to the next issue.

Researchers are paid for their work and, irrespective of whether the funds are publicly or privately provided, access to payment is governed by fiduciary and equality / diversity legislation and guidelines. Access to scarce resources is usually governed by some sort of competitive selection process.

None of the above should be taken as an argument against patients and the public taking part in data collection. It does, however, mean that this needs to be a carefully managed process. Of course, things are very much simpler if access to patients is not required. For example, conducting a literature survey would require only that the person doing it was technically competent, and in many cases members of the public would already have all, or some, of the necessary skills. I would be very happy to collaborate with a retired professor of physics (if anyone wants to volunteer!). But that is not the point. The point is that procedural safeguards must be applied, and this entails governance structures capable of managing the process.

Research may be carried out by accessing members of the public who are not patients, or at least who are not accessed through the health services. As far as I know there are no particular restrictions on doing so, and I guess that such contact is governed by the common law covering issues such as privacy, battery, assault, and so on. The situation becomes different, however, if access is achieved through a health service organisation, or conducted on behalf of an institution, such as a university. Then presumably any member of the public wishing to collect data from other members of the public would fall under the governance arrangements of the relevant institution. The institution would have to ensure not only that the study was ethical, but that the data-collectors had the necessary skills and that funds were disbursed in accordance with the law. Institutions already deploy ‘freelance’ researchers, so I presume that the necessary procedural arrangements are already in place.

This analysis was stimulated by a discussion in the PPI committee of CLAHRC West Midlands, and represents merely my personal reflections based on first principles. It does not represent my final, settled position, let alone that of the CLAHRC WM, or any other institution. Rather it is an invitation for further comment and analysis.

— Richard Lilford, CLAHRC WM Director


  1. Browning J, Combes B, Mayo MJ. Long-term efficacy of sertraline as a treatment for cholestatic pruritus in patients with primary biliary cirrhosis. Am J Gastroenterol. 2003; 98: 2736-41.

‘Information is not knowledge’: Communication of Scientific Evidence and how it can help us make the right decisions

Every one of us is required to make many decisions: from small decisions, such as what shoes to wear with an outfit or whether to have a second slice of cake; to larger decisions, such as whether to apply for a new job or what school to send our children to. For decisions where the outcome can have a large impact we don’t want to play a game of ‘blind man’s buff’ and make a decision at random. We do our utmost to ensure that whatever decision we arrive at, it is the right one. We go through a process of gathering information from a variety of sources we trust and processing it to help us make up our minds. And in this digital age, we have access to more information than ever before.

When it comes to our health, we are often invited to be involved in making shared decisions about our own care as patients. Because it’s our health that’s at stake, this can bring pressures of not only making a decision but also making the right decision. Arriving at a wrong decision can have significant consequences, such as over- or under-medication, or missing out on advances in medicine. But how do we know how to make those decisions and where do we get our information from? Before we start taking a new course of medication, for example, how can we find out if the drugs are safe and effective, and how can we find out the risks as well as the benefits?

The Academy of Medical Sciences produced a report, ‘Enhancing the use of scientific evidence to judge the potential benefits and harms of medicine’,[1] which examines what changes would be necessary to help patients make better-informed decisions about taking medication. It is often the case that there is robust scientific evidence that can be useful in helping patients and clinicians make the right choices. However, this information can be difficult to find, hard to understand, and cast adrift in a sea of poor-quality or misleading information. With so much information available, some of it conflicting, is it any surprise that in a Medical Information Survey almost two-thirds of British adults said they would trust the experiences of friends and family, compared with only 37% who would trust data from clinical trials?[2]

The report offers recommendations on how scientific evidence can be made available to enable people to weigh up the pros and cons of new medications and arrive at a decision they are comfortable with. These recommendations include: using NHS Choices as a ‘go to’ hub of clear, up-to-date information about medications, with information about benefits and risks that is easy to understand; improving the design, layout and content of patient information leaflets; giving patients longer appointment times so they can have more detailed discussions about medications with their GP; and a traffic-light system to be used by the media to signal the reliability of scientific evidence.

This is all good news for anyone having to decide whether to start taking a new drug. I would welcome the facility of going to a well-designed website with clear information about the risks and benefits of taking particular drugs rather than my current approach of asking friends and family (most of whom aren’t medically trained), searching online, and reading drug information leaflets that primarily present long lists of side-effects.

Surely this call for clear, accessible information about scientific evidence is just as relevant to all areas of medical research, including applied health. Patients and the public have a right to know how scientific evidence underpinning important decisions in care is generated and to be able to understand that information. Not only do patients and the public make decisions about aspects of their care, such as whether to give birth at home or in hospital, or whether to take a day off work to attend a health check, but they should also be able to find and understand evidence that explains why care is delivered in a particular way, such as why many GPs now use a telephone triage system before booking in-person appointments. Researchers, clinicians, patients and communicators of research all have a part to play.

In CLAHRC West Midlands, we’re trying to ‘do our bit’. We aim to make accessible a sound body of scientific knowledge through different information channels and our efforts include:

  • Involving patients and the public in writing lay summaries of our research projects for our website, so people can find out about the research we do.
  • Communication of research evidence in accessible formats, such as CLAHRC BITEs, which are reviewed by our Public Advisors.
  • Method Matters, a series aimed to give members of the public a better understanding of concepts in Applied Health Research.

The recommendations from the Academy of Medical Sciences can provide a useful starting point for further discussions on how we can communicate effectively in applied health research and ensure that scientific evidence, rather than media hype or incomplete or incorrect information, is the basis for decision-making.

— Magdalena Skrybant, CLAHRC WM PPIE Lead


  1. The Academy of Medical Sciences. Enhancing the use of scientific evidence to judge the potential benefits and harms of medicine. London: Academy of Medical Sciences; 2017.
  2. The Academy of Medical Sciences. Academy of Medical Sciences: Medical Information Survey. London: Academy of Medical Sciences; 2016.

Numbers and the Doctor/Patient Relationship

I have always been interested in communicating scientific information and probability. A paper co-authored by CLAHRC WM colleague Eivor Oborn [1] therefore caught my eye. The paper concerns numbers and their ‘performativity’, by which the authors mean how the numbers affect doctors, patients, and the interaction between them. They use medical consultations in a Swedish rheumatology clinic to explore the issue, since this is a ‘data-rich’ environment: charts are used to plot long-run numerical data relating to patient-reported outcomes, medical assessments, and laboratory data. The study shows that the numbers have high salience for patients, who generally find graphical representation of long-run data useful. Doctors also find graphical display of trends useful in spotting threats to patient health. However, patients sometimes feel that the data on display take precedence over how they actually feel. That is to say, the doctor tends to focus on the numbers while the patient’s main symptom might not be captured in them. Of course, there is no counterfactual, so how much of this dissatisfaction is caused by the availability of numbers is uncertain. I also felt that more could be said about the extent to which patients, and indeed doctors, really understand the meaning of the numbers they were seeing. Many people have poor numeracy skills and draw erroneous inferences from data. For instance, people tend to over-interpret improving trends following a run of high values – the issue of regression to the mean, covered in the Method Matters section of a previous News Blog.
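Regression to the mean is easy to demonstrate numerically. The sketch below (illustrative figures only, not drawn from the study) simulates stable patients whose clinic readings fluctuate randomly around a fixed true score; selecting the visits with the highest readings then guarantees an apparent ‘improvement’ at the next visit, even though nothing has changed:

```python
import random

random.seed(42)

# Each patient has a stable 'true' disease-activity score; every clinic
# visit adds independent measurement noise around that true value.
def visit(true_score, noise_sd=10.0):
    return true_score + random.gauss(0, noise_sd)

patients = [random.gauss(50, 5) for _ in range(10_000)]
first = [visit(t) for t in patients]
second = [visit(t) for t in patients]

# Select patients whose FIRST reading was unusually high (top ~10%).
cutoff = sorted(first)[int(0.9 * len(first))]
flagged = [i for i, f in enumerate(first) if f >= cutoff]

mean_first = sum(first[i] for i in flagged) / len(flagged)
mean_second = sum(second[i] for i in flagged) / len(flagged)

# The second reading is lower on average even though no patient changed:
# the extreme first readings were partly noise.
print(f"mean first reading (flagged):  {mean_first:.1f}")
print(f"mean second reading (flagged): {mean_second:.1f}")
```

The same mechanism operates whenever patients are selected, or select themselves, for attention because of an extreme reading – the subsequent ‘trend’ is partly an artefact of the selection.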

— Richard Lilford, CLAHRC WM Director


  1. Essén A & Oborn E. The performativity of numbers in illness management: The case of Swedish Rheumatology. Soc Sci Med. 2017; 184: 134-43.

Doctor-Patient Communication in the NHS

Andrew McDonald (former Chief Executive of the Independent Parliamentary Standards Authority) was recently asked by the Marie Curie charity to examine the quality of doctor-patient communication in the NHS, as discussed on BBC Radio 4’s Today programme on 13 March 2017 (you can listen online). His report concluded that communication was woefully inadequate and that patients were not getting the clear and thorough counselling that they needed in order to understand their condition and make informed choices about options in their care. Patients need to understand what is likely to happen to them, and not all patients with the same condition will want to make the same choice(s). Indeed, my own work [1] is part of a large body of research showing that better information leads to better knowledge, which in turn affects the choices that patients make. Evidence that the medical and caring professions do not communicate in an informative and compassionate way is therefore a matter of great concern.

However, there is a paradox – feedback from patients, that communication should lie at the heart of their care, has not gone unheard. For instance, current medical training is replete with “communication skills” instruction. Why then do patients still feel dissatisfied; why have matters not improved radically? My diagnosis is that good communication is not mainly a technical matter. Contrary to what many people think, the essence of good communication does not lie in avoiding jargon or following a set of techniques – a point often emphasised by my University of Birmingham colleague John Skelton. These technical matters should not be ignored – but they are not the nub of the problem.

In my view good communication requires effort, and poor communication reflects an unwillingness to make that effort; it is mostly a question of attitude. Good communication is like good teaching. A good communicator has to take time to listen and to tailor their responses to the needs of the individual patient. These needs may be expressed verbally or non-verbally, but either way a good communicator needs to be alive to them, and to respond in the appropriate way. Sometimes this will involve rephrasing an explanation, but in other cases the good communicator will respond to emotional cues. For example, a sensitive doctor will notice if, in the course of a technical explanation, a patient looks upset – the good doctor will not ignore this cue, but will acknowledge the emotion, invite the patient to discuss his or her feelings, and be ready to deal with the flood of emotion that may result. The good doctor has to do emotional work, for example showing sympathy, not just in what is said, but also in how it is said. I am afraid to say that sometimes the busyness of the doctor is simply used as an excuse to avoid interactive engagement at a deeper emotional level. Yes, bringing feelings to the surface can be uncomfortable, but enduring the discomfort is part of professional life. In fact, recent research carried out by Gill Combes in CLAHRC WM showed that doctors are reticent in bringing psychological issues into the open.[2] Deliberately ignoring emotional cues and keeping things at a superficial level is deeply unsatisfying to patients. Glossing over feelings also impedes communication regarding more technical issues, as it is very hard for a person to assimilate medical information when they are feeling emotional, or nursing bruised feelings. In the long run such a technical approach to communication impoverishes a doctor’s professional life.

Doctors sometimes say that they should stick to the technical and that the often lengthy business of counselling should be carried out by other health professions, such as nurses. I have argued before that this is a blatant and unforgivable abdication of responsibility; it vitiates values that lie (and always will lie) at the heart of good medical practice.[3] The huge responsibilities that doctors carry to make the right diagnosis and prescribe the correct treatment entail a psychological intimacy, which is almost unique to medical practice and which cannot easily be delegated. The purchase that a doctor has on a patient’s psyche should not be squandered. It is a kind of power, and like all power it may be wasted, misused or used to excellent effect.

The concept I have tried to explicate is that good communication is a function of ethical practice, professional behaviour and the medical ethos. It lies at the heart of the craft of medicine. If this point is accepted, it has an important corollary – the onus for teaching communication skills lies with medical practitioners rather than with psychologists or educationalists. Doctors must be the role models for other doctors. I was fortunate in my medical school in Johannesburg to be taught by professors of Oslerian ability who inspired me in the art of practice and the synthesis of technical skill and human compassion. Some people have a particular gift for communication with patients, but the rest of us must learn and copy, be honest with ourselves when we have fallen short, and always try to do better. The most important thing a medical school must do is to nourish and reinforce the attitudes that brought the students into medicine in the first place.

— Richard Lilford, CLAHRC WM Director


  1. Wragg JA, Robinson EJ, Lilford RJ. Information presentation and decisions to enter clinical trials: a hypothetical trial of hormone replacement therapy. Soc Sci Med. 2000; 51(3): 453-62.
  2. Combes G, Allen K, Sein K, Girling A, Lilford R. Taking hospital treatments home: a mixed methods case study looking at the barriers and success factors for home dialysis treatment and the influence of a target on uptake rates. Implement Sci. 2015; 10: 148.
  3. Lilford RJ. Two Ideas of What It Is to be a Doctor. NIHR CLAHRC West Midlands News Blog. August 14, 2015.

Evaluating Interventions to Improve the Integration of Care (Among Multiple Providers and Across Multiple Sites)

Typically, healthcare improvement programmes have been institution-specific, examining, for example, hospitals, general practices or care homes. While such solipsistic quality improvement initiatives obviously have their place, they also have severe limitations for the patient of today, who typically has many complex conditions and whose care is therefore fragmented across many different care providers working in different places. Such patients perceive, and are sometimes the victims of, gaps in the system. Recent attention has therefore turned to approaches to close these gaps, and I am leading an NIHR programme development grant specifically for this purpose (Improving clinical decisions and teamwork for patients with multimorbidity in primary care through multidisciplinary education and facilitation). There are many different approaches to closing these gaps in care: the Nobel Prize winner Elinor Ostrom has featured previously in this News Blog for her seminal work on barriers and facilitators to inter-institutional collaboration,[1] while my colleague, CLAHRC WM Deputy Director Graeme Currie, has approached this issue from a management science perspective.

The problem for a researcher is to measure the effectiveness of initiatives to improve care across centres. This is not natural territory for cluster RCTs since it would be necessary to randomise whole ‘health economies’ rather than just organisations such as hospitals or general practices. Furthermore, many of the outcomes that might be observed in such studies, such as standardised mortality rates, are notoriously insensitive to change.[2] The ESTHER Project in Sweden is famous for closing gaps in care across the hospital/community nexus.[3] The evaluation, however, consists of little more than stakeholder interviews where people seem to recite the perceived wisdom of the day as evidence of effectiveness. While I think it is eminently plausible that the intervention was effective, and while the statements made during the qualitative interviews may have a certain verisimilitude, this all seems very weak evidence of effectiveness. It lacks any quantification, such as could be used in a health economic model. Is there a halfway house between a cluster RCT with hard outputs like mortality on the one hand, and ‘how was it for you?’ research on the other?

While it is not easy to come up with a measurement system, there is one person who perceives the entire pathway, and that is the patient. The patient is really the only person who can provide an assessment of care quality across multiple providers. There are many patient measures. Some relate to outcome, for instance health and social care related quality of life (EQ-5D-5L, ASCOT SCT4 and OPQOL-brief [4]). Such measures should be used in service delivery studies, but may be insensitive to change, as stated above. It is therefore important to measure patient perception of the quality of their care. However, such measurements tend either to be non-specific (e.g. LTC-6 [5]) or to look at only one aspect of care, such as continuity (PPCMC),[6] treatment burden [7] or person contentedness.[8] We propose a single quality of integrated care tool incorporating dimensions that have been shown to be important to patients, and are collaborating with PenCLAHRC, who are working on such a tool. Constructs that should be considered include conflicting information from different caregivers; contradictory forms of treatment (such as one clinician countermanding a prescription from another caregiver); duplication or redundancy of advice and information; and satisfaction with care overall and with the duration of contacts. We suspect that most patients would prefer fewer, more in-depth contacts to a larger number of rushed contacts.

It might also be possible to design more imaginative qualitative research that goes beyond simply asking questions and uses methods to elicit some of patients’ deeper feelings by prompting their memory. One such method is photo-voice, where patients are asked to take photos at various points in their care and use these as a basis for discussion. We have used such naturalistic settings in our CLAHRC.[9] Such methods could be harnessed in the co-design of services, where patients and carers are not just asked how they perceive services but are actively involved in designing solutions.

Salient quantitative measurements may be obtained from NHS data systems. Hospital admission and readmission rates should be measured in studies of system-wide change; an effective intervention would result in more satisfied patients with lower rates of hospital admission. What about quantifying physical health? Adverse events in general, and mortality in particular, have poor sensitivity, such that signal, even after risk adjustment, would only emerge from noise in an extremely large study, or in a very high-risk client group – see ‘More on Integrated Care’ in this News Blog. Adverse events and death can be consolidated into generic health measurements (QALYs/DALYs), but, again, these are insensitive for the reasons given above. Evaluating methods to improve the integration of care may be an ‘inconvenient truth scenario’ [10] where it is necessary to rely on process measures and other proxies for clinical / welfare outcomes. Since our CLAHRC is actively exploring the evaluation of service interventions to improve integration of care, we would be very interested to hear from others and to explore approaches to evaluating care across care boundaries.
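The insensitivity of mortality as an outcome can be illustrated with a rough power calculation. The figures below are illustrative assumptions, not taken from any particular study: with a 4% baseline in-hospital death rate, detecting even a 10% relative reduction requires tens of thousands of patients per arm:

```python
from math import ceil, sqrt

# Rough per-arm sample size for comparing two proportions, using the
# standard normal approximation (two-sided alpha = 0.05, power = 80%,
# hence z-values 1.96 and 0.84).
def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative assumptions: 4% baseline mortality, intervention achieving
# a 10% relative reduction (4.0% -> 3.6%).
print(n_per_arm(0.04, 0.036))  # tens of thousands of patients per arm
```

And that is before risk adjustment and clustering are taken into account, both of which inflate the required sample further – hence the case for process measures and patient-perceived quality as the primary signals in studies of integrated care.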

— Richard Lilford, CLAHRC WM Director


  1. Ostrom E. Beyond Markets and States: Polycentric Governance of Complex Economic Systems. Am Econ Rev. 2010; 100(3): 641-72.
  2. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  3. Institute for Healthcare Improvement. Improving Patient Flow: The Esther Project in Sweden. Boston, MA: Institute for Healthcare Improvement, 2011.
  4. Bowling A, Hankins M, Windle G, Bilotta C, Grant R. A short measure of quality of life in older age: the performance of the brief Older People’s Quality of Life questionnaire (OPQOL-brief). Arch Gerontol Geriatr. 2013; 56: 181-7.
  5. Glasgow RE, Wagner EH, Schaefer J, Mahoney LD, Reid RJ, Greene SM. Development and validation of the Patient Assessment of Chronic Illness Care (PACIC). Med Care. 2005; 43(5): 436-44.
  6. Haggerty JL, Roberge D, Freeman GK, Beaulieu C, Breton M. Validation of a generic measure of continuity of care: When patients encounter several clinicians. Ann Fam Med. 2012; 10: 443-51.
  7. Tran VT, Harrington M, Montori VM, Barnes C, Wicks P, Ravaud P. Adaptation and validation of the Treatment Burden Questionnaire (TBQ) in English using an internet platform. BMC Medicine. 2014; 12: 109.
  8. Mercer SW, Scottish Executive. Care Measure. Scottish Executive; 2004.
  9. Redwood S, Gale N, Greenfield S. ‘You give us rangoli, we give you talk’ – Using an art-based activity to elicit data from a seldom heard group. BMC Med Res Methodol. 2012; 12: 7.
  10. Lilford RJ. Integrated Care. NIHR CLAHRC West Midlands News Blog. 19 June 2015.

A Disappointing Article

All that glitters in the fabled New England Journal of Medicine is not gold. A recent article by Dale and colleagues is a masterclass in producing pleasing-sounding statements and truisms that go precisely nowhere, but impress the undiscerning reader.[1] The article argues in favour of using quality metrics to improve care. Then the authors show that process measures may focus attention on things that can be counted at the expense of more important things that cannot. So they say we should count “what’s important to patients”. Then they point out that the signal will not emerge from the noise in most cases where outcomes are used – patients value not dying from cancer, but you can never judge your clinician’s performance in screening by cancer death rates. They advocate a ‘balanced mixture’ of measures and advertise their own, but they do not say, let alone show, that they have the right balance. And they admit that using payment to change behaviour is largely ineffective, yet they say it is a good idea. The whole thing is a muddle. The truth is that no one knows how to use metrics in performance management. We advocate task-based (clinical process) measures to ensure that the essentials are in place, and think outcome measures are a poor idea except for patient satisfaction and perhaps the outcomes of a very small number of highly technical procedures.[2]

— Richard Lilford, CLAHRC WM Director


  1. Dale CR, Myint M, Compton-Phillips AL. Counting Better – the Limits and Future of Quality-Based Compensation. New Engl J Med. 2016; 375(7): 609-11.
  2. Lilford RJ. Risk Adjusted Outcomes – Again! NIHR CLAHRC West Midlands News Blog. 24 April 2015.