Tag Archives: Patients

A Study Shows a Correlation Between Complications of Surgery and Patient Satisfaction: But That’s Not the Point

Prabhu et al. report on the above correlation in a small, single-centre study.[1] But that’s hardly the point – complications are not a sensitive or specific sign of poor care.[2] [3] That patients with complications are less satisfied is not news. The article cannot unravel the causal pathway, and is compatible with:

  1. Poor care causes complications and independently causes less satisfaction (A causes B).
  2. Complications cause less satisfaction, irrespective of how they arise (B causes A).
  3. Certain patients are at risk, a priori, of being less satisfied and of experiencing more complications (C causes A and B). Of course, controls for some covariates, such as the American Society of Anesthesiologists (ASA) risk classification, were included, but these were limited, and I do not see any tests for interactions, so the model assumes that confounders have the same effects across sub-groups.
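The confounding scenario (3) is easy to demonstrate. In the sketch below (all probabilities invented for illustration, not taken from the paper), a patient-level factor C drives both complications and dissatisfaction, producing a marked observed association even though complications have no direct effect on satisfaction:

```python
import random

random.seed(0)
N = 20_000

# [dissatisfied count, total count] keyed by whether a complication occurred
dissat_by_comp = {True: [0, 0], False: [0, 0]}

for _ in range(N):
    frail = random.random() < 0.30                         # C: a priori risk factor
    comp = random.random() < (0.40 if frail else 0.10)     # C raises complication risk
    dissat = random.random() < (0.50 if frail else 0.20)   # C raises dissatisfaction;
                                                           # note: no direct comp -> dissat effect
    dissat_by_comp[comp][0] += dissat
    dissat_by_comp[comp][1] += 1

rate_with = dissat_by_comp[True][0] / dissat_by_comp[True][1]
rate_without = dissat_by_comp[False][0] / dissat_by_comp[False][1]
print(f"dissatisfaction given complication:    {rate_with:.2f}")
print(f"dissatisfaction given no complication: {rate_without:.2f}")
```

Despite there being no causal arrow from complications to dissatisfaction, the crude comparison shows a clear gap, so the correlation the study reports is fully compatible with confounding alone.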

More important are the clear findings reported elsewhere, that patient satisfaction is not a good monitor for the technical quality of clinical care.[4] [5] If this were not so, then market failure / information asymmetry would not be the problem it is in health care.

— Richard Lilford, CLAHRC WM Director


  1. Prabhu KL, Cleghorn MC, Elnahas A, et al. Is quality important to our patients? The relationship between surgical outcomes and patient satisfaction. BMJ Qual Saf. 2018; 27: 48-52.
  2. Benning A, Ghaleb M, Suokas A, Dixon-Woods M, Dawson J, Barber N, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ. 2011; 342: d195.
  3. Benning A, Dixon-Woods M, Nwulu U, Ghaleb M, Dawson J, Barber N, et al. Multiple component patient safety intervention in English hospitals: controlled evaluation of second phase. BMJ. 2011; 342: d199.
  4. Chang JT, Hays RD, Shekelle PG, et al. Patient’s global ratings of their health care are not associated with the technical quality of their care. Ann Intern Med. 2006; 144: 665-72.
  5. Kupfer JM, Bond EU. Patient satisfaction and patient-centered care: necessary but not equal. JAMA. 2012; 308: 139-40.

Patients’ experience of hospital care at weekends

The “weekend effect”, whereby patients admitted to hospital at weekends appear to suffer higher mortality than patients admitted on weekdays, has received substantial attention from the health service community and the general public alike.[1] Evidence of the weekend effect was used to support the introduction of the ‘7-day Services’ policy and associated changes to junior doctors’ contracting arrangements in the NHS,[2-4] which have further propelled debates surrounding the nature and causes of the weekend effect.

Members of CLAHRC West Midlands are closely involved in the HiSLAC project,[5] an NIHR HS&DR Programme-funded project led by Professor Julian Bion (University of Birmingham) to evaluate the impact of introducing 7-day consultant-led acute medical services. We are undertaking a systematic review of the weekend effect as part of the project,[6] and one of our challenges is keeping up with the rapidly growing literature fuelled by the public and political attention. Although hundreds of papers on this topic have been published, there has been a distinct gap in the academic literature – most focus on comparing hospital mortality rates between weekends and weekdays, but virtually no studies have quantitatively compared the experience and satisfaction of patients admitted at weekends and on weekdays. This was the case until we found a recently published study by Chris Graham of the Picker Institute, who had unique access to data not in the public domain, namely the dates of admission to hospital given by the respondents.[7]

This interesting study examined data from two nationwide surveys of acute hospitals in England in 2014: the A&E department patient survey (39,320 respondents; 34% response rate) and the adult inpatient survey (59,083 respondents; 47% response rate). Patients admitted at weekends were less likely to respond than those admitted during weekdays, but this was accounted for by patient and admission characteristics (e.g. age group). Contrary to the inference about care quality that would be drawn from hospital mortality rates, respondents attending a hospital A&E department at weekends actually reported better experiences with regard to ‘doctors and nurses’ and ‘care and treatment’ compared with those attending during weekdays. Patients who were admitted to hospital through A&E at weekends also rated the information given to them in the A&E more favourably. No other significant differences in reported patient experience were observed between weekend and weekday A&E visits and hospital admissions.[7]

As always, some caution is needed when interpreting these intriguing findings. First, as the author acknowledged, patients who died following their A&E visits/admissions were excluded from the surveys, and therefore their experiences were not captured. Second, although potential differences in case mix, including age, sex, urgency of admission (elective or not), requirement for a proxy to complete the survey, and presence of long-term conditions, were taken into account in the aforementioned findings, the statistical adjustment did not include important factors such as main diagnosis and disease severity, which could confound patient experience. Readers may doubt whether these factors could overturn the findings; if not, the mechanism by which weekend admission might lead to improved satisfaction is unclear. It is possible that patients have different expectations of the hospital care they receive by day of the week, and consequently may rate the same level of care differently. The findings from this study are certainly a very valuable addition to the growing literature that is starting to unravel the complexity behind the weekend effect, and are a further testament that measuring care quality based on mortality rates alone is unreliable and certainly insufficient – a point that has long been highlighted by the Director of CLAHRC West Midlands and other colleagues.[8] [9] Our HiSLAC project continues to collect and examine qualitative,[10] quantitative,[5] [6] and economic [11] evidence related to this topic, so watch this space!

— Yen-Fu Chen, Principal Research Fellow


  1. Lilford RJ, Chen YF. The ubiquitous weekend effect: moving past proving it exists to clarifying what causes it. BMJ Qual Saf 2015;24(8):480-2.
  2. House of Commons. Oral answers to questions: Health. 2015. House of Commons, London.
  3. McKee M. The weekend effect: now you see it, now you don’t. BMJ 2016;353:i2750.
  4. NHS England. Seven day hospital services: the clinical case. 2017.
  5. Bion J, Aldridge CP, Girling A, et al. Two-epoch cross-sectional case record review protocol comparing quality of care of hospital emergency admissions at weekends versus weekdays. BMJ Open 2017;7:e018747.
  6. Chen YF, Boyal A, Sutton E, et al. The magnitude and mechanisms of the weekend effect in hospital admissions: A protocol for a mixed methods review incorporating a systematic review and framework synthesis. Syst Rev. 2016;5:84.
  7. Graham C. People’s experiences of hospital care on the weekend: secondary analysis of data from two national patient surveys. BMJ Qual Saf 2017;29:29.
  8. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf 2012;21(12):1052-56.
  9. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won’t go away. BMJ 2010;340:c2016.
  10. Tarrant C, Sutton E, Angell E, Aldridge CP, Boyal A, Bion J. The ‘weekend effect’ in acute medicine: a protocol for a team-based ethnography of weekend care for medical patients in acute hospital settings. BMJ Open 2017;7: e016755.
  11. Watson SI, Chen YF, Bion JF, Aldridge CP, Girling A, Lilford RJ. Protocol for the health economic evaluation of increasing the weekend specialist to patient ratio in hospitals in England. BMJ Open 2018:In press.

Is it Possible to Teach Empathy?

News blog readers will know that I am fascinated by the question of whether it is possible to teach people to be kinder, more patient-centered, and to show more empathy. A recent meta-analysis of RCTs sheds important light on the critical issue of empathy training.[1] Unlike previous systematic reviews, this study included only experimental studies. Overall, 19 studies met the inclusion criteria for the meta-analysis.

One important issue concerns how the endpoint was measured. In 11 of the 19 included studies the outcome was an objective measure, while in the remainder the outcome was self-reported.

Overall, educational interventions produced a positive benefit that was statistically significant. When the authors made an adjustment for possible publication bias, the effect size was only slightly reduced, remaining highly significant statistically.

I expected to find that the effect size was greater for the self-reported outcomes than for objective outcomes. In fact, the effect size was larger and more highly significant for the objective measures of effect.
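For readers unfamiliar with how such pooled effect sizes are obtained, the sketch below pools standardised mean differences by inverse-variance weighting (the standard fixed-effect estimator). The effect sizes and standard errors are invented for illustration and are not taken from the meta-analysis:

```python
import math

def pool(effects, ses):
    """Fixed-effect inverse-variance pooling of effect sizes."""
    weights = [1 / se ** 2 for se in ses]  # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical subgroups: objective vs. self-reported outcome measures
obj, obj_se = pool([0.8, 0.6, 0.7], [0.20, 0.25, 0.15])
selfrep, selfrep_se = pool([0.3, 0.5], [0.20, 0.30])
print(f"objective outcomes:     {obj:.2f} (SE {obj_se:.2f})")
print(f"self-reported outcomes: {selfrep:.2f} (SE {selfrep_se:.2f})")
```

A random-effects model (e.g. DerSimonian-Laird) would add a between-study variance term to the weights; the subgroup comparison mirrors the objective versus self-report contrast discussed above.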

Some people classify empathy training in two forms, cognitive and affective, covering the intellectual and emotional aspects of empathy. Others have questioned this dichotomy, arguing that the emotional and cognitive parts must interact to produce empathetic behaviour. As it turned out, all of the included studies had a cognitive component.

This is a very interesting and important study. My main problem with it is that the authors do not give a breakdown of results according to the type of objective measure used. Also, the results do not tell us how enduring the effects were. I have argued before that one of the main criteria of good communication and compassionate care is the desire to achieve these objectives. The most important thing to instil is a deep-seated desire to do a better job. It would seem that training has a part to play in achieving this objective. However, sustained exposure to excellent role models is also critically important and a crucial part of the education of health professionals.

— Richard Lilford, CLAHRC WM Director


  1. Teding van Berkhout E, Malouff JM. The Efficacy of Empathy Training: A Meta-analysis of Randomized Controlled Trials. J Couns Psychol. 2016; 63(1): 32-41.

Fair is Fair: Preventing the Misuse of Visiting Hours to Reduce Inequities

The experience of healthcare as a social activity feels very different when viewed from the perspectives of the patient, their relatives, or the healthcare staff. The patient is the centre of attention, but profoundly dependent; the relatives are independent, but unempowered and on foreign ground; and the staff are on home territory and authoritative. These unequal relationships come into sharp focus in the emotionally charged context of critical illness and the Intensive Care Unit. Which of us would not want our family to be near to us and supported by the staff in such a situation? And yet surveys repeatedly show that there is wide variation between countries in national policies, that restrictive visiting is common in practice, and that there is wide variation between ICUs in how those policies are applied.[1-3] Why should this be so?

When patients are asked, they express a strong preference to be visited by their relatives.[4] Involvement of relatives in their loved one’s care has been linked to improved outcomes in a number of conditions, including stroke.[5] [6] However, nursing staff attitudes to visiting [7] reveal concerns about the additional workload involved in caring for and communicating with relatives, and that their presence by the bedside might impede delivery of care, adversely affect infection control, or result in exhaustion of family members. Deeper enquiry might well reveal a lack of empathy and professional confidence: anxiety about being constantly observed by family members, or that lapses in care might result in criticism.

Netzer and Iwashyna take a social justice perspective to argue that this is wrong, and that ICUs should implement current national best practice guidance by making open visiting for families the default,[8] thereby avoiding selection bias in permitting or restricting access. The authors argue that excluding families from their relative’s care can impact negatively on both patient and relative. The visiting hours offered to relatives may be misaligned with their working hours, creating a further obstacle for those with less flexibility and support from their employer, especially in a society where zero-hours contracts are increasingly common.

Moreover, staff discretion to vary these restrictions creates opportunities for conscious or unconscious selection bias. The authors describe a personal experience in which visiting hours reinforced the racial inequalities seen in US healthcare. Such biases might also affect other minorities such as same-sex couples, or transgender communities. Training in equality and diversity organised by NHS Trusts might minimise conscious bias, but the fact remains that while restricted visiting is the default, discretion increases the opportunity for social discrimination.

In considering an open visiting policy, attention must be paid to the potential negatives it may pose. Organisations will be conscious of staff limitations and resources, and the potential for abusive or disruptive family members. Ethnic minority or migrant families bring with them different cultural norms and behaviours, which may impact adversely on the family members of indigenous patients. Implementation of open visiting would need to include contingencies to cope with such events as they occur, for example by training staff to have the necessary skills and behaviours to deal with such situations. We are working on this as part of the HS&DR-funded PEARL Project (Patient Experience And Reflective Learning), which also includes interventions designed to maximise empathy.[9] Ultimately, the level of involvement of relatives in their family members’ care should be a decision made by the patient and the family, and supported by professionally confident and compassionate staff.

— Olivia Brookes, PEARL Project Manager;
— Prof Julian Bion, PEARL Chief Investigator, Professor of Intensive Care Medicine


  1. Liu V, Read JL, Scruth E, Cheng E. Visitation policies and practices in US ICUs. Crit Care. 2013; 17(2):R71.
  2. Giannini A, Miccinesi G, Leoncino S. Visiting policies in Italian intensive care units: a nationwide survey. Intensive Care Med. 2008; 34(7):1256-62.
  3. Greisen G, Mirante N, Haumont D, Pierrat V, Pallás-Alonso CR, Warren I, Smit BJ, Westrup B, Sizun J, Maraschini A, Cuttini M; ESF Network. Parents, siblings and grandparents in the Neonatal Intensive Care Unit. A survey of policies in eight European countries. Acta Paediatr. 2009 Nov;98(11):1744-50.
  4. Wu C, Melnikow J, Dinh T, Holmes JF, Gaona SD, Bottyan T, Paterniti D, Nishijima DK. Patient Admission Preferences and Perceptions. West J Emerg Med. 2015; 16(5):707-14.
  5. Inouye SK, Bogardus ST, Jr., Charpentier PA, et al. A multicomponent intervention to prevent delirium in hospitalized older patients. New Engl J Med. 1999; 340(9):669-76.
  6. Tsouna-Hadjis E, Vemmos KN, Zakopoulos N, Stamatelopoulos S. First-stroke recovery process: the role of family social support. Arch Phys Med Rehabil. 2000; 81(7): 881-7.
  7. Berti D, Ferdinande P, Moons P. Beliefs and attitudes of intensive care nurses toward visits and open visiting policy. Intensive Care Med. 2007; 33(6): 1060-5.
  8. Netzer G, Iwashyna TJ. Fair is Fair: Preventing the Misuse of Visiting Hours to Reduce Inequities. Ann Am Thorac Soc. 2017.
  9. Teding van Berkhout E, Malouff JM. The efficacy of empathy training: A meta-analysis of randomized controlled trials. J Couns Psychol. 2016; 63(1):32-41.

Update on Ratios of Patients to Qualified Nurses

News Blog readers may know that there is a considerable literature on nursing skill mix and patient outcomes in hospital. One of the most important studies is Paul Shekelle’s masterful systematic review from 2013.[1] Taken in the round, the literature shows a consistent association between the ratio of skilled nurses to patients and improved outcomes. A recent large cross-sectional study from a number of European countries reaches similar conclusions:[2] many outcomes of hospital care (including death rates) were improved in association with high levels of qualified nurses. Mortality in hospitals with a favourable ratio of qualified nurses to patients was about 10% lower than in those with a less favourable ratio. An interesting question relates to what nurses do that could make such a large difference. An obvious mediating factor would be vigilance in recording vital signs and responding appropriately to signs of deteriorating physiology. Managing new technology, such as infusion equipment, may also be important. Getting the right medicine into the right patient at the right time is yet a further way good nursing could improve outcomes. Improved ratios are also strongly associated with patient satisfaction. Reassurance and tender care may mediate better physical outcomes, given the close interplay between the nervous and immune systems.[3] These, and other, causal pathways are represented in the figure.

[Figure: Update on Patient to Qualified Nurse Ratios – causal pathways linking nurse staffing to patient outcomes]

The above study did not look at process variables that might mediate a beneficial impact of nursing time. However, given the plausible mechanisms by which nurses may improve outcomes, and the consistent, albeit non-experimental, evidence, it is not unreasonable to conclude that improving the ratio of qualified nurses to patients will improve care. Saving money by skill substitution is therefore likely to be a false economy, since health economic models are sensitive to quite modest reductions in adverse events.[4]
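The sensitivity of such economic models to modest event reductions can be shown with a back-of-envelope calculation. All figures below are invented for illustration and are not taken from the cited studies:

```python
# Hedged, illustrative numbers only - not from the cited studies.
admissions_per_year = 40_000
baseline_adverse_event_rate = 0.10       # 10% of admissions suffer an adverse event
cost_per_adverse_event = 2_000           # extra bed-days and treatment (GBP)
extra_nursing_cost_per_year = 1_500_000  # annual cost of a richer skill mix (GBP)

# Suppose the richer skill mix cuts adverse events by a modest 20%
events_averted = admissions_per_year * baseline_adverse_event_rate * 0.20
savings = events_averted * cost_per_adverse_event
net_cost = extra_nursing_cost_per_year - savings

print(f"events averted: {events_averted:.0f}")
print(f"savings: £{savings:,.0f}; net cost: £{net_cost:,.0f}")
```

Even a 20% reduction in adverse events offsets the staffing cost in this toy model (the net cost is negative, i.e. a saving); a full health economic model would also discount future costs, include quality-of-life losses, and propagate uncertainty in each parameter.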

 — Richard Lilford, CLAHRC WM Director


  1. Shekelle PG. Nurse-patient ratios as a patient safety strategy: a systematic review. Ann Intern Med. 2013; 158(5 Pt 2): 404-9.
  2. Aiken LH, Sloane D, Griffiths P, et al. Nursing skill mix in European hospitals: cross-sectional study of the association with mortality, patient ratings, and quality of care. BMJ Qual Saf. 2017; 26(7): 559-68.
  3. Lilford RJ. Brain Activity and Heart Disease – a New Mechanism. NIHR CLAHRC West Midlands News Blog. 9 June 2017.
  4. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.

Measuring the Quality of Health Care in Low-Income Settings

Measuring the quality of health care in High-Income Countries (HICs) is deceptively difficult, as shown by work carried out by many research groups, including CLAHRC WM.[1-5] However, a large amount of information is collected routinely by health care facilities in HICs. These data include outcome data, such as Standardised Mortality Ratios (SMRs), death rates from ‘causes amenable to health care’, readmission rates, morbidity rates (such as pressure damage), and patient satisfaction, along with process data, such as waiting times, prescribing errors, and antibiotic use. There is controversy over many of these endpoints, and some are much better barometers of safety than others. While incident reporting systems provide a very poor basis for epidemiological studies (that is not their purpose), case-note review provides arguably the best and most widely used method for formal study of care quality – at least in hospitals.[3] [6] [7] Measuring safety in primary care is inhibited by the less comprehensive case-notes found in primary care settings as compared to hospital case-notes. Nevertheless, increasing amounts of process information are now available from general practices, particularly in countries (such as the UK) that collect this information routinely in electronic systems. It is possible, for example, to measure rates of statin prescriptions for people with high cardiovascular risk, and of anticoagulants for people with atrial fibrillation, as our CLAHRC has shown.[8] [9] HICs also conduct frequent audits of specific aspects of care – essentially by asking clinicians to fill in detailed pro formas for patients in various categories. For instance, National Audits in the UK have been carried out covering all patients experiencing a myocardial infarction.[10] Direct observation of care has been used most often to understand barriers and facilitators to good practice, rather than to measure quality / safety in a quantitative way.
However, routine data collection systems provide a measure of patient satisfaction with care – in the UK people who were admitted to hospital are surveyed on a regular basis [11] and general practices are required to arrange for anonymous patient feedback every year.[12] Mystery shoppers (simulated patients) have also been used from time to time, albeit not as a comparative epidemiological tool.[13]

This picture is very different in Low- and Middle-Income Countries (LMICs) and, again, it is yet more difficult to assess the quality of out-of-hospital care than of hospital care.[14] Even in hospitals, routine mortality data may not be available, let alone process data. An exception is the network of paediatric centres established in Kenya by Prof Michael English.[15] Occasionally large-scale bespoke studies are carried out in LMICs – for example, a recent study in which CLAHRC WM participated measured 30-day post-operative mortality rates in over 60 hospitals across low-, middle- and high-income countries.[16]

The quality and outcomes of care in community settings in LMICs are woefully understudied. We are attempting to correct this ‘dearth’ of information in a study of nine slums spread across four African and Asian countries. One of the largest obstacles to such a study is the very fragmented nature of health care provision in community settings in LMICs – a finding confirmed by a recent Lancet commission.[17] There are no routine data collection systems, and even deaths are not registered routinely. Where to start?

In this blog post I lay out a framework for measurement of quality from largely isolated providers, many of whom are unregulated, in a system where there is no routine system of data and no archive of case-notes. In such a constrained situation I can think of three (non-exclusive) types of study:

  1. Direct observation of the facilities where care is provided, without actually observing care or its effects. Such observation is limited to some of the basic building blocks of a health care system – what services are present (e.g. number of pharmacies per 1,000 population) and their availability (how often the pharmacy is open; how often a doctor / nurse / medical officer is available for consultation in a clinic). Such a ‘mapping’ exercise does not capture all care provided – e.g. it will miss hospital care and municipal / hospital-based outreach care, such as vaccination provided by Community Health Workers. It will also miss any IT-based care using apps or online consultations.
  2. Direct observation of the care process by external observers. Researchers can observe care from close up, for example during consultations. Such observations can cover the humanity of care (which could be scored) and/or its technical quality (which again could be scored against explicit standards and/or on a holistic (implicit) basis).[6] [7] An explicit standard would have to be based mainly on ‘if-then’ rules – e.g. if a patient complained of weight loss, excessive thirst, or recurrent boils, did the clinician test their urine for sugar; if the patient complained of persistent productive cough and night sweats, was a test for TB arranged? Implicit standards suffer from low reliability (high inter-observer variation).[18] Moreover, community providers in LMICs are arguably likely to be resistant to what they might perceive as an intrusive or even threatening form of observation, and those who permitted such scrutiny would be unlikely to constitute a random sample. More vicarious observations – say of the length of consultations – would have some value, but might still be seen as intrusive. If some providers did permit direct observation, their results might represent an ‘upper bound’ on performance.
  3. Quality as assessed through the eyes of patients / members of the public. Given the limitations of independent observation, the lack of anamnestic records of clinical encounters in the form of case-notes, the absence of routine data, and likely limitations on access by independent direct observers, most information may need to be collected from patients themselves or, as discussed below, from people masquerading as patients (simulated patients / mystery shoppers). The following types of data collection method can be considered:
    1. Questions directed at members of the public regarding preventive services. So, households could be asked about vaccinations, surveillance (say for malnutrition), and their knowledge of screening services offered on a routine basis. This is likely to provide a fairly accurate measure of the quality of preventive services (provided the sampling strategy was carefully designed to yield a representative sample). This method could also provide information on advice and care provided through IT resources. This is a situation where some anamnestic data collection would be possible (with the permission of the respondent) since it would be possible to scroll back through the electronic ‘record’.
    2. Opinion surveys / debriefing following consultations. This method offers a viable alternative to observation of consultations and would be less expensive (though still not inexpensive). Information on the kindness / humanity of services could be easily obtained and quantified, along with ease of access to ambulatory and emergency care.[19] Measuring clinical quality would again rely on observations against a gold standard,[20] but given the large number of possible clinical scenarios standardising quality assessment would be tricky. However, a coarse-grained assessment would be possible and, given the low quality levels reported anecdotally, failure to achieve a high degree of standardisation might not vitiate collection of important information. Such a method might provide insights into the relative merits and demerits of traditional vs. modern health care, private vs. public, etc., provided that these differences were large.
    3. Simulated patients offering standardised clinical scenarios. This is arguably the optimal method of technical quality assessment in settings where case-notes are perfunctory or not available. Again, consultations could be scored for humanity of care and clinical / technical competence, and again explicit and/or implicit standards could be used. However, we do not believe it would be ethical to use this method without obtaining assent from providers. There are some examples of successful use of these methods in LMICs.[21] [22] However, if my premise is accepted that providers must assent to the use of simulated patients, then it is necessary first to establish trust between providers and academic teams, and this takes time. Again, there is a high probability that only the better providers will assent, in which case observations would likely represent ‘upper bounds’ on quality.
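The household surveys in (3a) and (3b) hinge on the carefully designed sampling strategy mentioned above. A minimal sketch of proportional stratified sampling follows; the strata and household counts are invented for illustration:

```python
import random

random.seed(42)

# Hypothetical household counts per slum sector (the strata)
strata = {"sector_A": 4_000, "sector_B": 2_500, "sector_C": 1_500}
total = sum(strata.values())
sample_size = 400

# Allocate the sample proportionally to stratum size, then draw a
# simple random sample of household IDs within each stratum.
sample = {}
for name, n_households in strata.items():
    n_draw = round(sample_size * n_households / total)
    sample[name] = random.sample(range(n_households), n_draw)

for name, ids in sample.items():
    print(name, len(ids))
```

In practice the allocation might be adjusted for expected response rates or stratum variances (Neyman allocation), and the household lists themselves would have to be built by field enumeration, since no sampling frame exists in most slum settings.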

In conclusion, I think that the basic tools of quality assessment, in the current situation where direct observation and/or simulated patients are not acceptable, are a combination of:

  1. Direct observation of facilities that exist, along with ease of access to them, and
  2. Debriefing of people who have recently used the health facilities, or who might have received preventive services that are not based in these facilities.

We do not think that the above-mentioned shortcomings of these methods are a reason to eschew assessment of service quality in community settings (such as slums) in LMICs – after all, one of the most powerful levers for improvement is quantitative evidence of current care quality.[23] [24] The perfect should not be the enemy of the good. Moreover, if the anecdotes I have heard regarding care quality (providers who hand out only three types of pill – red, yellow and blue; doctors and nurses who do not turn up for work; prescription of antibiotics for clearly non-infectious conditions) are even partly true, then these methods would be more than sufficient to document standards and compare them across types of provider and different settings.

— Richard Lilford, CLAHRC WM Director


  1. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 1. Conceptualising and developing interventions. Qual Saf Health Care. 2008; 17(3): 158-62.
  2. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 2. Study design. Qual Saf Health Care. 2008; 17(3): 163-9.
  3. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 3. End points and measurement. Qual Saf Health Care. 2008; 17(3): 170-7.
  4. Brown C, Hofer T, Johal A, Thomson R, Nicholl J, Franklin BD, Lilford RJ. An epistemology of patient safety research: a framework for study design and interpretation. Part 4. One size does not fit all. Qual Saf Health Care. 2008; 17(3): 178-81.
  5. Brown C, Lilford R. Evaluating service delivery interventions to enhance patient safety. BMJ. 2008; 337: a2764.

Patient Involvement in Patient Safety: Null Result from a High Quality Study

Most patient safety evaluations are simple before-and-after or time series improvement studies. So it is always refreshing to find a study with contemporaneous controls. Lawton and her colleagues report a nice cluster randomised trial covering 33 hospital wards in five hospitals.[1] They evaluate a well-known patient safety intervention based on the idea of giving patients a more active role in monitoring safety on their ward.

The trial produced a null result, but some of the safety measures moved in the right direction, and there was a correlation between the enthusiasm/fidelity with which the intervention was implemented and the measures of safety.

Safety is hard to measure (as the authors state), and improvement often builds on a number of small incremental changes. So, it would be very nice to see this intervention replicated, possibly with measures to generate greater commitment from ward staff.
Here is the problem with patient safety research: on the one hand, the subject is full of hubristic claims made on the basis of insufficient (weak) evidence. On the other hand, high quality studies, such as the one reported here, often fail to find an effect. In many cases, as in the study reported here, there are reasons to suspect a type 2 error (a false negative result). Beware also the rising tide – the phenomenon that arises when a trial is conducted in the context of a strong secular trend, which ‘swallows up’ the headroom for a marginal intervention effect.[2] What is to be done?

  1. Do not declare defeat too early.
  2. Be prepared to carry out larger studies, or replication studies that can be combined in a meta-analysis.
  3. Make multiple measurements across a causal chain [3] and synthesise these disparate data using Bayesian networks.[4]
  4. Further to the Bayesian approach, do not dichotomise results into null and positive on the standard frequentist statistical convention. It is stupid to classify a p-value of 0.06 as null if other evidence supports an effect, or to classify a p-value of 0.04 as positive if other data point the opposite way.

Knowledge of complex areas, such as service interventions to improve safety, should take account of patterns in the data and of information external to the index study. Bayesian networks provide a framework for such an analysis.[4] [5]
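The point about dichotomising p-values can be illustrated with a toy calculation (the numbers are purely illustrative and are not drawn from the trial). Two hypothetical trials whose p-values fall on opposite sides of the 0.05 convention yield almost identical beliefs about the intervention once external evidence is folded in, here via a simple conjugate normal-normal update:

```python
import math

def normal_posterior(prior_mean, prior_sd, est, se):
    """Conjugate normal-normal update: combine external evidence (the prior)
    with an index study's effect estimate (the likelihood),
    weighting each by its precision (1/variance)."""
    w_prior = 1 / prior_sd ** 2
    w_data = 1 / se ** 2
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, math.sqrt(post_var)

# Illustrative prior: external evidence mildly favours the intervention.
PRIOR_MEAN, PRIOR_SD = 0.2, 0.3

# Two hypothetical trials with near-identical evidence, but with
# two-sided p-values on either side of the conventional 0.05 threshold:
#   trial A: z = 0.30/0.16 ~ 1.88, p ~ 0.06 ("null" by convention)
#   trial B: z = 0.33/0.16 ~ 2.06, p ~ 0.04 ("positive" by convention)
trials = {"A (p~0.06)": (0.30, 0.16), "B (p~0.04)": (0.33, 0.16)}

for label, (est, se) in trials.items():
    mean, sd = normal_posterior(PRIOR_MEAN, PRIOR_SD, est, se)
    # Posterior probability that the true effect is beneficial (> 0),
    # from the normal CDF evaluated at mean/sd.
    prob_benefit = 0.5 * (1 + math.erf(mean / (sd * math.sqrt(2))))
    print(f"Trial {label}: posterior mean {mean:.2f}, "
          f"P(effect > 0) = {prob_benefit:.2f}")
```

Both trials leave the posterior probability of benefit above 95%, and within about one percentage point of each other; the frequentist labels "null" and "positive" conceal how similar the evidence really is.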

— Richard Lilford, CLAHRC WM Director


  1. Lawton R, O’Hara JK, Sheard L, et al. Can patient involvement improve patient safety? A cluster randomised control trial of the Patient Reporting and Action for a Safe Environment (PRASE) intervention. BMJ Qual Saf. 2017; 26: 622-31.
  2. Chen YF, Hemming K, Stevens AJ, Lilford RJ. Secular trends and evaluation of complex interventions: the rising tide phenomenon. BMJ Qual Saf. 2016; 25: 303-10.
  3. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
  4. Watson SI & Lilford RJ. Essay 1: Integrating multiple sources of evidence: a Bayesian perspective. In: Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Southampton (UK): NIHR Journals Library, 2016.
  5. Lilford RJ, Girling AJ, Sheikh A, et al. Protocol for evaluation of the cost-effectiveness of ePrescribing systems and candidate prototype for other related health information technologies. BMC Health Serv Res. 2014; 14: 314.

Patient and Public Involvement in Data Collection

Further to last fortnight’s News Blog article,[1] I have found a further study in which patients participated in data collection.[2] This paper, by and large, corroborates the procedural requirements for public and patient involvement in data collection that I had specified. For example, it was necessary for lay observers to undergo DBS checks; the ethics approval form had to include the lay observers; and training had to be arranged for them. Recruitment of lay observers proved more difficult than anticipated. According to feedback, the lay observers had a positive experience and brought a different perspective to the research. The extent to which an observer’s perspective is a good thing is, however, contestable. Generally I think the role of the observer is to collect data for analysis, not to colour it with a ‘perspective’. The professional researchers on the project felt that having lay researchers involved increased their workloads. The thorny issues of payment and selection do not seem to have been fully discussed in this paper. Also not discussed was the idea that, in qualitative research, respondents may feel less inhibited about disclosing information to a lay observer. Let the debate continue!

— Richard Lilford, CLAHRC WM Director


  1. Lilford RJ. Patient and Public Involvement: Direct Involvement of Patient Representatives in Data Collection. NIHR CLAHRC West Midlands News Blog. 4 August 2017.
  2. Garfield S, Jheeta S, Jacklin A, Bischler A, Norton C, Franklin BD. Patient and public involvement in data collection for health services research: a descriptive study. Res Involve Engage. 2015; 1: 8.

Patient and Public Involvement: Direct Involvement of Patient Representatives in Data Collection

It is widely accepted that the public and patient voice should be heard loud and clear in the selection of studies, in the design of those studies, and in the interpretation and dissemination of the findings. But what about the involvement of patients and the public in the collection of data? Before science became professionalised, all scientists could have been considered members of the public. Robert Hooke, for example, could have called himself architect, philosopher, physicist, chemist, or just Hooke. Today, the public are involved in data collection in many scientific enterprises. For example, householders frequently contribute data on bird populations, and Prof Brian Cox involved the public in the detection of new planets in his highly acclaimed television series. In medicine, patients have been involved in collecting data; for example, patients with primary biliary cirrhosis were the data collectors in a randomised trial.[1] However, the topic of public and patient involvement in data collection is deceptively complex. This is because numerous procedural safeguards govern access to users of the health service and restrict disbursement of the funds used to pay for research.

Let us consider first the issue of access to patients. It is not permissible to collect research data without undergoing certain procedural checks; in the UK it is necessary to be cleared by the Disclosure and Barring Service (DBS) and to have the necessary permissions from the institutional authorities. You simply cannot walk onto a hospital ward and start handing out questionnaires or collecting blood samples.

Then there is the question of training. Before collecting data from patients it is necessary to be trained in how to do so, covering both salient ethical and scientific principles. Such training is not without its costs, which takes us to the next issue.

Researchers are paid for their work and, irrespective of whether the funds are publicly or privately provided, access to payment is governed by fiduciary and equality/diversity legislation and guidelines. Access to scarce resources is usually governed by some sort of competitive selection process.

None of the above should be taken as an argument against patients and the public taking part in data collection. It does, however, mean that this needs to be a carefully managed process. Of course, things are very much simpler if access to patients is not required. For example, conducting a literature survey would require only that the person doing it was technically competent, and in many cases members of the public would already have all, or some, of the necessary skills. I would be very happy to collaborate with a retired professor of physics (if anyone wants to volunteer!). But that is not the point. The point is that procedural safeguards must be applied, and this entails structures capable of managing the process.

Research may be carried out by accessing members of the public who are not patients, or at least who are not accessed through the health services. As far as I know there are no particular restrictions on doing so, and I guess that such contact is governed by the common law covering issues such as privacy, battery, assault, and so on. The situation becomes different, however, if access is achieved through a health service organisation, or conducted on behalf of an institution, such as a university. Then presumably any member of the public wishing to collect data from other members of the public would fall under the governance arrangements of the relevant institution. The institution would have to ensure not only that the study was ethical, but that the data-collectors had the necessary skills and that funds were disbursed in accordance with the law. Institutions already deploy ‘freelance’ researchers, so I presume that the necessary procedural arrangements are already in place.

This analysis was stimulated by a discussion in the PPI committee of CLAHRC West Midlands, and represents merely my personal reflections based on first principles. It does not represent my final, settled position, let alone that of the CLAHRC WM, or any other institution. Rather it is an invitation for further comment and analysis.

— Richard Lilford, CLAHRC WM Director


  1. Browning J, Combes B, Mayo MJ. Long-term efficacy of sertraline as a treatment for cholestatic pruritus in patients with primary biliary cirrhosis. Am J Gastroenterol. 2003; 98: 2736-41.

‘Information is not knowledge’: Communication of Scientific Evidence and how it can help us make the right decisions

Every one of us is required to make many decisions: from small decisions, such as what shoes to wear with an outfit or whether to have a second slice of cake; to larger decisions, such as whether to apply for a new job or what school to send our children to. For decisions where the outcome can have a large impact, we don’t want to play a game of ‘blind man’s buff’ and make a decision at random. We do our utmost to ensure that whatever decision we arrive at, it is the right one. We go through a process of getting hold of information from a variety of sources we trust and processing that information to help us make up our minds. And in this digital age, we have access to more information than ever before.

When it comes to our health, we are often invited to be involved in making shared decisions about our own care as patients. Because it’s our health that’s at stake, this can bring the pressure not only of making a decision but of making the right decision. Arriving at a wrong decision can have significant consequences, such as over- or under-medication, or missing out on advances in medicine. But how do we know how to make those decisions, and where do we get our information from? Before we start taking a new course of medication, for example, how can we find out if the drugs are safe and effective, and how can we find out the risks as well as the benefits?

The Academy of Medical Sciences produced a report, ‘Enhancing the use of scientific evidence to judge the potential benefits and harms of medicine’,[1] which examines what changes would be necessary to help patients make better-informed decisions about taking medication. It is often the case that there is robust scientific evidence that can help patients and clinicians make the right choices. However, this information can be difficult to find, hard to understand, and cast adrift in a sea of poor-quality or misleading information. With so much information available, some of it conflicting, is it any surprise that in a Medical Information Survey almost two-thirds of British adults said they would trust the experiences of friends and family, while only 37% would trust data from clinical trials?[2]

The report offers recommendations on how scientific evidence can be made available to enable people to weigh up the pros and cons of new medications and arrive at a decision they are comfortable with. These recommendations include: using NHS Choices as a ‘go to’ hub of clear, up-to-date information about medications, with information about benefits and risks that is easy to understand; improving the design, layout and content of patient information leaflets; giving patients longer appointment times so they can have more detailed discussions about medications with their GP; and a traffic-light system to be used by the media to endorse the reliability of scientific evidence.

This is all good news for anyone having to decide whether to start taking a new drug. I would welcome being able to go to a well-designed website with clear information about the risks and benefits of particular drugs, rather than my current approach of asking friends and family (most of whom aren’t medically trained), searching online, and reading drug information leaflets that primarily present long lists of side-effects.

Surely this call for clear, accessible information about scientific evidence is just as relevant to all areas of medical research, including applied health. Patients and the public have a right to know how the scientific evidence underpinning important decisions in care is generated, and to be able to understand that information. Not only do patients and the public make decisions about aspects of their care, such as whether to give birth at home or in hospital, or whether to take a day off work to attend a health check, but they should also be able to find and understand evidence that explains why care is delivered in a particular way, such as why many GPs now use a telephone triage system before booking in-person appointments. Researchers, clinicians, patients and communicators of research all have a part to play.

In CLAHRC West Midlands, we’re trying to ‘do our bit’. We aim to make accessible a sound body of scientific knowledge through different information channels and our efforts include:

  • Involving patients and the public in writing lay summaries of our research projects for our website, so people can find out about the research we do.
  • Communicating research evidence in accessible formats, such as CLAHRC BITEs, which are reviewed by our Public Advisors.
  • Producing Method Matters, a series aimed at giving members of the public a better understanding of concepts in applied health research.

The recommendations from the Academy of Medical Sciences can provide a useful starting point for further discussions on how we can communicate effectively in applied health research and ensure that scientific evidence, rather than media hype or incomplete or incorrect information, is the basis for decision-making.

— Magdalena Skrybant, CLAHRC WM PPIE Lead


  1. The Academy of Medical Sciences. Enhancing the use of scientific evidence to judge the potential benefits and harms of medicine. London: Academy of Medical Sciences; 2017.
  2. The Academy of Medical Sciences. Academy of Medical Sciences: Medical Information Survey. London: Academy of Medical Sciences; 2016.