Most patient safety evaluations are simple before-and-after or time-series improvement studies, so it is always refreshing to find a study with contemporaneous controls. Lawton and her colleagues report a nice cluster randomised trial covering 33 hospital wards in five hospitals. They evaluate a well-known patient safety intervention based on the idea of giving patients a more active role in monitoring safety on their ward.
The trial produced a null result, but some of the safety measures moved in the right direction, and there was a correlation between the enthusiasm and fidelity with which the intervention was implemented and the safety measures.
Safety is hard to measure (as the authors state), and improvement often builds on a number of small incremental changes. So it would be very nice to see this intervention replicated, possibly with measures to generate greater commitment from ward staff.
Here is the problem with patient safety research: on the one hand, the subject is full of hubristic claims made on the basis of insufficient (weak) evidence; on the other, high-quality studies, such as the one reported here, often fail to find an effect. In many cases, as in this study, there are reasons to suspect a type II error (a false negative result). Beware also the rising tide – the phenomenon that arises when a trial occurs in the context of a strong secular trend, which ‘swallows up’ the headroom for a marginal intervention effect.

What is to be done? First, do not declare defeat too early. Second, be prepared either to carry out larger studies or to conduct replication studies that can be combined in a meta-analysis. Third, make multiple measurements across a causal chain and synthesise these disparate data using Bayesian networks. Fourth, further to the Bayesian approach, do not dichotomise results into null and positive according to the standard frequentist convention. It is stupid to classify a p-value of 0.06 as null if other evidence supports an effect, or to classify a p-value of 0.04 as positive if other data point the opposite way. Knowledge of complex areas, such as service interventions to improve safety, should take account of patterns in the data and of information external to the index study. Bayesian networks provide a framework for such an analysis.
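The point about dichotomisation can be made concrete with a toy calculation (not drawn from the article, and with purely illustrative effect sizes and standard errors): an index study that is ‘null’ by the p < 0.05 convention, once combined with consistent external evidence by simple precision-weighted (normal-normal) Bayesian pooling, can leave the probability of a genuine benefit very high.

```python
# Illustrative sketch only: effect estimates are hypothetical, and this uses
# simple conjugate normal pooling rather than a full Bayesian network.
from math import erf, sqrt

def p_effect_positive(mean, sd):
    """P(true effect > 0) under a normal posterior with this mean and sd."""
    return 0.5 * (1 + erf(mean / (sd * sqrt(2))))

def combine(estimates):
    """Precision-weighted pooling of independent normal estimates (mean, se)."""
    precision = sum(1 / se**2 for _, se in estimates)
    mean = sum(m / se**2 for m, se in estimates) / precision
    return mean, 1 / sqrt(precision)

index_study = (0.19, 0.10)  # z = 1.9, two-sided p = 0.06: "null" by convention
external    = (0.25, 0.12)  # hypothetical consistent external evidence

pooled_mean, pooled_sd = combine([index_study, external])
print(f"Index study alone: P(effect > 0) = {p_effect_positive(*index_study):.3f}")
print(f"Pooled evidence:   P(effect > 0) = {p_effect_positive(pooled_mean, pooled_sd):.3f}")
```

The index study alone already puts most of the posterior probability on a benefit despite its ‘null’ p-value, and pooling with the external evidence strengthens that conclusion – information the null/positive dichotomy simply throws away.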
— Richard Lilford, CLAHRC WM Director
- Lawton R, O’Hara JK, Sheard L, et al. Can patient involvement improve patient safety? A cluster randomised control trial of the Patient Reporting and Action for a Safe Environment (PRASE) intervention. BMJ Qual Saf. 2017; 26: 622-31.
- Chen YF, Hemming K, Stevens AJ, Lilford RJ. Secular trends and evaluation of complex interventions: the rising tide phenomenon. BMJ Qual Saf. 2016; 25: 303-10.
- Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010; 341: c4413.
- Watson SI, Lilford RJ. Essay 1: Integrating multiple sources of evidence: a Bayesian perspective. In: Challenges, solutions and future directions in the evaluation of service innovations in health care and public health. Southampton (UK): NIHR Journals Library, 2016.
- Lilford RJ, Girling AJ, Sheikh, et al. Protocol for evaluation of the cost-effectiveness of ePrescribing systems and candidate prototype for other related health information technologies. BMC Health Serv Res. 2014; 14: 314.