
Measuring Quality of Care

Measuring quality of care is not a straightforward business:

  1. Routinely collected outcome data tend to be misleading because of very poor signal-to-noise ratios [1] (a simulation sketch follows this list).
  2. Clinical process (criterion based) measures require case note review and miss important errors of omission, such as diagnostic errors.
  3. Adverse events also require case note review and are prone to measurement error.[2]
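
To see why point 1 bites, here is a minimal simulation (all parameters are illustrative assumptions, not figures from reference [1]): when only a small share of deaths is preventable, binomial noise in observed mortality obscures the underlying quality signal, so observed rates correlate only weakly with the preventable component.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hospitals, admissions = 100, 10_000

# Assumed parameters: ~5% background mortality plus a small,
# hospital-varying preventable component (the 'quality' signal).
background = 0.05
preventable = rng.uniform(0.000, 0.005, n_hospitals)
true_rate = background + preventable

# Observed deaths add binomial noise on top of the true rates.
deaths = rng.binomial(admissions, true_rate)
observed_rate = deaths / admissions

# The correlation between observed mortality and the preventable
# component is well below 1 - league tables largely rank noise.
r = np.corrcoef(observed_rate, preventable)[0, 1]
print(f"Correlation of observed mortality with preventable rate: {r:.2f}")
```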

Adverse event review is widely practised, usually involving a two-stage process:

  1. A screening process (sometimes to look for warning features [triggers]).
  2. A definitive phase to drill down in more detail and refute or confirm (and classify) the event.

A recent HS&DR report [3] is important for two particular reasons:

  1. It shows that a one-stage process is as sensitive as the two-stage process. So triggers are not needed; just as many adverse events can be identified if notes are sampled at random.
  2. In contrast to (other) triggers, deaths really are associated with a high rate of adverse events (apart, of course, from the death itself). In fact, not only are adverse events more common among patients who have died than among patients sampled at random (nearly 30% vs. 10%), but the preventability rates (the probability that a detected adverse event was preventable) also appeared slightly higher (about 60% vs. 50%) – see the worked arithmetic below.
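
The quoted rates imply a straightforward yield calculation; a sketch using the rounded figures quoted above:

```python
# Expected preventable adverse events (AEs) per 100 case notes reviewed,
# using the rounded rates quoted above.
ae_rate = {"deaths": 0.30, "random sample": 0.10}          # P(AE per record)
preventability = {"deaths": 0.60, "random sample": 0.50}   # P(preventable | AE)

for group in ae_rate:
    per_100 = 100 * ae_rate[group] * preventability[group]
    print(f"{group}: {per_100:.0f} preventable AEs per 100 notes reviewed")

# deaths: 18; random sample: 5 - reviewing deaths enriches the yield ~3.6-fold.
```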

This paper has clear implications for policy and practice, because if we want a population ‘enriched’ for high adverse event rates (on the ‘canary in the mineshaft’ principle), then deaths provide that enrichment. The widely used trigger tool, however, serves no useful purpose – it does not identify a higher-than-average-risk population, and it is more resource-intensive. It should be consigned to history.

Lastly, England and Wales have mandated a process of death review, and the adverse event rate among such cases is clearly of interest. A word of caution is in order here. The reliability (inter-observer agreement) in this study was quite high (kappa 0.5 – unpacked in the sketch below), but not high enough for comparisons across institutions to be valid. If cross-institutional comparisons are required, then:

  1. A set of reviewers must review case notes across hospitals.
  2. At least three reviewers should examine each case note.
  3. Adjustment must be made for reviewer effects, as well as prognostic factors.

The statistical basis for these requirements is laid out in detail elsewhere.[4] It is clear that reviewers should not review notes from their own hospitals if any kind of comparison across institutions is required – the results will reflect the reviewers rather than the hospitals.
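
For readers who want to see what sits behind the kappa of 0.5 quoted above, a minimal sketch of Cohen's kappa for two reviewers judging the same notes (the agreement table is hypothetical, chosen to give kappa of roughly 0.5):

```python
import numpy as np

# Hypothetical 2x2 agreement table: rows = reviewer 1, cols = reviewer 2
# (adverse event present / absent) for 100 case notes.
table = np.array([[20, 10],
                  [10, 60]], dtype=float)

n = table.sum()
p_observed = np.trace(table) / n                    # raw agreement (0.80)
marginals1 = table.sum(axis=1) / n
marginals2 = table.sum(axis=0) / n
p_chance = (marginals1 * marginals2).sum()          # agreement expected by chance

kappa = (p_observed - p_chance) / (1 - p_chance)    # ~0.52 for these counts
print(f"Observed {p_observed:.2f}, chance {p_chance:.2f}, kappa {kappa:.2f}")
```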

— Richard Lilford, CLAHRC WM Director

References:

  1. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  2. Lilford R, Mohammed M, Braunholtz D, Hofer T. The measurement of active errors: methodological issues. Qual Saf Health Care. 2003; 12(s2): ii8-12.
  3. Mayor S, Baines E, Vincent C, et al. Measuring harm and informing quality improvement in the Welsh NHS: the longitudinal Welsh national adverse events study. Health Serv Deliv Res. 2017; 5(9).
  4. Manaseki-Holland S, Lilford RJ, Bishop JR, Girling AJ, Chen YF, Chilton PJ, Hofer TP; UK Case Note Review Group. Reviewing deaths in British and US hospitals: a study of two scales for assessing preventability. BMJ Qual Saf. 2016. [ePub].

Wrong Medical Theories do Great Harm but Wrong Psychology Theories are More Insidious

Back in the 1950s, when I went from nothing to something, a certain Dr Spock bestrode the world of child rearing like a colossus. Babies, said Spock, should be put down to sleep in the prone position. Only years later did massive studies show that children are much less likely to experience ‘cot death’ or develop joint problems if they are placed supine – on their backs. Although I survived prone nursing to become a CLAHRC director, tens of thousands of children must have died thanks to Dr Spock’s ill-informed theory.

So, I was fascinated by an article in the Guardian newspaper, titled ‘No evidence to back the idea of learning styles’.[1] The article was signed by luminaries from the world of neuroscience, including Colin Blakemore (whom I knew, and liked, when he was head of the MRC). I decided to retrieve the article on which the Guardian piece was mainly based – a review in ‘Psychological Science in the Public Interest’.[2]

The core idea is that people have clear preferences for how they receive information (e.g. pictorial vs. verbal) and that teaching is most effective if delivered according to the preferred style. This idea is widely accepted among psychologists and educationalists, and is advocated in many current textbooks. Numerous tests have been devised to diagnose a person’s learning style so that their instruction can be tailored accordingly. Certification programmes are offered, some costing thousands of dollars. A veritable industry has grown up around this theory.

The idea belongs to a larger set of ideas, originating with Jung, called ‘type theories’: the notion that people fall into distinct groups or ‘types’, from which predictions can be made. The Myers-Briggs ‘type’ test is still deployed as part of management training, and I have been subjected to this instrument, despite the fact that its validity as the basis for selection or training has not been confirmed in objective studies. People seem to cling to the idea that types are critically important. That types exist is not the issue of contention (male/female; extrovert/introvert); it is what they mean (learn in different ways; perform differently in meetings) that is disputed.

In the case of learning styles the hypothesis of interest is that the style (which can be observed ex ante) meshes with a certain type of instruction (the benefit of which can be observed ex post). The meshing hypothesis holds that different modes of instruction are optimal for different types of person “because different modes of presentation exploit the specific perceptual and cognitive strengths of different individuals.” This hypothesis entails the assumption that people with a certain style (based, say, on a diagnostic instrument or ‘tool’) will experience better educational outcomes when taught in one way (say, pictorial) than when taught in another way (say, verbal). It is precisely this (‘meshing’) hypothesis that the authors set out to test.

Note then that finding that people have different preferences does not confirm the hypothesis. Likewise, finding that different ability levels correlate with these preferences would not confirm the hypothesis. The hypothesis would be confirmed by finding that teaching method 1 is more effective than method 2 in type A people, while teaching method 2 is more effective than teaching method 1 in type B people.
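
In statistical terms, the meshing hypothesis predicts a crossover (disordinal) interaction between learner type and teaching method. A minimal sketch of the contrast at stake, with hypothetical mean scores:

```python
import numpy as np

# Hypothetical mean test scores by learner type and teaching method.
# Rows: type A, type B; columns: method 1 (pictorial), method 2 (verbal).
means = np.array([[75.0, 65.0],    # type A scores better under method 1
                  [62.0, 74.0]])   # type B scores better under method 2

# The meshing hypothesis is confirmed by a crossover interaction: the
# method-1-minus-method-2 difference has opposite signs in the two types.
diff_A = means[0, 0] - means[0, 1]
diff_B = means[1, 0] - means[1, 1]
interaction = diff_A - diff_B      # the interaction contrast

print(f"Method effect in type A: {diff_A:+.0f}")
print(f"Method effect in type B: {diff_B:+.0f}")
print(f"Interaction contrast: {interaction:+.0f} (crossover: opposite signs)")
```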

The authors find, from the voluminous literature, only four studies that test the above hypothesis. One of these was of weak design. The three stronger studies provide null results. The weak study did find a style-by-treatment interaction, but only after “the outliers were excluded for unspecified reasons.”

Of course, the null results do not exclude the possibility of an effect, particularly a small effect, as the authors point out. To shed further light on the subject they explore related literatures. First they examine aptitude (rather than just learning style preference) to see whether there is an interaction between aptitude and pedagogic method. Here the literature goes right back to Cronbach in 1957. One particular hypothesis was that high-aptitude students fare better in a less structured teaching format, while those with less aptitude fare better where the format is structured and explicit. Here the evidence is mixed: in about half of the studies less structure suited high-ability students while more structure suited less able students – one (reasonable) interpretation of the differing results is that there may be certain contexts where aptitude-by-treatment interactions occur and others where they do not. Another hypothesis concerns an aspect of personality called ‘locus of control’. It was hypothesised that an internal locus of control (an inclination to believe one’s destiny lies in one’s own hands) would mesh with an unstructured format of instruction, and vice versa. Here the evidence, taken in the round, tends to confirm the hypothesis.

So, there is evidence (not definitive, but compelling) for an interaction between personality or aptitude and teaching method. There is no such evidence for learning style preference. This is not to deny that some students may need an idea explained one way while others need it explained in a different way. This is something good teachers sense as they proceed, as emphasised in a previous blog.[3] But tailoring your explanation according to the reaction of students is one thing; determining it according to a pre-test is another. In fact, the learning style hypothesis may impede good teaching by straitjacketing teaching according to a pre-determined format, rather than encouraging teachers to adapt to the needs of students in real time. Receptivity to the expressed needs of the learner seems preferable to following a script to which the learner is supposed to conform.

And why have I chosen this topic for the main News Blog article? Two reasons:

First, it shows how an idea may gain purchase in society with little empirical support, and we should be ever on our guard – the Guardian lived up to its name in this respect!

Second, because health workers are educators; we teach the next generation and we teach our peers. Also, patient communication has an undoubted educational component (see our previous main blog [4]). So we should keep abreast of general educational theory. Many CLAHRC WM projects have a strong educational dimension.

— Richard Lilford, CLAHRC WM Director

References:

  1. Hood B, Howard-Jones P, Laurillard D, et al. No Evidence to Back Idea of Learning Styles. The Guardian. 12 March 2017.
  2. Pashler H, McDaniel M, Rohrer D, Bjork R. Learning Styles: Concepts and Evidence. Psychol Sci Public Interest. 2008; 9(3): 105-19.
  3. Lilford RJ. Education Update. NIHR CLAHRC West Midlands News Blog. 2 September 2016.
  4. Lilford RJ. Doctor-Patient Communication in the NHS. NIHR CLAHRC West Midlands News Blog. 24 March 2017.

Doctor-Patient Communication in the NHS

Andrew McDonald (former Chief Executive of the Independent Parliamentary Standards Authority) was recently asked by the Marie Curie charity to examine the quality of doctor-patient communication in the NHS, as discussed on BBC Radio 4’s Today programme on 13 March 2017. His report concluded that communication was woefully inadequate and that patients were not getting the clear and thorough counselling they needed in order to understand their condition and make informed choices about options in their care. Patients need to understand what is likely to happen to them, and not all patients with the same condition will want to make the same choice(s). Indeed, my own work [1] is part of a large body of research showing that better information leads to better knowledge, which in turn affects the choices that patients make. Evidence that the medical and caring professions do not communicate in an informative and compassionate way is therefore a matter of great concern.

However, there is a paradox: feedback from patients that communication should lie at the heart of their care has not gone unheard. For instance, current medical training is replete with “communication skills” instruction. Why then do patients still feel dissatisfied; why have matters not improved radically? My diagnosis is that good communication is not mainly a technical matter. Contrary to what many people think, the essence of good communication does not lie in avoiding jargon or following a set of techniques – a point often emphasised by my University of Birmingham colleague John Skelton. These technical matters should not be ignored – but they are not the nub of the problem.

In my view good communication requires effort, and poor communication reflects an unwillingness to make that effort; it is mostly a question of attitude. Good communication is like good teaching. A good communicator has to take time to listen and to tailor their responses to the needs of the individual patient. These needs may be expressed verbally or non-verbally, but either way a good communicator needs to be alive to them, and to respond in the appropriate way. Sometimes this will involve rephrasing an explanation, but in other cases the good communicator will respond to emotional cues. For example, a sensitive doctor will notice if, in the course of a technical explanation, a patient looks upset – the good doctor will not ignore this cue, but will acknowledge the emotion, invite the patient to discuss his or her feelings, and be ready to deal with the flood of emotion that may result. The good doctor has to do emotional work, for example showing sympathy, not just in what is said, but also in how it is said.

I am afraid to say that sometimes the busyness of the doctor is simply used as an excuse to avoid interactive engagement at a deeper emotional level. Yes, bringing feelings to the surface can be uncomfortable, but enduring the discomfort is part of professional life. In fact, recent research carried out by Gill Combes in CLAHRC WM showed that doctors are reticent about bringing psychological issues into the open.[2] Deliberately ignoring emotional cues and keeping things at a superficial level is deeply unsatisfying to patients. Glossing over feelings also impedes communication regarding more technical issues, as it is very hard for a person to assimilate medical information when they are feeling emotional, or nursing bruised feelings. In the long run such a technical approach to communication impoverishes a doctor’s professional life.

Doctors sometimes say that they should stick to the technical and that the often lengthy business of counselling should be carried out by other health professionals, such as nurses. I have argued before that this is a blatant and unforgivable abdication of responsibility; it vitiates values that lie (and always will lie) at the heart of good medical practice.[3] The huge responsibilities that doctors carry to make the right diagnosis and prescribe the correct treatment entail a psychological intimacy which is almost unique to medical practice and which cannot easily be delegated. The purchase that a doctor has on a patient’s psyche should not be squandered. It is a kind of power, and like all power it may be wasted, misused or used to excellent effect.

The concept I have tried to explicate is that good communication is a function of ethical practice, professional behaviour and the medical ethos. It lies at the heart of the craft of medicine. If this point is accepted, it has an important corollary – the onus for teaching communication skills lies with medical practitioners rather than with psychologists or educationalists. Doctors must be the role models for other doctors. I was fortunate in my medical school in Johannesburg to be taught by professors of Oslerian ability who inspired me in the art of practice and the synthesis of technical skill and human compassion. Some people have a particular gift for communication with patients, but the rest of us must learn and copy, be honest with ourselves when we have fallen short, and always try to do better. The most important thing a medical school must do is to nourish and reinforce the attitudes that brought the students into medicine in the first place.

— Richard Lilford, CLAHRC WM Director

References:

  1. Wragg JA, Robinson EJ, Lilford RJ. Information presentation and decisions to enter clinical trials: a hypothetical trial of hormone replacement therapy. Soc Sci Med. 2000; 51(3): 453-62.
  2. Combes G, Allen K, Sein K, Girling A, Lilford R. Taking hospital treatments home: a mixed methods case study looking at the barriers and success factors for home dialysis treatment and the influence of a target on uptake rates. Implement Sci. 2015; 10: 148.
  3. Lilford RJ. Two Ideas of What It Is to be a Doctor. NIHR CLAHRC West Midlands News Blog. August 14, 2015.

Scientists Should Not Be Held Accountable For Ensuring the Impact of Their Research

It has become more and more de rigueur to expect researchers to be the disseminators of their own work. Every grant application requires the applicant to fill in a section on dissemination. We were recently asked to describe our dissemination plans as part of the editorial review process for a paper submitted to the BMJ. Only tact stopped us from responding, “To publish our paper in the BMJ”! Certainly when I started out on my scientific career it was generally accepted that scientists should make discoveries and journals should disseminate them. The current fashion for asking researchers to take responsibility for dissemination of their work emanates, at least in part, from the empirical finding that journal articles by themselves may fail to change practice even when the evidence is strong. Furthermore, it could be argued that researchers are ideal conduits for dissemination: they have a vested interest in uptake of their findings, an intimate understanding of the research topic, and are in touch with networks of relevant practitioners. However, there are dangers in a policy where the producers of knowledge are also held accountable for its dissemination. I can think of three arguments against policies making scientists the vehicle for dissemination and uptake of their own results – scientists may not be good at it; they may be conflicted; and the idea is based on a fallacious understanding of the normative and practical link between research and action.

1. Talent for Communication
There is no good reason to think that researchers are naturally gifted in dissemination, or that this is where their inclination lies. Editors, journalists, and I suppose blog writers, clearly have such an interest. However, an inclination to communicate is not a necessary condition for becoming an excellent researcher. Specialisation is the basis for economic progress, and there is an argument that the benefits of specialisation apply to the production and communication of knowledge.

2. Objectivity
Pressurising researchers to market their own work may create perverse incentives. Researchers may be tempted to overstate their findings, or over-interpret the implications for practice. There is also a fine line to be drawn between dissemination (drawing attention to findings) and advocacy (persuading people to take action based on findings). It is along the slippery slope between dissemination and advocacy that the dangers of auto-dissemination reside. The vested interest that scientists have in the uptake of their results should serve as a word of caution for those who militantly maintain that scientists should be the main promoters of their own work. The climate change scientific fraternity has been stigmatised by overzealous scientific advocacy. Expecting scientists to be the bandleader for their own product, and requiring them to demonstrate impact, creates perverse incentives.

3. Research Findings and Research Implications
With some noble exceptions, it is rare for a single piece of primary research to be sufficiently powerful to drive a change in practice. In fact replication is one of the core tenets of scientific practice. The pathway from research to change of practice should go as follows:

  1. Primary researcher conducts study and publishes results.
  2. Research results replicated.
  3. Secondary researcher conducts systematic review.
  4. Stakeholder committee develops guidelines according to established principles.
  5. Local service providers remove barriers to change in practice.
  6. Clinicians adopt the new method.

The ‘actors’ at these different stages can surely overlap, but this process nevertheless provides a necessary degree of detachment between scientific results and the actions that should follow, and it makes use of different specialisms and perspectives in translating knowledge into practice.

We would be interested to hear contrary views, but note that I am not arguing that scientists should never be involved in the dissemination of their own work, merely that this should not be a requirement or expectation.

— Richard Lilford, CLAHRC WM Director

Evaluating Interventions to Improve the Integration of Care (Among Multiple Providers and Across Multiple Sites)

Typically, healthcare improvement programmes have been institution-specific, examining, for example, hospitals, general practices or care homes. While such ‘solipsistic’ quality improvement initiatives obviously have their place, they have severe limitations for the patient of today, who typically has many complex conditions and whose care is therefore fragmented across many different care providers working in different places. Such patients perceive, and are sometimes the victims of, gaps in the system. Recent attention has therefore turned to approaches to close these gaps, and I am leading an NIHR programme development grant specifically for this purpose (Improving clinical decisions and teamwork for patients with multimorbidity in primary care through multidisciplinary education and facilitation). There are many different approaches to closing these gaps in care: the Nobel Prize winner Elinor Ostrom has featured previously in this News Blog for her seminal work on barriers and facilitators to institutional collaboration,[1] while my colleague, CLAHRC WM Deputy Director Graeme Currie, has approached this issue from a management science perspective.

The problem for a researcher is to measure the effectiveness of initiatives to improve care across centres. This is not natural territory for cluster RCTs, since it would be necessary to randomise whole ‘health economies’ rather than just organisations such as hospitals or general practices. Furthermore, many of the outcomes that might be observed in such studies, such as standardised mortality rates, are notoriously insensitive to change.[2] The ESTHER Project in Sweden is famous for closing gaps in care across the hospital/community nexus.[3] The evaluation, however, consists of little more than stakeholder interviews in which people seem to recite the received wisdom of the day as evidence of effectiveness. While I think it is eminently plausible that the intervention was effective, and while the statements made during the qualitative interviews may have a certain verisimilitude, this all seems very weak evidence of effectiveness. It lacks any quantification, such as could be used in a health economic model. Is there a halfway house between a cluster RCT with hard outputs like mortality on the one hand, and ‘how was it for you?’ research on the other?

While it is not easy to come up with a measurement system, there is one person who perceives the entire pathway: the patient. The patient is really the only person who can provide an assessment of care quality across multiple providers. There are many patient measures. Some relate to outcome, for instance health and social care related quality of life (EQ-5D-5L, ASCOT SCT4 and OPQOL-brief [4]). Such measures should be used in service delivery studies, but may be insensitive to change, as stated above. It is therefore important to measure patient perception of the quality of their care. However, such measurements tend either to be non-specific (e.g. LTC-6 [5]) or to look at only one aspect of care, such as continuity (PPCMC),[6] treatment burden [7] or patient contentedness.[8] We propose a single quality-of-integrated-care tool incorporating dimensions that have been shown to be important to patients, and are collaborating with PenCLAHRC, who are working on such a tool. Constructs that should be considered include conflicting information from different caregivers; contradictory forms of treatment (such as one clinician countermanding a prescription from another caregiver); duplication or redundancy of advice and information; and satisfaction with care overall and with the duration of contacts. We suspect that most patients would prefer fewer, more in-depth contacts to a larger number of rushed contacts.

It might also be possible to design more imaginative qualitative research that goes beyond simply asking questions and uses methods to elicit some of patients’ deeper feelings by prompting their memory. One such method is photo-voice, where patients are asked to take photos at various points in their care and use these as a basis for discussion. We have used such naturalistic methods in our CLAHRC.[9] They could be harnessed in the co-design of services, where patients and carers are not just asked how they perceive services, but are actively involved in designing solutions.

Salient quantitative measurements may be obtained from NHS data systems. Hospital admission and readmission rates should be measured in studies of system-wide change; an effective intervention would result in more satisfied patients with lower rates of hospital admission. What about quantifying physical health? Adverse events in general, and mortality in particular, have poor sensitivity, such that signal, even after risk adjustment, would only emerge from noise in an extremely large study, or in a very high-risk client group – see ‘More on Integrated Care’ in this News Blog. Adverse events and death can be consolidated into generic health measurements (QALYs/DALYs), but, again, these are insensitive for reasons given above. Evaluating methods to improve the integration of care may be an ‘inconvenient truth scenario’ [10] where it is necessary to rely on process measures and other proxies for clinical / welfare outcomes. Since our CLAHRC is actively exploring the evaluation of service interventions to improve integration of care, we would be very interested to hear from others and explore approaches to evaluating care across care boundaries.
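
To put rough numbers on that insensitivity, here is a power calculation sketch under assumed rates (the 5% baseline mortality and the 2% relative reduction are illustrative assumptions, not drawn from a specific study):

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Patients per arm to compare two proportions (normal approximation)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Assumed 5.0% baseline mortality; an intervention averting 2% of deaths
# takes this to 4.9% - an absolute difference of one death per 1,000.
print(n_per_arm(0.050, 0.049))  # roughly 740,000 patients per arm
```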

— Richard Lilford, CLAHRC WM Director

References:

  1. Ostrom E. Beyond Markets and States: Polycentric Governance of Complex Economic Systems. Am Econ Rev. 2010; 100(3): 641-72.
  2. Girling AJ, Hofer TP, Wu J, et al. Case-mix adjusted hospital mortality is a poor proxy for preventable mortality: a modelling study. BMJ Qual Saf. 2012; 21(12): 1052-6.
  3. Institute for Healthcare Improvement. Improving Patient Flow: The Esther Project in Sweden. Boston, MA: Institute for Healthcare Improvement, 2011.
  4. Bowling A, Hankins M, Windle G, Bilotta C, Grant R. A short measure of quality of life in older age: the performance of the brief Older People’s Quality of Life questionnaire (OPQOL-brief). Arch Gerontol Geriatr. 2013; 56: 181-7.
  5. Glasgow RE, Wagner EH, Schaefer J, Mahoney LD, Reid RJ, Greene SM. Development and validation of the Patient Assessment of Chronic Illness Care (PACIC). Med Care. 2005; 43(5): 436-44.
  6. Haggerty JL, Roberge D, Freeman GK, Beaulieu C, Breton M. Validation of a generic measure of continuity of care: When patients encounter several clinicians. Ann Fam Med. 2012; 10: 443-51.
  7. Tran VT, Harrington M, Montori VM, Barnes C, Wicks P, Ravaud P. Adaptation and validation of the Treatment Burden Questionnaire (TBQ) in English using an internet platform. BMC Medicine. 2014; 12: 109.
  8. Mercer SW. Consultation and Relational Empathy (CARE) Measure. Scottish Executive, 2004.
  9. Redwood S, Gale N, Greenfield S. ‘You give us rangoli, we give you talk’ – Using an art-based activity to elicit data from a seldom heard group. BMC Med Res Methodol. 2012; 12: 7.
  10. Lilford RJ. Integrated Care. NIHR CLAHRC West Midlands News Blog. 19 June 2015.

Between Policy and Practice – the Importance of Health Service Research in Low- and Middle-Income Countries

There is a large and growing literature on disease and its causes in low- and middle-income countries (LMICs) – not only infectious disease, but also non-communicable diseases. Endless studies are published on disease incidence and prevalence, for example. There is also a substantial literature on policy / health systems,[1] much captured in the Health Systems Evidence database.[2] This deals with topics such as general taxation vs. contributory insurance, financial incentives for providers, and use of private providers to extend coverage.

However, how best to provide health services, given general policy and a certain profile of disease, is less well studied. Issues include skill mix (e.g. who should do what), distribution of services (e.g. hospital vs. clinic vs. home) and coverage (e.g. how many nurses or clinics are needed per head of population). For example, there have been calls for Africa to increase the number of Community Health Workers (CHWs) to one million, but no-one knows the optimal ratio of CHWs to nurses to medical officers to doctors. Likewise, the mix of outreach services (e.g. CHWs), clinics, pharmacies, private facilities, and traditional healers that can best serve populations is very unclear, according to a recent Lancet commission.[3] The situation in slums is positively chaotic. One could sit in an armchair and propose a service configuration for slum environments of 10,000 people that looks like this:

[Figure: a possible service configuration for a slum population of 10,000]

The role of CHWs could be narrow (vaccination, child malnutrition), intermediate (vaccination, child malnutrition, sexual and reproductive health), or broad (all of the above, plus hypertension, obesity prevention, adherence to treatment, detection of depression, etc.). HIV and TB screening and treatment maintenance could be separate or included in the above, and so on.

Note that decisions about the workforce, and about how and where it is deployed, have to be made irrespective of how care is financed, or whether financial or other incentives are used – decisions are still needed about who is to be incentivised to do what. And people do not appear overnight, so training (and its associated costs) must be included in cost and economic models. Of course, the range of possibilities according to per capita wealth in a country is large, but we do not know what good looks like even in countries of approximately equal wealth. Here is the rub – it is much easier to study a disease and its determinants than to study health services. Yet another study linking pollution to illness is easy to write as an applicant and easy to understand as a reviewer. But talk about skill mix and eyes glaze over. Yet there is little point in measuring disease ever more precisely if there is no service to do anything about it.

— Richard Lilford, CLAHRC WM Director

References:

  1. Mills A. Health Care Systems in Low- and Middle-Income Countries. New Engl J Med. 2014; 370: 552-7.
  2. McMaster University. Health systems evidence. Hamilton, Canada: McMaster University. 2017.
  3. McPake B, & Hanson K. Managing the public–private mix to achieve universal health coverage. Lancet. 2016; 388: 622-30.

Protocols for Database Studies

We introduce an innovation for the CLAHRC WM News Blog – the online protocol for database studies. Here we describe the approach. Later in the blog we enclose a pre-protocol for a study we propose to carry out in ‘real life’. We would value your opinion on this approach.

Large, prospective studies, such as randomised clinical trials, have formal protocols that can be accessed by reviewers and readers. Publication of protocols increases transparency, thereby reducing many types of ‘dissemination bias’. First, it reduces the risk of undetected publication bias – the risk that the literature will be skewed towards studies with positive results. Second, it reduces the risk of data-driven hypotheses masquerading as prior hypotheses – reviewers can determine whether what was planned was done, and whether what was done was planned. In many cases these protocols are published in recognised journals, most often online journals such as Trials or BMJ Open. Increasingly, publication of protocols – placing them in the public domain – is seen as a tenet of good practice.[1]

Database studies are at particularly high risk of publication bias and other types of ‘dissemination bias’, such as selectively publishing significant findings, or performing numerous correlations and selecting only those with ‘positive’ results – a topic of previous posts on p-hacking.[2] [3] Moreover, modern clinical service delivery research increasingly relies on such database studies; ‘big data’ is all the rage. This is not a criticism of such studies; CLAHRC WM is a proponent of database studies, and we have reviewed some iconic data-linkage studies, such as the study that unravelled the ‘Muslim mortality paradox’.[4] All the more important, then, to guard against dissemination bias. Barriers to entry are low for database studies; that is to say, they can often be done without grant funding (and hence without the requirement to submit a protocol). Unlike trials, there is no requirement or expectation that a protocol will be filed (registered in the public domain). Anyone with access to the data can sit down at the computer and ‘play’. As they do so, new ideas may occur to them, or a finding may prompt further exploration of the data. That would be fine if all results were reported, but the risk is that positive results are reported while the denominator – the number of correlations from which published correlations are drawn – is unknown. Even the investigators might not know how many correlations were performed, because they may not have kept a tally. More risky still, routinely collected data may be analysed for ‘quality control’ purposes and the idea of publishing the findings may arise only when interest is piqued by a positive result. It is through this biased process that so-called “quality improvement reports” arise. Inevitably, these are a highly skewed sample of quality improvement initiatives.
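
The arithmetic of that risk is easy to demonstrate with a simulation (entirely synthetic data; the numbers of patients and exposures are arbitrary):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_patients, n_exposures = 500, 100

# Pure noise: no exposure is truly associated with the outcome.
exposures = rng.normal(size=(n_patients, n_exposures))
outcome = rng.normal(size=n_patients)

significant = 0
for j in range(n_exposures):
    r, p = pearsonr(exposures[:, j], outcome)
    significant += p < 0.05

# Roughly 5 of 100 null correlations reach p < 0.05 by chance; publishing
# those few without the denominator of 100 searches is p-hacking in action.
print(f"{significant} of {n_exposures} null correlations 'significant' at 0.05")
```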

The obvious response to this risk is to mandate pre-publication of a protocol for database studies and then insist that people stick to it, as one would for a large clinical trial. But this is heavy-handed. First, people often interrogate databases for service reasons, with no intent to publish. Should they be muzzled if they chance upon an interesting and important finding – say, concerning a putative negative side effect of treatment? Second, it is often necessary to refine a search as one proceeds – one may realise that the same condition can be recorded under many different sub-categories, for example. A surprise finding may prompt subsidiary questions that can be answered from the database – for example, finding an increased cancer incidence in association with some exposure raises the question: which cancers?

What can be recommended so that, on the one hand, research is not stifled, but, on the other, the risk of dissemination bias is mitigated? We propose a half-way house between excessive straitjacketing of database searches and encouraging a free-for-all, with its risk of p-hacking and selective reporting and consequent false positive study results. The epistemological principles we hew to are:

  1. Sharp separation of question formulation / study design from data collection.
  2. Transparency.

Our point of departure from the most stylised and formal processes, such as those properly followed in RCTs, is acceptance that there can be rapid iterations:

Design -> Data collection -> Design of subsidiary study -> Further data collection, and so on

Such a proposal is entirely compatible with philosophical argument on subjectivity and objectivity in science [5] and with Fichte’s proposition [6] of:

Thesis -> Anti-thesis -> Synthesis -> Thesis, and so on

Only one problem remains – how to operationalise such a process, i.e. how to maintain the required separation and transparency while iteratively refining the questions asked of databases. Computers to the rescue! We propose that an original protocol is filed, with a view to recording and dating amendments prior to each subsequent database search. Rather than just ‘fly a kite’, we will provide a living demonstration in the pages of the News Blog. In this issue we ‘file’ a pre-protocol. This will then be sent to the data hub, where Felicity Evison and colleagues will try to ‘operationalise’ the search and will populate the database with specific searchable terms for concepts such as ‘peritonsillar abscess’. We may meet or telephone, but no search will be done until we have agreed the updated search protocol, which will then be filed in the News Blog alongside the pre-protocol. Such iterations will continue until we send the manuscript for publication. At that point reviewers (and future readers) will have full access to the protocol through all stages of its evolution. We hope you enjoy this ‘real time’ story as it unfolds in the pages of your News Blog. Readers are invited to contribute to the enclosed pre-protocol (and to its future evolution), and contributions that lead to a change in the protocol will be acknowledged in the final paper. We welcome contributions from patients and the public.
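
To make the proposal concrete, here is a hypothetical sketch (our own illustration, not the system the data hub will actually use) of an append-only, date-stamped amendment log in which each protocol version is chained to its predecessor by a hash, so that retrospective editing would be detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

class ProtocolLog:
    """Append-only log of protocol versions; each entry is dated and
    chained by hash so that retrospective edits are detectable."""

    def __init__(self):
        self.entries = []

    def file_amendment(self, text, reason):
        prev_hash = self.entries[-1]["hash"] if self.entries else ""
        entry = {
            "filed_at": datetime.now(timezone.utc).isoformat(),
            "reason": reason,
            "text": text,
            "prev": prev_hash,
        }
        # Hash the dated entry (including its link to the previous version).
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

# Illustrative use with made-up amendments:
log = ProtocolLog()
log.file_amendment("Search admissions coded as peritonsillar abscess, 2010-16.",
                   reason="Original pre-protocol")
log.file_amendment("Add synonym codes for 'quinsy' identified by the data hub.",
                   reason="Refinement agreed before second search")
print(len(log.entries), "versions filed")
```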

— Richard Lilford, CLAHRC WM Director

References:

  1. Chan A-W, Tetzlaff JM, Gøtzsche PC, Altman DG, Mann H, Berlin JA, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ 2013; 346: e7586.
  2. Lilford RJ. Look out for ‘P-Hacking’. NIHR CLAHRC West Midlands News Blog. 11 September 2015.
  3. Lilford RJ. More on ‘P-Hacking’. NIHR CLAHRC West Midlands News Blog. 18 December 2015.
  4. Lilford RJ. The Most Important Applied Research Paper This Year? Perhaps Any Year? NIHR CLAHRC West Midlands News Blog. 19 September 2014.
  5. Lilford RJ. Objectivity in Service Delivery Research. NIHR CLAHRC West Midlands. 19 June 2015.
  6. Fichte J. Early Philosophical Writings. Trans. and ed. Breazeale D. Ithaca, NY: Cornell University Press, 1988.

I Know that Cracks in Care Between Institutions Undermine Patient Safety, but How Can I Rectify the Problem?

Cracks between institutions

It is well known that danger arises when care is fragmented over many organisations (hospital, general practice, community care, social services, care home, etc.). With the rise in the proportion of patients with chronic and multiple diseases, fragmented care may have become the number one safety issue in modern health care. Confusion of responsibility, silo thinking, contradictory instructions, and over- and under-treatment are all heightened risks when care is shared between multiple providers – patients will tell you that. The risk is clearly identified, but how can it be mitigated? There is a limit to what can be achieved by structural change – Accountable Care Organisations featured in a recent blog, for instance.[1] Irrespective of the way care is structured, front-line staff need to learn how to function in a multidisciplinary, inter-agency setting so that they can properly care for people with complex needs. Simply studying different ways of organising care, as recommended by NICE,[2] does not get to the heart of the problem in our view. The business aphorism “culture eats strategy for breakfast” applies equally to inter-sectoral working in health and social care. Studying how caregivers in different places can better work in teams to provide integrated care is hard, but the need to do so cannot be ignored; we must try. We propose, first, a method to enhance performance at the sharp end of care and, second, a system to sustain the improvement.

Improving performance

Improving the performance of clinicians who need to work as a team, when the members of the team are scattered across different places and patients have different, complex needs, is a challenge. For a start, there is no fixed syllabus based on ‘proverbial knowledge’. Guidelines deal with conditions one at a time.[3] There can be no set of guidelines that reconciles all possible combinations of disease-specific guidelines for patients suffering from many diseases.[4] [5] Everything is a matter of balance – the need to avoid giving patients more medicines than they can cope with is in tension with the need to provide evidence-based medicines for each condition. The greater the number of medicines prescribed, the lower the adherence rate to each prescribed medicine, but it is not possible to pre-specify where the optimal prescribing threshold lies.[6]

The lack of a specifiable syllabus does not mean performance cannot be enhanced – it is not just proverbial knowledge that can be enhanced through education; tacit knowledge can be too.[7-9] There is an extensive theoretical and empirical literature concerning the teaching of tacit skills; the central idea is for people to work together in solving the kinds of problems they will encounter in the real world.[10] In the process some previously tacit knowledge may be abstracted from deliberations to become proverbial (for an example, see box). Management is a topic that is hard to codify, so (highly remunerated) business schools use case studies as the basis for discussion, in the expectation that tacit knowledge will be enhanced.

We plan to build on theory and experience to implement learning in facilitated groups to help clinical staff provide better integrated care – we will create opportunities for staff of different types to work through scenarios from real life in facilitated groups. We will use published case studies [11] as a template for further scenario development. Group deliberations will be informed by published guidelines that aim to enhance care of patients with multi-morbidity (although these have been written to guide individual consultations rather than to assist management across sectors).[11-13] In the process group members will gain tacit knowledge (and perhaps some proverbial knowledge will emerge, as in the example in the box). CLAHRC WM is implementing this method in a study funded by an NIHR Programme Development grant.[14] But how can it be made sustainable?

Box: Hypothetical Scenario Where Proverbial Knowledge Emerges from Discussion of a Complex Topic

The topic of conflicting information came up in a facilitated work group. A general practitioner argued that this was a difficult problem to avoid, since a practitioner could not know what a patient may have been told by another of their many care-givers. One of the patient participants observed that contradictory advice was not just confusing, but distressing. A community physiotherapist said that he usually elicited previous advice from patients so that he would not inadvertently contradict, or appear to contradict, previous advice. The group deliberated the point and concluded that finding out what advice a patient had received was a good idea, and should be included as a default tenet of good practice.

Sustainability

Again we turn to management theory – there are many frameworks to choose from, but they embody similar ideas. We will take that of Ferlie and Shortell.[15] To make a method stick, three organisational levels must be synchronised:

  1. Practitioners at the sharp end who must implement change. They will be invited to join multi-disciplinary groupings and participate in the proposed work groups, as above.
  2. The middle level of management, who can facilitate or frustrate a new initiative, must make staff development an on-going priority, for example by scheduling team-building activities in timetables. Our CLAHRC is conducting a project on making care safer in care homes, where much can be done to reduce risk at interfaces in care.
  3. The highest levels of management, who can commit resources and drive culture change by force of personality and the authority of high office, must be engaged. This includes hospital boards and local authorities. Patients have a big role to play – they are the only people who experience the entire care pathway and hence are experts in it. They can campaign for change and for buy-in from top managers.

CLAHRC WM has deep commitment from major participating hospitals in the West Midlands, from Clinical Commissioning Groups, and from local authorities. These organisations are all actively engaged in improving interfaces in care, and the draft Sustainability and Transformation Partnership strategy for Birmingham and Solihull includes plans to better integrate care. We will build on these changes to promote and sustain bottom-up education, supported by the Behavioural Psychology group at Warwick Business School, to drive forward this most challenging but important of all initiatives – improving safety across interfaces in care.

— Richard Lilford, CLAHRC WM Director

References:

  1. Lilford RJ. Accountable Care Organisations. NIHR CLAHRC West Midlands. 11 November 2016.
  2. National Institute for Health and Care Excellence. Multimorbidity: clinical assessment and management. London, UK: NICE, 2016.
  3. Wyatt KD, Stuart LM, Brito JP, et al. Out of Context: Clinical Practice Guidelines and Patients with Multiple Chronic Conditions. A Systematic Review. Med Care. 2014; 52 (3s2): s92-100.
  4. Lilford RJ. Multi-morbidity. NIHR CLAHRC West Midlands. 20 November 2015.
  5. Boyd CM, Darer J, Boult C, et al. Clinical Practice Guidelines and Quality of Care for Older Patients With Multiple Comorbid Diseases: Implications for Pay for Performance. JAMA. 2005; 294(6): 716-24.
  6. Tinetti ME, Bogardus ST, Agostini JV. Potential Pitfalls of Disease-Specific Guidelines for Patients with Multiple Conditions. New Engl J Med. 2004; 351: 2870-4.
  7. Patel V, Arocha J, Kaufman D. Expertise and tacit knowledge in medicine. In: Tacit knowledge in professional practice: researcher and practitioner perspectives. Sternberg RJ (ed). Mahwah, NJ: Lawrence Erlbaum Associates, 1999.
  8. Nonaka I, von Krogh G. Tacit knowledge and knowledge conversion: Controversy and advancement in organizational knowledge creation theory. Organ Sci. 2009; 20(3): 635-52.
  9. Eraut M. Non-formal learning and tacit knowledge in professional work. Brit J Ed Psychol. 2000; 70: 113-36.
  10. Lilford RJ. Tacit and Explicit Knowledge in Health Care. NIHR CLAHRC West Midlands. 14 August 2015.
  11. American Geriatrics Society Expert Panel on the Care of Older Adults with Multimorbidity. Guiding Principles for the Care of Older Adults with Multimorbidity: An Approach for Clinicians. J Am Geriatr Soc. 2012; 60(10): E1-25.
  12. Muth C, van den Akker M, Blom JW, et al. The Ariadne principles: how to handle multimorbidity in primary care consultations. BMC Medicine. 2014; 12: 223.
  13. American Geriatrics Society Expert Panel on the Care of Older Adults with Diabetes Mellitus. Guidelines Abstracted from the American Geriatrics Society Guidelines for Improving the Care of Older Adults with Diabetes Mellitus: 2013 Update. J Am Geriatr Soc. 2013; 61(11): 2020-6.
  14. Lilford, Combes, Taylor, Mallan, Mendelsohn. Improving clinical decisions and teamwork for patients with multimorbidity in primary care through multidisciplinary education and facilitation. NIHR Programme Grant. 2016-2017.
  15. Ferlie E, & Shortell S. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Quart. 2001;79(2): 281-315.

Future Trends in the NHS

The future of health care is often conceptualised in terms of improved treatments emerging from the biomedical science base – for instance, increasing the precision with which particular therapies can be targeted. Many of these advances in the effectiveness of care will have supply-side consequences in terms of cost, and some will require service re-configuration – regenerative medicine and bedside diagnostics, for example. However, the larger challenges are likely to originate from increased demand. The service will have to adapt to these supply- and demand-side changes. This blog considers the role of applied research in informing these adaptations in order to improve the overall effectiveness and efficiency of services.

We discern three trends which, absent a major perturbation such as international conflict, will alter demand over the medium to long term. The time horizon for our analysis is the next quarter century, given that the longer the time horizon the wider the variance in any predictions.

The trends are as follows:

  1. The population demographic will continue towards higher proportions of elderly people.
  2. The dependency ratio (the ratio of young and retired people to those of working age) will become increasingly adverse.
  3. Demand for services per capita will increase.

None of these assumptions is unarguable as they involve outcomes that have not yet been observed. They are ordered from least to most contentious.

  1. That the population will continue to age is almost a given, but the rate at which it will do so is less certain. Some predict that over a third of children alive now will live to 100. However, the rate of increase in life expectancy may slow as the large reductions in smoking-related deaths are absorbed into the baseline. Immigration could affect population projections in ways that are hard to predict. The recent sudden increase in mortality among white middle-aged males in the USA,[1] alongside improved survival among low socio-economic group children in the same country,[2] shows how difficult projections can be. A recent demonstration of trends over two decades suggests that the age-specific prevalence of dementia is falling, arguably because risk factors for cardiovascular disease are also risk factors for dementia. This will not reduce the total prevalence of dementia, of course, if life expectancy continues to increase.[3] [4]
  2. The worsening of the dependency ratio is almost a corollary of an ageing society, but again the extent to which this will happen is less certain, as the workforce gradually internalises the notion that 65 years of age is not a biological watershed but a social convention.[5] But delayed retirement will not solve the problem of a deteriorating dependency ratio; absent a method to delay ageing, many types of work, such as aviation and mining, are simply not suitable for older people. In addition, just as people are working longer at the end of life, so policies are encouraging longer leaves of absence from work outside the home to care for young children. So, all things considered, the dependency ratio will become more adverse as a function of increased longevity (a worked example follows this list). Note that Britain appears to be at an earlier stage in this transition than many other high-income countries, such as Japan and Germany, and the opportunity for immigration to mitigate the tendency is likely to be attenuated given recent events.
  3. That demand for services will rise contingent on an ageing population is somewhat more controversial. A reasonable planning assumption is that people will be healthier at any given age, but that this will not completely offset the frailty that comes with advancing age. In that case we must assume a rise in demand as the population ages, even if age-specific morbidity declines to some extent.
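
A worked example of the dependency ratio arithmetic referred to in point 2 (the population shares are illustrative, not projections):

```python
# Dependency ratio = (young + retired) / working-age population.
# Illustrative shares for 'today' and an older population 25 years on.
populations = {
    "today":       {"young": 0.19, "working": 0.63, "retired": 0.18},
    "in 25 years": {"young": 0.17, "working": 0.58, "retired": 0.25},
}

for label, p in populations.items():
    ratio = (p["young"] + p["retired"]) / p["working"]
    print(f"{label}: {ratio:.2f} dependants per person of working age")

# today: 0.59; in 25 years: 0.72 - the same care burden spread over fewer workers.
```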

Implications for the NHS flow from the above. Demand for services will increase relative to resources: there will be more old people relative to those of working age, there will be more frail people in the population, and demand will outpace economic growth. All of this may be compounded by a tendency for old people to live in remote areas, at a distance from the major conurbations where health services are concentrated. However, this problem will be less acute in Britain than in most other countries.

There are many possible mitigations and the NIHR has a role in all of them; these are listed in the table below.

Table: factors to help the service cope with increasing demand.

  1. Major technical advances that might affect demand.
     - How it might work: A ‘cure’ for, or prevention of, dementia would both improve the economy (and hence supply) and suppress demand.
     - Caveats: Probably lies outside our 25-year time horizon. Will prolong life and hence increase the proportion of frail elderly people.
     - Potential impact: Potentially very high, but out of scope. Medical advances more generally are likely to increase demand by increasing longevity.
  2. Self-care.
     - How it might work: An ‘extreme’ form of skill substitution. Unlike other mitigations, there is an extensive research literature.
     - Caveats: Beneficial for capable patients; minimal impact on global demand.
     - Potential impact: The correct approach to improving care; reducing demand will require development of interventions and further research.
  3. Information technology.
     - How it might work: Can make care safer and supply more efficient.
     - Caveats: Full electronic notes disrupt patient communication in their current form. A lot more needs to be learned about the design and implementation of this deceptively complex technology.
     - Potential impact: Huge benefits in prospect, but the socio-technical aspects require extensive development and research.
  4. Robotics.
     - How it might work: May substitute for expensive/scarce human resources.[6]
     - Caveats: Humans require the care and attention of other humans.
     - Potential impact: Moderate. Likely to assist rather than replace clinical input.
  5. Skill substitution.
     - How it might work: Less expensive staff (e.g. physician’s assistants) substitute for more expensive staff (doctors). Increasingly feasible as health care becomes increasingly codified.
     - Caveats: Limited by the complexity of decision-making in patients with many diseases.
     - Potential impact: Very hard to say without more research. May be modest.
  6. Pro-active community services.
     - How it might work: Prevent deterioration to improve health and decrease admissions.
     - Caveats: Existing research disappointing – may actually increase demand by identifying self-correcting illness.
     - Potential impact: Potentially great, but we are in the foothills of discovery.

Mitigating demand is not easy in the face of the demographic factors mentioned above. It is often argued, even in official enquiries, that prevention is the key to reducing demand. While prevention may reduce demand arising from particular diseases, such as diabetes, survivors go on to develop further diseases on their trajectory to death.[7] It is therefore not at all clear that prevention will reduce total demand and it may even be the case that deferred demand is augmented demand. There are some potential mitigating possibilities. A prevention or cure for Alzheimer’s disease would make a massive difference. Less distant is an ‘artificial pancreas’ that might massively simplify diabetes care. Methods to make people independent, such as home telemetry, have had nugatory impact on demand to date,[8] but this may change in the future. Patient self-care is beneficial in improving healthcare and satisfaction,[9] but effects on total demand have been modest.

If supply-side measures are to help services cope as demand continues to rise, then two points should be noted. First, efficiency gains are notoriously difficult to achieve in service industries. Second, the increasingly adverse dependency ratio is likely to limit expansion of the skilled workforce. Partial solutions may lie in manufacturing, including robotics, and in information technology. Skill substitution is a further area where it may be possible to improve efficiency.[10] In particular, physician’s assistants may reduce costs overall.[11] The research on skill or system substitution is not entirely positive – for example, substituting nurses for doctors may not improve efficiency because consultation times increase.[12] There is an international trend to provide more care at the ‘grass roots’ by means of Community Health Workers (CHWs) – an area where high-income countries are learning from low- and middle-income countries.[13] CHWs have a large potential role in improving care – helping patients to adhere to medications, providing preventive services, and identifying deteriorating patients. Their effect on reducing demand is less certain, and on occasion they may actually increase it.[14]

Readers may think that the CLAHRC WM Director can be rather pessimistic, even nihilistic. Not so: CLAHRC WM has recently conducted an overview (umbrella review) of 50 systematic reviews of different methods to integrate care across hospitals and communities.[15] Discharge planning with post-discharge support is highly effective. Multi-skill teams are much more effective if they include hospital outreach than if they are entirely community-based. Self-management is effective, but mainly for single diseases. Case management is of minimal value. Across all intervention types, length of stay was reduced in over half, emergency admissions were reduced in half, and readmissions were reduced in nearly half. In almost no case did the intervention make any of the above outcomes worse. Costs to the service were reduced in over a third of intervention types, but the quality of evidence is poor on this point – a topic that is being addressed across all CLAHRCs. And here is the CLAHRC WM Director’s point: there are no quick wins and no silver bullets, and the solutions are not self-evident. Only by patiently trying out new things and evaluating them rigorously can things improve. It may sound self-serving, but that does not make it incorrect – CLAHRCs have an immense contribution to make to improving the effectiveness and cost-effectiveness of health services.

— Richard Lilford, CLAHRC WM Director

I acknowledge advice from Prof Peter Jones (University of Cambridge), Director of CLAHRC East of England, but the views expressed are entirely my own.

References:

  1. Deaton A, Lubotsky D. Mortality, inequality and race in American cities and states. Soc Sci Med. 2003; 56(6): 1139-53.
  2. Chetty R, Hendren N, Katz LF. The Effects of Exposure to Better Neighborhoods on Children: New Evidence from the Moving to Opportunity Experiment. Am Econ Rev. 2016; 106(4): 855-902.
  3. Matthews FE, Stephan BC, Robinson L, Jagger C, Barnes LE, Arthur A, Brayne C; Cognitive Function and Ageing Studies (CFAS) Collaboration. A two decade dementia incidence comparison from the Cognitive Function and Ageing Studies I and II. Nat Commun. 2016; 7: 11398.
  4. Matthews FE, Arthur A, Barnes LE, Bond J, Jagger C, Robinson L, Brayne C; Medical Research Council Cognitive Function and Ageing Collaboration. A two-decade comparison of prevalence of dementia in individuals aged 65 years and older from three geographical areas of England: results of the Cognitive Function and Ageing Study I and II. Lancet. 2013; 382(9902): 1405-12.
  5. Lilford R. Robotic hotels today – nursing homes tomorrow? NIHR CLAHRC West Midlands News Blog. March 6 2015.
  6. Lilford R. Medical Technology – Separating the Wheat from the Chaff. NIHR CLAHRC West Midlands News Blog. February 26 2016.
  7. Lilford R. Improving Diabetes Care. NIHR CLAHRC West Midlands News Blog. November 11 2016.
  8. Henderson C, Knapp M, Fernández J-L, Beecham J, Hirani SP, Cartwright M, et al. Cost effectiveness of telehealth for patients with long term conditions (Whole Systems Demonstrator telehealth questionnaire study): nested economic evaluation in a pragmatic, cluster randomised controlled trial. BMJ. 2013; 346: f1035.
  9. Tricco AC, Ivers NM, Grimshaw JM, Moher D, Turner L, Galipeau J, et al. Effectiveness of quality improvement strategies on the management of diabetes: a systematic review and meta-analysis. Lancet. 2012; 379: 2252–61.
  10. Lilford R. The Future of Medicine. NIHR CLAHRC West Midlands News Blog. October 23 2015.
  11. Lilford R. Improving Hospital Care: Not easy when budgets are pressed. NIHR CLAHRC West Midlands News Blog. January 23 2015.
  12. Laurant M, Reeves D, Hermens R, Braspenning J, Grol R, Sibbald B. Substitution of doctors by nurses in primary care. Cochrane Database Syst Rev. 2005; (2): CD001271.
  13. Lilford R. Lay Community Health Workers. NIHR CLAHRC West Midlands News Blog. April 10 2015.
  14. Roland M, Abel G. Reducing emergency admissions: are we on the right track? BMJ. 2012; 345: e6017.
  15. Damery S, Flanagan S, Combes G. Does integrated care reduce hospital activity for patients with chronic diseases? An umbrella review of systematic reviews. BMJ Open. 2016; 6: e011952.

Let the Second Sanitary Revolution Begin

Despite the gains of recent years, far too many children still die before their fifth birthday. Childhood mortality in low-income countries is 76 per thousand live births, compared with 7 per thousand in high-income countries.[1] Now that pneumococcal vaccine is in widespread use, we may expect diarrhoea to take over from pneumonia as the number one killer of children. Certainly in slums – soon to be home to over 1 billion people – diarrhoea is the greatest threat not just to children’s survival, but to their wider health. Diarrhoea predisposes to chronic enteropathy, especially in malnourished children, which in turn predisposes to stunting and perhaps reduced cognitive development.[2]

But it does not have to be this way. The first ‘sanitary revolution’, in the second half of the 19th century in Europe and North America, yielded massive gains in child survival.[3] Yet, according to Prof David Satterthwaite, less than 4% of all development assistance has been allocated to urban water and sanitation improvement over the last few decades. Moreover, it is not as though Europe and America were awash with money at the time; the per capita GDP of Britain in the 1860s ($703.1)[4] was roughly equivalent to that of Rwanda today ($697.3).[5] This suggests that a lack of political will is also to blame for poor sewage and water installations in modern-day slums. And the pitiful state of sanitation in modern slums has been thoroughly documented.[6] Hardly surprisingly, improving sanitation is the number one priority for people who live in slums.[7] Water and sanitation is not a middle-class concern foisted on slum dwellers; it is a critically important issue that results in millions of child deaths, and one that local people want tackled.

There are, of course, barriers to tackling this problem relating to the relative powerlessness of people in slums, poor local governance, immature financial markets, and so on. But there is another barrier, created entirely by a certain type of armchair academic: the pernicious idea that nothing can be done pending improvements in local and national governance. Such people argue that it is first necessary to establish security of tenure, functioning financial markets, and so on. An extension of this argument, for which empirical support is absent, is that water and sanitation is not enough; it must be part of an improvement to the whole slum ‘nexus’, to include solid waste disposal, street drainage, home improvement, and so on. We cannot wait for extractive elites to disappear, the judiciary to be made independent, or every slum dweller to achieve title before acting; Paris famously installed a functioning sewage system during the dictatorship of Napoleon III following his coup d’état. Fortunately, water and sanitation was prioritised at a recent WHO Technical Working Group on “Addressing Urban Health Equity Through Slum Upgrading” attended by the CLAHRC WM Director.

So, let the water and sanitation revolution begin. Let it be driven by political and social zeal, but do not let it be undisciplined; and let us never forget that water and sanitation is a socio-technical innovation – it needs to be supported (and ideally initiated) by local people themselves. Ensuring proper use and maintenance of sanitary facilities requires the alignment of supply and demand.

A number of international organisations promote water and sanitation in low- and middle-income countries, for example the UN-HABITAT Water and Sanitation Trust Fund. But good intentions are not enough when it comes to sanitation – even where sanitation and water have been improved, the benefits to health are often nugatory.[8] [9] This is because the installations are inadequate, and/or because the facilities are underused or poorly maintained. It is thus crucially important that interventions meet local needs, that they can be maintained, and that their effects in reducing exposure to infection and improving health are evaluated. The installation of improved water and sanitation utilities needs to be accompanied both by research into how to make this socio-technical intervention work well and by summative evaluation of its effects on health and well-being.

— Richard Lilford, CLAHRC WM Director

References:

  1. World Health Organization. Under-five mortality. WHO, 2016.
  2. Grantham-McGregor S, Cheung YB, Cueto S, Glewwe P, Richter L, Strupp B. Developmental potential in the first 5 years for children in developing countries. Lancet 2007; 369: 60–70.
  3. Szreter S. The Population Health Approach in Historical Perspective. Am J Public Health. 2003; 93(3): 421-31.
  4. Broadberry S, Campbell B, Klein A, Overton M, van Leeuwen B. British economic growth and the business cycle, 1700-1870. 2011. Working Paper.
  5. The World Bank. GDP per capita (current US$). 2016.
  6. Ezeh A, Oyebode O, Satterthwaite D, et al. The history, geography, and sociology of slums and the health problems of people who live in slums. Lancet. 2016. [ePub].
  7. Parikh P, Parikh H, McRobie A. The role of infrastructure in improving human settlements. Urban Design Planning. 2012; 166: 101-18.
  8. Wolf J, Prüss-Ustün A, Cumming O, et al. Assessing the impact of drinking water and sanitation on diarrhoeal disease in low- and middle-income settings: systematic review and meta-regression. Trop Med Int Health. 2014; 19(8): 928-42.
  9. Fewtrell L, Kaufmann RB, Kay D, Enanoria W, Haller L, Colford JM, Jr. Water, sanitation, and hygiene interventions to reduce diarrhoea in less developed countries: a systematic review and meta-analysis. Lancet Infect Dis. 2005; 5(1): 42-52.