We have previously discussed research where a service manager decides that an intervention should be studied prospectively. We have made the point that applied research centres, such as CLAHRCs/ARCs, should be responsive to requests for such prospective evaluation. Indeed, the request or suggestion from a service manager to evaluate their intervention provides a rich opportunity for scientific discovery, since the intervention is a charge to the service, not to the research funder. In some cases many of the outcomes of interest may be collected from routine data systems, and in such circumstances the research can be carried out at a fraction of the usual cost of prospective evaluations. Nor should it be assumed that research quality must suffer. We give two examples below where randomised designs were possible: one where individual staff members were randomised to different methods to encourage uptake of seasonal influenza vaccine, and the other where a stepped-wedge cluster design was used to evaluate roll-out of a community health worker programme across a health facility catchment area in Malawi. Data from these studies have been collected and are being analysed.
(1) Improvement Project Around Staffs’ Influenza Vaccine Uptake 
At the time of this study, staff at University Hospitals Birmingham NHS Foundation Trust were invited to take up the influenza vaccination every September and then reminded regularly. In this study, staff were randomised to receive one of four invitation letters, to test whether the content of the letter influenced vaccination uptake. One factor varied whether the invitation came from an authority figure; the other varied whether vaccination rates in peer hospitals were cited.
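Two factors crossed over four letters amount to a 2×2 factorial allocation. As an illustrative sketch only (not the study's actual randomisation procedure), a balanced allocation of staff to the four letter types might look like this; the staff identifiers, seed, and balancing scheme are all assumptions:

```python
import random

def allocate_letters(staff_ids, seed=2018):
    """Assign each staff member one of four letters in a 2x2 factorial design.

    Factor A: whether the invitation comes from an authority figure.
    Factor B: whether vaccination rates in peer hospitals are cited.
    """
    rng = random.Random(seed)
    arms = [(authority, peer_rates)
            for authority in (0, 1)
            for peer_rates in (0, 1)]
    # Balanced allocation: repeat the four arms to cover all staff, then shuffle.
    plan = (arms * ((len(staff_ids) + 3) // 4))[:len(staff_ids)]
    return dict(zip(staff_ids, rng.sample(plan, len(plan))))

# Hypothetical staff roster, for illustration only.
staff = [f"staff-{i:03d}" for i in range(12)]
allocation = allocate_letters(staff)
```

With 12 staff, each of the four letters is assigned exactly three times; a real trial would use a validated randomisation service rather than a sketch like this.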
(2) Evaluating the impact of a CHW programme… in Malawi 
This study estimated the effects of a CHW programme on a number of health outcomes, including retention in care for patients with chronic non-communicable diseases and uptake of women's health services. Eleven health centres and hospitals were arranged into six clusters, which were then randomised to receive the intervention programme at staggered time points. Clusters crossed over from control to intervention at these points until all had received the intervention.
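The stepped-wedge roll-out described above can be sketched as a schedule in which every cluster starts in the control condition and, in a randomised order, switches permanently to the intervention. The one-cluster-per-step assumption and the cluster names below are illustrative, not taken from the trial protocol:

```python
import random

def stepped_wedge_schedule(clusters, seed=0):
    """Return, for each cluster, a 0/1 exposure sequence across periods.

    0 = control period, 1 = intervention period. Each cluster crosses over
    once (here one cluster per step, an assumption) and never reverts.
    """
    rng = random.Random(seed)
    order = list(clusters)
    rng.shuffle(order)  # randomise the order in which clusters cross over
    periods = len(order) + 1  # one baseline period, then one step per cluster
    return {cluster: [int(t >= step) for t in range(periods)]
            for step, cluster in enumerate(order, start=1)}

schedule = stepped_wedge_schedule([f"cluster-{i}" for i in range(1, 7)])
```

By the final period every cluster is exposed, and because the switch is one-way, each cluster contributes both control and intervention observations, which is what gives the stepped-wedge design its efficiency.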
In previous articles we have examined the practical problems that can be encountered in obtaining ethical approvals and registering demand-led studies. These problems arise because of the implicit assumption that researchers, not service managers, are responsible for the interventions that are the subject of study. In particular, we have criticised the Ottawa Statement on the ethics of cluster trials for making this assumption. We have pointed out the harm that rigid adherence to the tenets of this statement could do by limiting the value that society could reap from evaluations of the large number of natural experiments all around us.
However, demand-led research is not homogeneous and so the demands on service manager and researcher vary from case to case. The purpose of this news blog article is to attempt a taxonomy of demand-led research. Since we are unlikely to get this right on our first attempt, we invite readers to comment further.
We discern two dimensions along which demand-led research may vary: first, urgency; and second, the extent, if any, to which the researcher participated in the design of the intervention.
As a general rule, demand-led research is done under pressure of time. If there were no time pressure, then the research could be commissioned in the usual way through organisations such as the NIHR Service Delivery and Organisation Programme or the US Agency for Healthcare Research and Quality. Demand-led research, by contrast, operates under lead times too short for the lengthy standard research cycle. Even so, permissible lead times vary from virtually no time to many months. In both of the studies above, the possibility of the research was mooted only four or five months before roll-out of the index intervention was scheduled. We had to 'scramble' to develop protocols, obtain ethics approvals, and register the studies, as required for an experimental design, before roll-out began.
The second manner in which demand-led research may vary is in the extent of researcher involvement in the design of the intervention itself. If the intervention is designed solely by the researcher, or is co-produced at the researcher's initiative, then the study cannot be classified as demand-led. However, the intervention may be designed entirely by the service provider, or it may be initiated by the service provider with some input from the researcher. The vaccination intervention described above was initiated by the service, which wished to include an incentive as part of a package of measures but sought advice on the nature of the incentive from behavioural economists in our CLAHRC. On the other hand, the intervention to train and deploy community health workers in Malawi was designed entirely by the service team, with no input whatsoever from the evaluation team.
This second dimension dominates the question of responsibility: if the researcher makes no contribution to the intervention, then the researcher bears little or no responsibility for it. The full argument is provided elsewhere.
— Richard Lilford, CLAHRC WM Director
- Lilford R, Schmidtke KA, Vlaev I, et al. Improvement Project Around Staffs' Influenza Vaccine Uptake. ClinicalTrials.gov. NCT03637036. 2018.
- Dunbar EL, Wroe EB, Nhlema B, et al. Evaluating the impact of a community health worker programme on non-communicable disease, malnutrition, tuberculosis, family planning and antenatal care in Neno, Malawi: protocol for a stepped-wedge, cluster randomised controlled trial. BMJ Open. 2018; 8(7): e019473.
- Lilford RJ. Demand-Led Research. NIHR CLAHRC West Midlands News Blog. 18 January 2019.
- Watson S, Dixon-Woods M, Taylor CA, Wroe EB, Dunbar EL, Chilton PJ, Lilford RJ. Revising ethical guidance for the evaluation of programmes and interventions not initiated by researchers. J Med Ethics. [In Press].