Can Ministers and Policy Makers learn anything from CLAHRCs?

My book club recently suggested that we should read ‘The Blunders of Our Governments’ by Anthony King and Ivor Crewe. I opposed their choice on the grounds that it sounded like a sensationalist polemic and, having served as a civil servant, I did not fancy a crude caricature of my erstwhile colleagues. However, I was glad I was overruled – the book is nuanced and makes an excellent read. It chronicles blunders made by both Labour and Conservative administrations over more than two decades. And let’s be clear – a blunder is not a bad policy in the sense that the objective is not a worthy one, but a policy that fails on its own terms. A good example is the creation of the Child Support Agency under a Conservative administration, which was then radically amended, but to no good effect, by Tony Blair’s government. Its aim was to improve child welfare by compelling absent fathers to contribute to their children’s maintenance – a laudable goal. It was for this reason that it survived successive governments of different ideologies before it was finally superseded. In a nutshell, it failed through the ‘devil in the detail’. How did such a policy survive the scrutiny process in government and in both houses of parliament?

A change in policy is an example of a ‘complex intervention’ – that is to say, an intervention which has many components interacting within a complex system, so that its effects are hard to predict. Good intentions are simply not enough. The scrutiny afforded to the above child support policy was too distant; too focussed on worthy objectives and not sufficiently diligent in examining how the intervention might be propagated in a complex system. An important function of NIHR CLAHRCs is to provide such scrutiny (although they generally concern themselves with health services rather than the system / government policy level [1]). So CLAHRCs may be able to speak ‘truth unto power’ in recommending that ministers and their advisors follow a systematic process of intervention development that broadly adheres to the following steps:

  1. Ensure that your proposal is informed by a careful overview of the research literature so as to learn from the experience of others. Such a step may have warned the John Major government that it was stepping into a ‘minefield’ in creating the Agency.
  2. Draw a conceptual map to explain how the intervention is supposed to work and also how it may fail or do harm, and speak to people who work in the service and who can identify ‘barriers and facilitators’. In the case of child support, such an exercise may have brought to the surface the difficulty in finding a solution that was at once workable and fair. In the event, the policy was by turns too rigid and too complicated; eventually the algorithm to calculate the amount of money a father must pay ran to over 140 pages of algebra, reflecting the difficulty in providing a just solution for people living in very different circumstances.
  3. Building on identified barriers and facilitators, an intervention should be designed with the help of the kind of people who will have to implement it and who will be affected by it. This is usually managed through a series of facilitated workshops to provide opportunities for deliberation as successive versions of a detailed implementation plan are thrashed out. Ideally, two independent design teams should be deployed to see whether they come up with the same solution.[2] Such diligence may have mitigated the groupthink which led to the unrealistic requirement that the Child Support Agency should be up and running within a year.
  4. The intervention should be ‘road tested’ in a sample of sites before being rolled out (in modified form if necessary) or abandoned. Such a pilot study might have been tricky in the case of absent fathers scattered across the country, and politicians may be understandably leery about testing an intervention in one region after the public relations disaster associated with piloting the poll tax in Scotland. In such a situation we recommend a full-scale simulation of the intervention – a type of ‘alpha testing’.

Okay, this may sound pusillanimous to politicians who might be impatient to have something to show for their time in office. But the Child Support Agency collected only £15 million of additional money at a cost of £137 million in its first year of operation. It was almost universally loathed by fathers and mothers alike. The Oxbridge-educated ministers who designed the child support policy were famously cerebral, prompting a colleague to jibe: “it took exceptionally intelligent people to design a system so stupid!”

— Richard Lilford, CLAHRC WM Director


  1. Lilford RJ, Chilton PJ, Hemming K, Girling AJ, Taylor CA, Barach P. Evaluating policy and service interventions: framework to guide selection and interpretation of study end points. BMJ. 2010. 341:c4413.
  2. Litchfield IJ, Bentham LM, Lilford RJ, Greenfield SM. Test result communication in primary care: clinical and office staff perspectives. Fam Pract. 2014; 31(5): 592-7.



One thought on “Can Ministers and Policy Makers learn anything from CLAHRCs?”

  1. Policy-making blunders are an issue that finds resonance globally. The very useful guide/tips also help us as researchers/academics to consider how we engage with the policy environment – what kinds of information we bring to the table, if asked to do so, and how we might support the process of ‘option appraisal’ and ‘scenario setting’. A similar book reflecting on policy successes would be equally useful, as we can learn as much from what went right as we can from what went wrong.
