Publication bias means that null results do not make it into the public domain. Assessing publication bias is straightforward in fields where all studies must be registered in advance – clinical trials, for example. But there is little evidence on publication bias in service delivery / health services research. The CLAHRC WM Director suspects that this lack of evidence arises because much social science literature is observational rather than experimental, making it hard to collect convincing evidence of publication bias among such studies. There is no registry of studies; the original hypothesis may not correspond to the comparisons reported; many studies may never be written up; and investigators may evaluate a large number of associations, so that results do not neatly dichotomise into significant or null. In addition, the famous funnel plot may be less likely to signal bias than it is for much clinical research. This is because the association between sample size and risk of publication bias is less likely to hold when the size of the sample is limited more by the size of the database than by the cost of recruiting individual participants. These problems were overcome in an interesting article that studied the fate of 249 grant-funded (peer-reviewed) studies conducted within a single ongoing data collection survey over a ten-year period. Most of the studies consisted of an evaluation of modifications to the survey instrument (questionnaire) used to populate the survey database. The results show a massive effect. Studies with a positive result (as judged by the author) were much more likely to be written up and, if written up, much more likely to be published. The fact that the source studies were all based on a single database removes (or at least strongly mitigates) bias due to interaction between study topic and probability of a positive result.
These results reinforce the CLAHRC WM Director’s wariness about accepting positive results of association studies, such as those that relate patient perception of care to standardised mortality rates. Such results feed into the prevailing meta-narrative, in this case that organisational culture determines the quality of the full range of front line services. A null result is less likely to survive peer review under such circumstances. The paper cited here interviewed holders of grants based on the database, and found that they were disheartened by null results and often did not bother to submit them, anticipating that they would be rejected. They were right to be pessimistic, since null results were less likely to be accepted when submitted, in keeping with the natural human tendency to reject studies that do not fit with prevailing or preconceived ideas.
What do we recommend? Only studies whose protocol has been published should be considered for publication, and all such studies should be published provided the protocol was adhered to. The clinical research world has tightened up its act. It is high time for the service delivery world to stop claiming scientific exceptionalism and adhere to the standard tenets of good scientific practice that hark back to Francis Bacon.
— Richard Lilford, CLAHRC WM Director
- Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: Unlocking the file drawer. Science. 2014; 345(6203): 1502-5.
- Lord CG, Ross L, Lepper MR. Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence. J Pers Soc Psychol. 1979; 37(11): 2098-109.
- Kaptchuk TJ. Effect of interpretive bias on research evidence. BMJ. 2003; 326: 1453-5.
- Bacon F. Novum Organum. 1620.