I thank my friend and colleague Alan Girling for drawing my attention to a recent issue of the statistical journal ‘Significance’.
This issue of the journal followed close on the heels of the UK general election, so, not surprisingly, the failure of the polls to predict the outcome provided the topic for the feature article. But it was another article, on representing significance in the psychological and medical literature, that Alan suggested I should read.

The author, Andrew Gelman, discusses small studies with statistically significant results. Such studies tend to exaggerate effect sizes when the signal is accompanied by a lot of noise, and in the usual situation where true positive effects are of modest magnitude. Throw in a little publication bias and the literature becomes yet more severely distorted. I entirely agreed with the statement that use of confidence intervals “will not solve any problems: checking whether a 95% interval excludes zero is mathematically equivalent to checking whether p<0.05.”

Gelman goes on to say that the problem of exaggerated claims is worse in psychology than in medicine because there are fewer obstacles to carrying out small studies and, arguably, because the signals, relating as they do to latent mental constructs, tend to be noisier than those in medicine. He comes down strongly in favour of the Bayesian approach, which brings sobriety to bear through the ‘prior’ probability density (see previous post).
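The equivalence Gelman refers to is easy to verify directly: under a normal approximation, a 95% interval (estimate ± 1.96 × standard error) excludes zero exactly when the two-sided p-value falls below 0.05, because both checks reduce to asking whether |z| exceeds the same critical value. The short Python sketch below illustrates this for a simple z-test; the function names and example numbers are mine, not Gelman's.

```python
import math

Z_CRIT = 1.959963984540054  # two-sided 5% critical value of the standard normal

def p_value(estimate, se):
    """Two-sided p-value for H0: true effect = 0, assuming a normal sampling distribution."""
    z = estimate / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def ci_excludes_zero(estimate, se):
    """True if the 95% confidence interval (estimate ± 1.96*se) does not contain zero."""
    lower = estimate - Z_CRIT * se
    upper = estimate + Z_CRIT * se
    return lower > 0 or upper < 0

# The two checks always agree: the interval excludes zero exactly when p < 0.05.
for est in [-3.0, -0.5, 0.1, 1.0, 2.5]:
    for se in [0.2, 1.0, 2.0]:
        assert (p_value(est, se) < 0.05) == ci_excludes_zero(est, se)
```

So reporting the interval instead of the p-value changes the presentation, not the decision rule, which is exactly Gelman's point.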
— Richard Lilford, CLAHRC WM Director
- Gelman A. Working through some issues. Significance. 2015; 12(3): 33-5.