Journal of Optometry
J Optom 2018;11:67-8 - Vol. 11 Num.2 DOI: 10.1016/j.optom.2018.03.001
Publication bias and the chase for statistical significance
Iván Marín-Franch
Department of Ophthalmology, University of Alabama at Birmingham School of Medicine, Birmingham, AL, USA

Most published research findings are false according to Ioannidis.1 As social animals, we are attracted, sometimes irresistibly, towards accepting sensational positive results and inclined to dismiss the negative ones — which may be just as important. We have also come to believe that the reliability of a result in medical research, including optometry and ophthalmology, should be expressed solely in terms of p-values.1 As a consequence, studies with statistically significant results are not only more likely to be published,2 they are more likely to be cited and promoted,3 a trend that seems to be alive and well in current eye research.4

Since a significant result serves researchers and journals better than a negative one, there is a bias (unconscious or otherwise) towards cherry-picking findings. To this end, statistics can be misused to manipulate data and analyses until significant effects are extracted. This form of scientific misconduct is surprisingly common and is known variously as p-hacking, data dredging, or p-value fishing. It was poetically described as the fourth circle of Scientific Hell5:

Those who tried every statistical test in the book until they got a p value less than .05 find themselves here, in an enormous lake of murky water. Sinners sit on boats and must fish for their food. Fortunately, they have a huge selection of different fishing rods and nets (brand names include Bayes, Student, Spearman, and many more). Unfortunately, only one in 20 fish are edible, so the sinners in this circle are constantly hungry.
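The fisherman's arithmetic can be made concrete. As a hypothetical illustration (the editorial gives no code; the function name is mine), the family-wise error rate for k independent tests of true null hypotheses, each at level α, is 1 − (1 − α)^k, so trying 20 tests on pure noise yields roughly a 64% chance of at least one "significant" p-value:

```python
# Hypothetical illustration (not from the editorial): the family-wise
# error rate for k independent tests of true null hypotheses, each at
# significance level alpha, is 1 - (1 - alpha)^k.
import random

def family_wise_error_rate(k, alpha=0.05):
    """Probability of at least one false positive among k independent
    tests of true null hypotheses, each tested at level alpha."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:>2} tests -> P(at least one p < .05) = "
          f"{family_wise_error_rate(k):.3f}")
# With 20 tests, the analytic rate is about 0.642.

# Quick Monte Carlo sanity check for 20 tests on pure noise.
random.seed(1)
trials = 20_000
hits = sum(any(random.random() < 0.05 for _ in range(20))
           for _ in range(trials))
print(f"simulated (20 tests): {hits / trials:.3f}")
```

In other words, a fisherman with 20 rods is more likely than not to land an "edible" fish even when the lake is empty, which is exactly why trying every test in the book until one comes up significant is misconduct rather than method.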

These unhappy strategies have led to a spurious excess of statistically significant results in the literature6 and a crisis in reproducibility. In a survey of 1500 experimenters,7 about 70% of researchers who attempted to reproduce someone else's experiment failed; even more worryingly, more than 50% failed to reproduce their own experiments. To compound the problem, unsuccessful replications were about half as likely to be published as successful ones, presumably reflecting the existing publication bias. The two most common explanations offered by the scientists surveyed were selective reporting and pressure to publish.

Publication bias and selective reporting lead to an overestimation of treatment effects in medical research.2 In conjunction with citation bias and transmutations (whereby hypotheses are converted into facts through citation), claims acquire unfounded authority.3 Add the predatory behavior of some emerging journals with indifferent peer review, along with the misconduct of some researchers more concerned with journal impact than with honesty or service,8 and Ioannidis’ claim1 no longer seems an exaggeration. The outcome is an increasing mistrust of medical research, including optometry and ophthalmology, and an environment in which studies of dubious scientific merit8,9 are likely to become more and more common.

Science is a self-correcting process, with the ability to recognize and address its problems as they emerge.1–3,6–9 Measures are being introduced, albeit slowly, to prevent publication bias and avoid publication of p-hacked results. One such measure is pre-registration, whereby scientists submit hypotheses and plans for data analysis to a third party before performing experiments.7 Another important step has been taken by the American Statistical Association in releasing a statement on the professional use of p-values6 and inferential statistics. Additionally, some journals such as PLoS ONE10 and Scientific Reports11 have editorial policies that explicitly welcome papers with negative results and ask reviewers to assess methodological and analytical merit alone, leaving the research community to judge importance and significance after publication. Unfortunately, all these reforms will take time to influence the larger community.

Publication in high-impact journals often seems to be an end in itself, rather than a means of advancing our field. Although journal publishers, funding organizations, and institutions12 are working to mitigate the destructive effects of a publish-or-perish culture, we researchers remain the key: resisting the temptation to cut corners, promoting codes of ethical conduct, and adopting high-quality standards.12 In the long run, this benefits us all.

References

1. J.P.A. Ioannidis. Why most published research findings are false. PLoS Med, 2 (2005), e124.
2. P.J. Easterbrook, J.A. Berlin, R. Gopalan, D.R. Matthews. Publication bias in clinical research. Lancet, 337 (1991), pp. 867-872.
3. S.A. Greenberg. How citation distortions create unfounded authority: analysis of a citation network. Br Med J, 339 (2009), pp. 1-14.
4. M. Mimouni, M. Krauthammer, A. Gershoni, F. Mimouni, R. Nesher. Positive results bias and impact factor in ophthalmology. Curr Eye Res, 40 (2015), pp. 858-861.
5. Neuroskeptic. The nine circles of Scientific Hell. Perspect Psychol Sci, 7 (2012), pp. 643-644. [Adapted from a post originally published on the Neuroskeptic blog in November 2010.]
6. R.L. Wasserstein, N.A. Lazar. The ASA's statement on p-values: context, process, and purpose. Am Stat, 70 (2016), pp. 129-133.
7. M. Baker. 1,500 scientists lift the lid on reproducibility. Nature, 533 (2016), pp. 452-454.
8. D.P. Piñero. Scientific information overload in vision: what is behind? J Optom (2017).
9. J.M. González-Méijome. Science, pseudoscience, evidence-based practice and post truth. J Optom (2017).
10. PLoS ONE Guidelines for Reviewers. Accessed 02.03.18.
11. Scientific Reports Guide to Referees. Accessed 02.03.18.
12. The Findings of a Series of Engagement Activities Exploring the Culture of Scientific Research in the UK. Nuffield Council on Bioethics (2014).
Copyright © 2018. Spanish General Council of Optometry