Vol. 11. Issue 2.
Pages 67-68 (April - June 2018)
Editorial
Open Access
Publication bias and the chase for statistical significance
Iván Marín-Franch
Department of Ophthalmology, University of Alabama at Birmingham School of Medicine, Birmingham, AL, USA
Under a Creative Commons license

Most published research findings are false according to Ioannidis.1 As social animals, we are attracted, sometimes irresistibly, towards accepting sensational positive results and inclined to dismiss the negative ones — which may be just as important. We have also come to believe that the reliability of a result in medical research, including optometry and ophthalmology, should be expressed solely in terms of p-values.1 As a consequence, studies with statistically significant results are not only more likely to be published,2 they are more likely to be cited and promoted,3 a trend that seems to be alive and well in current eye research.4

Since a significant result serves researchers and journals better than a negative one, there is a bias (unconscious or otherwise) towards cherry-picking findings. To this end, statistics can be misused to manipulate data and analyses until significant effects are extracted. This form of scientific misconduct is surprisingly common and is known as p-hacking, data dredging, or p-value fishing; a minimal simulation after the quotation below illustrates how easily it manufactures significance. It was poetically described as the fourth circle of Scientific Hell5:

Those who tried every statistical test in the book until they got a p value less than .05 find themselves here, in an enormous lake of murky water. Sinners sit on boats and must fish for their food. Fortunately, they have a huge selection of different fishing rods and nets (brand names include Bayes, Student, Spearman, and many more). Unfortunately, only one in 20 fish are edible, so the sinners in this circle are constantly hungry.
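
The fishing image is just the arithmetic of multiple testing: under the null hypothesis each test has a 1-in-20 chance of reaching p < 0.05, so the probability that at least one of 20 independent tests does is 1 - 0.95^20 ≈ 0.64. The following sketch, not part of the original editorial, makes the point concrete; it assumes Python with numpy and scipy available, and the group sizes and number of outcomes are arbitrary choices for illustration.

    # p-hacking simulation: both groups are drawn from the SAME
    # distribution, so any "significant" difference is spurious.
    # Each simulated study measures 20 unrelated outcomes and
    # reports success if any one of them reaches p < 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies = 10_000   # simulated studies
    n_outcomes = 20      # tests "fished" per study
    alpha = 0.05

    false_positives = 0
    for _ in range(n_studies):
        # 20 independent null outcomes, two groups of 30 each
        a = rng.normal(size=(n_outcomes, 30))
        b = rng.normal(size=(n_outcomes, 30))
        pvals = stats.ttest_ind(a, b, axis=1).pvalue
        if (pvals < alpha).any():   # keep the "significant" one
            false_positives += 1

    # prints roughly 64%, matching 1 - 0.95**20
    print(f"Studies with at least one p < {alpha}: "
          f"{false_positives / n_studies:.0%}")

Pre-registering a single primary outcome, or correcting for the number of comparisons actually made, removes exactly this freedom to keep fishing.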

These unhappy strategies have led to a spurious excess of statistically significant results in the literature6 and a crisis in reproducibility. In a survey of 1,500 scientists,7 about 70% had tried and failed to reproduce someone else's experiment; even more worryingly, more than 50% had failed to reproduce their own experiments. To compound the problem, unsuccessful replications were about half as likely to be published as successful ones, presumably reflecting the existing publication bias. The two most common explanations offered by the scientists surveyed were selective reporting and pressure to publish.

Publication bias and selective reporting lead to an overestimation of treatment effects in medical research.2 In conjunction with citation bias and transmutations (where hypotheses are converted into facts through citation), they create unfounded authority for claims.3 Add the predatory behavior of some emerging journals with indifferent peer review, along with the misconduct of some researchers concerned more with journal impact than honesty or service,8 and Ioannidis’ claim1 no longer seems an exaggeration. The outcome is an increasing mistrust of medical research, including optometry and ophthalmology, and an environment in which studies of dubious scientific merit8,9 are likely to become more and more common.

Science is a self-correcting process, with the ability to recognize and address its problems as they emerge.1–3,6–9 Measures are being introduced, albeit slowly, to prevent publication bias and avoid publication of p-hacked results. One such measure is pre-registration, whereby scientists submit hypotheses and plans for data analysis to a third party before performing experiments.7 Another important step has been taken by the American Statistical Association in releasing a statement on the professional use of p-values and inferential statistics.6 Additionally, some journals such as PLoS ONE10 and Scientific Reports11 have editorial policies that explicitly welcome papers with negative results and request reviewers to assess methodological and analytical merit alone, leaving the research community to judge importance and significance after publication. Unfortunately, all these reforms will take time to influence the larger community.

Publication in high-impact journals often seems to be an end in itself, rather than a means to help advance our field. Although journal publishers, funding organizations, and institutions12 are working to mitigate the destructive effects of a publish-or-perish culture, we researchers remain the key: resisting the temptation to cut corners, promoting codes of ethical conduct, and adopting high-quality standards.12 In the long term, that benefits us all.

References
[1]
J.P.A. Ioannidis.
Why most published research findings are false.
PLoS Med, 2 (2005), e124, pp. 696-701
[2]
P.J. Easterbrook, J.A. Berlin, R. Gopalan, D.R. Matthews.
Publication bias in clinical research.
Lancet, 337 (1991), pp. 867-872
[3]
S.A. Greenberg.
How citation distortions create unfounded authority: analysis of a citation network.
Br Med J, 339 (2009), pp. 1-14
[4]
M. Mimouni, M. Krauthammer, A. Gershoni, F. Mimouni, R. Nesher.
Positive results bias and impact factor in ophthalmology.
Curr Eye Res, 40 (2015), pp. 858-861
[5]
Neuroskeptic.
The nine circles of Scientific Hell.
Perspect Psychol Sci, 7 (2012), pp. 643-644
[This article is adapted from a post originally published on the Neuroskeptic blog in November 2010: http://neuroskeptic.blogspot.com/2010/11/9-circles-of-scientific-hell.html] Accessed 02.03.18.
[6]
R.L. Wasserstein, N.A. Lazar.
The ASA's statement on p-values: context, process, and purpose.
Am Stat, 70 (2016), pp. 129-133
[7]
M. Baker.
1,500 scientists lift the lid on reproducibility.
Nature, 533 (2016), pp. 452-454
[8]
D.P. Piñero.
Scientific information overload in vision: what is behind?
[9]
J.M. González-Méijome.
Science, pseudoscience, evidence-based practice and post truth.
J Optom, 10 (2017), pp. 203-204
[10]
PLoS ONE Guidelines for Reviewers. http://journals.plos.org/plosone/s/reviewer-guidelines Accessed 02.03.18.
[11]
Scientific Reports Guide to Referees. https://www.nature.com/srep/journal-policies/referees Accessed 02.03.18.
[12]
The Findings of a Series of Engagement Activities Exploring the Culture of Scientific Research in the UK.
Nuffield Council on Bioethics, (2014),
Copyright © 2018. Spanish General Council of Optometry
Journal of Optometry