Student Representative Column

Opening the file-drawer: Scientific transparency through study preregistration

Using study preregistration to improve the reliability of psychological findings.

By Justin Strickland

An increasing number of reports in the scientific and popular press have questioned the reproducibility and reliability of published psychological findings (Ioannidis, 2005; Open Science Collaboration, 2015). For example, a recent effort led by the Open Science Collaboration attempted to replicate 100 experimental and correlational findings from top-tier psychological journals (Open Science Collaboration, 2015). Of these 100 replication attempts, only 39 percent were unequivocally successful. Although some have questioned the statistical power of these replication studies and the possibility of inflated Type II error rates (Maxwell et al., 2015), it is clear that reproducibility must be a priority for the psychological sciences.

Problems with reproducibility result, in part, from an overemphasis on publishing statistically significant results. This emphasis leads to the use (albeit often unintentional) of questionable research practices, such as post hoc hypothesis generation (“HARKing”, or hypothesizing after the results are known) and the pursuit of significant effects through flexible statistical analysis (“p-hacking”). This publish-or-perish mentality is particularly troubling for early-career psychologists, who face pressure to build an extensive publication record during graduate school and postdoctoral training in order to remain competitive for faculty positions and grant funding. Understanding and recognizing the reproducibility problem is hard in its own right, but developing and applying solutions may prove even more challenging.
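To make the false-positive problem concrete, the brief simulation sketch below illustrates one common form of p-hacking: analyzing several outcome measures and reporting whichever one reaches p < .05. The sample sizes, variable names and parameters are hypothetical and are not drawn from the studies cited above; the point is only that, even when no true effect exists for any outcome, the chance of finding at least one “significant” result rises well above the nominal 5 percent.

# Minimal simulation (hypothetical parameters): checking multiple outcome
# measures and reporting any p < .05 inflates the false-positive rate
# even though the null hypothesis is true for every outcome.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulations = 10_000   # simulated "studies"
n_per_group = 30         # participants per group
n_outcomes = 5           # outcome measures examined per study
alpha = 0.05

false_positives = 0
for _ in range(n_simulations):
    # Two groups drawn from the SAME distribution: no true effect exists.
    control = rng.normal(0, 1, size=(n_outcomes, n_per_group))
    treatment = rng.normal(0, 1, size=(n_outcomes, n_per_group))
    # "p-hack": run a t-test on every outcome and keep the smallest p-value.
    p_values = [stats.ttest_ind(control[i], treatment[i]).pvalue
                for i in range(n_outcomes)]
    if min(p_values) < alpha:
        false_positives += 1

print(f"Nominal false-positive rate: {alpha:.0%}")
print(f"Observed rate with {n_outcomes} outcomes tested: "
      f"{false_positives / n_simulations:.1%}")  # roughly 20-25%

Under these assumptions, roughly one in four purely null studies would yield a reportable “significant” effect. Preregistering a single primary outcome and a fixed analysis plan is designed to remove exactly this kind of flexibility.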

One way to improve research transparency and reduce the literature's bias toward significant findings is study preregistration (also known as registered reports at some journals). With study preregistration, a research group submits a study rationale, hypotheses, design and analytic strategy for peer review at a chosen journal before the study begins. A study judged to have sufficient theoretical and methodological quality is accepted in principle by the journal. When the completed study is submitted, it undergoes peer review again but cannot be rejected for nonsignificant results. Instead, the study may only be rejected for other reasons, such as protocol deviations without sufficient justification or unfounded interpretation of the data. The promise of preregistration is an increased focus on theoretical and methodological rigor in the scientific literature, a decreased bias against null findings and a reduced number of false-positive outcomes. Indeed, preregistration is not new to the broader research community; it has long been a requirement for large-scale clinical trials.

The number of journals accepting preregistered studies as a publication option is steadily increasing (a notable early adopter of the format is Drug and Alcohol Dependence). This expansion provides an exciting new opportunity for psychologists at any point in their careers. The hope is that, through preregistration, we as a field can improve the scientific rigor and conduct of psychological science. Preregistration is clearly not the only solution to poor reproducibility. However, rewarding the strength of a study's methodology and theory rather than the significance of its findings will contribute to the broader goals of increasing research transparency and improving the reliability of research outcomes.

References

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2(8), e124. Retrieved from http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/26315443

Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? The American Psychologist, 70(6), 487-498. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/26348332