From Researcher Degrees of Freedom to Questionable Research Practices

Researcher Degrees of Freedom

Throughout the course of model development and analysis, researchers make numerous decisions. The alternative choices available at each of these decision points are termed ‘researcher degrees of freedom’ (Wicherts et al. 2016; Simmons, Nelson, and Simonsohn 2011). These decisions are usually subjective and arbitrary, yet methodologically and substantively acceptable (Wicherts et al. 2016); collectively, however, they may substantially influence the final ‘analysis strategy’ (sensu Hoffmann et al. 2021) or ‘specification’ (sensu Simonsohn, Simmons, and Nelson 2020), and consequently the research findings and conclusions drawn from that analysis (Desbureaux 2021).

Alternative decisions about model assumptions and implementation can lead researchers down very different analytical paths, resulting in ‘parallel’ or ‘adjacent’ worlds with different interpretations, and potentially conflicting conclusions, originating from a single underlying dataset (Young and Stewart 2021; Liu, Althoff, and Heer 2020). The set of all plausible or reasonable decisions that a researcher could make over the entire analysis, from beginning to end, has collectively been termed the ‘garden of forking paths’ (Gelman and Loken 2013) or the ‘multiverse’, which consists of multiple universes of plausible analysis strategies (Steegen et al. 2016).
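
To make the combinatorial nature of the multiverse concrete, the sketch below enumerates a small, entirely hypothetical set of decision points that might arise when fitting an ecological model. The decision points, option names, and counts are illustrative assumptions, not examples drawn from the studies cited above.

```python
# A minimal, hypothetical sketch of how a handful of individually reasonable
# decisions multiply into a multiverse of analysis strategies. The decision
# points and options are illustrative assumptions, not drawn from any cited study.
from itertools import product

decision_points = {
    "outlier_handling":        ["keep", "remove_beyond_3sd", "winsorise"],
    "detection_covariate":     ["include", "exclude"],
    "spatial_autocorrelation": ["ignore", "autocovariate", "spatial_random_effect"],
    "temporal_extent":         ["all_records", "post_2000_only"],
}

# Each 'universe' is one complete path through the garden of forking paths.
universes = [dict(zip(decision_points, choices))
             for choices in product(*decision_points.values())]

print(f"{len(universes)} plausible analysis strategies "
      f"from only {len(decision_points)} decision points")
# 36 plausible analysis strategies from only 4 decision points
```

Even this toy example yields dozens of defensible analysis strategies; real analyses, with many more decision points, multiply far faster.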

Researcher degrees of freedom are not inherently problematic; rather, they are a product of the uncertainty that besets model development and analysis, whether due to incomplete theoretical understanding of the underlying ecological processes, or because modelling is an act of simplification and abstraction: there is always some degree of choice about which ecological processes and phenomena should be included in the model, and how they should be represented (Babel, Vinck, and Karssenberg 2019; Getz et al. 2017). Further multiplicity of analysis strategies stems from data preprocessing, model, parameter, and method uncertainty (Hoffmann et al. 2021).

Questionable Research Practices

When researchers face ambiguous decisions for which no single choice or strategy is dictated by scientific standards and consensus, they are likely to select the option that yields the most ‘favourable’ finding, with “convincing self-justification” (Simmons, Nelson, and Simonsohn 2011; Hoffmann et al. 2021). Researchers routinely execute and explore alternative analysis pathways in search of the most favourable (i.e. publishable) result, and then selectively report only one or a few preferred results (Liu, Althoff, and Heer 2020; Simmons, Nelson, and Simonsohn 2011).

Importantly, as Gelman and Loken (2013) note, this behaviour is rarely a consciously nefarious ‘fishing expedition’. Rather, researchers are motivated actors in a ‘publish or perish’ culture who are susceptible to cognitive biases; they willingly change their modelling assumptions when the results conflict with desired outcomes, sincerely and self-convincingly believing that the more favourable model is superior (Young and Stewart 2021).

Thus researcher degrees of freedom precipitate questionable research practices (QRPs) when they are exploited opportunistically and remain undisclosed (Bakker et al. 2018). Undisclosed and opportunistic use of researcher degrees of freedom artificially increases the probability of false-positive results, inflates effect-size estimates, and ultimately reduces the replicability of published findings, while encouraging overconfidence in the precision of results and exacerbating the risk of accepting and propagating false facts (Wicherts et al. 2016; Hoffmann et al. 2021; Roettger 2019; Nissen et al. 2016). Emerging evidence suggests that rates of QRPs in ecology and evolutionary biology are similar to those observed in other disciplines beset by reproducibility crises (Fraser et al. 2018).
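
The inflation of false-positive rates can be illustrated with a back-of-the-envelope calculation: if a true null effect is probed with k alternative specifications, each tested at alpha = 0.05, and only the most favourable is reported, the chance of at least one spuriously significant result is 1 - (1 - 0.05)^k. The sketch below is a minimal illustration under the simplifying (and hypothetical) assumption that the specifications are statistically independent; it is not a reanalysis of any of the cited studies.

```python
# Illustrative calculation (not a reanalysis of the cited studies): the chance of
# obtaining at least one spuriously significant result when k alternative
# specifications of a true null effect are each tested at alpha = 0.05 and only
# the most favourable is reported. Assumes, for simplicity, that the
# specifications are statistically independent.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:>2} specifications -> P(at least one p < .05) = {p_at_least_one:.2f}")
# Prints roughly 0.05, 0.23, 0.40, and 0.64 respectively.
```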

Preregistration mitigates researcher degrees of freedom

Preregistration is an open-science practice adopted in scientific disciplines that have begun to confront the ‘reproducibility crisis’. It aims to improve research transparency and mitigate questionable research practices within a study by distinguishing between genuinely a priori planned analyses and those that are not (Parker, Fraser, and Nakagawa 2019). Preregistration requires that the methods and analysis plan of a future study be registered on a secure and publicly accessible platform prior to any data collection; after submission, the plan can no longer be altered.

Despite enthusiastic uptake in some scientific disciplines, preregistration has seen little adoption in ecology and conservation, and many modellers and researchers engaged in non-hypothesis-testing research, both within and outside ecology, have eschewed preregistration on the grounds that it and existing templates are irrelevant because they are focused on null-hypothesis significance testing (e.g. McDermott et al. 2021; Prosperi et al. 2019).

Instead of throwing out the baby with the bathwater, this guide is premised on the idea that preregistration can and should be used in model-based research in ecology and conservation, but that its particular form should reflect the norms and practices of the research context in which it is applied, so that it adequately restricts the researcher degrees of freedom that uniquely emerge within a domain-specific methodology. The work presented in this guide aims to translate preregistration to model-based research in ecology and conservation by proposing an expanded view of preregistration that we term Adaptive Preregistration. We also present a template for preregistering model-based research in ecology and conservation.

References

Babel, Lucie, Dominique Vinck, and Derek Karssenberg. 2019. “Decision-Making in Model Construction: Unveiling Habits.” Environmental Modelling & Software.
Bakker, Marjan, Coosje Lisabet Sterre Veldkamp, Marcel A. L. M. van Assen, Elise Anne Victoire Crompvoets, How Hwee Ong, Brian A. Nosek, Courtney K. Soderberg, David Thomas Mellor, and Jelte M. Wicherts. 2018. “Ensuring the Quality and Specificity of Preregistrations.” PsyArXiv. https://doi.org/10.31234/osf.io/cdgyh.
Desbureaux, S. 2021. “Subjective Modeling Choices and the Robustness of Impact Evaluations in Conservation Science.” Conserv Biol, March. https://doi.org/10.1111/cobi.13728.
Fraser, Hannah, Tim Parker, Shinichi Nakagawa, Ashley Barnett, and Fiona Fidler. 2018. “Questionable Research Practices in Ecology and Evolution.” Edited by Jelte M. Wicherts. PLoS One 13 (7): e0200303. https://doi.org/10.1371/journal.pone.0200303.
Gelman, Andrew, and Eric Loken. 2013. “The Garden of Forking Paths: Why Multiple Comparisons Can Be a Problem, Even When There Is No Fishing Expedition or p-Hacking and the Research Hypothesis Was Posited Ahead of Time.” Department of Statistics, Columbia University.
Getz, Wayne M., Charles R. Marshall, Colin J. Carlson, Luca Giuggioli, Sadie J. Ryan, Stephanie S. Romañach, Carl Boettiger, et al. 2017. “Making Ecological Models Adequate.” Edited by Tim Coulson. Ecology Letters 21 (2): 153–66. https://doi.org/10.1111/ele.12893.
Hoffmann, S., F. Schönbrodt, R. Elsas, R. Wilson, U. Strasser, and A. L. Boulesteix. 2021. “The Multiplicity of Analysis Strategies Jeopardizes Replicability: Lessons Learned Across Disciplines.” R Soc Open Sci 8 (4): 201925. https://doi.org/10.1098/rsos.201925.
Liu, Yang, Tim Althoff, and Jeffrey Heer. 2020. “Paths Explored, Paths Omitted, Paths Obscured: Decision Points and Selective Reporting in End-to-End Data Analysis.” In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3313831.3376533.
McDermott, M. B. A., S. Wang, N. Marinsek, R. Ranganath, L. Foschini, and M. Ghassemi. 2021. “Reproducibility in machine learning for health research: Still a ways to go.” Sci Transl Med 13 (586). https://doi.org/10.1126/scitranslmed.abb1655.
Nissen, Silas Boye, Tali Magidson, Kevin Gross, and Carl T. Bergstrom. 2016. “Publication Bias and the Canonization of False Facts.” Elife 5: e21451. https://doi.org/10.7554/eLife.21451.001.
Parker, T., H. Fraser, and S. Nakagawa. 2019. “Making Conservation Science More Reliable with Preregistration and Registered Reports.” Conservation Biology, May.
Prosperi, Mattia, Jiang Bian, Iain E. Buchan, James S. Koopman, Matthew Sperrin, and Mo Wang. 2019. “Raiders of the Lost HARK: A Reproducible Inference Framework for Big Data Science.” Palgrave Communications 5 (1). https://doi.org/10.1057/s41599-019-0340-8.
Roettger, Timo B. 2019. “Researcher Degrees of Freedom in Phonetic Research.” Laboratory Phonology: Journal of the Association for Laboratory Phonology 10 (1). https://doi.org/10.5334/labphon.147.
Simmons, Joseph P., Leif D. Nelson, and Uri Simonsohn. 2011. “False-Positive Psychology.” Psychological Science 22 (11): 1359–66. https://doi.org/10.1177/0956797611417632.
Simonsohn, U., J. P. Simmons, and L. D. Nelson. 2020. “Specification Curve Analysis.” Nat Hum Behav, July. https://doi.org/10.1038/s41562-020-0912-z.
Steegen, S., F. Tuerlinckx, A. Gelman, and W. Vanpaemel. 2016. “Increasing Transparency Through a Multiverse Analysis.” Perspect Psychol Sci 11 (5): 702–12. https://doi.org/10.1177/1745691616658637.
Wicherts, J. M., C. L. Veldkamp, H. E. Augusteijn, M. Bakker, R. C. van Aert, and M. A. van Assen. 2016. “Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking.” Frontiers in Psychology 7: 1832.
Young, Cristobal, and Sheridan A. Stewart. 2021. “Multiverse Analysis: Advancements in Functional Form Robustness.” Cornell University, Ithaca, NY. Unpublished Working Paper.