tags: [ reproducibility metaresearch publication bias transparency open science ] categories: [ reading ]

Fidler 2017 Metaresearch in ecology

Fidler, F., Chee, Y. E., Wintle, B. A., Burgman, M. A., McCarthy, M. A., & Gordon, A. (2017). Metaresearch for Evaluating Reproducibility in Ecology and Evolution. BioScience. doi: 10.1093/biosci/biw159

The paper argues that ecology and evolution as disciplines are at risk of having low rates of reproducibility, aka a ‘reproducibility crisis’ as others have termed it. It sets out to identify the different ways in which ecology is likely to have a reproducibility problem (‘likely’ rather than ‘does’ because more research is needed to quantify the extent of the problem in ecology):

  1. Publication bias
  2. Questionable research practices (QRPs), compounded by a “publish or perish” culture.
  3. Incomplete reporting of methods and analyses
  4. Insufficient incentives to share materials, data, and code (but this is changing, especially with the Transparency and Openness Promotion (TOP) guidelines).

The authors then call for four types of “metaresearch projects” that would provide indicator measures of the likely reproducibility of ecology and evolution research:

  1. re-analysis projects
  2. quantifying publication bias
  3. measuring questionable research practices in ecology and evolution
  4. assessing the completeness and transparency of methodological and statistical reporting in journals

I think that my systematic review should speak to this last category.

Page 7: We propose extensive journal surveys: systematically recording statistical practices and methodology descriptions in published journal articles (substantially extending and updating Fidler et al. 2006) and also documenting the sharing and reusability of materials, codes, and data. Incomplete reporting is a barrier not only to direct replication and meta-analysis but also to direct re-analysis projects (in which no new data are collected but a published study’s data are subjected to independent statistical analysis following original protocols). Some aspects of statistical reporting accuracy can now be checked using automated procedures, such as statcheck (Nuijten et al. 2015). Such projects would help highlight the areas of journals’ statistical reporting policies that are most in need of attention.
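The statcheck reference is worth unpacking: the underlying check is simply recomputing a p-value from the reported test statistic and degrees of freedom and flagging mismatches. Below is a minimal Python sketch of that kind of consistency check, not the statcheck R package itself; the regular expression, tolerance, and example strings are my own illustrative assumptions.

```python
import re
from scipy import stats

# Toy version of the consistency check behind tools like statcheck:
# parse APA-style t-test reports, recompute p from t and df, flag mismatches
# beyond a rounding tolerance. Handles only exactly reported p-values (p = ...).
REPORT_RE = re.compile(r"t\((?P<df>\d+)\)\s*=\s*(?P<t>-?\d+\.?\d*),\s*p\s*=\s*(?P<p>\d?\.\d+)")

def check_t_reports(text, tol=0.01):
    """Return (reported string, recomputed p, consistent?) for each t-test found."""
    results = []
    for m in REPORT_RE.finditer(text):
        df = int(m.group("df"))
        t = float(m.group("t"))
        p_reported = float(m.group("p"))
        p_recomputed = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value
        results.append((m.group(0), p_recomputed, abs(p_recomputed - p_reported) <= tol))
    return results

if __name__ == "__main__":
    sample = "We found an effect, t(28) = 2.15, p = .04, and no effect, t(30) = 1.10, p = .01."
    for reported, p, ok in check_t_reports(sample):
        print(f"{reported!r:45} recomputed p = {p:.3f} {'OK' if ok else 'INCONSISTENT'}")
```

statcheck itself does this in R across whole articles and for several test types (t, F, r, chi-square, z); the sketch only handles t-tests, but the logic is the same.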

Parallel offences in decision support

Although the paper is largely focused on reproducibility issues for hypothesis testing, in particular null hypothesis significance testing (NHST), the authors note that there are other reproducibility issues to consider. For example:

Page 5: Outside the domain of hypothesis testing (in either its Bayesian or Frequentist form), there are other types of reproducibility issues to consider. Conservation science, for example, can involve elements of decision theory, cost-effectiveness analysis, optimization, and scientific computing methods. Computational reproducibility (see box 1; Stodden 2015) of such research is equally crucial for detecting errors, testing software reliability, and verifying its fitness for reuse (Ince et al. 2012).

I would add expert judgment to this list above.

So a task for me is to think about parallel offences in decision-support frameworks. Non-NHST reproducibility issues need further treatment, given the prevalence of non-NHST frameworks in the conservation science and applied ecology toolboxes.
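For my own notes, the most basic computational reproducibility check for a decision-support analysis is: re-run the archived code on the archived data and confirm the output matches the published result. A minimal sketch, assuming a hypothetical `analysis.py` script that writes `results.csv` (both names are illustrative, not from the paper):

```python
import hashlib
import subprocess
from pathlib import Path

def file_sha256(path):
    """Hash an output file so re-runs can be compared byte-for-byte."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_reproducibility(script="analysis.py", output="results.csv", archived_hash=None):
    """Re-run the archived analysis script and compare its output to the archived result."""
    subprocess.run(["python", script], check=True)  # re-run the published analysis
    new_hash = file_sha256(output)
    if archived_hash is None:
        print(f"No archived hash supplied; record this one: {new_hash}")
        return None
    reproduced = new_hash == archived_hash
    print("Output reproduced byte-for-byte." if reproduced else "Output differs from the archived result.")
    return reproduced
```

A byte-for-byte comparison is the strictest possible criterion; in practice stochastic optimisation or floating-point differences across platforms may require comparing results within a tolerance instead.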