Have been trying to determine the scope of the project. Want to look at the reproducibility of decision support tools (DSTs).
What elements of the existing reproducibility literature apply to this context?
Do we need a new set of criteria? Much of the reproducibility literature, especially in ecology, focuses on null-hypothesis significance testing (NHST). This statistical approach is rarely used in decision science itself; instead, decision science often relies on the outputs of other studies that do use it.
But is focusing on DSTs in ecology even warranted? What distinguishes DSTs from ecological models in general?
Are some problems more reproducible than others? More ‘robust’? My hypothesis: if substantial time is spent specifying objectives, values, and preferences, and developing the conceptual model of the system, then the decision outcome will be more robust. That is, the overarching decision framework in which DST development is embedded matters, and leads to more transparent and reproducible decisions.
Hence, the question to test: are DSTs developed within a decision framework, such as structured decision making (SDM), more reproducible than those that aren’t?
And what is the COST of reproducibility? There is a trade-off between time and resources on one hand, and reproducibility on the other.
First chapter idea:
A data-based / quantitative review of reproducibility issues in the context of DSTs for ecology:
- identify the magnitude of the reproducibility crisis for DSTs in ecology
- Are DSTs developed in a decision framework more reproducible? (How would we test this?)
- Generate criteria for reproducibility
- Really nut out how reproducibility issues are unique to the context of DSTs in ecology
- Investigate systematic review methods: PRISMA, Cochrane reviews, and the Campbell Collaboration (the social-science equivalent)
Fiona: see Victoria’s paper on expert elicitation, and her coding list. Looking at what people report in journals is an easier place to start. PRISMA for systematic reviews; use double coding. Cochrane reviews are another gold standard. The Campbell Collaboration is the social-science equivalent; check their guidelines.
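On double coding: if two reviewers independently code each paper against the reproducibility criteria, inter-rater agreement can be checked before reconciling disagreements. A minimal sketch of Cohen's kappa (one standard agreement statistic; the coded question and values below are hypothetical placeholders):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: expected overlap given each coder's marginal label frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codes for "does the paper report data availability?" on 10 papers.
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

Values above roughly 0.6–0.8 are conventionally read as substantial agreement; a low kappa would suggest the coding criteria need tightening before the full review.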