| tags: [reproducibility, ecology, decision tools] categories: [meeting]

Meeting Hannah

See Steve’s guidelines around meta-analysis from medicine. Hannah’s criteria. Conservation Biology guidelines. The guidelines / checkpoints only address transparency — no actual checking of reproducibility.

How do decisions vs. models differ? What is particular about my problem context? Values, preferences. Process vs. decision model — the problem is often split in two. Decision tools are more complex than plain models: they involve not just a model of the system / domain (including decision elements), but also capturing objectives, generating alternatives, eliciting expert judgment, and involving many stakeholders.

What facet of reproducibility do I want to test? 1. Computational / analytical reproducibility? In theory this should be easy — a different team tries to reproduce the result using the exact methods and procedures of the original study. 2. Same problem, different team: do you get the same decision outcome recommended, even if the tool/model is different?

How reproducible they are might depend on the decision context at hand. Some decisions will be inherently more robust to the choice of tool than others; for some, the differences will be more marginal.
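To make the tool-robustness question concrete, here is a toy sketch: two different decision rules applied to the same problem framing. Everything here is invented for illustration — the alternatives, objectives, scores, and weights are hypothetical, and the two rules (additive weighted sum vs. maximin) just stand in for "different tools".

```python
# Hypothetical toy example: do two decision tools, given the SAME problem
# framing, recommend the same alternative? All names and numbers invented.

# Shared framing: alternatives scored against objectives (0-1 scale),
# with weights on each objective elicited from stakeholders.
scores = {
    "cull":        {"persistence": 0.6, "cost": 0.9, "acceptability": 0.3},
    "translocate": {"persistence": 0.8, "cost": 0.4, "acceptability": 0.7},
    "do_nothing":  {"persistence": 0.2, "cost": 1.0, "acceptability": 0.8},
}
weights = {"persistence": 0.5, "cost": 0.2, "acceptability": 0.3}

def weighted_sum(scores, weights):
    """Tool A: additive multi-criteria score; best total wins."""
    return max(scores, key=lambda a: sum(weights[o] * s
                                         for o, s in scores[a].items()))

def maximin(scores, weights):
    """Tool B: pick the alternative whose worst weighted objective is best."""
    return max(scores, key=lambda a: min(weights[o] * s
                                         for o, s in scores[a].items()))

rec_a = weighted_sum(scores, weights)   # "translocate"
rec_b = maximin(scores, weights)        # "do_nothing"
print(rec_a, rec_b, rec_a == rec_b)
```

In this made-up case the two tools disagree: the additive rule rewards translocation's strong overall profile, while maximin is dragged down by its poor cost score. That is the sense in which a decision could fail to be robust to the tool used, even with identical objectives, alternatives, and weights.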

Need to come up with my own set of criteria — not just for reproducibility in ecology, but for decision tools / modelling.

Hannah’s work on questionable research practices in ecology. There's the issue of people trying a model and then switching to another because the first one is crap — though some people do this legitimately, to check whether the result is robust across models. Similar to detectability studies, you could get multiple groups of experts addressing the same problem: is it an ‘all-roads-lead-to-Rome’ situation? My hypothesis: if you nut out the conceptual model correctly, then it shouldn’t matter what model / tool you use to find the decision outcome (unless you use the tool / method out of context and the assumptions don’t match). BUT that step isn’t really given much time / thought in the literature, especially because in the decision science / ecology literature most papers are proof-of-concept papers, and a lot of the actual applications exist in the grey literature.