tags: [meeting] categories: [meeting]

Meeting, Feb 15 2018


-> working towards the research proposal; also on the first chapter

Reading:

  • trying to identify difficulties / issues in modelling applied ecological problems in general, e.g. projecting the performance of conservation actions, or building the evidence base in restoration ecology as new techniques emerge.

-> aims: trying to identify why reproducibility is important for ecological models / decision-support tools (DSTs)

defining reproducibility for ecological models (still need to justify this)

  • Often there is an issue where the original data behind a paper can’t be obtained (though this is becoming less of an issue in ecology)
  • Consequently, a modeller must make many assumptions / redo a lot of data analyses when these data aren’t available…
  • even if somebody else repeated / replicated this AND built the same model… how do we know these assumptions are correct? What if they’re just plain wrong, and the decisions the models recommend are therefore sub-optimal or, at worst, even have negative outcomes?
  • what good is a reproducible model if the decision outcomes are sub-optimal / negative?
  • but maybe the fact that something is reproducible / replicable lends further evidence to the accuracy of the causal assumptions / parameterisation underlying some particular model?

First chapter:

  • have an action plan (see post “first chapter”)

My target audience: modellers / analysts?


  • Am I examining just models… or the entire decision-making process, from the specification of objectives, for example? My feeling is that there are a lot of papers focusing on good / robust modelling practices within ecology / conservation in general, as well as articulating why models are necessary in decision-making.
  • I think it could be useful to consider the entire process of developing the decision tool as a whole, because aspects of the whole process may influence the final recommended decision. For example, Law et al. [-@Law:2017ia] argue that decisions are sensitive to the scenarios selected during scenario analysis. It is important that scenarios are plausible, to avoid highly positive / negative outcomes that may be unlikely.

  • A. studies / models that just highlight the risk / probability / uncertainty of different actions, and then highlight where activities should be implemented (e.g. Semper-Pascual, 2018, mapping extinction debt). These often give general heuristic rules based on their analyses. Common in land-use / conservation planning.

  • B. vs. full-blown decision models: models / tools that simulate / predict a system under different management alternatives.

Question: how common are “full-blown decision models” in applied ecology? Category A is definitely more common, particularly in land-use / conservation planning. Decisions might depend on their outputs / recommended heuristics; however, the full decision-making process isn’t reproducible… there is no way of formally evaluating decisions based on such studies.

End-user vs. analyst / modeller. Of course the modeller is thinking about the end-user as well.

Broader process / approach for deriving the tool vs. technical issues around tool development. Most published decision-support-tool papers are proof-of-concept. What about applications? These are mostly done outside of the published literature.

First paper: here’s the problem, and a solution… I want to be solution-focused. Question: are DSTs that are embedded in a broader process for their development more reproducible / better than those that aren’t? What are my criteria? Start thinking about hypotheses, and the explanations behind them. My theory is that YES, they are more reproducible: because the objectives and the problem context are clearly articulated, and the alternatives are carefully considered, the causal understanding of the system is also more clearly specified. Such processes could include SDM, decision theory, risk analysis (ISO), decision analysis, or others. Start searching for decision-support tools.

See the paper on which Angela is first author and Libby is a co-author: SDM for restoration.

Cost of reproducibility: diminishing returns. The goal should be for others to reproduce your results using your data, your code, AND a little help from you in the form of documentation (Matt Pennell: https://methodsblog.wordpress.com/2016/08/02/reproducibility/).
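The data / code / documentation goal could be sketched in miniature as a single seeded analysis script. Everything here is a hypothetical toy (the `run_analysis` function, the `survey` values, and the bootstrap are my own illustration, not anything from the chapter):

```python
# Minimal sketch (hypothetical) of the "data + code + documentation" idea:
# a script whose result anyone can regenerate exactly, because the input
# data are archived with the code and the stochastic step is seeded.
import random

SEED = 42  # fixed seed: the bootstrap gives identical results on every re-run


def run_analysis(abundances, seed=SEED):
    """Toy analysis: mean abundance plus a seeded bootstrap ~95% interval."""
    rng = random.Random(seed)
    mean = sum(abundances) / len(abundances)
    boot = sorted(
        sum(rng.choices(abundances, k=len(abundances))) / len(abundances)
        for _ in range(1000)
    )
    return mean, (boot[24], boot[974])  # 2.5th and 97.5th percentiles


# 'survey' stands in for a data file archived alongside the script;
# the README (the "little help" in the quote) would document its provenance.
survey = [4.0, 7.0, 5.0, 6.0, 3.0, 8.0]
mean, ci = run_analysis(survey)
```

The point of the seeded `random.Random` instance is that a second person running the same script on the same archived data gets bit-identical intervals, which is the minimal bar the Pennell post argues for.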

Overall thesis:

  1. Is there a problem with the reproducibility of ecological models for decision support? What is the magnitude of that problem?
  2. What are the pinch-points, the technical issues?
  3. What are the solutions for the issues identified in 2?

3-month mark: have criteria ready for what constitutes a reproducible DST. Some initial work under way.