tags: [systematic review, decision support, ecology, conservation] categories: [systematic review]

systematic review methodologies for conservation and ecology

Systematic Review Methods

Nakagawa and Poulin [-@Nakagawa:2012fl] recommend following the PRISMA statement, at least for meta-analyses in ecology and evolution.

PRISMA

Checklist here: http://www.prisma-statement.org/documents/PRISMA%202009%20checklist.pdf

Cochrane Review

The manual breaks the review down into two phases: 1. developing the review protocol; 2. conducting the review.

Writing the review protocol:

  • Formulating review questions, predefining objectives.
  • Medicine focused: emphasis on reporting adverse effects and on consumer needs.
  • Choosing ‘outcomes’ of interest (mandatory).
  • Assessing risk of bias of studies included in the review (mandatory). This is geared towards randomised studies; what would this look like for DSTs? Lack of an overarching decision-process framework?
  • Assessing statistical heterogeneity (mandatory). Again, how would this apply to the results of DSTs?

I don’t think this is going to be appropriate for my review. Might be okay if you’re reviewing ecological studies focusing on the dynamics / understanding of some ecosystem. But not so useful for applied ecological work.

Campbell Systematic Review

Examples of Systematic Reviews of Decision Support Tools

Pullin, A. S. & Stewart, G. B. (2006) Guidelines for Systematic Review in Conservation and Environmental Management. Conservation Biology 20, 1647–1656.

Presents a summary of newly developed guidelines specifically for systematic review and dissemination in conservation and environmental management (www.cebc.bham.ac.uk). They based their guidelines on existing models (e.g. Cochrane reviews), then tested and modified them for use in this context. They divide the review process into three phases:

  1. Question Formulation
  2. Conducting the Review
  3. Reporting and Dissemination of results

  1. Question formulation

Emphasis on developing a practice / policy-relevant question; the authors advocate consultation with decision-makers / stakeholders. Think this is less relevant to me, because we’re not looking at a single decision problem? E.g. effectiveness of Rhododendron control methodologies in Europe.

Break the question down into sub-elements:

  a. Subject: unit of study (ecosystem, habitat, species).
  b. Intervention: proposed management regime, policy, or action.
  c. Outcome: e.g. proposed objectives of the management intervention, and their performance measures.
  d. Comparator: whether the intervention is compared with no intervention, or alternative interventions are compared with each other.

Developing the protocol:

  • Develop a document that guides the review. Make it available for scrutiny and comment at an early stage.
  • Search strategy: constructed from search terms extracted from the subject, intervention and outcome elements of the question. Some good guidance here; you need high sensitivity at the expense of specificity in searches, because “ecology lacks mesh-heading indexes and integrated databases” such as those in medicine and public health. (Why ecology should start working towards semantic / annotated informatics!!) See the sketch after this list.
  • Will therefore typically see large numbers of references rejected in ecology.
  • Ensure the strategy is documented such that it is repeatable and transparent, so that its validity can be judged by readers.
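As a rough, hypothetical sketch of what assembling such a search string could look like (terms borrowed from the Rhododendron control example above; they are illustrative, not a worked-out search strategy):

```python
# Hypothetical sketch only: building a high-sensitivity Boolean search string
# from the subject, intervention and outcome elements of the review question.
# The term lists are illustrative (Rhododendron control example), not a tested
# search strategy.

subject = ["Rhododendron ponticum", "Rhododendron", "invasive shrub"]
intervention = ["control", "eradication", "removal", "herbicide", "cutting"]
outcome = ["abundance", "cover", "regeneration", "cost"]

def or_block(terms):
    """Join synonyms with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# High sensitivity at the expense of specificity: broad synonym lists within
# each element, combined with AND across elements.
search_string = " AND ".join(or_block(block) for block in (subject, intervention, outcome))
print(search_string)
```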
  2. Conducting the review

a. searching

Cast a WIDE net. Want to minimise publication bias, therefore include published and unpublished data. HOW? Hand-searching of specific sources!! Local databases with a regional focus. MUST ensure the repeatability of search methods, however.

b. selecting relevant data

Conservative approach: “retain data if there is reasonable doubt over its relevance”.

  1. Examine the title first.
  2. Title and abstract (employing a second reviewer on a random sub-sample, ensuring decisions are comparable “by performing a kappa analysis, which adjusts the proportion of records for which there was agreement by the amount of agreement expected by chance alone. If comparability not achieved, then the criteria should be further developed and the process repeated”). See the kappa sketch after this list.
  3. Remaining articles are viewed in full to determine whether they contain relevant and usable data. Record whether the full text was able to be obtained or not. Repeat the independent checking of a subsample by kappa analysis. Make short lists of articles and data sets available for stakeholders and subject experts, who should be invited to identify relevant data sources they believe are missing from the list.
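A minimal sketch of that kappa calculation for two reviewers’ include/exclude decisions on a sub-sample (Cohen’s kappa computed by hand; the decisions below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Observed agreement adjusted for the agreement expected by chance alone."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each reviewer's marginal include/exclude frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# Invented decisions on a random sub-sample of titles and abstracts.
reviewer_1 = ["include", "exclude", "include", "exclude", "exclude", "include"]
reviewer_2 = ["include", "exclude", "exclude", "exclude", "exclude", "include"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # 0.67 for this example
```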

c. assessing quality

What level of confidence should be placed in datasets? Aim to reduce systematic errors or bias. They use a “quality hierarchy” / “hierarchy of methodology”, which involves assessing four sources of systematic bias: (1) selection bias, (2) performance bias, (3) detection bias, (4) attrition bias. But how do these apply to decision support tools?

d. data extraction

Narrative synthesis: tables of study / population characteristics, data quality, and relevant outcomes (all defined a priori).

Quantitative analysis: extracting variables… Well, what variables would I extract??
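Purely as a placeholder to think with, here is a hypothetical extraction-record template mirroring the a priori categories above (study characteristics, data quality, outcomes); the actual variables for a DST review are still an open question:

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    """Hypothetical data-extraction template; field names are placeholders."""
    study_id: str
    study_characteristics: dict = field(default_factory=dict)  # e.g. system, region, study design
    quality_assessment: dict = field(default_factory=dict)     # e.g. the four bias categories
    outcomes: dict = field(default_factory=dict)                # reported performance measures

# Illustrative usage with made-up values.
record = ExtractionRecord(study_id="example-study-1")
record.study_characteristics["design"] = "before-after comparison"
record.quality_assessment["selection_bias"] = "unclear"
record.outcomes["objective_met"] = True
```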

  3. Reporting and dissemination

A number of systematic reviews exist in the medical literature investigating decision support tools in a clinical / surgical context.

[@Witteman2015] Witteman, H. O., Dansokho, S. C., Colquhoun, H., Coulter, A., Dugas, M., Fagerlin, A., Giguere, A. M., Glouberman, S., Haslett, L., Hoffman, A., Ivers, N., Légaré, F., Légaré, J., Levin, C., Lopez, K., Montori, V. M., Provencher, T., Renaud, J.-S., Sparling, K., Stacey, D., et al. (2015) User-centered design and the development of patient decision aids: protocol for a systematic review. Syst Rev. 4, 11.

I think this is a pre-registration paper describing their planned review. They developed their own protocol for a systematic review on the development of patient decision aids; specifically, they were looking for empirical evidence for including patients in developing the tools. They used a combination of Cochrane Handbook guidelines and followed the PRISMA methods for reporting. They developed their research questions and data extraction plan by turning to the user-centred design conceptual framework.

Inclusion / exclusion criteria: they clustered their articles into three groups:

  1. articles describing development of a DT in this context;
  2. articles explicitly describing a user/human-centred approach (this is arguably akin to my question about DTs developed within an overarching DS framework);
  3. articles describing evaluation of a tool - which development practices are associated with better outcomes.

Assessing the quality of each article: they followed mixed-methods review guidelines.

Evidence synthesis and analysis:

  • Question 1: descriptive statistics, i.e. frequency of use of different practices in development.
  • Question 2: as above, but for those articles focusing on user-centred design.
  • Question 3: used some descriptive statistics, but also tried to develop their own measures of “better outcomes”.
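For the question 1 style of synthesis, the descriptive statistics could be as simple as frequency counts over coded development practices; a toy sketch (the practice labels are invented for illustration):

```python
from collections import Counter

# Toy example: development practices coded per article (labels are invented).
practices_per_article = [
    ["needs assessment", "prototype testing"],
    ["prototype testing"],
    ["needs assessment", "expert review", "prototype testing"],
]

counts = Counter(practice for article in practices_per_article for practice in article)
n_articles = len(practices_per_article)
for practice, count in counts.most_common():
    print(f"{practice}: {count}/{n_articles} articles ({100 * count / n_articles:.0f}%)")
```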

References