| tags: [ reproducibility expert judgment decision analysis ] categories: [reading ]

French 2012 meta-analysis for expert judgment

French, S. (2012). Expert Judgment, Meta-analysis, and Participatory Risk Analysis. Decision Analysis, 9, 119–127.

Problem / background and paper goals

Page 2

[@French:2012di] describes three types of expert elicitation problems. The third forms the focus of the paper:

  1. The expert problem: where the decision-maker does not have the domain knowledge and elicits judgments from a group of experts.
  2. The group decision problem: where the expert group itself is jointly responsible for the decision.
  3. The textbook problem: where experts give judgments for others to use in the future, for as yet undefined circumstances, without the focus of a specific decision problem. In this instance, the issue is not just that there is no decision context, but that the judgments will be used for other, as yet undefined decision contexts.

The group is simply required to give their judgments for others to use in future, as yet undefined circumstances. Thus, the emphasis here is on reporting their judgments in a manner that offers the greatest potential for future use.

At present, there is no agreed robust methodology for the textbook problem. French explores the following two themes in the paper:

  1. the need to report expert studies in a “scientific” manner,
  2. the technical issues faced by meta-analyses specifically designed to draw together previously published expert studies.

There are established methodologies for meta-analysis, but these focus on drawing inferences from several published empirical studies. Before this review, there were no established methodologies for meta-analyses of expert judgment.

These meta-analytic techniques for expert studies should differ considerably from those developed for combining empirical studies, because expert judgment data and empirical data have very different qualities.

Page 3

An important element of the textbook problem to consider is that:

Reports and expert judgments from studies for earlier decisions may be reused in later analyses to support a different decision, a factor that may confound some of the issues in dealing with the expert and group decision problems with those relating to the textbook problem.

I think this is a pretty common feature of decision-support tools, too. We use either empirical data, expert judgments from earlier decisions, or even other studies to support a different decision. Is there an inherent problem with this?


Existing standards / methodologies for reporting expert judgments

At the time of the review, the standards below were the only significant guidance on how this might be done.

  • Scrutability/accountability. All data, including experts’ names and assessments, and all processing tools should be open to peer review, and results must be reproducible by competent reviewers.
  • Empirical control. Quantitative expert assessments should be subject to empirical quality controls.
  • Neutrality. The method for combining and evaluating expert opinion should encourage experts to state their true opinions and must not bias results.
  • Fairness. Experts should not be prejudged prior to processing the results of their assessments.

In contrast, the research community and scientific journals have developed and enforced a wide range of principles to govern the peer review, publication, and use of empirical studies, alongside which has grown a recognition of the importance of evidence-based decision making (Pfeffer and Sutton 2006, Shemilt et al. 2010). The latter developments began within medicine, but the imperatives of basing decisions on evidence are now changing thinking in many domains.


Methodologies of meta-analysis for expert judgment

Methodologies of meta-analysis are well established for drawing inferences from multiple empirical studies. However, methods for expert judgment data have not yet been developed.

French discusses the argument that evidence-based decision making should draw on supposedly “objective”, i.e. empirical, data. Although for some, expert opinion on its own is not considered a reliable source of evidence, the concurrence of experts is nonetheless a “recognised form of model validation”.

Moreover, expert input is common in model selection and in providing parameter values, at least in economic decision models; see Cooper et al. (2007). This is an important consideration for our context - exactly where in the decision model development process does expert judgment ‘creep’ in?

Page 4

Following my Bayesian leanings, I would argue that expert judgment should inform a meta-analysis provided that the relative quality of the expert input, compared with that of empirical data, is appropriately assessed. And therein lies the rub: how do we assess that relative quality? We need meta-analytic techniques that draw together both empirical and expert judgmental data.
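
To make the “relative quality” point concrete, here is a minimal sketch (my own illustration, not from the paper) of a conjugate normal-normal update in which an expert’s elicited estimate acts as a prior and its assumed standard deviation encodes how much weight it carries relative to some hypothetical empirical data. All numbers are invented.

```python
import numpy as np

# Toy illustration (not from French 2012): combine an expert's elicited
# estimate with empirical data via a conjugate normal-normal update.
# The expert's assumed sd encodes the "relative quality" of their judgment.
expert_mean, expert_sd = 0.30, 0.10        # elicited estimate and assumed quality
data = np.array([0.42, 0.38, 0.45, 0.40])  # hypothetical empirical observations
data_sd = 0.05                             # assumed measurement error

prior_prec = 1 / expert_sd**2              # precision = 1 / variance
data_prec = len(data) / data_sd**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * expert_mean + data_prec * data.mean()) / post_prec

print(f"posterior mean = {post_mean:.3f}, posterior sd = {post_prec**-0.5:.3f}")
# Widening expert_sd down-weights the expert relative to the data, and vice versa.
```

The hard part, of course, is not this arithmetic but justifying the expert_sd term in the first place.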

However, French argues that, given the differences between meta-analyses for expert judgment and for empirical studies (Table 1), almost all of the meta-analytic techniques developed for empirical data are of questionable value for expert judgment data, mostly because of the correlation between experts (Table 1). The exceptions are forest plots and the Excalibur software used during expert elicitation.
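
A quick back-of-the-envelope illustration (mine, not the paper’s) of why between-expert correlation matters: pooling n equally correlated estimates shrinks the uncertainty far less than pooling n independent studies would. The sigma and rho values below are arbitrary.

```python
# Toy illustration (not from French 2012): the variance of the simple average
# of n equally correlated expert estimates, each with sd sigma and pairwise
# correlation rho, is sigma^2 * (1 + (n - 1) * rho) / n.
sigma, n = 0.2, 10
for rho in (0.0, 0.3, 0.7):
    sd_mean = (sigma**2 * (1 + (n - 1) * rho) / n) ** 0.5
    print(f"rho = {rho:.1f}: sd of pooled estimate = {sd_mean:.3f}")
# rho = 0.0 gives ~0.063 (the usual 1/sqrt(n) gain); rho = 0.7 gives ~0.171,
# barely better than a single expert at 0.2.
```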

Page 5

On questioning the notion that expert evidence is not “objective”:

We have discussed expert judgment studies as if they are based solely on expert judgment. This is not always the case. In some cases, experts consult their computer models or do some back of the envelope calculations before giving an opinion. Either way, their probability statements may be the complex result of combining their personal opinions about parameters, the validity of models, and modeling error. Moreover, the models are a complex expression of the experts’ selection of human knowledge.

What is expert knowledge? A complex synthesis of different sources of knowledge.


Case studies:

Issues when performing expert judgment elicitation in the textbook problem:

  • lack of provenance for probability estimates

Page 6

Page 7

All subpanels of the IPCC have standards for their reporting, involving:

  • uncertainty modelling
  • elicitation
  • communication

However:

The guidance material is, however, almost entirely silent on aggregating expert judgments per se. Aggregation is achieved via ensembles (mixtures) of models.

Expert judgment may be used to provide probability distributions encoding uncertainties in each model, and then models that predict the same quantities are drawn into ensembles, which in this context may be viewed as an approach to meta-analysis.
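
As a concrete (and entirely hypothetical) reading of that quote, an ensemble can be sampled as a mixture distribution in which expert judgment supplies both each model’s uncertainty and the mixture weights. The model names, weights, and distributions below are invented for illustration.

```python
import numpy as np

# Toy illustration of the ensemble-as-mixture idea (all values hypothetical):
# each model predicts the same quantity with its own uncertainty, and expert
# judgment supplies the mixture weights.
rng = np.random.default_rng(0)

model_preds = {            # (mean, sd) of each model's predictive distribution
    "model_A": (2.0, 0.5),
    "model_B": (2.6, 0.8),
    "model_C": (1.7, 0.4),
}
expert_weights = {"model_A": 0.5, "model_B": 0.3, "model_C": 0.2}  # elicited

names = list(model_preds)
weights = [expert_weights[m] for m in names]
picks = rng.choice(names, size=10_000, p=weights)       # choose a model per draw
samples = np.array([rng.normal(*model_preds[m]) for m in picks])

print(f"ensemble mean = {samples.mean():.2f}, "
      f"90% interval = ({np.quantile(samples, 0.05):.2f}, "
      f"{np.quantile(samples, 0.95):.2f})")
```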

Could you describe a decision tool as a form of meta-analysis then?


Future directions for reporting and meta-analyses of expert judgments:

Page 8

Indeed, I might argue that the absence of any serious debate about the validity of Cooke’s (1991) principles suggests that as a community of risk and decision analysts we do not care. Nor do we know how to learn from several earlier studies.

Thus, two strands of development are needed: 1. reporting standards for expert judgment studies that allow them to be audited and evaluated, and 2. meta-analytic methodologies for expert judgment data.

1. Reporting Standards

How do we create an archive for expert judgment studies?

In the case of empirical studies, there are plenty of dubious ones out there, but there are also peer-reviewed, quality-assured journals that are much more trustworthy, carefully indexed, and backed up by archives of the underlying data sets. We need to create the same sort of archive for expert judgment studies.

Cooke’s principles provide a starting point… but:

His ideas need deep discussion, possible modification and extension, and then adoption by us all in our protocols for designing, running, and reporting expert judgment studies.

In establishing the principles we need to recognize that it must be possible to audit studies against them so that peer-review methodologies can be defined and implemented. The principles must be operational. We also need to establish one or more archives in which studies can be deposited: Cooke and Goossens (2007) provided a prototype, but it may need modification, and there certainly needs to be some organizational ownership of such archives so that their future existence has some assurance.

Meta-analytic methods

Turning to the other strand of developments, we need to develop techniques that, in a specific context, allow us to select relevant expert studies and then produce a meta-analysis that establishes what may reasonably be learned from these. Table 1 indicates that we cannot simply adopt standard meta-analytic techniques, but will need to develop some afresh.

And these procedures will need authority, perhaps provided by some organization such as the Cochrane Collaboration.


Page 9

We might also look to current developments in combining scenario planning approaches with decision analysis (Wright and Goodwin 1999; Montibeller et al. 2006; Stewart et al. 2010, 2012).

We might construct scenarios such that each reflects the import of one of the selected expert judgment studies. This would avoid the complex issue of aggregating judgments across several studies, albeit at the cost of placing the onus of learning from the several scenarios on the intuition and understanding of the decision makers.


The textbook problem or the archive problem? On naming the issue:

Maybe the “archive problem” would be a better term. It would recognize that the issues that we are discussing relate to formal expert opinion and how it should be carefully documented, archived, and subsequently used.


Takeaway messages and importance to my research:

Synthesising empirical and expert data, extrapolating to new decision contexts

See section above on methodologies for meta-analyses of expert judgment.

  • Expert input is often used during model selection and also in estimating parameter values.
  • Providing probability distributions encoding uncertainties in each model (e.g. to parameterise a Bayes net; see the sketch below).
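
For instance, here is a minimal, entirely hypothetical sketch of what “expert-elicited probability distributions parameterising a Bayes net” can look like for a two-node net. The node names and probabilities are made up.

```python
import numpy as np

# Hypothetical two-node Bayes net, Rain -> FloodRisk, parameterised entirely
# from elicited probabilities (all numbers invented for illustration).
p_rain = 0.2                                  # elicited P(Rain = yes)
# Rows: Rain = yes / no; columns: FloodRisk = high / low.
cpt_flood_given_rain = np.array([[0.7, 0.3],
                                 [0.1, 0.9]])

# Marginal P(FloodRisk) obtained by summing out Rain.
p_flood = (p_rain * cpt_flood_given_rain[0]
           + (1 - p_rain) * cpt_flood_given_rain[1])
print(f"P(FloodRisk = high) = {p_flood[0]:.2f}, "
      f"P(FloodRisk = low) = {p_flood[1]:.2f}")   # -> 0.22 and 0.78
```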

The need to draw in many pieces of evidence derived from previous expert judgment is most likely to occur in the following activities:

  • the “formulation phase”
  • defining “prior probability distributions” (Page 8; French et al. 2009)

Expert-judgment creep

As decision analysts, we often use expert judgment without:

  a) explicitly reporting or even acknowledging it,
  b) and therefore without structured, formally accepted procedures for doing so.

When does this even occur? I believe it occurs often during the development of the conceptual model / structural form of the causal model, and also when there is missing knowledge in the literature, particularly when empirical evidence / data is lacking, or in the ‘wrong shape’ (AND THIS IS WHY SEMANTIC ANNOTATION OF DATASETS WOULD BE USEFUL… AND A SEMANTIC WEB INTERFACE / SEARCH ENGINE FOR PEER-REVIEWED LITERATURE).