Ranking & Communicating Levels of Confidence to Policy & Media Audiences
This workshop centered on refining and standardizing practices for ranking confidence in scientific findings. The discussion drew on a variety of groups, from decision analysts, statisticians, and media analysts to IPCC lead authors. The objective was to improve the consistency with which uncertainty is reported and to strengthen public communication about uncertainty in findings. Ideas from the workshop contributed to the characterization of uncertainty and confidence developed by the IPCC.
Keywords: communication; uncertainty; human interactions; climate
Disciplines:
Integrated Assessment Modeling; Science, Technology, and Society Experts; Contrarian Views; Decision Analysis; Media; Land Use/Cover; Policy Analysis; Resilience and Vulnerability.
Overview:
The major purpose of scientific assessments is to communicate to decision-makers and laypersons the state of scientific and technical knowledge regarding critical issues facing society. In addition to the findings themselves, an important part of what needs to be conveyed is the collective judgment of the scientific community regarding the level of confidence that can be placed in particular results. In some cases, findings are supported by wide agreement among participating authors, based on multiple results from several independent lines of investigation. In other cases, evidence for a conclusion may be growing, but the conclusion has not yet withstood serious challenge from competing interpretations or serious efforts at independent confirmation. In some cases, objective confidence limits can be placed on a finding through statistical means; in others, the degree of confidence can only be determined more subjectively. Members of the policy community and other target audiences therefore need information about the kinds of statements being made and the levels of confidence that can be placed in a finding (e.g., the degree to which it rests on objective or subjective probabilities), as well as an understanding of the origins and significance of remaining uncertainties, so that they can interpret and use the results of assessments appropriately.
Uncertainty, or more generally, debate about the level of certainty required to reach a “firm” conclusion, is a perennial issue in science. The difficulties of explaining uncertainty become increasingly salient as society seeks policy prescriptions to deal with global environmental change. How can science be most useful to society when evidence is incomplete or ambiguous, the subjective judgments of experts about the likelihood of outcomes vary, and policymakers seek guidance and justification for courses of action that could cause significant societal changes? How can scientists improve their characterization of uncertainties so that areas of slight disagreement do not become equated with purely speculative concerns, and how can individual subjective judgments be aggregated into group positions? And then, how can policymakers and the public come to understand this input and apply it in deciding upon appropriate actions? In short, how can the scientific content of public policy debates be fairly and openly assessed?
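One family of techniques that bears on the last of these questions is formal opinion pooling, in which separately elicited expert probabilities are combined into a single group value. The sketch below is a minimal, hypothetical illustration of a linear opinion pool in Python; the probabilities and weights are invented for illustration, and pooling is only one of several approaches discussed in this literature, not a method prescribed by the workshop or the IPCC.

```python
import numpy as np

# Hypothetical elicited probabilities that a particular outcome occurs,
# one value per expert (these numbers are illustrative only).
expert_probs = np.array([0.55, 0.70, 0.40, 0.65, 0.80])

# Equal-weight linear opinion pool: the group probability is simply the
# average of the individual judgments.
equal_weights = np.full(len(expert_probs), 1.0 / len(expert_probs))
group_prob_equal = float(np.dot(equal_weights, expert_probs))

# Unequal weights (e.g., reflecting relevant expertise) must sum to 1.
weights = np.array([0.3, 0.2, 0.1, 0.2, 0.2])
group_prob_weighted = float(np.dot(weights, expert_probs))

print(f"Equal-weight pool:  {group_prob_equal:.2f}")
print(f"Weighted pool:      {group_prob_weighted:.2f}")

# The spread of individual judgments is itself useful information for
# communicating the degree of consensus behind the pooled value.
print(f"Range of judgments: {expert_probs.min():.2f} to {expert_probs.max():.2f}")
```

Note that the spread of individual judgments is retained and reported alongside the pooled value, since the degree of agreement among experts is itself part of what needs to be communicated to policy audiences.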
Relevance:
The Case of the IPCC Second Assessment Report
Interest in climate change, potential impacts, and adaptation/mitigation policy options increased dramatically during 1995-96. While there are many explanations for this, one contributing factor was the conclusion, reached by the Intergovernmental Panel on Climate Change (IPCC) in its Second Assessment Report (SAR) (IPCC 1996a-c), that, even considering remaining uncertainties, “the balance of evidence suggests that there is a discernible human influence on global climate.” This conclusion, negotiated over a period of a year by hundreds of scientists and policymakers, acknowledged the strong belief of most experts that human modification of atmospheric composition has led to noticeable climatic effects and will likely lead to significant climate change in the decades ahead. Though not especially controversial in the scientific community, the statement created a maelstrom of both support and criticism from a variety of interest groups, who seemed confused by the different ways in which uncertainties and knowns were explained in the technical chapters and the Summary for Policymakers (e.g., see Science and Nature, 1996).
The most recent IPCC report and its predecessors provided “best estimates” of possible climatic futures, as well as a broad range of plausible outcomes, including possible “outlier” events. The implications range from mildly beneficial to potentially catastrophic consequences for ecosystems and for human activities such as water management, development of coastal margins, and agriculture. Although more confidence was naturally expressed in outcomes near the centers of those wide ranges than in the high and low outliers, some interest groups understandably focused on possible extreme outcomes, which sharpened the debate and created substantial contention.
The purpose of the IPCC and other assessments of scientific research is to convey to interested publics, including decision-makers, advisors, the media, private-sector businesses, and environmental/community groups, the most up-to-date information available. One of the major challenges is that the assessments must necessarily present a snapshot of information that is continuously evolving. At the time of preparation of the SAR, the uncertainties included, for example, the possibilities of large and/or abrupt climate changes and of technological breakthroughs that could radically reduce emissions abatement costs in the future. Given the use of the IPCC reports in policy making, and the need of decision-makers to determine their response to the risks of climate change before all uncertainties can be resolved (even in principle) to the satisfaction of every interest group, the available information, imperfect as it may be, must be synthesized and evaluated at periodic intervals.
Thus, a great deal of importance attaches to the need to assess and explicitly distinguish which aspects of the technical controversies affecting our understanding of these uncertainties are well understood and enjoy strong consensual support, which are only partly understood, and which are highly speculative. Unfortunately, in media and political debates, such degrees of certainty and uncertainty often become blurred. As a result, the nuanced characterization of uncertainty that might occur in a professional assessment is often mistranslated into the appearance of scientific cacophony in the public arena.
At the same time, scientists themselves struggle with the highly subjective and qualitative nature of the assessment process, preferring, by tradition, to study individual components of problems that can be tested, rather than undertaking the necessarily more difficult synthesis of those components into findings relevant to decision-makers and the public. Qualitative descriptions of the level of certainty attached to a particular finding (terms such as “almost certain,” “probable,” “likely,” “possible,” “unlikely,” “improbable,” “doubtful,” “almost impossible”) mean different things to different people and hence are not precise descriptors, and they are sometimes used inconsistently or uncritically in assessments (let alone by the general public and the media). Individuals and groups often rely on simplifying assumptions or heuristic procedures for making judgments about uncertainty. The consequence can be overconfidence in the likelihood of median outcomes and underestimation of the probability of outlier events or surprises.
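One remedy explored in this line of work is to calibrate verbal descriptors against explicit probability ranges, so that a term such as “likely” carries the same quantitative meaning wherever it appears in an assessment. The sketch below illustrates the idea with a simple lookup; the particular terms and numeric cutoffs are illustrative placeholders, not the scale recommended by the workshop or later adopted by the IPCC.

```python
# Illustrative calibrated likelihood scale: each verbal descriptor is tied
# to an explicit probability interval (lower bound inclusive). The terms
# and cutoffs below are placeholders, not an official scale.
LIKELIHOOD_SCALE = [
    (0.99, "virtually certain"),
    (0.90, "very likely"),
    (0.66, "likely"),
    (0.33, "about as likely as not"),
    (0.10, "unlikely"),
    (0.01, "very unlikely"),
    (0.00, "exceptionally unlikely"),
]

def likelihood_term(probability: float) -> str:
    """Map a numeric probability (0-1) to its calibrated verbal descriptor."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    for lower_bound, term in LIKELIHOOD_SCALE:
        if probability >= lower_bound:
            return term
    return LIKELIHOOD_SCALE[-1][1]

# Example: an assessed probability of 0.7 would be reported as "likely".
print(likelihood_term(0.7))
```

With such a scale, an author who has assessed a probability can report the matching term consistently, and a reader who knows the scale can translate the term back into a numeric range.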
Content:
To address this challenge, the Aspen Global Change Institute convened an international workshop of researchers and analysts to review past practice and develop proposals for a consistent set of terms or standards for ranking confidence in findings. The group included independent scholars, statisticians, decision analysts, media and policy analysts, and a number of Lead Authors from all three working groups of the IPCC SAR, including researchers from the physical, biological, and social sciences. Our objective in refining and standardizing the approach was to help decision-makers and the public better interpret confidence levels, as well as to provide a clearer perspective on the nature of remaining uncertainties. This entailed exploring how to address the same difficulties in communicating various types of uncertainty in research results to other target audiences, principally the media and NGOs active in global change issues.
Discussions were facilitated on four basic sets of questions:
i. What approaches to establishing uncertainty ranges and confidence levels were used in the preparation process for the SAR (IPCC 1996)? How did these approaches and other factors affect the conclusions and ranges presented?
ii. What approaches could be used to represent the center, the body, and the range of informed technical opinion in future assessments (including quantification of uncertainties)?
iii. How do uncertainty and the presentation of levels of confidence translate into media coverage and policy debates? What approaches are used in other fields or could be developed to communicate more effectively the nuances of uncertainty as understood by climate experts, impacts specialists and policy analysts, to the media, policymakers and the public at large?
iv. What recommendations for improving future IPCC or other international or national assessments could be developed by authors and experts?
The workshop addressed multiple issues, including:
• The subjective nature of different individuals’ levels of confidence in the same findings;
• Methods and protocols to facilitate reaching a consistent group judgment in uncertainty assessment;
• The implications of individual subjectivity for collective rankings by groups of authors;
• The multiple sources of uncertainty in assigning levels of confidence, including poor data, incomplete understanding of key natural/ecological processes, and uncertainties associated with subjective assumptions (e.g., particular climate scenarios, projections of population growth);
• The differing effects on confidence rankings of variation in the quantity and quality of data and in the degree of existing consensus.
Workshop Goals
1. Provide an overview of the sources and categories of uncertainty;
2. Explore methods for establishing confidence levels from the natural and social sciences;
3. Differentiate what can be objectively assessed from what requires subjective approaches to assessing probability;
4. Explore the uses of decision-analytic tools and protocols for more consistent subjective probability assessments (a minimal illustrative sketch follows this list);
5. Invite lead authors from the IPCC and other assessments to reflect on the processes they used, to explore alternatives, to identify what works, what doesn’t work, and what is needed;
6. Explore methods to better characterize outlier events (i.e., “surprises”);
7. Propose how to better communicate the kinds and assessments of uncertainty in global change science to target audiences including the media;
8. Explore the possibility of consistency in methods of uncertainty assessment across topics such as climate change, biodiversity, and ozone.
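As a minimal illustration of the kind of decision-analytic protocol referred to in goal 4 (and of the subjective side of the distinction drawn in goal 3), the sketch below converts an expert's elicited low, best-guess, and high estimates into a subjective probability distribution and a probability statement. The quantity, values, and threshold are hypothetical, and the triangular form is simply one common elicitation convention, not a method endorsed by the workshop.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical elicitation: an expert's low, best-guess, and high estimates
# for some assessed quantity (units and values are placeholders).
low, best, high = 1.5, 2.5, 4.5

# Represent the judgment as a triangular distribution, a simple convention
# for turning three elicited points into a subjective distribution.
samples = rng.triangular(low, best, high, size=100_000)

# Derive a probability statement of the kind an assessment might report,
# e.g. the subjective probability that the quantity exceeds a threshold.
threshold = 3.5
prob_exceed = float(np.mean(samples > threshold))

print(f"Median of elicited distribution: {np.median(samples):.2f}")
print(f"Subjective P(quantity > {threshold}): {prob_exceed:.2f}")
```

Reporting a range or the full distribution alongside such a number keeps the subjective character of the judgment visible, in contrast to probabilities that can be established objectively from data.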
Topics/Cases
WG I: Climate sensitivity, sea-level rise, detection, extreme events.
WG II: Agriculture, ecosystems, water resources management, health.
WG III: Mitigation technologies and cost studies (involving engineering/technological approaches, industry perspectives, and economic approaches).
Workshop Outcomes
Characterizing and Communicating Scientific Uncertainty
Agenda
9:00 am WGII/III Topics
11:30 am Issues for Authors – Implicit Differences That Give Rise to Disagreement
12:00 pm Agriculture and Food Systems: Projecting Impacts on Food Security
1:30 pm Assessments of Mitigation Options and “Optional Strategies”
5:30 pm Walter Orr Roberts Memorial Public Lecture: In a Democracy The Climate Problem is Whatever The Public Believes It Is
9:00 am Methodological Perspectives on Characterizing Confidence
9:45 am Bayesian Approaches
1:30 pm Working Groups Form and Outline Tasks
4:00 pm Overview of Confidence Issues
9:00 am Quick Summary of First Week Session Presentations and Discussions Followed by Response from First Week Participants and New Participants
2:30 pm Integrated Assessment: Can it Bound the Problem?
3:00 pm Integrated Assessment: How Current Assessments are Treating Uncertainty
4:30 pm Therefore What?
9:00 am Challenges and Approaches to Reporting of Assessment Results
12:45 pm When We Don’t Know the Costs or the Benefits: Adaptive Strategies for Climate Change (What Do You Do When You Don’t Know Who’s Right?)
1:30 pm Effects of Uncertainty Assessments on Management & on Policy Making in the Political Caldron
2:15 pm The Use and Misuse of Uncertainties by Political Interests
3:30 pm Formation of New Working Groups of New Participants in Already Formed Working Groups
9:00 am Quick Progress Report on Working Groups
9:30 am Working Groups
1:30 pm Working Groups Progress Reports and Group Feedback
3:30 pm Working Groups Complete Reports
Organizers
Attendees
The attendee list and participant profiles are regularly updated. For information on participant affiliations at the time of the workshop, please refer to the historical roster. If you are aware of updates needed to participant or workshop records, please notify AGCI’s workshops team.