Once again, the United Nations Intergovernmental Panel on Climate Change (IPCC) has come forth with its Summary for Policymakers (SPM), a document that ostensibly summarizes the first volume in a humongous three-volume series of what amounts to massive literature reviews on everything there is to know about climate change. These assessment reports are issued about every six years and have their own special acronyms. Since the latest report is the fifth such assessment report to be issued, it is going to be known as AR5. I have written about these SPMs for over a decade now, beginning with the Third Assessment Report, or TAR. Generally, the first volume — which summarizes the science of climate change — gets the most attention and generates the scariest headlines.
One of the things I have focused on when writing about the SPMs is how they describe their levels of certainty and confidence in their findings. That has evolved over the years, with the most recent report claiming 95 percent confidence in its finding that human beings are responsible for more than half of the observed climate change since 1950.
The media tends to take this at face value, but what does it mean, exactly, to have 95 percent confidence in a selective analysis of a vast body of literature that is too abstruse for even the authors of the reports to understand entirely? Is it some statistical value derived from a supercomputer? Is it the product of the statistical validity estimates of all the underlying studies that are cited? Well, no. And for good reason — such estimates would be impossible to produce. Instead, what we have is basically “expert judgment” in which the authors assess how confident they are in their own contributions to the assessment reports.
How the IPCC has explained this has varied over the years:
From the SPM for the Third Assessment Report:
In this Summary for Policymakers, the following words have been used where appropriate to indicate judgmental estimates of confidence (based upon the collective judgment of the authors using the observational evidence, modeling results, and theory that they have examined): very high (95% or greater), high (67-95%), medium (33-67%), low (5-33%), and very low (5% or less). In other instances, a qualitative scale to gauge the level of scientific understanding is used: well established, established-but-incomplete, competing explanations, and speculative. (Italics in the original)
At least for the Third Assessment Report, they were pretty clear about the fact that their confidence is based on the collective judgment of the writers themselves.

Here's the language from the SPM for the Fourth Assessment Report:
In general, uncertainty ranges for results given in this Summary for Policymakers are 90% uncertainty intervals unless stated otherwise, that is, there is an estimated 5% likelihood that the value could be above the range given in square brackets and 5% likelihood that the value could be below that range. Best estimates are given where available. Assessed uncertainty intervals are not always symmetric about the corresponding best estimate. Note that a number of uncertainty ranges in the Working Group I TAR corresponded to 2 standard deviations (95%), often using expert judgment. . . .

In this Summary for Policymakers, the following terms have been used to indicate the assessed likelihood, using expert judgment, of an outcome or a result: Virtually certain > 99% probability of occurrence, Extremely likely > 95%, Very likely > 90%, Likely > 66%, More likely than not > 50%, Unlikely < 33%, Very unlikely < 10%, Extremely unlikely < 5% (see Box TS.1 for more details). . . .

In this Summary for Policymakers the following levels of confidence have been used to express expert judgments on the correctness of the underlying science: very high confidence represents at least a 9 out of 10 chance of being correct; high confidence represents about an 8 out of 10 chance of being correct (see Box TS.1). (Italics in the original)
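Laid out as data, the AR4 calibrated likelihood language amounts to a simple lookup table. Here is a minimal Python sketch (the thresholds are copied from the passage just quoted; the function name is merely illustrative):

    # The AR4 calibrated likelihood language, restated as a lookup table.
    # Thresholds are copied verbatim from the SPM passage quoted above.
    AR4_LIKELIHOOD = {
        "virtually certain":    "> 99%",
        "extremely likely":     "> 95%",
        "very likely":          "> 90%",
        "likely":               "> 66%",
        "more likely than not": "> 50%",
        "unlikely":             "< 33%",
        "very unlikely":        "< 10%",
        "extremely unlikely":   "< 5%",
    }

    def probability_bound(term: str) -> str:
        """Return the stated probability bound for an IPCC likelihood term."""
        return AR4_LIKELIHOOD[term.lower()]

    print(probability_bound("Extremely likely"))  # "> 95%"

The tidy round-number thresholds are assigned by expert judgment, not computed from data.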
Note the reference to Box TS.1 of the full technical report. What's interesting about this is that the IPCC does not release the technical reports for days (formerly weeks or months) after it releases the Summary for Policymakers. That is, the media does not get to see the details of how certainty is handled until some time after it has had to interpret the SPM for its readers. Also note that we have lost some of the more honest language of the TAR over the years — no more "collective judgment" here; it's "expert judgment" throughout the Fourth Assessment Report SPM. And the language in the first paragraph, which talks about "uncertainty intervals," strongly implies that they are statistically derived. It's only at the end of that paragraph that we see the "uncertainty intervals" are often set using "expert judgment."

And here's what the SPM for AR5 says:
The degree of certainty in key findings in this assessment is based on the author teams' evaluations of underlying scientific understanding and is expressed as a qualitative level of confidence (from very low to very high) and, when possible, probabilistically with a quantified likelihood (from exceptionally unlikely to virtually certain). Confidence in the validity of a finding is based on the type, amount, quality, and consistency of evidence (e.g., data, mechanistic understanding, theory, models, expert judgment) and the degree of agreement. Probabilistic estimates of quantified measures of uncertainty in a finding are based on statistical analysis of observations or model results, or both, and expert judgment. Where appropriate, findings are also formulated as statements of fact without using uncertainty qualifiers. (See Chapter 1 and Box TS.1 for more details about the specific language the IPCC uses to communicate uncertainty). (Italics in the original)
And what about the dreaded Box TS.1? When we look there, we find further evidence of just how subjective the report is:
Each key finding is based on an author team's evaluation of associated evidence and agreement. The confidence metric provides a qualitative synthesis of an author team's judgment about the validity of a finding, as determined through evaluation of evidence and agreement. If uncertainties can be quantified probabilistically, an author team can characterize a finding using the calibrated likelihood language or a more precise presentation of probability. Unless otherwise indicated, high or very high confidence is associated with findings for which an author team has assigned a likelihood term.
Further in Box TS.1, we learn that one cannot really compare the
confidence/certainty assessments of AR5 with previous reports, so it’s
not safe to say that things have “grown more certain” despite what
various news and interest-group reports claim. As TS.1 explains:
Direct comparisons between assessment of uncertainties in findings in this report and those in the IPCC Fourth Assessment Report and the IPCC Special Report on Managing the Risk of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX) are difficult, because of the application of the revised guidance note on uncertainties, as well as the availability of new information, improved scientific understanding, continued analyses of data and models, and specific differences in methodologies applied in the assessed studies. For some climate variables, different aspects have been assessed and therefore a direct comparison would be inappropriate.
As one can see, the reality of the IPCC’s certainty is that it’s
really just a best guess, a self-assessment by authors with a vested
interest in having their work believed by policymakers.
Burying the Pause
Every time an SPM is released, there’s buzz about what exactly will make it in — will the hockey stick graph
(the now infamous graph showing a relatively stable climate record with
a sharp spike starting in the late-19th century) return? Will there be a
new hockey stick?
Will the IPCC achieve consensus on the SPM, or will negotiations have
to go into overnight sessions? Will the IPCC announce an end to
assessment reports and international junkets? And there’s virtually
always a “leak” of not only the SPM, but of technical reports as well
(not to mention the climategate emails, but that’s a different story).
It’s all part of the buzz of the IPCC process. This year, the buzz was
over something that found its way into a leaked draft version of the SPM
in which the authors acknowledged that climate models could not explain the past 16 years, during which global surface temperatures did not rise appreciably even while greenhouse gas emissions rose sharply (a period sometimes known as "the pause").
Here’s what Fox reported was in the leaked (September) draft SPM:
Models do not generally reproduce the observed reduction in surface warming trend over the last 10–15 years.
Magically, however, in only a month, an explanation developed. Here’s how the AR5 SPM handles the pause:
The observed reduction in surface warming trend over the period 1998–2012 as compared to the period 1951–2012, is due in roughly equal measure to a reduced trend in radiative forcing and a cooling contribution from internal variability, which includes a possible redistribution of heat within the ocean (medium confidence). The reduced trend in radiative forcing is primarily due to volcanic eruptions and the timing of the downward phase of the 11-year solar cycle. However, there is low confidence in quantifying the role of changes in radiative forcing in causing the reduced warming trend. There is medium confidence that internal decadal variability causes to a substantial degree the difference between observations and the simulations; the latter are not expected to reproduce the timing of internal variability. There may also be a contribution from forcing inadequacies and, in some models, an overestimate of the response to increasing greenhouse gas and other anthropogenic forcing (dominated by the effects of aerosols). (Italics in the original)
Voilà! The pause is explained! Pay special attention to that last sentence; this is the IPCC covering its rear in the event that the pause continues. They'll say, "Well, we admitted that models might be overestimating how sensitive the atmosphere is to greenhouse gas emissions."

Elsewhere in the SPM we're also told not to sweat the small stuff, because 15 years isn't a meaningful length of time:
In addition to robust multi-decadal warming, global mean surface temperature exhibits substantial decadal and interannual variability. Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends.
So, 15 years in a climate record going back some 800,000 years isn't significant; I can see that. But how, exactly, is a trend that's 63 years long (from 1950 to 2013, ostensibly the period of anthropogenic climate change) any more significant?
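For what it's worth, the endpoint-sensitivity claim itself is easy to demonstrate. Here is a minimal Python sketch using synthetic data; the trend size, noise level, and window choices are assumptions picked purely for illustration, not climate data:

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic "temperature" series: a small steady trend plus
    # year-to-year noise. All parameter values are illustrative assumptions.
    years = np.arange(1950, 2014)  # 1950 through 2013
    anomaly = 0.012 * (years - 1950) + rng.normal(0.0, 0.15, years.size)

    def ols_trend(x, y):
        """Least-squares slope of y against x (units of y per year)."""
        return np.polyfit(x, y, 1)[0]

    # One 63-year trend versus several 15-year trends: same underlying
    # process, but the short windows swing with their start dates.
    print(f"1950-2013: {ols_trend(years, anomaly):+.4f} per year")
    for start in range(1950, 2000, 12):
        mask = (years >= start) & (years < start + 15)
        print(f"{start}-{start + 14}: {ols_trend(years[mask], anomaly[mask]):+.4f} per year")

The 15-year slopes scatter widely around the 63-year slope because year-to-year noise dominates short windows; longer windows average more of it out.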
Artists at Work
Of course, spinning facts into narrative is a regular feature of IPCC
reports. The SPMs show amazing artistry in the choices that are made
with regard to how data is presented. In some cases you get absolute
values; in others, percentage values are used. Date ranges seem selected
to tell whatever narrative the IPCC wants to tell. Chart scales are
stretched or shrunk to emphasize or de-emphasize trends and changes. And an endless list of climatic changes is thrown out without context.

So, for example, with regard to ice loss, the SPM for AR5 tells us that:
The average rate of ice loss from glaciers around the world, excluding glaciers on the periphery of the ice sheets, was very likely 226 [91 to 361] Gt yr⁻¹ over the period 1971-2009, and very likely 275 [140 to 410] Gt yr⁻¹ over the period 1993-2009. (Italics in the original)
A "Gt" is a Gigatonne, or one billion metric tonnes. Gosh, that certainly sounds like a lot of ice! But what share is that of the world's ice? According to the National Snow and Ice Data Center, the Antarctic has 30 million cubic kilometers of ice, which is 90 percent of the Earth's ice. So, figure that the world has some 33 million cubic kilometers of ice. Because ice is less dense than water, one cubic meter of ice weighs about 916 kilograms (a cubic meter of water would weigh one metric tonne, or 1,000 kilograms).
So we can calculate that the world has about 30 million Gigatonnes of
ice. That 275 Gt per year is less than 0.001 percent of the world’s ice.
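Readers can check that arithmetic themselves; a quick Python sketch using only the figures cited above:

    # Back-of-the-envelope check of the ice-loss share, using only
    # the figures cited in this article.
    world_ice_km3 = 33e6        # ~33 million cubic kilometers of ice
    ice_density = 916.0         # kg per cubic meter
    m3_per_km3 = 1e9
    kg_per_gt = 1e12            # 1 Gt = one billion tonnes = 10^12 kg

    world_ice_gt = world_ice_km3 * m3_per_km3 * ice_density / kg_per_gt
    print(f"World ice: {world_ice_gt:,.0f} Gt")  # about 30 million Gt

    annual_loss_gt = 275.0
    print(f"Annual loss: {annual_loss_gt / world_ice_gt:.5%}")  # ~0.0009% per year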
Yes, less than a thousandth of 1 percent per year. This is what has a weatherman abandoning air travel and considering a vasectomy?

Another example of how the IPCC spins things is its choice of terminology. For some time now, the specter of "ocean acidification" has reared its head as a key risk of climate change. Ocean acidification is the phenomenon whereby carbon dioxide dissolved in seawater converts to carbonic acid, lowering the pH of the ocean:
Ocean acidification is quantified by decreases in pH. The pH of ocean surface water has decreased by 0.1 since the beginning of the industrial era (high confidence), corresponding to a 26% increase in hydrogen ion concentration. (Italics in the original)
Wow, a 26 percent increase in hydrogen ion concentration. The oceans
must be turning into stomach acid! Er, perhaps not: stomach acid has a
pH of 1.5 to 3.0.
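The "26 percent" figure follows directly from the definition of pH as the negative base-10 logarithm of hydrogen ion concentration; a quick Python sketch:

    # pH is -log10 of hydrogen ion concentration, so a pH drop of 0.1
    # multiplies that concentration by 10**0.1.
    ratio = 10 ** 0.1
    print(f"Concentration increase: {ratio - 1:.1%}")  # ~25.9%, i.e. the "26%"

    # The absolute scale is what matters: seawater at pH ~8 is still
    # on the alkaline side of neutral (pH 7), far from stomach acid.
    for ph in (8.1, 8.0, 7.0, 2.0):  # 2.0 is roughly stomach acid
        print(f"pH {ph}: [H+] = {10 ** -ph:.2e} mol/L")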
The oceans have a pH of about eight (and seven is neutral). That is,
the oceans are somewhat basic, or alkaline. The term “acidification” is
misleading. At best, you could argue that the ocean is slowly
"neutralizing," but that's not nearly as scary-sounding. Moreover, as recent studies have shown,
the ocean’s pH fluctuates quite significantly all by itself, and the
ocean’s critters thrive at a wide range of pH conditions in both fresh
and sea water.
Spin Doctors, Spun Science
Every six years, the IPCC spins out another Assessment Report along
with its accompanying Summary for Policymakers, and “spin” is the
operative term. Dutifully, the mainstream media takes the findings of the report at face value, simply accepting that the IPCC is playing it straight with what it knows and what it understands about the climate system and manmade climate change. The media treats the claims in the SPM as though they were the results of rigorous studies subject to statistical testing, rather than bothering readers with the reality that
this is really just a large, highly selective literature review in which
subjective evaluations by a government-picked group of climate experts
rule the day. Decent journalists should know better: the Summary for
Policymakers is, and always has been, a document intended to tell the
narrative that the United Nations and other groups who promote
catastrophic climate change have wanted it to tell. Nothing more, and nothing less.

Kenneth P. Green is Senior Director, Natural Resource Studies at The Fraser Institute, and was formerly a Resident Scholar with AEI. He is based in Calgary, Alberta.