“It is impossible as I state it, and therefore I must in some respect have stated it wrong.”

Sherlock Holmes

If I have gained any reputation at all on this website, I would like to think that it is for being a person who has been rightfully critical of the IPCC’s treatment and understanding of risk and uncertainty. This subject concerns me because my professional background required that I develop a firm grasp of the conceptual framework for risk and uncertainty, and I happen to believe that this is equally important if one is to present a case for or against the actions proposed to address climate change. I have made the point on more than one occasion that the analysis and management of risk and uncertainty does not necessarily fall within the range of expertise that may be assumed for the average climate scientist, nor indeed for the vast majority who profess on their behalf. In that important respect, we should not be looking to the IPCC as an expert authority.

On the eve of the publication of the Sixth Assessment Report, Working Group 1 (AR6 WG1), I wrote a series of articles drawing attention to the fact that the IPCC had already outlined how risk perception can and should be manipulated in order to facilitate public acceptance of climate change policies (ref. AR5, WG3, Chapter 2). In particular, the exploitation of the availability heuristic was openly advocated for this purpose, resulting in a greater focus upon extreme weather event attribution studies. I suggested in my closing remarks that much more of this could be expected in AR6, and I was not to be disappointed. In particular, I was not at all surprised to see the re-framing of risk as being predominantly an issue of low likelihood, high impact events. Even so, I wasn’t quite prepared for the profound subject-matter ignorance that was often on show, and so I am left with the inescapable conclusion that a collection of risk management amateurs is in charge of framing one of the world’s most important risk management challenges. No one should be taking any encouragement from this.

The interval between the ears

The IPCC’s new approach to risk assessment is introduced very early in the document and is heralded using suitably grandiose phrases such as ‘unified framework of climate risk’ and ‘systematic risk framing’. I’ve no idea what they mean by this, but there can be no doubt regarding their intentions when they say that the AR6 framework is ‘supported by an increased focus in WGI on low-likelihood, high-impact events.’ This has been a battle ground for some time now, where many have argued that the ‘true’ risk lies in that domain. That may be so, but it is equally true to say that the greater uncertainties also lie in that domain, and uncertainty has the nasty habit of distorting the perception of risk, normally in the direction of overstatement. If the IPCC is turning its attention to the low likelihood, high impact (LLHI) end of the risk curve, one would hope that they know what they are talking about. Unfortunately, that hope is dashed very early on, once the reader has encountered the following in section 1.2.3.1:

“Further, even though it is objectively more probable that wide uncertainty intervals will encompass true values, wide intervals were interpreted by lay people as implying subjective uncertainty or lack of knowledge on the part of scientists (Løhre et al., 2019).”

Everything that is wrong, dangerous and stupid about the IPCC’s treatment of risk and uncertainty is encapsulated in that one statement. Firstly, you will note the author’s arrogant assumption that uncertainty is only properly understood by scientists – contrast the foolish lay person. And yet it is clear that the author hasn’t got the first clue how objectivity and subjectivity work in an uncertain world. It is Gleick’s folly, but on steroids. Whether or not a wide interval is an expression of subjective or objective uncertainty is entirely down to the relative contributions made by aleatory and epistemic components. If a ‘lay person’, as they put it, interprets a wide interval as implying subjective uncertainty, it is more than likely that this is perfectly well justified. In fact, a probability distribution that is a representation of pure aleatory (objective) uncertainty is relatively rare in the real world. The suggestion that the lay populace fails to understand how uncertainty works because they can’t see the objective reality (a wide range of possibilities) that results from subjective ignorance is just arrant nonsense.
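
To make the distinction concrete, here is a minimal sketch in Python, using a toy model and illustrative numbers of my own choosing, of how a wide interval can be driven almost entirely by subjective ignorance rather than by objective randomness:

```python
import numpy as np

rng = np.random.default_rng(42)

# Aleatory only: the process is genuinely random, but its parameters
# are known exactly (a normal with mean 0 and standard deviation 1).
aleatory = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Aleatory plus epistemic: the same random process, but the true mean
# is unknown, so we also sample it from a distribution expressing our
# ignorance (here, uniform over [-3, 3]).
unknown_mean = rng.uniform(-3.0, 3.0, size=100_000)
combined = rng.normal(loc=unknown_mean, scale=1.0)

for label, sample in [("aleatory only", aleatory),
                      ("aleatory + epistemic", combined)]:
    lo, hi = np.percentile(sample, [2.5, 97.5])
    print(f"{label:>22}: 95% interval width = {hi - lo:.2f}")
```

The second interval comes out roughly twice as wide, and every bit of the extra width is down to subjective ignorance about the mean. The ‘lay person’ who reads that width as subjective uncertainty is reading it correctly.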

I am not sure why the statement has been made but I suspect it stems from the notion of uncertainty serving as ‘actionable knowledge’, as Stephan Lewandowsky would put it. When a scientist hasn’t got a clue what the ‘true’ value is, then, ‘objectively’, any value is still possible; and it seems to the IPCC that the possibility is all that matters. The author(s) would wish the reader to think that scientists are always objective, and that only a lay person would make the mistake of misinterpreting their subjective ignorance as a sign of non-objectivity.

I was trained as a scientist and that training left me believing that I knew all there was to know about risk and uncertainty. It wasn’t until I entered the domains of engineering and quality management, and left the purity of theoretical physics behind me, that I came to realize that I had known diddly squat. There is so much more to it than probability distributions and Monte Carlo methods. I’m afraid that the IPCC’s statement is just the sort of asinine remark that I might have come out with back in the day. It’s disappointing to see that such ignorance and arrogance is still informing attitudes at the centre of the IPCC.

Being precise

I’d like to say that the above quote is a one-off that poorly represents the remainder of the document but, unfortunately, there is this to be found in section 1.2.3.2:

“When uncertainty is large, researchers may choose to report a wide range as ‘very likely’, even though it is less informative about potential consequences. By contrast, high-likelihood statements about a narrower range may be more informative, yet also prove less reliable if new evidence later emerges that widens the range. Furthermore, the difference between narrower and wider uncertainty intervals has been shown to be confusing to lay readers, who often interpret wider intervals as less certain (Løhre et al., 2019).”

It is a basic tenet of sampling theory that, for a given sample size, either imprecise statements can be made with confidence or precise statements can be made with incertitude – there is only so much that can be gleaned without increasing the sample size. But that does not appear to be what the author is saying here. He seems to be contrasting high likelihood, imprecise statements with high likelihood, precise statements. You can only go from the former to the latter by obtaining more information. And yet the author chooses to portray the latter, better informed statement as less reliable and more liable to change in the light of additional information! He seems to be trying to argue that, for example, the wide range of values that are stated for ECS is a good thing because it is, at least, a reliable statement. And then, of course, there is the repeated accusation that lay people simply don’t understand how to interpret wide intervals. Shockingly, lay people think that they imply uncertainty.
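
That trade-off is easy to put numbers to. Below is a back-of-envelope sketch (Python, with illustrative figures of my own) using the textbook normal-approximation interval, whose half-width is z·s/√n:

```python
from math import sqrt

# Approximate two-sided z-scores for two confidence levels.
Z = {0.80: 1.282, 0.95: 1.960}

s, n = 10.0, 25   # sample standard deviation and sample size

# Half-width of the normal-approximation interval: z * s / sqrt(n).
for level, z in Z.items():
    print(f"{level:.0%} CI half-width at n={n}: {z * s / sqrt(n):.2f}")

# To make the 95% statement as precise as the 80% one, the sample
# size must grow by the square of the ratio of the z-scores.
n_needed = (Z[0.95] / Z[0.80]) ** 2 * n
print(f"n needed for the 95% CI to match the 80% width: {n_needed:.0f}")
```

Which brings us back to those wide intervals and the lay people who read them as uncertain.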

Actually, they would be right. This uncertainty is sometimes referred to as probabilistic discord and the relevant formula is:

H = −Σ pᵢ ln(pᵢ)

This is not the first time I have presented this formula here at Cliscep, and it is hardly unfamiliar to aficionados of uncertainty analysis. Surely, it isn’t asking too much of an IPCC lead author to have heard of it. Even so, one has to be careful here, because it is in the nature of epistemic uncertainty that probabilistic discord can increase as the epistemic uncertainty decreases, i.e. the curve can widen as understanding increases. Maybe that is the detail that the IPCC, in its high-handedness, assumes that a ‘lay person’ can’t understand.
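
For anyone wanting to see the discord at work, here is a minimal calculation (Python, with two made-up distributions of my own) showing that the wider spread of probability really does carry the greater uncertainty:

```python
import numpy as np

def discord(p):
    """Probabilistic discord: Shannon entropy H = -sum(p * ln(p))."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 * ln(0) = 0
    return -np.sum(p * np.log(p))

# Narrow: nearly all of the probability sits on one outcome.
narrow = [0.90, 0.05, 0.05]

# Wide: the probability is spread evenly over five outcomes.
wide = [0.2, 0.2, 0.2, 0.2, 0.2]

print(f"narrow: H = {discord(narrow):.3f} nats")   # about 0.39
print(f"wide:   H = {discord(wide):.3f} nats")     # ln(5), about 1.61
```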

Learning the lingo

None of AR6’s garbled explanation of the relationship between uncertainty, interval width and subjectivity inspires confidence. Nevertheless, it is clear from the general tone of the report that confidence is exactly what the reader is expected to be brimming with. After all, as breathlessly explained in section 1.1:

“IPCC reports undergo one of the most comprehensive, open, and transparent review and revision processes ever employed for science assessments.”

Moreover, when it comes down to the communication of uncertainty, AR6 WG1 is supposed to benefit from the application of the ‘IPCC calibrated uncertainty language’, first developed by Mastrandrea et al. in 2010. For the sake of those who are not already familiar, the details of the process by which uncertainties are agreed and communicated are repeated in Box 1.1 of AR6 WG1.

Fortunately, I am already familiar with the IPCC’s calibrated uncertainty language. Unfortunately, all I can say is that it was fundamentally flawed back in 2010 and it remains fundamentally flawed today. As I have explained before, it conflates levels of agreement with evidential weight in a way that not only double-counts confidence but also fails to acknowledge the methods by which consensus can be illegitimately formed in the absence of a narrow uncertainty range (e.g. through groupthink). AR6 WG1 crows about how its application by all authors has led to a consistent approach, from which I can only assume the report is now at least consistently wrong.

The sceptic’s only cause for reassurance regarding the over-hyped IPCC calibrated uncertainty language is that even the report’s authors concede that its application has not had the desired effect: even if the language of uncertainty is being used consistently by the scientists, the way it is read by that damned ignorant lay person remains stubbornly perverse. As the report explains:

“…lay readers systematically misunderstood IPCC likelihood statements. When presented with a ‘high likelihood’ statement, they understood it as indicating a lower likelihood than intended by the IPCC authors. Conversely, they interpreted ‘low likelihood’ statements as indicating a higher likelihood than intended.”

Even without such lay perversity, there is already enough wrong with the idea to justify dropping it altogether:

“Specific concerns include, for example, the transparency and traceability of expert judgements underlying the assessment conclusions (Oppenheimer et al., 2016) and the context-dependent representations and interpretations of probability terms (Budescu et al., 2009, 2012; Janzwood, 2020).”

Nevertheless, the IPCC soldiers on gamely, satisfied that, even if the calibration of their language is founded upon false assumptions and is patently failing in its purpose, “a consistent and systematic approach across Working Groups to communicate the assessment outcomes is an important characteristic of the IPCC”.

Another story to tell

Arrogance and ignorance make a heady cocktail that we have all imbibed from time to time. However, one has good reason to expect that an organisation such as the IPCC would have in place the required processes to ensure that such drunkenness does not get out of hand. One would certainly hope that it would not institutionalise a drinking culture. Part of that sobriety would entail the realisation that if one is failing to properly explain oneself, it could very well be because one has not thoroughly understood the subject. So when I see a lead author struggling with basic subject-matter whilst making sweeping statements regarding the shortcomings of the ‘lay person’, I am inclined to lose confidence in the whole set up. Such confidence is important, because, in promoting the importance of low likelihood, high impact events, AR6 has chosen to turn its focus to a subject area that demands a sound grasp of the underlying principles of risk and uncertainty analysis. I will be saying a lot more about that in my next article. For the time being, however, I will leave you with another nugget taken from AR6 WG1, which suggests there may very well be something rotten in the state of Denmark. It is a statement made with regard to what the IPCC is calling the ‘storylines’ approach to risk assessment. Amongst other benefits attributed to this approach, the report says of storylines that they:

“…can also help in assessing risks associated with [Low Likelihood High Impact] LLHI events (Weitzman, 2011; Sutton, 2018), because they consider the ‘physically self-consistent unfolding of past events, or of plausible future events or pathways’ (Shepherd et al., 2018b), which would be masked in a probabilistic approach.”

Risk assessment necessitates the determination of scale for both likelihood and impact. Quantifying probabilities (or employing an alternative means of evaluating likelihood) is therefore an essential part of assessment. One might then ask how the IPCC arrived at the conclusion that ‘a probabilistic approach’ could be masking anything. To answer that question, it helps to understand what is wrong with the way climate science handles the probabilistic approach. However, it is even more important to understand how a new approach peddled by a small group of individuals operating on the periphery of Detection & Attribution studies, and bitterly rejected by the vast majority of practitioners, could nevertheless have become the central idea behind AR6’s approach to risk assessment. One has to wonder what this says about the supposed ‘openness’ and ‘transparency’ of the IPCC’s self-proclaimed, world-beating review process. More of this next time…

15 Comments

  1. John,

    Thank you for taking the time and trouble to read through the dross, and to explain it, so that we lay people don’t have to read it. I was tempted to give it a go, but you’re making a strong case for me not bothering. I wonder if the alarmists at the BBC and the Guardian have even read it?


  2. Great post.

    “But that does not appear to be what the author is saying here.”

    That paragraph is so illogical / impossible to parse, I struggle to believe that there isn’t some gross typo in there. Apart from aiming squarely at dastardly lay readers, it’s hard to know what the rest was really trying to convey. Your interpretation (and so consequent issues with that expression) may well be on the mark, but if so it seems bizarre to me. I can’t think of any alternate meaning that is less bizarre though. Hence my grasping at the straw of typo or basic misstatement.


  3. Mark,

    Fortunately, the introduction provides a good roadmap for the rest of the document. That way, I only needed to skim read the majority and focus in on the sections that were likely to be of greatest interest to me. Even so, I can’t say it was a joy to read.

    Andy,

    Yes, it was the downright illogicality of some of the stuff that inspired me to start with the Sherlock Holmes quote. I don’t think it is just a case of typos (although I accept that I am citing a draft that says ‘do not cite’). I think the thinking is befuddled and no amount of further review is going to sort that out. And I do get annoyed when they start pontificating upon how the lay person doesn’t understand these things. They don’t seem to appreciate that, as far as uncertainty and risk are concerned, they are talking about themselves.


  4. Dfhunter,

    By providing the link to the Løhre paper you made me go and read it 😦

    As I suspected, he is trying to suggest that the confidence with which a statement is made is the more important expression of uncertainty, but the IPCC’s preference for only making statements that it can be confident in is backfiring, because it then means that they can only make imprecise statements. Strangely, the paper thinks that this is leading the lay reader to think that there is subjective uncertainty where there is none. For example, the statement, “we are certain that we have very little clue” is being misread as a statement of subjective uncertainty, because the reader is instead supposed to be most impressed by the use of the word ‘certain’!

    The fact is that the uncertainty is a function of both the confidence level and the imprecision. But more to the point, this is Gleick’s folly, since Løhre doesn’t seem to appreciate that probability distributions and confidence intervals are the mathematical apparatus used to analyse aleatory uncertainty, and yet he is trying to use it to reflect upon all uncertainty, including the epistemic. To do that, one also has to consider the evidential weight behind the shape of a distribution.

    It speaks volumes that the IPCC cites a bunch of social scientists as experts on this subject.


  5. Oh I wish we could end as the song does:

    Wise at last, my eyes at last,
    Are cutting you down to your size at last
    Bewitched, bothered and bewildered – no more


    I’m feeling in a conciliatory mood this morning, so I think I will backtrack slightly from my earlier comment. To be fair, I don’t think Løhre is making out that confident (wide interval) statements are more important than precise ones, but he is saying that they are more objective:

    “With interval predictions, a wider interval allows for a greater degree of objective certainty (more hits and fewer misses). Even if the exact number of hits versus misses can be assessed only retrospectively, after the outcomes are known, this general relationship can be claimed prospectively on purely logical grounds. Subjective certainty, however, might not increase with interval width.”

    Actually, on purely logical grounds, that’s a load of bunkum. Hits versus misses is an entirely aleatory concept.
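
    To put a number on that, here is a quick toy simulation (my own set-up, not Løhre’s) showing that the ‘wider means more hits’ logic only works when the forecaster’s model is centred on the truth, i.e. when the residual uncertainty really is aleatory. Introduce an epistemic error and the relationship collapses:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    outcomes = rng.normal(0.0, 1.0, size=10_000)  # the 'truth' being predicted

    def hit_rate(centre, half_width):
        """Fraction of outcomes falling inside [centre - w, centre + w]."""
        return np.mean(np.abs(outcomes - centre) < half_width)

    # A correctly centred model: widening the interval does buy more hits.
    # A systematically biased model (centre = 4): widening barely helps.
    for w in (0.5, 1.0, 2.0):
        print(f"half-width {w}: correct {hit_rate(0.0, w):.2f}, "
              f"biased {hit_rate(4.0, w):.2f}")
    ```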


  7. John – “It speaks volumes that the IPCC cites a bunch of social scientists as experts”

    did you also notice who they (3 Department of Psychology types) regard as “lay people” ?

    TABLE 1. Demographics for the samples used in the different experiments.
    Expt no. | n   | Sample                                 | Mean age (SD) | Female | Male
    1        | 81  | University of Essex students           | 24.0 (6.5)    | 80.2%  | 19.8%
    2        | 201 | Amazon Mechanical Turk                 | 37.9 (12.0)   | 51.7%  | 48.3%
    3        | 238 | Amazon Mechanical Turk                 | 37.7 (11.2)   | 47.9%  | 52.1%
    4        | 302 | Amazon Mechanical Turk                 | 34.6 (10.4)   | 44.4%  | 55.6%
    5        | 101 | University of Essex, snowball sampling | 28.0 (13.1)   | 36.6%  | 62.4%
    6        | 105 | University of Oslo students            | 23.1 (4.9)    | 76.2%  | 23.8%

    seems us old codgers are excluded as usual!!!

    (had to look up “snowball sampling” – 1st hit – “Snowball sampling is a recruitment technique in which research participants are asked to assist researchers in identifying other potential subjects”)


  8. Dfhunter,

    It is interesting that the (psychology) student sample should be so heavily skewed towards female, given that the conclusion is not claimed to be gender-specific. Also, I see that the snowball sampling is heavily skewed towards male, presumably as a result of the students’ partners being co-opted. The other thing worth noting is that psychology students are considered to be lay people in understanding how uncertainty works and yet the psychology professors who conducted the experiment see themselves as experts. All the evidence that I have seen suggests that, when it comes to the field of psychology, both professor and student alike fall into the lay person camp.

    Anyway, at the end of the day this is just another attempt by psychologists to construct the evidence for a cognitive bias. The claim seems to be that the uncertainty inherent in a wide interval weighs more heavily with the lay person than the uncertainty inherent in a statement made with incertitude. This may or may not be true, but it does not alter the fact that the quote appearing in AR6 is a load of rubbish, because it claims that lay people (only) misinterpret confident, vague statements as being subjectively uncertain, when the important thing is that they are, in fact, objectively certain.


  9. you have to wonder how China will fit into the report –

    “In another study, British lay readers interpreted uncertainty somewhat differently from IPCC guidance, but Chinese lay people reading the same uncertainty translated into Chinese differed much more in their interpretations (Harris et al., 2013).”

    never read further because every page has “Do Not Cite, Quote or Distribute” at the bottom – so I just made up that bit “honest gov, it’s old news from 2013 anyway”


  10. ok – in for a penny in for a pound.

    further reading & in – Final Government Distribution Chapter 1 IPCC AR6 WGI

    No 10 – These are supported by key institutional values, including openness, ‘organized scepticism,’ and objectivity

    see, we do get a seat at the table – ‘organized scepticism,’ – I nominate errr, somebody on cliscep.

