“Immaturity is the inability to use one’s own understanding without direction from another. This immaturity is self-incurred if its cause is not lack of understanding, but lack of resolve and courage to use it without another’s guidance.”  —  Immanuel Kant

What do you believe gives you confidence in your confidence? For a lot of people, the answer lies in the number of individuals who share their confidence. But can you be confident that this is a good basis for confidence in confidence? Would you be more confident that this was a good basis if enough people agreed with you that it was? Well, if you were an author for one of the IPCC’s Assessment Reports, you certainly would be – or you would if you took any notice of the IPCC’s guidance on this matter, the confidently titled “IPCC AR5 guidance note on consistent treatment of uncertainties: a common approach across the working groups”.

The IPCC’s guidance note is a document of central importance since it explains the process by which the AR authors are to arrive at the attributions of uncertainty that you read in the IPCC’s various proclamations – attributions such as: “Past emissions alone are unlikely to raise global-mean temperature to 1.5°C above pre-industrial levels but past emissions do commit to other changes, such as further sea level rise (high confidence)”. And, given how important the note is, I thought it might be worthwhile for me to take you through its main points, commenting along the way upon its various deficiencies and errors, and thereby explain to you why you should never be confident in the IPCC’s confidence.

The train not arriving at platform one

So let us start, as one always should, by defining one’s terms. Except, here we have a big problem, because the one thing that the AR5 guidance note signally fails to do is define its terminology. Yes, there are plenty of references to variables such as ‘confidence’, ‘risk’, ‘likelihood’ and ‘uncertainty’ but nowhere are these terms defined – even though the guidance note has the sole purpose of standardising upon how levels of said variables are to be described. But let us not allow ourselves to be put off by a little detail such as a total lack of definition. Let’s crack on regardless. You have a long way to go yet, dear reader, so please don’t give up on me now when we haven’t even left the station.

In the absence of clear definition, we must instead start with what the guidance note has to say regarding the metrics for uncertainty:

“The AR5 will rely on two metrics for communicating the degree of certainty in key findings:

Confidence in the validity of a finding, based on the type, amount, quality, and consistency of evidence (e.g., mechanistic understanding, theory, data, models, expert judgment) and the degree of agreement. Confidence is expressed qualitatively.

Quantified measures of uncertainty in a finding expressed probabilistically (based on statistical analysis of observations or model results, or expert judgment).”

This may seem innocuous enough, but let me say straight away that there is no reason why confidence in the validity of a finding cannot be quantitatively assessed (I’ll explain how later) and there are plenty of non-probabilistic methods available to evaluate uncertainty (although I say this whilst labouring under the disadvantage of not knowing what the guidance note means exactly by ‘uncertainty’). Furthermore, there is here the first hint of a big problem: Why is the ‘degree of agreement’ deemed important as a metric for uncertainty?

We must agree to agree

After a short diversion into risk management issues, such as the importance of focusing upon high impact consequences that have low probabilities, the note gets down to the business of describing the process by which uncertainties are to be considered, evaluated and communicated. It is a guidance that includes a good deal of sensible advice, such as:

“Determine the areas in your chapter where a range of views may need to be described, and those where the author team may need to develop a finding representing a collective view. Agree on a moderated and balanced process for doing this in advance of confronting these issues in a specific context.”

Just how this fits in with the IPCC’s determination to establish consensus at all costs1 is unclear but, rather than get bogged down by such matters, let us quickly move on to the heart of the matter: the definition of a so-called ‘calibrated language’ and how it should be used to convey levels of confidence. The following guidance is provided:

“Use the following dimensions to evaluate the validity of a finding: the type, amount, quality, and consistency of evidence (summary terms: ‘limited’, ‘medium’, or ‘robust’), and the degree of agreement (summary terms: ‘low’, ‘medium’, or ‘high’).”

These two dimensions (evidential weight and levels of agreement) are then used to construct a 3×3 matrix, with the bottom left cell representing the combination of ‘limited evidence’ and ‘low agreement’ – this is the cell corresponding to minimum confidence. The top right cell (‘robust evidence’ combined with ‘high agreement’) corresponds to maximum confidence. To standardise upon the expression of confidence, the guidance note says:

“A level of confidence is expressed using five qualifiers: ‘very low’, ‘low’, ‘medium’, ‘high’, and ‘very high’. It synthesizes the author teams’ judgments about the validity of findings as determined through evaluation of evidence and agreement.”
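
To make the scheme concrete before criticising it, here is a minimal sketch of the two-dimensional matrix in Python. Be warned that the guidance note only fixes the two axes and the five qualifiers; the cell-to-qualifier mapping used below (confidence rising along the diagonal) is my own assumption, made purely for illustration.

```python
# Sketch of the AR5 uncertainty matrix: evidence level x degree of agreement.
# NOTE: the cell-to-qualifier mapping is an illustrative assumption; the note
# itself only defines the two axes and the five confidence qualifiers.

EVIDENCE = ["limited", "medium", "robust"]     # type, amount, quality, consistency
AGREEMENT = ["low", "medium", "high"]          # degree of agreement
QUALIFIERS = ["very low", "low", "medium", "high", "very high"]

def confidence_qualifier(evidence: str, agreement: str) -> str:
    """Assumed mapping: confidence rises with the sum of the two indices."""
    score = EVIDENCE.index(evidence) + AGREEMENT.index(agreement)  # 0..4
    return QUALIFIERS[score]

# The feature complained of below: agreement alone can shift the qualifier.
print(confidence_qualifier("limited", "high"))  # 'medium', despite limited evidence
print(confidence_qualifier("robust", "low"))    # 'medium', despite robust evidence
```

Under any such reading, one can buy extra confidence simply by improving the agreement, which is precisely the problem discussed next.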

By this stage, if not before, you should definitely be asking why the “type, amount, quality, and consistency of evidence” is insufficient in itself to determine levels of confidence. Why is the extra dimension (degree of agreement) required?

Well, there is a very good answer to this – it isn’t required, and one only has to consider two implications of the IPCC’s uncertainty matrix to recognise that something has gone horribly wrong. Firstly, one has to wonder how one can trust high levels of agreement when the evidence is limited. Secondly, one has to question what legitimacy there is to having low levels of agreement in the face of robust evidence.

In reality, the level of agreement is not an orthogonal variable that can be treated separately from the robustness of evidence. If the evidence is robust then disagreement should be low, or something very odd is happening. Moreover, it is only when the data is sparse, and expert opinion starts to serve as a substitute, that levels of agreement even become relevant. That said, even when the experts do agree, that agreement is already factored into the assessment of evidential weight; it still can’t be treated as a second dimension in the assessment of uncertainty.

Except, we keep coming back to a fundamental problem – the guidance note fails abjectly to define what it means by ‘uncertainty’. And as failures go, this is a humdinger. It leaves one with the nagging thought that perhaps the IPCC has dredged up a definition from somewhere, for which consensus can be used as a metric applying independently of evidential weight. Actually, that is precisely what I think it has done. For my next piece of evidence, m’lud, I offer you Chapter 2 of AR5: “Integrated Risk and Uncertainty Assessment of Climate Change Response Policies”. In section 2.6.2 we are offered definitions of uncertainty that include ‘paradigmatic uncertainty’ and ‘translational uncertainty’, such that:

“Paradigmatic uncertainty results from the absence of prior agreement on the framing of problems, on methods for scientifically investigating them, and on how to combine knowledge from disparate research traditions. Such uncertainties are especially common in cross-disciplinary, application-oriented research and assessment for meeting policy objectives (Gibbons, 1994; Nowotny et al, 2001).”

“Translational uncertainty results from scientific findings that are incomplete or conflicting, so that they can be invoked to support divergent policy positions (Sarewitz, 2010). In such circumstances, protracted controversy often occurs, as each side challenges the methodological foundations of the other’s claims in a process called ‘experimenters’ regress’ (Collins, 1985).”

You’ll not be surprised to learn that both of the above are rather niche definitions loitering in the bowels of the sociology of science. In both instances, emphasis is placed upon the extent to which dispute exists between social, political or ideological groups, and so to call them types of uncertainty is stretching the point somewhat. If it is uncertainty, then it is of a kind that can be reduced simply by eradicating or discrediting one of the groups concerned, and that should be obvious to the IPCC. In treating such dispute as a legitimate class of uncertainty, it is unsurprising that the IPCC should then treat consensus as an appropriate metric, thereby inviting the involvement of factors that have much more to do with politics, sociology and cultural bias than they do with the objective evaluation of data. The IPCC matrix of uncertainty smacks of self-incurred immaturity, in which there is too much focus upon social cohesion and not enough focus upon the evidence.

Confidence measured objectively

But, if we are not to include consensus in our calculation of uncertainty, how do we then calculate confidence values based upon evidential weight alone? I’m glad you asked. Here is how:2

The first thing to appreciate is that the world is full of possibilities, and current evidence may simultaneously support belief in any or all of them. This ambivalence leads to uncertainty, but it is not the strength of evidence for a particular possibility that matters; it is its strength relative to the alternative possibilities. A possibility that is supported by several sets of evidence will be weighted more heavily than one that is supported by only a single set. As evidence is collected, uncertainty may be reduced if it further supports a promising possibility, but uncertainty will be increased if it supports a previously unsubstantiated idea.

This discordance can be analysed formally using a branch of statistics referred to as possibility theory. The range of possibilities supported by the available evidence is represented by a possibility distribution function (π), constructed by accruing evidential weighting. A possibility distribution is not to be confused with a probability density function (pdf), for which the probability of a given outcome, p(x is A), is determined by the area under the pdf for which the variable x lies within the proposed set of alternatives A. In contrast, Possibility(x is A) is given by the highest value attained by the possibility distribution over the range for which x lies within A. Furthermore, there is a complementarity in probability theory that does not exist in possibility theory. Whereas, in probability theory, p(x is A) + p(x is not A) = 1, the area under a possibility distribution may exceed 1 (although no single possibility value may do so). Nevertheless, a form of complementarity does exist in possibility theory, albeit by introducing the concept of necessity, i.e. an evidential weighting that is calculated by taking into account only the evidence that exclusively supports the proposition. Complementarity in possibility theory is then given by:

Possibility(x is A) = 1 – Necessity(x is not A)

and:

Possibility(x is not A) = 1 – Necessity(x is A)

Finally, a measure of confidence can be calculated by considering the extent to which Possibility(x is A) differs from Possibility(x is not A), i.e.:

Confidence(x is A) = Possibility(x is A) – (1 – Necessity(x is A))

Or, simplifying:

Confidence(x is A) = Possibility(x is A) + Necessity(x is A) – 1
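
For the numerically inclined, here is a minimal sketch of that calculation in Python. The possibility values assigned below are invented purely for illustration; in practice the distribution would be built up by accruing the actual evidential weightings, as described in the referenced paper.

```python
# Sketch: confidence from a discrete possibility distribution.
# Possibility(x is A) = max of pi over the outcomes in A
# Necessity(x is A)   = 1 - Possibility(x is not A)
# Confidence(x is A)  = Possibility(x is A) + Necessity(x is A) - 1

pi = {          # illustrative possibility values (best-supported outcome normalised to 1)
    "A1": 1.0,
    "A2": 0.4,
    "A3": 0.2,
}

def possibility(dist, proposition):
    return max(dist[x] for x in proposition)

def necessity(dist, proposition):
    rivals = [x for x in dist if x not in proposition]
    return 1.0 - (possibility(dist, rivals) if rivals else 0.0)

def confidence(dist, proposition):
    return possibility(dist, proposition) + necessity(dist, proposition) - 1.0

A = {"A1"}
print(possibility(pi, A))  # 1.0
print(necessity(pi, A))    # 0.6 (limited only by the best-supported rival, A2)
print(confidence(pi, A))   # 0.6, and it grows as evidence erodes the rival possibilities
```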

So there you go. Confidence calculated without a single opinion poll in sight.

When reflecting upon the above, I like to think of a possibility distribution as evidential terrain, in which one would like one’s proposition to be sat upon a majestically isolated peak surrounded by a flat expanse. But a word of warning here. The evidential terrain is not the territory – it is the map. Furthermore, it is a map that has been drawn up by explorers who may not have visited all areas, and so vital high-ground may be missing. It is easy to be confident in a homespun proposition if one steadfastly stays at home.

Alternatively, if you don’t feel comfortable abandoning probability theory, you may be interested to learn that you can calculate the uncertainty (H) represented by a given probability distribution using the formula:

H = – Σ p ln(p)

It is no coincidence that this is Shannon’s equation for the calculation of entropy, since the concept of entropy is based upon the number of possible configurations that a system may adopt.
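
Again, purely by way of illustration, the calculation is trivial to perform; the probabilities below are invented simply to show how the entropy falls as the evidence concentrates belief upon one possibility.

```python
import math

def shannon_entropy(probabilities):
    """H = -sum(p * ln(p)), in nats; zero-probability outcomes contribute nothing."""
    return -sum(p * math.log(p) for p in probabilities if p > 0)

# Evidence spreads belief evenly over four possibilities: maximum uncertainty.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # ~1.386 (= ln 4)

# Evidence strongly favours one possibility: uncertainty falls.
print(shannon_entropy([0.85, 0.05, 0.05, 0.05]))   # ~0.588
```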

So, whether one is focused upon probability or possibility, one still has a means of quantifying confidence simply by analysing the pattern of evidence. There is absolutely no need to introduce a measure that captures the level of dispute between competing political, social or ideological groups. It isn’t the agreement of competing ideologues that matters, it is the agreement of data. If you attempt a calculation of confidence that introduces the former, you will only obscure what the latter is already ably telling you. Needless to say, none of the above methods for calculating confidence (i.e. methods based purely upon evaluation of evidence) feature in either AR5 or its guidance note.

Probability to the rescue?

I have written thus far, at some length, of the IPCC’s mishandling of the concept of confidence when it is used as a metric for uncertainty. However, you may recall that the guidance note had referred to the existence of two available metrics, the second being a quantified, probabilistic measure “based on statistical analysis of observations or model results, or expert judgment.” Unfortunately, however, at this point the guidance simply introduces more confusion, in which the concepts of uncertainty and likelihood are conflated. Consequently, rather than probability being related to uncertainty, as per Shannon’s equation, it becomes its synonym. Furthermore, in seeking a ‘calibrated language’ for the expression of probability, quantified levels become arbitrarily defined, thus:

  • 99–100% probability = Virtually certain
  • 90–100% probability = Very likely
  • 66–100% probability = Likely
  • 33–66% probability = About as likely as not
  • 0–33% probability = Unlikely
  • 0–10% probability = Very unlikely
  • 0–1% probability = Exceptionally unlikely

Nowhere in the accompanying text is there any attempt to explain how levels of likelihood are related to uncertainty. For example, there is nothing to explain that the point of maximum probabilistic indifference (‘About as likely as not’) corresponds to maximum epistemic uncertainty.3 In fact, since the table refers to the likelihood of something happening, it is risk that becomes the more relevant concept – not uncertainty. To add to the confusion, the likelihood categories are defined using extended ranges of probability (thereby introducing imprecision) that overlap (thereby introducing ambiguity). The result is a conceptual dog’s dinner that leaves the reader unable to discern whether the IPCC is advocating risk aversion or uncertainty aversion, or indeed appreciates that a distinction can be made. I’m afraid that, in the hands of the IPCC, there is no redemption to be found in quantified probability.4
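
If a demonstration of that ambiguity is needed, a few lines of Python will do; the ranges are those of the table above and the code is merely illustrative.

```python
# The calibrated likelihood scale from the guidance note (probabilities in %).
LIKELIHOOD = {
    "virtually certain":      (99, 100),
    "very likely":            (90, 100),
    "likely":                 (66, 100),
    "about as likely as not": (33, 66),
    "unlikely":               (0, 33),
    "very unlikely":          (0, 10),
    "exceptionally unlikely": (0, 1),
}

def labels_for(p):
    """Return every calibrated term whose range contains the probability p (%)."""
    return [term for term, (lo, hi) in LIKELIHOOD.items() if lo <= p <= hi]

print(labels_for(95))  # ['very likely', 'likely']: two labels for one probability
print(labels_for(5))   # ['unlikely', 'very unlikely']: likewise at the low end
```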

Authority has the final word

It is disconcerting enough that a document that sets the standard by which uncertainty shall be evaluated and communicated by the IPCC should attribute such importance to consensus. But it is no less disconcerting to see that it fails even to make a clear distinction between the concepts of risk and uncertainty, or to explain how they are related. It is tempting to speculate that, if the authors had taken the time to define their terminology, perhaps such confusion and sleight of hand could have been avoided. The irony is that, given the IPCC’s misplaced reverence towards consensus, the guidance note has been readily adopted, complete with its illogic and conceptual confusion.

The BBC in its magnificence would do well to invite an independent expert on to one of its self-esteemed programmes to discuss the IPCC’s treatment of uncertainty, placing it under the scrutiny I have applied here. But I can’t see that happening. The very fact that the expert would be an outsider, i.e. somebody that Evan Davis knows isn’t one of the IPCC scientists, would mean that (in the eyes of the BBC) the individual speaks with no authority and so couldn’t possibly know what they were talking about. I would obviously prefer to think that I do know what I am talking about, but not because Mr Davis might say so – may I be so immodest as to propose that it could be because the evidence suggests that I do? It’s just a shame that evidence carries so little weight nowadays.

Notes:

[1] See paragraph 10 of “Procedures Guiding IPCC Work”, which states that “In taking decisions, and approving, adopting and accepting reports, the Panel, its Working Groups and any Task Forces shall use all best endeavors to reach consensus.”

[2] A full explanation, containing several diagrams to help you visualize what I am saying, may be found in this excellent paper.

[3] In fact, the guidance note warns against treating the probabilistic discord as a measure of epistemic uncertainty, leaving the reader to speculate that perhaps the IPCC considers uncertainty in model predictions to be simply a matter of inherent variability.

[4] To save the reader from further grief, I will not pontificate upon the linguistic vagueness one always invites when using degree adjectives such as ‘likely’, and how arbitrary boundaries are just a futile attempt to militate against the sorites paradox.

28 Comments

  1. I won’t say I completely understood everything in this post, but I do think I got the gist of it.

    Perhaps measuring the correlation of “evidential weight and levels of agreement” (deviation from the diagonal or corners of the matrix) could provide a way to quantify “incoherence”.


  2. I wouldn’t say I completely understood everything in this post either 🙂

    Part of the problem with the possibility theory section is that I would need to draw several diagrams to convey some of the technicalities, and I shied away from providing these diagrams because they were already available in the referenced paper. Furthermore, there are some choppy philosophical waters associated with uncertainty, so if I failed to successfully negotiate them I can only apologise. All I can say is that, by confusing myself and my readers, at least I would be in good company.

    That said, your proposal that deviation from the diagonal could be used as a measure of incoherent thinking is a good one. So maybe I haven’t failed too much in getting my point across.

    And whilst I’m on here, I might as well say a little bit more regarding the tautology in the opening paragraph.

    The expression ‘confidence in confidence’ may be tautological but it is precisely what the AR5 guidance note’s two-dimensional uncertainty matrix proposes. As far as the IPCC is concerned, it is not enough that the evidential weight may support confidence — such confidence also has to be endorsed through its consensual sharing, i.e. there has to be confidence in the confidence. Not only does the guidance note propose that consensus forms a good basis for having confidence in confidence, the consensual use of it as an IPCC standard is the manifestation of its own proposition. But consensus can generate confidence out of any old crap.


  3. The matter of uncertainty regarding our future climate reminds me of some thoughts from renowned philosopher Mortimer Adler.

    On the Difference Between Knowledge and Opinion

    Knowledge refers to knowing the truth, that is understanding reality independent of the person and his/her ideas. By definition, there is no such thing as “false knowledge.”

    When I show you two marbles then add two more marbles and ask you how many marbles there are, the answer is not a matter of opinion. You have no freedom to assert any opinion other than the answer “four”. By the axioms of mathematics we know the true answer to this question.

    A great many other issues in human society, politics and culture are matters of opinion, and each is free to hold an opinion different from others. In such cases, the right opinion is usually determined by counting noses with the majority view ruling.

    Note that school children are taught right opinions. That is, they are told what their elders and betters have concluded are the right answers to many questions about life and the world. Those children do not yet possess knowledge, because as Socrates well demonstrated, you have knowledge when you have both the right opinion and also know why it is right. Only when you have consulted the evidence and done your own analysis does your opinion serve as knowledge for you, rather than submission to an authority.

    Summary: five criteria for distinguishing between knowledge and opinion:

    1. Whether or not everyone must agree.
    2. Doubt and belief are relative only to opinion, never to knowledge.
    3. We can have freedom of thought only about matters of opinion, never knowledge.
    4. Consensus differentiates between knowledge and opinion; only with respect to opinion do we talk about consensus.
    5. Matters of opinion are subject to conflict, knowledge is not.

    By all criteria, global warming/climate change is a matter of opinion, not knowledge. Not surprising, since no one knows the future. Why are they so certain when they rule out any future periods of cooling? Only warming is allowed, since CO2 keeps rising and they believe it must cause higher temperatures.


  4. Another pertinent comment from Cal physics professor Richard Muller:

    I like to ask scientists who “believe” in global warming what they think of the data. Do they believe hurricanes are increasing? Almost never do I get the answer “Yes, I looked at that, and they are.” Of course they don’t say that, because if they did I would show them the actual data! Do they say, “I’ve looked at the temperature record, and I agree that the variability is going up”? No. Sometimes they will say, “There was a paper by Jim Hansen that showed the variability was increasing.” To which I reply, “I’ve written to Jim Hansen about that paper, and he agrees with me that it shows no such thing. He even expressed surprise that his paper has been so misinterpreted.”

    A really good question would be: “Have you studied climate change enough that you would put your scientific credentials on the line that most of what is said in An Inconvenient Truth is based on accurate scientific results?” My guess is that a large majority of the climate scientists would answer no to that question, and the true percentage of scientists who support the statement I made in the opening paragraph of this comment, that true percentage would be under 30%. That is an unscientific guestimate, based on my experience in asking many scientists about the claims of Al Gore.

    https://rclutz.wordpress.com/2017/02/15/meet-richard-muller-lukewarmist/


  5. Nice post. Indeed consensus is the result of a social process, not a scientific one, whether or not documents exist that attempt to make the process look scientific. A loose consensus is only useful in science as a temporary marker in immature domains, for where folks think the last assault line on the mountain of uncertainty got to, always held as completely challengeable, indeed always completely sacrificial should a better route upwards be found (which routes should always be encouraged). And never to be taken as more than the necessary compromise that any summarising is subject to. But if any formality appears around consensus in science (officially or unofficially), you know the process has to a greater or lesser extent been subverted by culture. Where science is replicable, no consensus is needed.


  6. Thanks Andy,

    Quite apart from what we might agree upon, regarding the beguiling allure of consensus, I think the AR5 guidance note constitutes something of a mystery. Although I have been critical of the note, it should be appreciated that its list of ‘core authors’ constitutes an impressive parade of talent. Although most would now consider themselves to be professional climate scientists, they come with a variety of backgrounds, including: environmental science, biology, physics, economics, geography, toxicology and statistics. So it isn’t as if I could attribute the document’s deficiencies to a lack of talent or suitable background on the authors’ part. This isn’t a case of denying the right people the authority to dictate the terms. Therefore, one is left with a puzzling question: With so much talent in the room, how did they manage to produce such a flawed document?

    As a case in point, one of the authors cites the following as a key area of research interest:

    “Uncertainty propagation in complex non-linear Earth system models. Development of methodologies for the derivation of robust (climate policy) advice from coupled complex models under heterogeneous uncertainty. In that context: Generalizations of probability theory to softer (generally non-additive) measures such as possibility functions.”

    And yet, in the context of the guidance note-writing process, the individual concerned failed totally to persuade anyone in the room to consider the applicability of ‘measures such as possibility functions’ when determining levels of confidence. They had the right expert on their team and still managed to fluff their lines. Riddle me that!

    And another thing – the same expert on ‘uncertainty propagation in complex non-linear Earth system models’ seemed powerless to stop his colleagues from citing “von Storch, H. and F.W. Zwiers, 1999” as their principal source on the subject of statistical analysis, a tome which received the following review from “Computers and Geosciences” magazine:

    “The major weakness is that nonlinear methods in statistics are neglected, although the authors stress climate’s nonlinearity in the introduction.”

    Sorry Mr Davis, I’m doing your job for you again.


  7. John:

    ‘Although I have been critical of the note, it should be appreciated that its list of ‘core authors’ constitutes an impressive parade of talent.’

    But, notwithstanding a few individuals who I presume are exceptions, probably not the right talent to address the problem at hand? And not only that, but mainly talent that was already emotively convinced about the position that climate science needed to take anyway (which doesn’t mean their subconscious told their conscious about this). Group think / culture finds ways to propagate and amplify itself; if it was already established within the team assembled to write the doc, I think this may be sufficient to explain your mystery. The annoying individual with the right approach was probably just out-grouped because he’s not one of ‘us’.


  8. Andy,

    Your suggestions have some plausibility but, ultimately, one can only speculate how the group managed to produce its results if one is not a party to the process. In the case of the individual with the interest in possibility theory, it is highly likely that he could not carry it through since he would be battling against hundreds of years of orthodoxy in which the additive properties of probability are taken as sacrament. Not wishing to be too unsympathetic, but the words of Kant and ‘lack of resolve and courage’ spring to mind. With his avowed interests and beliefs, I would not have put my name to the document.


  9. John:

    ‘…hundreds of years of orthodoxy in which the additive properties of probability are taken as sacrament…”

    This is cultural too, albeit a different flavour, and indeed you describe using religious terminology 😉


  10. Absolutely, Andy. But when you see culture playing out in a specific setting it is intriguing to see how human psychology operates. One might expect intelligence and objectivity to prevail but one should always prepare to be disappointed. As George Orwell said in a different context, “One has to belong to the intelligentsia to believe things like that: no ordinary man could be such a fool.”


  11. Ron,

    I think there is a common misconception amongst CAGW sceptics that the IPCC mistakes consensus for evidence (or, as you might put it, opinion for knowledge). However, the AR5 guidance note makes it quite clear that the IPCC does not labour under such a misunderstanding, since it is explicit in treating ‘degree of agreement’ as a separate issue from evidential weight. In fact, the IPCC separates consensus and evidence to such an extent that they see nothing odd in having strong agreement in the presence of weak evidence! As their uncertainty matrix makes clear, there are two routes to increased confidence: either improve the evidence or reduce the level of disagreement over the existing evidence. Personally, I would say that the latter is a route to the sort of confidence one can find in subjugation.

    The IPCC’s mistake, therefore, is not to class consensus as evidence but to treat it as an equivalent means of reducing uncertainty, without regard for how the two are related. Furthermore, let it be understood that the IPCC actually sees nothing wrong in treating disagreement as a measure of uncertainty. This is highly questionable, however. After all, some people think Marmite is lovely and others disagree, but where’s the uncertainty?


  12. In matters of disagreement, it doesn’t take long before Marmite (or upside down Vegemite) is paraded about. Just like Hitler and the Nazis. There should be a law for it.


  13. Alan,

    I would be interested in your assessment of the AR5 guidance note, particularly with respect to the following:

    • Its failure to define its terms
    • Its inappropriate use of ‘degree of agreement’ as a metric for uncertainty
    • Its failure to recognise the true relationship between evidential weight and justified levels of consensus
    • Its propensity to allow political, social and ideological factors to influence what should be an evidence-based assessment of uncertainty levels
    • Its apparent disregard for all but probabilistic methods for quantifying uncertainty
    • The unjustified and arbitrary decision to treat likelihood, but not confidence, as quantifiable
    • Its simplistic treatment of likelihood as a measure of uncertainty
    • Its failure to draw attention to the distinction between aleatory and epistemic uncertainties, thereby encouraging confusion between risk aversion and uncertainty aversion
    • Its blithe acceptance of linguistic vagueness in its calibrated language
    • The arbitrary manner in which likelihood is quantified
    • The purpose and validity of introducing imprecision and ambiguity into the definition of likelihood levels
    • The note’s endorsement of a statistics reference that proposes linear statistics theory, despite the non-linear behaviour of the climate system
    • The pernicious role of authority in the establishment of standards

    I am less interested in your objections to my use of an emblematic example, as I attempted to highlight one of the dangers of using levels of disagreement to measure uncertainty.


  14. John, there is another dimension to this, as Caleb Rossiter explained recently:

    A powerful publicity machine magnifies the alarm, bombarding citizens with exaggerations and claims of certainty that are proven wrong as you dig down to their underlying scientific studies:

    Public figures, news editors, and commentators make claims that are more alarmist than what individual IPCC authors say at the release of the report.
    Individual IPCC authors make claims at the release of the report that are more alarmist than what the official press release says.
    The official press release makes claims that are more alarmist than what the report’s summary for policy-makers says.
    The summary for policy-makers makes claims that are more alarmist than the various chapters of the reports.
    The chapters of the report make claims that are more alarmist than the studies they reference in the footnotes.
    The studies referenced in the footnotes are often actually peer-reviewed and generally make cautious claims about a possible trend spotted in one or a small number of locations or in a global computer model.

    Both types of studies are more speculative than definitive because, as they always acknowledge in the fine print, they are based on highly-uncertain measurements of highly-complex phenomena with many interacting causes, of which warming gasses generated by human activity are only one, and often a minor component.

    For governments to make policy on such a hierarchy of exaggeration brings to mind James Madison’s warning: “A popular Government, without popular information, or the means of acquiring it, is but a Prologue to a Farce or a Tragedy; or perhaps both.”

    https://rclutz.wordpress.com/2018/11/02/un-horror-show/


  15. John I acknowledge the inappropriate irreverence of my Marmite comment. To make due recompense I take on the task you set me to address questions related to your post. First, however, I must admit that, although I read it, in order to become informed, it is not a topic that interests me greatly or to which I have paid much attention in the past. So consider even my few responses ill considered.

    • Its failure to define its terms
    A fatal flaw. Surprising that this deficiency was not redressed. Surely it has been drawn to the attention of the authors?

    • Its inappropriate use of ‘degree of agreement’ as a metric for uncertainty
    I understand your objections and agree. However, commonly there is a strong correlation between the two. But it is similar to estimating tropospheric temperature with an anemometer.

    Most other questions are beyond my pay grade. I doubt that, even with considerable study, I could offer an informed opinion. I was a geologist who came to use the most basic of statistical methods late in his career; most of the time I was a “stamp collecting” type of scientist. I read through many contributions here, understanding one word in twenty, struggling to comprehend. Please excuse the occasional attempt at levity, intended to lighten the load.

    Your last, somewhat loaded, question provoked some brain cells:
    • The pernicious role of authority in the establishment of standards
    One might argue that all standards, almost by definition, have to be proposed and maintained by authority. I believe what you are asking is a question relating to the maintenance of standards that are questionable, and by an agency that fails to acknowledge different, and sometimes better informed, opinions.
    The Royal Society had a superb motto, before they trashed it.


  16. Ron,

    “hierarchy of exaggeration”

    I like it. It is certainly the case that confidence becomes more exaggerated as one moves further into the political zone. Obviously, my post is not focused upon this effect. I just thought there was something to be gained by pointing out where I believe the seeds are sown – a group of talented people put their heads together and come up with a document that attempts to clarify but (for this reader at least) succumbs instead to the imperatives of consensus. Since this was the IPCC’s declared purpose, I suppose one shouldn’t be surprised by this.


  17. Alan,

    Unfortunately, your quip was ill-timed since it arrived at a point where I was becoming increasingly frustrated by the failure of my attempts to get a lively debate going on a subject dear to my heart. With all due respect to those who had turned up, the party still lacked the buzz I was looking for and the twiglets had hardly been touched. Anyway, I appreciate your contribution, despite the subject-matter not being your particular area of expertise. My response is as follows:

    I don’t think that the authors would actually accept the criticism that they failed to define their terms. However, the problem is that they use words that are in common usage, but do so in a narrow sense. This is the perfect recipe for confusion.

    My main problem with using ‘degree of agreement’ as a metric for uncertainty is as explained in my response to Ron at 10:56am.

    I think you have understood perfectly well my concerns regarding the role of authority. Too often, it isn’t what is said but who has said it. This is how the BBC seems to operate, and this is the principle upon which the guidance note gains its acceptance. I was also quite familiar with this problem during my career. I once wrote a set of risk management procedures for my division. For 10 years my colleagues diligently ignored them. Then, out of the blue, central office issued a garbled set of procedures on the same subject. The very next day, there was a long queue at my office door, comprising colleagues desperate to have the new procedures explained to them so that they could implement them immediately. I told them all to fuck off.

    Now I am retired.


  18. John, why not write up your conclusions about the confusion that has resulted from the absence of firm definitions – employing a few choice examples to illustrate your points? If you suspect the authors of the guidance are likely to argue that definitions are unnecessary, you could counter this ahead of time. It would make a decent technical piece over at Judie’s (where you might attract an informed discussion) but why not try Nature first?
    If you are successful, then you could hit them with another upon the inadvisability of using ‘degree of agreement’ as a metric for uncertainty, which personally I found a much more stimulating discussion. Or you could start with this. Another aspect you might consider is whether you focus entirely upon the Guidance or widen it out; the latter has the obvious advantage of your being able to reference now non-contentious examples where consensus did not equate to certainty.


  19. Alan,

    I’m flattered that you think I could find publication in such outlets but I fear that my lack of academic status, or even appropriate professional qualifications, would stymie such ambition. Besides which, I am currently of a mind to give the whole thing a rest now and use my spare time on something a little more constructive. So, for the time being, I think I will be leaving Geoff, Paul, Jaime, Brad and co to fight the good fight.


  20. John,

    I sent this part of your post-

    “You’ll not be surprised to learn that both of the above are rather niche definitions loitering in the bowels of the sociology of science. In both instances, emphasis is placed upon the extent to which dispute exists between social, political or ideological groups, and so to call them types of uncertainty is stretching the point somewhat.”

    to an associate as I thought it tied in nicely with a post I saw recently about some evils-

    L. Edwards (https://twitter.com/lilianedwards) referenced “5 Giant Evils in the information crisis” 1).
    Meant to ask you your thoughts on the big 5. Number 4 was the one I highlighted last week-

    “Irresponsibility – arises because power over meaning is held by organisations that lack a developed ethical code of responsibility and that exist outside clear lines of accountability and transparency.
    •The use and abuse of platforms is amplifying the reach of misinformation in politics, health, education and more.
    •The absence of transparent standards for moderating content and signposting quality can mean the undermining of confidence in authorities and declining public trust in science and research.”

    1) http://blogs.lse.ac.uk/mediapolicyproject/2018/11/22/truth-trust-and-technology-so-whats-the-problem/


  21. Kakatoa,

    Having read your linked article on the ‘information crisis’, I can see why you see a relevance to my own post. In particular, the article’s allusions to ‘fragmentation’, in which there are ‘parallel realities and narratives’, reminds me of the disputed political and ideological narratives that the IPCC allows into its measurement of uncertainty. The fact that such dispute is seen as one of the evils enabling the information crisis is reason enough, I would have thought, to keep it out of the equation. But not so for the IPCC! There again, who said that the IPCC was only interested in objective evaluation of evidence? After all, it was set up to promote one of the ‘parallel realities and narratives’. To convince its audience, the IPCC has to present a picture in which the disputes have been resolved (remember the IPCC’s directive regarding consensus) and this, the IPCC believes, reduces the uncertainty (or, more accurately, it heightens confidence).

    Just to further underline the primacy of information management within the IPCC’s role, consider, if you will, the linguistic importance of their ‘calibrated language’. Although the guidance note claims to consider matters of uncertainty, it actually uses ‘confidence’ as its chosen terminology. This is significant, since ‘uncertainty’ is a negative term but ‘confidence’ is a positive one. The dictionary definition of confidence is ‘full trust’, so even ‘medium confidence’ still seems a good thing (as opposed to ‘medium uncertainty’ or ‘medium doubt’). In fact, when the IPCC refers to ‘very high confidence’ it is being illiterate, inasmuch as it is saying that there is ‘very high full trust’. Apparently, the question is not whether the IPCC is absolutely right, but just how absolutely right they are! It’s amazing what you can achieve with a judiciously framed question.


  22. Paul,

    Thanks for the heads up; I had just noticed it myself and I have already left a comment at Judith Curry’s blog. My comment summarises the points I made here at CliScep, lifting one or two paragraphs to ease the burden of re-writing.

