If there is something that the climate change debate is certainly not lacking, it is ad hominem, for whilst it is universally disapproved of, it is also ubiquitous to the point of being de rigueur. Take, for example, Peter Gleick’s recent critique of Michael Shellenberger’s latest book. Peter does not waste any time in accusing Michael of stooping to ad hominem, before joining in with the ethical limbo dancing by delivering his own ad hominem in excelsis. Furthermore, in responding to Mike Dombroski’s excellent post on Peter’s critique, I chose to throw in my own brand of personal attack when criticising Peter’s accusations. It was something along the lines of ‘How can anyone so honoured be such an idiot?’ Whilst appearing to be a perfectly reasonable question, this was not actually a reasonable accusation. The point is, Peter, I’m so very, very sorry to have called you an idiot. It was wrong of me and I am writing this post, not only as a penance, but also to explain how such a clever person as yourself could have said such an apparently stupid thing.

The Offending Remark

The statement that had me slapping my forehead in disbelief reads as follows:

“Shellenberger misunderstands the concept of ‘uncertainty’ in science, making the classic mistake of thinking about uncertainty in the colloquial sense of ‘We don’t know’ rather than the way scientists use it to present ‘a range of possibilities’.”

So, according to Gleick, there is scientific uncertainty, and then there is the colloquial concept of uncertainty that only cornucopians and dumb, anti-science deniers use. He says that scientific uncertainty is all about understanding the range of possibilities that nature encompasses. As such, uncertainty is the product of knowledge, and the greater the known uncertainty, the greater the imperative to act pre-emptively. Colloquial uncertainty, on the other hand, is all about stressing ignorance and the importance of not acting until one knows what one is dealing with. Fortunately, as Gleick would have it, we can ignore the colloquial argument because it isn’t scientific, and only an ignorant anti-scientist could be impressed by it.

My initial reaction to all of this was to allow my gob to be overly smacked, before then reaching out for the special keyboard I use to construct my finest invective. However, my subsequent and more considered response is to try unpicking Peter’s statement so that one might get a better idea as to where these ideas are coming from.

Foolishness in Good Company?

Firstly, it has to be conceded that if one searches on the internet for ‘uncertainty in science’, one is presented with the following headline summary:

“But uncertainty in science does not imply doubt as it does in everyday use. Scientific uncertainty is a quantitative measurement of variability in the data. In other words, uncertainty in science refers to the idea that all data have a range of expected values as opposed to a precise point value.”

This is essentially what Gleick appears to be claiming in his Shellenberger critique – almost to the extent that it is tempting to speculate that Gleick got his views by searching for ‘uncertainty in science’ on the internet. The quote actually comes from a website called ‘Visionlearning: Your insight into science’, and is to be found within an article written by a couple of PhDs. So, the first thing to conclude here is that Peter Gleick is certainly not on his own in believing in the concept of ‘scientific uncertainty’ – an uncertainty that, presumably, has to be distinguished from the unscientific variety. Having made this discovery, it behoved me to ask: ‘Where does this idea of a high-standing, scientific uncertainty come from?’ A quick read of the Visionlearning article provided the answer.

Scientific uncertainty, according to the two PhDs, is all about variability in nature and the consequent problems of accuracy and precision in the data that scientists collect and analyse in order to understand the natural world. In summary, theirs is an article on measurement theory and they are alluding to aleatory uncertainty, i.e. uncertainty that reflects natural variability. Such uncertainty is distinct from epistemic uncertainty, which reflects a level of ignorance. Aleatory uncertainty is objectively calculable, and hence supposedly scientific. Epistemic uncertainty is subjective, and so it appears in the eyes of at least some to be an uncertainty unworthy of the epithet ‘scientific’. Heaven forfend that uncertainty in the scientific mind should be interpreted as ‘we don’t know’. That interpretation, surely, would be a classic Shellenberger gaffe!
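The distinction can be made concrete with a toy sketch (the numbers and the measurement scenario are invented for illustration). Aleatory uncertainty is the spread you can characterise by sampling more; epistemic uncertainty is a spread of rival hypotheses that no amount of sampling will shrink – only further evidence will:

```python
import random
import statistics

random.seed(0)

# Aleatory uncertainty: the quantity fluctuates with a KNOWN distribution,
# so repeated sampling characterises the spread objectively.
aleatory_samples = [random.gauss(20.0, 2.0) for _ in range(10_000)]
sample_mean = statistics.mean(aleatory_samples)  # close to 20
sample_sd = statistics.stdev(aleatory_samples)   # close to 2

# Epistemic uncertainty: we do not know which model is right. The spread
# of rival hypotheses is not a noise statistic; only further evidence,
# not more sampling, can reduce it.
candidate_means = [18.0, 20.0, 23.0]  # rival hypotheses, not measurement noise
model_spread = max(candidate_means) - min(candidate_means)

print(f"aleatory: mean ~ {sample_mean:.1f}, sd ~ {sample_sd:.1f}")
print(f"epistemic: model spread = {model_spread}")
```

The point of the sketch is that the first spread narrows as the sample grows, whereas the second stays exactly where it is until someone rules a hypothesis out.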

Putting the Record Straight

If it were Gleick’s view that the aleatory uncertainty underpinning measurement theory is the one true scientific uncertainty, then it would be difficult to know where to even begin criticising him. However, let me try by first pointing out that it is rarely the case that uncertainty neatly, and obligingly, falls fully into one or other of the two categories: aleatory or epistemic. In practice, much uncertainty is a hybrid known as Knightian uncertainty. In fact, if one were lucky enough to be dealing with pure aleatory uncertainty, in which probabilities can be objectively and reliably calculated, this would mean that one could reliably calculate the risk – so much so that people would no longer talk about making a decision under uncertainty; the preferred expression becomes ‘decision-making under risk’.

The essential point to note is that, in the real world of science, data is often missing and expert opinions are rife. These are circumstances in which the probability distributions associated with aleatory uncertainty cannot hope to fully capture the ambiguities; they can’t do this because they are not reliably known. If the only true scientific uncertainty is aleatory, as Gleick appears to be suggesting, then I’m afraid it occupies a regrettably restricted domain – it is that relatively rare situation in which one can be certain just how uncertain one is, and it is the one in which notions of risk become sufficient motivators.

The IPCC’s Treatment of Uncertainty

Even if Gleick were to think that his idea of scientific uncertainty is so prevalent in the real world of science that he can accuse others of colloquialism or classic errors whenever they allude to epistemic uncertainty, then he would certainly have no excuse for failing to note that no one on the IPCC appears to agree with him. In fact, it cannot have escaped anyone’s attention (including Gleick’s) that the IPCC captures the uncertainty associated with its statements by using expressions of likelihood caveated by expressions of confidence. For example:

“Past emissions alone are unlikely to raise global-mean temperature to 1.5°C above pre-industrial levels but past emissions do commit to other changes, such as further sea level rise (high confidence).”

This is, in effect, the probability of a probability (e.g. there is less than a 33% probability of this happening and the probability that we are right about this is believed to be 80%). In so doing, the IPCC is playing the sleight of hand trick of using two concepts of probability simultaneously, i.e. they are expressing the Baconian probability of a Pascalian probability [1]. More to the point, both aleatory and epistemic concepts of uncertainty are being invoked, since the Pascalian probability relates to variability in the real world and the Baconian probability relates to subjective levels of confidence associated with evidential weight in support of a hypothesis [2].
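To see the two-level structure at work, here is a toy calculation. The numerical readings are taken from the parenthetical above – ‘unlikely’ as a Pascalian probability below 33%, ‘high confidence’ as roughly 8-in-10 Baconian belief that the assessment holds – and the assumption of total ignorance in the failure branch is mine:

```python
# Hypothetical reading of the IPCC's calibrated language:
# "unlikely"        -> the event's (Pascalian) probability is below 33%
# "high confidence" -> ~8-in-10 (Baconian) belief that the assessment holds
p_right = 0.8                      # confidence that the assessment is right
p_event_given_right = (0.0, 0.33)  # "unlikely": somewhere below 33%
p_event_given_wrong = (0.0, 1.0)   # total ignorance if the assessment fails

# Combining the two levels by total probability yields only an interval,
# not a point value - the second-order belief does not simply collapse
# into a single first-order number.
lo = p_right * p_event_given_right[0] + (1 - p_right) * p_event_given_wrong[0]
hi = p_right * p_event_given_right[1] + (1 - p_right) * p_event_given_wrong[1]
print(f"P(event) is bounded by [{lo:.3f}, {hi:.3f}]")  # [0.000, 0.464]
```

Note that the upper bound ends up above the 33% that ‘unlikely’ nominally conveys – the residual possibility that the assessment itself is wrong widens the envelope.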

There is nothing really wrong with this as long as everyone is aware of what is going on [3]. It simply demonstrates that (at least in the context of climate science) a properly scientific statement of uncertainty cannot be restricted to the question of the ‘range of possibilities’ that nature seems to be allowing, but must also involve considerations of evidential weight and residual ignorance. Confidence levels stemming from such considerations are far from colloquialisms; they are core to any concept of scientific uncertainty.

Finally, the readiness with which Pascalian probabilities can be objectively quantified (they are, after all, supposed to be capturing nature in the raw) is no reason to put them on the scientific pedestal. Evidential weighting can also be quantified in an objective fashion, though perhaps not so successfully if one restricts oneself to probability. Fortunately, however, there are such things as evidence theories, though you would not think so listening to the IPCC [4].

So What is There to Like?

Gleick says that scientists use uncertainty to refer to ‘a range of possibilities’ and contrasts this to the supposedly unscientific notion of ‘We don’t know’. He can only, therefore, be referring to the range of possibilities afforded by the inherent variability of nature (including tipping points and other fat-tailed distribution hobgoblins). As such, he is alluding to aleatory uncertainty, as encountered in measurement theory, and declaring this to be the true scientific conception of uncertainty. I hope I have done enough to persuade the reader that this is, at best, a simplistic view and, at worst, a naïve and ill-informed one. But this does not make Gleick an idiot. On the contrary, he is actually being quite cunning in making the claim that the really scientific thing to do is to consider all the possibilities, and the really unscientific thing to do is to point to the huge epistemic uncertainties that such leaps of imagination often entail. As such, he chooses to misrepresent the concept of scientific uncertainty, not perhaps because he fails to understand it, but because it suits his advocacy of the precautionary approach. I don’t like it, but I begrudgingly admire the craftiness.

Notes:

[1] Pascalian probability is based on likelihood relative to a criterion of truth and Baconian probability is based on evidential support relative to a criterion of justified belief.

[2] The Visionlearning article also talks of confidence but theirs is entirely in keeping with their aleatory conception of uncertainty:

“Confidence statements do not, as some people believe, provide a measure of how “correct” a measurement is. Instead, a confidence statement describes the probability that a measurement range will overlap the mean value of a measurement when a study is repeated.”

[3] In fact, if I have a problem with the IPCC approach it is that they use levels of consensus as a dimension relevant to the calculation of confidence levels, and this is stretching the credibility of the knowledge hypothesis to breaking point. Worse still, they compound the error by combining consensus levels with an evidential weighting that includes the strength and quality of expert opinion. As a result, the impact of consensus levels is double-counted. But I digress. The full account of these misgivings can be found here.

[4] If you look, you can find the application of evidence theories such as Dempster-Shafer theory in climate and environmental science, but they are thin on the ground. I could say a whole lot more regarding the IPCC and their failure to properly acknowledge the non-complementarity of evidence, but perhaps another day.
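For the curious, here is a minimal sketch of how Dempster’s rule combines two bodies of evidence; the hypotheses and mass assignments are invented for illustration. The key feature is that the combined beliefs in rival hypotheses need not sum to one – the remainder stays uncommitted, which is precisely the non-complementarity of evidence that a single probability distribution cannot express:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) using
    Dempster's rule, renormalising away the conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # compatible evidence: mass flows to the intersection
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # contradictory evidence: mass is set aside as conflict
            conflict += ma * mb
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

# Two hypothetical lines of evidence about an effect's attribution.
H, N = frozenset({"human"}), frozenset({"natural"})
BOTH = H | N                       # the uncommitted "don't know" mass
m1 = {H: 0.6, BOTH: 0.4}           # evidence 1: 0.6 to "human", 0.4 uncommitted
m2 = {H: 0.5, N: 0.2, BOTH: 0.3}   # evidence 2: partly conflicting

m = dempster_combine(m1, m2)
print(f"belief(human) ~ {m[H]:.3f}, belief(natural) ~ {m[N]:.3f}, "
      f"uncommitted ~ {m[BOTH]:.3f}")
```

In this toy case the beliefs in ‘human’ and ‘natural’ sum to less than one; the shortfall is mass the evidence does not commit either way, something a Pascalian probability assignment is forced to paper over.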

38 Comments

  1. Goodness, I know nothing, but then again maybe I do not: everything is so uncertain in more ways than one. Gosh.


  2. John, a further thought. You mentioned how the IPCC uses consensus to generate “confidence” levels for assertions about climate change – to the point that moderate “confidence” is actually a 50/50 chance of being true. This reminded me of the “Wisdom of Crowds” guessing the weight of an ox as correctly as the scale later confirmed. But that only works if the crowd is heterogeneous, i.e. comprising a realistically wide range of biases. When you recruit a group according to one bias, you get a “confidence” game.


  3. When does IPCC ‘confidence’ get tested against reality? Or is endlessly moving the goalposts allowed under the confidence banner?


  4. “Shellenberger misunderstands the concept of ‘uncertainty’ in science, making the classic mistake of thinking about uncertainty in the colloquial sense of ‘We don’t know’ rather than the way scientists use it to present ‘a range of possibilities’.”

    What a complete jerk !

    Gleick misunderstands the basic concept of “thermodynamics”, making the classic mistake of thinking we can solve “climate change” by running time backwards. Scientists know that all energy on earth is of nuclear origin.


  5. Gleick should read Pat Frank’s paper on error propagation until he understands it. The GCMs he thinks show us a range of doom to look forward to have no ability to project the future. Frank’s analysis is a scientific one, and it tells us that the range of uncertainty of future temperatures, computed using the uncertain nature of cloud formation, is so wide as to include known impossibilities. So what does Gleick do when his only tool to predict the doom he expects exhibits an uncertainty range wider than any observed or computed temperature?


  6. The response to Gleick’s illogical critique of Apocalypse Never is astonishingly good.
    Gleick looks even less rational and coherent than usual after reading such a dispassionate and fact based response.


  7. Beth(+b?). Are you perhaps thinking of the well-known quotation “To Gleick or not to Gleick, that is the question”?


  8. Thanks for the post, John

    reminded me of this – https://climateaudit.org/2007/06/10/ipcc-ar4-guidance-on-uncertainty/

    snippet – “6. Be aware of a tendency for a group to converge on an expressed view and become overconfident in it [3]. Views and estimates can also become anchored on previous versions or values to a greater extent than is justified. Recognize when individual views are adjusting as a result of group interactions and allow adequate time for such changes in viewpoint to be reviewed”

    O/T – but in a weird way I wonder if the exam mess/protests can be explained by teachers who do the above.

    pps – would like to see a graph that shows the exam results from say 2000 to 2020 (2020 with teachers’ grades before downgrades)


  9. The general topic is uncertainty regarding the global climate, and, I would say, uncertainty regarding future human activity (one example: global wealth seems to have been increasing).
    In regard to global climate, there seems to be little uncertainty. We are in an Ice Age and will continue to be in this Ice Age for far longer than any timescale over which one can make public policy decisions regarding when such a global climate may cease. Rather than use the term Ice Age, one could also say that Earth is in an Icehouse Climate, meaning a cold ocean and polar ice caps. And I would say the cold ocean is the main factor.
    And our cold ocean has an average temperature of about 3.5 C.
    But one could say there is uncertainty regarding topics related to our global climate, such as weather.
    Or one could point to large volcanic activity, which affects global weather. And possibly solar activity has a large effect upon regional or global weather. And unpredictable significant droughts, etc. But it seems the focus is CO2 levels and their supposed effects upon global climate.
    I could be called a lukewarmer, and I think it’s possible that increases in CO2 could increase global surface air temperature. But as I said, we are living in an Ice Age, and it’s my opinion that, when in an Ice Age, an increase in global surface air temperature is a good thing. Even if we were living at a time when Earth was in a Hothouse climate, it doesn’t seem like much of a problem to increase global surface air temperature. But if one happens to be living in one of the coldest periods in Earth’s history, as we currently are, it’s even less of a problem.
    It seems to me that global average surface air temperature has increased by about 1 C over the last century or so, and the consequences of such warming appear to be only beneficial. What would have been worse is for global average air temperature to have instead cooled by 1 C. For instance, had global air temperatures decreased by 1 C, we could be having a global food shortage. Plus our enrichment of global CO2 levels has caused “global greening” and has increased crop yields.


  10. Things difficult to believe – Peter Gleick was the launch chairman of the “new task force on scientific ethics and integrity” of the American Geophysical Union. If it weren’t in Wikipedia I would think someone was pulling my leg.


  11. @DFHUNTER, the early Climate Audit posts are well worth a read. Some years ago I re-read through the series where McIntyre went through all the individual proxies that made one version of the hockeystick. None were bulletproof of course.

    A classic case of uncertainty is the range used to estimate equilibrium climate sensitivity (ECS) which hasn’t really changed from +1.5 C to +4.5 C per doubling of CO2 since Charney 40 years ago. One would have thought this might have narrowed by now (it might in AR6, I haven’t seen any of the drafts). The only thing that seems sure is that net feedback is positive, but some of the high values seem to be outside the range of plausibility based on feedback exceeding the forcing change, which to my naive perspective ought to lead to unpredictable behaviour in the climate.

    Re: the wisdom of crowds, we have seen averages of CMIP5 models used to represent a best guess. Perhaps there would be some justification for this if the models were independent and built separately on different grid/timescales, without operator fudge factors (parameterisation). The analysis of this series showed that the wide range of ECS found by the models was because they could not agree on the scale, or even the sign, of the cloud feedback. It is quite some uncertainty if the sign is unknown – far more “we don’t know” than “here are the confidence intervals we expect 95% of the outcomes to lie within.”

    Perhaps a more rational approach would be an FA Cup of models, where they are eliminated one by one until the “best” is arrived at, whose predictions would be the “best”, and the others could be binned.

    Re: exam results. It ought to be very simple to check whether the overall performance of individual schools is similar to last year. The Beeb’s Breakfast on BBC1 have interviewed a moaning head at least twice, but on neither occasion did the interviewer think to ask: “has your school got worse grades than last year?”


  12. @Alan, does Wiki mention the Heartland Affair at all? (I know, I could check this myself.) Gleick went for the “modified limited hangout” of admitting what he was caught cold for, but not admitting what Mosher fingered him for: forging the memo. Impersonating your way into getting incriminating documents would not be an ethical violation in some eyes, perhaps depending on what was found. (As I recall, this was thin gruel, necessitating the sexing up of the dossier.)


  13. @GBAIKIE to the sceptic, that the first 1 C has been net beneficial seems obvious, and that the next 0.5 C must be dangerous seems, to put a probability on it, impossible.


  14. dfhunter,

    To place this in a legal context, these are matters of truth and justification. A court seeks to ascertain the truth (the facts of the matter) and to do so various classes of evidence are offered for assessment. When the weight of evidence reaches the required threshold a verdict can be justified, by which the truth is deemed to be revealed (it is no coincidence that the phrase ‘burden of proof’ alludes to weight). Put another way, the epistemic uncertainty has been reduced to the point where judgement can be made regarding the aleatoric uncertainty. In the case of the schools exam results, it strikes me that the situation is pretty hopeless. The evidential weight available comes nowhere near that required to satisfy the burden of proof and there isn’t even an objective truth to be ascertained in the first place. You can show graphs of past performance, both in student attainment and teacher predictions, but no individual is going to be impressed when these are used to predict that individual’s educational future, which, courtesy of COVID-19, they were destined never to experience. Disappointment seems inevitable since we all believe we are above average and we were all bound to achieve greater things before life got in the way.


  15. Jit,

    Whenever the subject of the wisdom of crowds, model ensembles and climate sensitivity comes up, I think of two extracts taken from the following paper:

    Click to access PhDThesisJeroenvanderSluijs1997.pdf

    “Being the product of deterministic models, the 1.5°C to 4.5°C range is not a probability distribution. There have, nevertheless, been attempts to provide a ’best guess’ from the range. This has been regarded as a further useful simplification for policy-makers. However, non-specialists – such as policy-makers, journalists and other scientists – may have interpreted the range of climate sensitivity values as a virtual simulacrum of a probability distribution, the ’best guess’ becoming the ’most likely’ value.”

    And then there is:

    “According to an industrial scientist, some of the scientific organizers in the preparations for the IPCC’90 wished to go even further than providing a ’best guess’ and requested the modellers to provide a probability value for the 1.5°C to 4.5°C range. This source claims that the Chairman of IPCC WGI argued that scientists should be able to use their own intuitive judgement in providing a probability value for the range. A figure of 80% likelihood was quoted, according to this source, i.e. giving a 20% chance that the sensitivity would be out of this range. That pressures to provide subjective probability judgements were being exerted upon the experts is revealed by the following statement made by a participant modeller: ’What they were very keen for us to do at IPCC [1990], and modellers refused and we didn’t do it, was to say we’ve got this range 1.5 – 4.5°C, what are the probability limits of that? You can’t do it. It’s not the same as experimental error. The range is nothing to do with probability – it is not a normal distribution or a skewed distribution. Who knows what it is ?’”


  16. @John –
    “Disappointment seems inevitable since we all believe we are above average and we were all bound to achieve greater things before life got in the way.”

    amen to that – bit o/t but in my younger years I read many books on expanding my mental/spiritual awareness/powers.
    just got one out the book shelf as a reminder by – Dr Paul Brunton – title – quest for the overself.

    as you say “life got in the way” 😦


  17. Jit, yes the Heartland affair is indeed mentioned and Gleick’s “immense remorse” about it. What’s with your reluctance to consult Wikipedia?


  18. DFHunter. This I am aware of. I would not consult Wikipedia upon any matter even vaguely related to climate change and expect an unbiased report (but then neither would I expect one from any climate blog – sceptical or consensual). But on other subjects it’s a valuable resource IMO. For Jit to check whether Wikipedia covered the Heartland “incident” would have involved only minimal exposure, and one could wear a mask!


  19. Guys, I do still tread those hallways. At the moment I trust Wiki when it comes to the conquests of Genghis Khan and sundry other topics where facts still matter. I was merely being lazy, knowing that Alan had just read the article. I guess I was writing as if talking to Alan in person, whereas in reality the resulting delay made the question a bit dumb.

    @John the ECS situation is extraordinary for “settled science.” It does not seem possible to plan for anything under these conditions. Unfortunately despite what he says, there are consequences for Gleickian overreaction. Especially if only woke countries are contributing to the (ahem) global effort.


  20. Jit,

    At the heart of my article is a set of important questions: Are environmental scientists, such as Gleick, making a genuine error in attempting to dismiss epistemic uncertainty as unscientific? Are they genuinely ignorant of the important distinction to be made between aleatory and epistemic uncertainty? Do they actually believe they are meaningfully quantifying their confidence when they base the quantification upon the statistical properties of a spread of ECS estimates or climate model ensemble predictions? Do they actually understand that when they do so they are not even using the correct concept of probability? Is all of this just ignorance or wilful ignorance?

    I can’t provide a categorical answer to the last question, and I suspect the reason is that, whilst there is widespread, innocent ignorance, it often gives the ‘right’ answers when it is wilful. To be fair, misconceptions regarding uncertainty run deep and wide within the sciences and they can be traced back to the early formulation of probability theory. Climate science is not particularly special in this regard, though the implications for environmental management are particularly interesting. Whatever the case, one thing is certain. As long as climate scientists continue to commit basic errors regarding the fundamental nature of uncertainty, there is no point taking any notice of their grand, scientific claims to knowing what the risks are.

    What really annoys me, however, is how individuals, such as Gleick, can condescendingly preach to others, simply because there is some supposed superior understanding to be had by being a communicator of science. The arrogance is captured in phrases such as ‘what we scientists do, however…” As I have said many times before, the really important gap is not the one between what we know and what we need to know – it is the one between what we know and what we think we know.


  21. Gleick did a lot more than sneak into Heartland to get some docs. Instead of admitting that there were no incriminating docs, he simply forged some, poorly.
    That he was made a member, much less chair, of anything to do with ethics told me a tremendous amount about the state of institutional science.
    That Gleick has written a vacuous, circular bit of illiterate drivel in an attempt to counter Shellenberger, and failed so badly, is just part of the arc along which he is travelling.


  22. As an aside on the Ofqual exam results algorithm debacle The Guardian was trying hard to put the boot in to government ‘cronyism’ today via two close erstwhile colleagues of Gove and Cummings: Firm linked to Gove and Cummings hired to work with Ofqual on A-levels. The founders are a married couple. I thought Rachel Wolf rang a bell.

    Staggering incompetence once again? Maybe.

    But on 6th February Wolf wrote Achieving net zero will require massive changes to our lives – when is anyone going to tell voters? in Conservative Home. Easily the most sensible treatment of the subject I’ve seen from a Tory insider.

    Confusing, huh?

    Until the opposition has anyone so close to the centre writing such things … well, nothing’s ideal. But would the Guardian and its outriders in the BBC etc. even begin to allow such a thing to happen within Labour?

    Sorry for the digression John.


  23. @Richard the Wolf article is a small step forwards I suppose, but it takes as a given the requirement to achieve the hallowed Net Zero by Year Zero (ok, 2050).

    As we stand, our lawgivers are hiding that they are doing something painful and unnecessary; they’re selling Net Zero as painless and unavoidable. (While delaying and prevaricating as much as they dare.) Wolf’s position is that they need to be honest that they are doing something painful, while still badging it as unavoidable.

    “We must do this thing, and it’s painless” becomes “We must do this thing, but it’s painful.”

    I would respect greatly a politician who stood forwards and said, “We need to discuss this with the electorate, to create a wide debate weighing up the pros and cons of the various approaches, and see what the people want us to do.”

    At the moment, even if we were heading for Climate Catastrophe, I can’t see any reason for the UK to unilaterally commit to Net Zero. Our contribution to CO2 emissions is small, increases elsewhere would take up the slack of our disappearance within the year, and whatever small benefit our sacrifice would bring, it would be shared equally amongst the world’s population, whether virtue signallers or freeloaders.

    Net Zero ain’t gonna happen. Each cut will be harder than the last. And to judge by how our leaders twist in the wind, there is no chance that they will be able to resist when those cuts start to bite and people start shouting at them.


  24. Jit,

    “I would respect greatly a politician who stood forwards and said, ‘We need to discuss this with the electorate, to create a wide debate weighing up the pros and cons of the various approaches, and see what the people want us to do’.”

    I think you will find that the politicians will claim that is exactly what the citizens’ assemblies for the climate change emergency are doing. It’s the new normal for democracy, I’m afraid, and I for one am not impressed.


  25. @John I suspect that if we had a probability distribution of future climate states, then there would be no need at all for anyone to panic. But I base that on weak logic, induction, based on the current state and trend. You could say I think I know that we’re safe…

    If you give a probability distribution (e.g. of ECS) to the likes of Gleick, you’ll get a justification for draconian measures based on the long tail. I get the impression long tails can be used to justify anything, but that the justification would be erroneous in most cases.

    Regarding the assemblies, they were given Net Zero as a starting point, not an end point. I think the plan was an entirely cynical ploy to draw XR’s teeth. You cannot ask average people to determine policy, but you can ask them to vote on it once they understand the various choices and what they mean for them and theirs.

    @Hunterson7 Gleick didn’t cop for that, even if plenty think he did it. Remember the blogs at the time? I had the strong impression that folks on the alarmist side were determining the facts entirely on their prejudice rather than looking at the evidence dispassionately (and even dissembling). Thanks to Mosher it seemed to be a slam dunk. That so many closed ranks and that Gleick suffered no repercussions is interesting.


  26. Jit:

    the Wolf article is a small step forwards I suppose

    It’s much preferable to the Climate Assembly nonsense imho.

    But yep. Small steps forwards or massive ones back seems the only options we have.


  27. Jit,

    It would not surprise me if Gleick were to use a probability distribution to express the uncertainty regarding the value for ECS because, like many of his colleagues, he seems to be labouring under the misapprehension that such an aleatoric analysis would be the only valid scientific treatment of the uncertainty. Once again:

    “Being the product of deterministic models, the 1.5°C to 4.5°C range is not a probability distribution. There have, nevertheless, been attempts to provide a ’best guess’ from the range. This has been regarded as a further useful simplification for policy-makers. However, non-specialists – such as policy-makers, journalists and other scientists – may have interpreted the range of climate sensitivity values as a virtual simulacrum of a probability distribution, the ’best guess’ becoming the ’most likely’ value.”

    And it is not only ‘non-specialists’ who have caught the disease:

    “According to an industrial scientist, some of the scientific organizers in the preparations for the IPCC’90 wished to go even further than providing a ’best guess’ and requested the modellers to provide a probability value for the 1.5°C to 4.5°C range… ‘You can’t do it. It’s not the same as experimental error. The range is nothing to do with probability – it is not a normal distribution or a skewed distribution. Who knows what it is?’”

    I’m sure it would not have taken long before the wisdom shown by this unnamed ‘industrial scientist’ was eradicated from the system. You can’t carry on refusing to do what your leaders tell you to do for very long before you either fall in line or are shown the door. The result is an institutionalised ignorance that leaves the likes of Gleick to lecture others about making ‘classic errors’ whilst they themselves engage in a class action stupidity that is nothing short of scandalous. ECS uncertainty cannot be modelled as a probability distribution as if it were a measurement subject to aleatoric error. To the extent that the uncertainty regarding future possible states of the climate may itself be premised upon the inappropriately modelled uncertainty regarding the ECS value, the same mistake is made when probability distributions are used.

    But who am I? I’m just a bloke on the internet writing stuff that barely anyone reads. The real experts, it seems, are in charge of the IPCC. And the real experts get awards for communicating science.

