Some background

Consult any scientist, or indeed anyone who claims to be a critical thinker, and they will be only too eager to warn against the dangers of being vague. For example, if cutting the blue wire is necessary to defuse a bomb, you are not going to be too impressed by an instruction to cut the coloured one. Imprecision matters, and the uncertainty it introduces can be the source of significant risk.

From this observation you would be forgiven for concluding that vagueness should be avoided at all costs. So why isn’t it? Why, for example, is language founded upon so many words that are patently vague? Why have we allowed ourselves to develop a means of communication that renders so much of what we say open to interpretation?

To answer that question one has to appreciate the value of vagueness in general discourse. For example, words such as ‘small’ are commonly used to convey a sense of scale, but they are deliberately vague. Why? Because then you only need the one word to cover a whole range of possibilities. Furthermore, size is context-specific, and we don’t want to have to use a different word when the context changes. So, linguistic vagueness enables semantic utility, albeit at the expense of precision. You can finesse your statements by using degree modifiers such as ‘very’, but these modifiers are themselves vague for the same reason. At the end of the day, we can’t go around being totally precise unless we are prepared to use an impractically large and cumbersome lexicon.

That said, the exploitation of linguistic vagueness is not entirely without its rules, one of which is the maxim of quantity, devised by the philosopher Paul Grice. According to this maxim, you should express yourself as strongly as your information allows, but no more so. By following rules such as these, we are perfectly capable of constructing a reasoned argument and acting upon it, enabled rather than challenged by the subtleties and mysteries of linguistic vagueness. Our love of vague terms is not a mistake. On the contrary, it provides utility that enables us to operate effectively and efficiently in an uncertain and changing world.

Now I am sure there must be some of you at this stage who are saying to yourselves, ‘Yes, but what about numbers? Surely they are the solution for those requiring both precision and utility.’

Well, not really. The problem with numbers is that they are often used to quantify in areas where an epistemic deficit remains, and this deficit re-introduces vagueness. Take, for example, the case in which the value of a variable is estimated from an incomplete sampling of a population (e.g. when trying to determine the percentage of support for a policy by polling only a section of the electorate). The question asked is how likely it is that the true value is captured by the range estimated from the sample. The expression of this uncertainty requires two elements, each of which can be traded off against the other. Firstly, there is the precision with which the value is stated (i.e. the width of the stated range). Secondly, there is the confidence that the value as stated is correct (i.e. the accuracy). If one wants to increase that confidence one can do so, but only by relaxing the precision. Conversely, one can make a more precise estimate, but only by reducing one’s level of confidence in that estimate. And the decision as to how to apportion the uncertainty is down to the individual. At one extreme, one may choose to make a very precise statement even though one’s confidence in its accuracy is very low. At the other, one may choose to make a statement in which one can be very confident, even though it is so vague as to be practically useless. Put simply, the wider the stated range of possibilities, the more confident one can be that the true value lies within that range. Vagueness can be very useful if you always want to be proven right.
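
To make the trade-off concrete, here is a minimal sketch in Python. (The sample size, polled proportion and confidence levels are illustrative assumptions, not figures from any real poll.)

```python
# The precision/confidence trade-off for an opinion poll, using the
# normal approximation for a sampled proportion.
from scipy.stats import norm

n = 1000        # assumed number of people polled
p_hat = 0.52    # assumed proportion of the sample supporting the policy

for confidence in (0.50, 0.80, 0.95, 0.99):
    z = norm.ppf(0.5 + confidence / 2)              # two-sided critical value
    margin = z * (p_hat * (1 - p_hat) / n) ** 0.5   # z times the standard error
    print(f"{confidence:.0%} confidence: {p_hat:.2f} +/- {margin:.3f}")
```

The same sample supports a tight interval at 50% confidence (roughly ±0.011) or a much wider one at 99% (roughly ±0.041); the underlying epistemic deficit is unchanged, merely apportioned differently.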

From the above, one can see that there is a numerical equivalent to the linguistic maxim of quantity: there is a limit to the combined levels of precision and confidence, and that limit is determined by the sample size. The only way of increasing one (let’s say, confidence) whilst holding the other constant (let’s say, precision) is by increasing the size of the sample and thereby reducing the epistemic deficit.
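
Extending the illustrative sketch above: hold the confidence level fixed at 95% and the interval narrows only as the sample grows.

```python
# At a fixed 95% confidence level, only a larger sample (a smaller
# epistemic deficit) buys a narrower interval.
from scipy.stats import norm

p_hat = 0.52          # assumed sample proportion, as before
z = norm.ppf(0.975)   # two-sided critical value for 95% confidence

for n in (100, 1_000, 10_000):
    margin = z * (p_hat * (1 - p_hat) / n) ** 0.5
    print(f"n = {n:>6}: 95% interval is {p_hat:.2f} +/- {margin:.3f}")
```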

All of this raises the question of which is the true expression of uncertainty. Is it the imprecision or the confidence level? The obvious answer is: both. It is the combination of the two that captures the uncertainty, since it is only that combination that correlates with the scale of the epistemic deficiency.

Now to the climate science

So what does any of this mean when it comes to making climate predictions? To answer that question I feel I need to return to a remark made by the hydrologist and acclaimed climate science communicator Peter Gleick, when he heavily criticised a book written by Michael Shellenberger. According to Gleick:

Shellenberger misunderstands the concept of ‘uncertainty’ in science, making the classic mistake of thinking about uncertainty in the colloquial sense of ‘We don’t know’ rather than the way scientists use it to present ‘a range of possibilities’.

You may recall that I was particularly scathing of this remark at the time, largely because I felt that Gleick was wrong in insisting that there exists a distinction between a scientist’s and a layperson’s conception of uncertainty, and particularly wrong in suggesting that ‘We don’t know’ is the non-scientific expression. After all, is that not just an expression of the epistemic uncertainty that haunts all scientific ventures? However, having recently returned to what I had written at the time, I am no longer so sure that I had properly understood what Gleick was saying; although in my defence I have to say that the award-winning communicator had done a terrible job of explaining himself.

At the time, I had assumed that Gleick was drawing a distinction between aleatory and epistemic uncertainty (i.e. between objective variability and subjective incertitude) and that by declaring the latter as being a layperson’s colloquial misconception of uncertainty he was declaring the former to be the scientific conception. However, upon reflection, I suspect that Gleick may have been claiming that the layperson mistakenly focuses upon confidence levels as the expression of uncertainty, whereas the scientist focuses instead upon the imprecision. This is certainly what you find when googling ‘uncertainty in science’:

But uncertainty in science does not imply doubt as it does in everyday use. Scientific uncertainty is a quantitative measurement of variability in the data. In other words, uncertainty in science refers to the idea that all data have a range of expected values as opposed to a precise point value.

The argument also seems to go further by suggesting that a risk assessment should be based purely upon the confident statements that are supposedly enabled by ‘scientific uncertainty’: we can be objectively confident that the risk is high if we can be confident that the possibilities entailing disaster lie within the accepted range. And since imprecision leads to greater levels of confidence in the statements made, it supposedly follows that imprecision exposes the true risk in some objective way. As Erik Løhre et al. put it:

The use of interval forecasts allows climate scientists to issue predictions with high levels of certainty even for areas fraught with uncertainty, since wide intervals are objectively more likely to capture the truth than narrow intervals.

These arguments are all very well, but they ignore the fact that the uncertainty manifests itself as a combination of imprecision and lowered confidence, neither of which can lay claim to a scientific status that the other lacks. Whether one sees the dichotomy in terms of the aleatory versus the epistemic, or precision versus confidence, scientists gain no superior grasp of the concept of uncertainty by focusing upon just one side of it. The assessment of uncertainty requires consideration of both the confidence with which statements are made and the imprecision of those statements. Anyone who claims that a scientific approach to uncertainty assessment requires one to focus purely upon imprecision, and then suggests that this ‘scientific uncertainty’ can be used to confidently predict high risk, is just using vagueness as a ploy to predict whatever they want whilst being proven right irrespective of what happens.

To conclude

So what do I want you to take away from this discussion? Well, firstly, be wary of climate science communicators who tell you that there is a scientific notion of uncertainty to be contrasted with colloquial misconceptions. The reality is that the concepts of risk and uncertainty are deceptively difficult to pin down, and it has been my experience that climate scientists are no better at it than you or I.

Secondly, how one chooses to express the uncertainty is a political decision and there is no scientific basis for preferring one way over the other, apart from the scientists’ preference for only making statements that meet certain confidence levels. As long as one abides by the maxim of quantity, there may be many equally valid expressions at one’s disposal, depending upon whether one wants to be precise or circumspect.

Thirdly, to dismiss ‘We don’t know’ as a colloquial misconception of uncertainty is surely the classic mistake. On the contrary, it is such epistemic deficiency that dictates the limits within which precision and confidence can be traded off against each other. Effective risk management entails the reduction of this deficit whenever possible. It doesn’t require a judicious preference for imprecision over inaccuracy just so one can confidently proclaim the possibility of disaster. The ‘layperson’s’ desire to see the uncertainty reduced before costly and potentially damaging decisions are taken is not a scam; it is basic risk management practice – or at least it is outside of the politico-scientific arena of climate science.

And finally, it turns out that being vague isn’t quite the anathema to scientists and critical thinkers that I had made out in my opening paragraph. The truth is that vagueness can be exploited to open up a world of utility. Climate scientists in particular see the benefit of vagueness. However, I suspect that for them the concern isn’t utility in communication but the political utility bestowed by emphasising the possibilities lurking at the extremes. It’s a political utility that finds its zenith in the conviction that mankind is confronted with an existential threat that justifies a radical degradation of our way of life. Sceptics can easily demonstrate that the UK’s frantic transition to net zero is unachievable, pointless and certain to prove disastrous, but that is unlikely to result in a change of direction whilst vague forecasts leave the fear of extinction on the table.

23 Comments

  1. Apologies if the following lacks the deep analysis of the above piece, but the point made in the conclusion about the political utility of using vagueness to “emphasise possibilities lurking at the extremes” is well illustrated by an article in today’s FT. This concerned the poor grouse shooting season they’re experiencing oop north, blaming it in part on what was termed “…unpredictable weather associated with climate change…”. Does the writer mean weather flopping about all over the place in a way it didn’t use to before climate change, to the detriment of the birds’ mating and parenting habits, or a reduced capacity of the Met Office to produce decent forecasts of the coming weather, so that the hunter never knows what hunting garb to pack and therefore stays away?

    I’m increasingly aware of “unpredictability” being quoted as yet another symptom of climate change to add to the litany. In this instance it happens that the weather event was a cold snap, so obviously it had to be climate change, not global warming.


  2. Thanks for the reminder of that valuable compendium, John. Ctrl-F on “unpredictably” does yield a few hits in the context of weather’s fluctuations. Of these, only one suggested that “unpredictability” was on the increase. The contention could be tested objectively by selecting weather parameters and asking whether a null hypothesis of invariant (over time) standard deviation is accepted or rejected, as sketched below. As to the alternative meaning, that weather forecasts for Scottish moorland areas are becoming less accurate, I’m sure the Met Office could have something to say, though my guess is they wouldn’t intervene over a misstatement that leaves the reader with a further reason to accept the climate change narrative.
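
    A minimal sketch of such a test in Python might look like the following (simulated data stands in for real station records; the null hypothesis of Levene’s test is precisely that the variability is invariant):

    ```python
    # Test a null hypothesis of invariant variability across two eras using
    # Levene's test (null: equal variance). Data here are simulated stand-ins.
    import numpy as np
    from scipy.stats import levene

    rng = np.random.default_rng(0)
    era1 = rng.normal(loc=8.0, scale=2.0, size=50)   # e.g. earlier 50 years
    era2 = rng.normal(loc=8.5, scale=2.0, size=50)   # e.g. later 50 years

    stat, p = levene(era1, era2)
    print(f"p = {p:.3f}: "
          f"{'reject' if p < 0.05 else 'cannot reject'} invariant variability")
    ```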


  3. John,

    Given the vastness of the subject, this is possibly an unfair question. However, I should be interested to learn your views regarding claims in the IPCC reports to the effect that e.g. they have “medium confidence” or “low confidence” or “high confidence” that some type of extreme weather is worsening due to climate change (itself lazy shorthand for human-induced climate change). Do you feel that the type of extreme weather in question is sufficiently vague for them to get away with claiming “high confidence” in some cases, or is something else going on?


  4. Mark: If I may be allowed to butt in here, as it’s an area I know something about and have given a lot of thought. That IPCC formulation of low and high confidence is a weird circumlocution only found in “the science”. It attempts to add an aura of statistical formality by using words such as confidence, but does so in a topsy-turvy fashion. Pukka science, when faced with noisy data and wishing to extract a signal from that noise and attribute a cause, would break the problem into two separate steps. The first step is to assume a null hypothesis that the variable or observation of interest is NOT changing – for example, that the slope of a trend line is zero. This puts it in a form where one can test whether that null hypothesis is accepted or rejected. Statistical procedures are available to answer the question.
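
    For example, a minimal sketch (with simulated, trendless data purely for illustration):

    ```python
    # Step one: test the null hypothesis that the trend slope is zero.
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(1)
    years = np.arange(1975, 2025)
    series = 10.0 + rng.normal(0.0, 1.5, size=years.size)   # no built-in trend

    result = linregress(years, series)
    print(f"slope = {result.slope:+.4f} per year, p = {result.pvalue:.3f}")
    # A large p-value means the zero-slope null cannot be rejected.
    ```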

    Say the null hypothesis is rejected; then the second step is an honest re-examination to (a) check for counterexamples, and (b) ask whether there are other hypotheses which would also explain the observations.

    Now, “the science” finesses all this conventional stuff and starts with its own null hypothesis, which is that change exists – no question – it’s just a matter of how strongly (with how much confidence) the data exhibit the change. No need to think about cause, as that’s already handled by the paradigm or mindset. No need to worry about a failed test – that just shifts the confidence level to low; it’s just a matter of awaiting more data, or a plausible-sounding explanation, that’s bound to turn up sometime.

    So it all harks back to the way the IPCC was set up as a confirmatory exercise – tell us how the climate is changing and what to do about it – no ifs or buts or alternatives to the paradigm.


  5. An excellent analysis John, thank you. But – unsurprisingly perhaps – I have my doubts about your concluding sentence. After all, if the UK’s net zero policy could easily be demonstrated to be unachievable, disastrous and in any case pointless, that demonstration would surely result in a change of direction, ousting all those vague and fearsome forecasts of possible extinction. The reason that hasn’t happened so far is that these overriding factors have not had a platform from which they can be demonstrated.

    Without a platform, demonstration is not just far from easy – it is impossible.


  6. Mark,

    It is indeed a big subject but I will try my best to summarise the issues.

    The IPCC’s use of terms of confidence is explained in their document Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. You will recall that neither I nor Dr Terje Aven was particularly impressed by it. The document speaks of a ‘calibrated language’ but fails to offer clear definitions of the terms it uses. As Dr Aven put it:

    The important concepts of confidence and likelihood used in the IPCC documents remain too vague to be used consistently and meaningfully in practice.

    So it is actually quite difficult to determine the IPCC’s position on the relationship between uncertainty, precision and accuracy. The IPCC’s language is based upon statements of likelihood combined with statements of confidence in those statements. However, both likelihood and confidence are calibrated with such vagueness that one cannot hope to discern whether anything is understood by the guideline’s authors regarding the role of epistemic deficit. Given the incoherence and conceptual vagueness within the guidelines, I think it is safe to presume that the authors’ objective of consistent and meaningful use of language has not been met. In fact, I think that conclusion has been demonstrated in a study, though it will take me a while to dig out the reference.

    The problems do not end there, however. I see grave problems in the way that the IPCC uses consensus amongst experts as a metric for determining confidence, such that, for example, high confidence can result from high consensus even when the data is weak (i.e. in the presence of an epistemic deficit). Then there is the general failure by the IPCC to appreciate the true nature of the dichotomy between risk and uncertainty – or as Dr Aven put it:

    A key feature of these perspectives is the sharp distinction between risk and uncertainty and how these two are measured. Much of the IPCC terminology on risk and uncertainty lacks this dichotomy.

    Or as I put it in my review:

    The result is a conceptual dog’s dinner that leaves the reader unable to discern whether the IPCC is advocating risk aversion or uncertainty aversion, or indeed appreciates that a distinction can be made between the two.

    The bottom line is that the IPCC’s AR reports are linguistic instruments and so cannot help but be vague, and the IPCC’s best efforts at mitigating that vagueness have not been nearly good enough. And yet it doesn’t seem to have held them back. Their reports are packed with political utility and that’s all that matters at the end of the day.


  7. Max,

    You are quite right to point out that the IPCC is a consensus-generating machine. It is clearly stated in the guidelines for lead authors that all ‘best endeavours’ should be made to arrive at statements that enjoy consensus approval. It is not irrelevant to note that vague statements are more easily agreed upon. This has to be a factor, therefore.


  8. Robin,

    I’m not surprised to see that you baulked at the closing remark. I am well aware that you and I approach the debate with a different emphasis. For example, my take would be that a main reason why ‘these overriding factors have not had a platform’ is that the narrative is being controlled by those in thrall to the vague assessments that ‘objectively’ speak of extinction. Your concerns cannot be allowed a platform.


  9. Vagueness is a positive boon for those seeking to permanently alter our way of life (and our diet). Take Arla, for instance. I’m sure by now you’ve all heard about Arla and their latest ‘exciting’ scheme to make their milk ‘planet friendly’ and sustainable by feeding their cows a ‘safe and effective’ chemical concoction which allegedly reduces methane (CH4) emissions from the gut – by preventing hydrogen (H2) and carbon dioxide (CO2) from chemically binding in the gut to form methane. The ‘science’ behind this initiative is supposedly the ‘fact’ that reducing methane emissions from livestock (in this case dairy herds) will help prevent the world from breaching the 1.5C Paris goal. Yes, no, seriously.

    You might expect that Arla, by applying such a radical ‘solution’ to climate change involving the re-engineering of the cow’s enteric system which evolved naturally over hundreds of thousands of years and by interfering with the basic diet of millions of ordinary people, would at least provide a quantitative scientific analysis of the benefits vs. possible harms of their proposed radical solution. Forget it. What you get is ‘safe and effective’ – peer-reviewed articles innit. Bovaer doesn’t get into the milk at all and it’s safe for the cows even though it might cause cancer and fertility reductions in rats and must be handled with care by farmers to protect against skin contact and inhalation. As for the planet saving ‘necessity’ of this intervention – well, it’s a “science-based target” innit. That’s all you plebs need to know. But some of us plebs need to know more.

    The MD of Arla says: “Our targets have been approved by the Science-Based Target Initiative as consistent with emissions reductions required to keep global warming to 1.5°C, making Arla the first large dairy company in the UK to receive this important approval.”

    We are supposed to accept this statement in all its glorious scientific vagueness: a handwaving appeal to authority (provided with a reference so tiny you need a microscope to view it). The reference is to the Science Based Targets website, which is a masterclass in corporate waffle from a bunch of scientifically challenged green wonks who also just appeal to the authority of the IPCC ‘science’ whilst pretending to legitimise and authorise ‘science-based’ targets for any private company (like Arla) applying to them to have their batshit crazy ‘sustainable’ innovations legitimised and authorised. Not one of the executive staff at Science Based Targets has a relevant climate/meteorological qualification or work experience. The CEO is a lawyer. If you are looking for the quantitative science-based analysis to justify the reduction of methane from cow farts – forget it. Until such time as this fake scientific rubber-stamping process for all sustainability interventions is taken down (by taking down the scientific authority of the IPCC), this nonsense will continue, regardless of the absurdities and the harms of Net Zero, simply because… saving the planet, innit.


  10. Jaime,

    Yes, it’s a wonder, isn’t it? The precautionary principle must be applied when thinking about the risks of climate change, but never when thinking about some of the batshit crazy ‘solutions’.


  11. John, my take is straightforward. These factors don’t have a platform because, as Dieter Helm said in his recent excellent paper discussed in the Case Against NZ thread:

    … there is a whole institutional structure embedded within … national approaches, with jobs, organisations and activities closely tied in, and lots of lobbyists with money to gain from them. There is a whole ecosystem of “activists”, “campaigners” and companies benefiting from the associated subsidies. Admitting that both are failing puts a great number of people’s careers at stake, as well as lots of profits depending on subsidies.

    Hard, practical (and understandable) reasons: nothing to do with people and organisations being in thrall to vague fears of extinction.


  12. Jaime,

    I’ve just visited the SBTi website, and the following quote stood out for me:

    The Net-Zero Standard gives companies a clear blueprint on how to bring their net-zero plans in line with the science, which is non-negotiable in this decisive decade for climate action. Because we are running out of time.

    It’s a quote from Johan Rockstrom, the current director of the Potsdam Institute for Climate Impact Research, i.e. Hans Joachim Schellnhuber’s baby.

    Firstly, that reference to non-negotiable science pretty much sums up all that is wrong with the debate. What’s that you say? Net zero is unachievable, pointless and bound to be disastrous? I’m sorry, but this is the science we are talking about here, and the science is non-negotiable.

    Secondly, I seem to recall that Schellnhuber has been pretty adamant in the past that listening to the climate scientists is actually a rather bad idea, because they haven’t done a good enough job of exploring the risks implied by the fat tails of prediction:

    https://cliscep.com/2024/08/25/what-lies-beneath/

    So what is it to be? Should we be listening or not?

    This is why I think organisations such as the SBTi, and individuals like Gleick, make far too much of the brand image of science.


  13. Robin,

    I’ve never understood why you insist on there being only one factor, to the exclusion of others that seem to me to be just as germane. I don’t doubt for a minute that the Climate Complex is so deeply embedded that it is now virtually impregnable. But I don’t see that as a reason for dismissing the role that fears of extinction have played in its establishment and how they still play a role in protecting it from valid critique. Not the role, but a role, and one that cannot be ignored when attempting to push back against the worst elements of net zero.


  14. John,

    I’ve never insisted on there being only one factor any more than you have. No doubt there are some who foolishly fear extinction, but I believe Helm has identified the main reasons why the matters I’ve identified don’t have a platform.


  15. Robin,

    I’m pleased to hear that you accept the presence of many factors, but rather than talk of which is ‘main’ I prefer instead to think in terms of necessity and sufficiency. I do not doubt that you are right in saying that it will be necessary to overcome the factors highlighted by Helm, but I maintain that it will not be sufficient. Likewise, I believe it will be necessary to remove the fear of extinction from the public debate in order to make progress, but I do not believe it will be sufficient.

    As I said earlier, I think we choose to emphasise different aspects of the debate. I choose to post articles such as this one, rather than focus on net zero practicalities, simply because I feel it is how I can best contribute to the debate. I steer clear of other subject matters only because others on here are already doing a grand job. The closing remark, with which you took issue, was just my way of emphasising how important I feel the existential narrative and the misattribution of scientific kudos are. I don’t think they are subjects we can afford to neglect.


  16. John:

    Good points, well made. However, I think you may be overestimating the importance of the existential narrative. Look, for example, at the article published by David Turver this morning: https://davidturver.substack.com/p/subsidies-galore

    Vast sums of money for the taking. I doubt if many of those benefiting from all this are much concerned about possible extinction. However, I accept that, if their source of funding were to come under serious threat, a few may drum it up in an attempt to divert attention. But I think that would be unlikely to succeed if the factors I’ve identified were already in the public domain.


  17. Robin,

    I feel this may be a good point at which to draw a line under today’s discussion, lest we both resort to cherry-picking articles that justify our respective perspectives. I’d prefer that we just both respect the other’s perspect (see what I did there?)


  18. Meanwhile, albeit somewhat off topic, I’d like to draw attention to the original batshit crazy idea.

    It was conceived by a certain Lytle S. Adams, a dentist from Pennsylvania. Incensed by the dastardly attack on Pearl Harbor, he proposed retaliation by dropping bat bombs on Japanese cities. These were to be casings holding over a thousand compartments, each containing a hibernating Mexican free-tailed bat with a timed incendiary bomb attached. After being parachute-dropped, the casings would open up to release the bats so that they could disperse and roost under the wooden eaves of nearby buildings.

    The benefit of the ensuing conflagration, in terms of Japanese citizens being burned to death, would be well worth the adverse impact on the chosen bat species. After all, as Adams put it, the bat was the “lowest form of animal life”, and that, until now, “reasons for its creation have remained unexplained”. Indeed, cometh the hour, cometh the mammal, as Adams suggested they were created “by God to await this hour to play their part in the scheme of free human existence, and to frustrate any attempt of those who dare desecrate our way of life.” Roosevelt, for one, was suitably impressed with the idea, saying “This man is not a nut. It sounds like a perfectly wild idea but is worth looking into.”

    Well, the US military did indeed look into it but, unfortunately, during testing they managed only to burn down one of their own airfields following an accidental release of the bats. Subsequently, the idea came to be known as “Die Fledermaus Farce” and was finally abandoned in favour of the atomic bomb — so it all turned out alright in the end and Cillian Murphy won his Oscar.

    The only downside is that both ideas came too soon to receive SBTi approval.

    https://en.wikipedia.org/wiki/Bat_bomb


  19. John,

    Thank you for your thoughtful and insightful response to my question. Apologies for seeing it only now – I have had another busy day.

