In a recent Climate Etc. article, I drew attention to the debate within climate science regarding how the spread of a multi-model ensemble of differently structured models should be analysed, and I pointed out that the normative practice of interpreting the implied uncertainty in aleatory terms has no ‘epistemological validity’. This matters because it is well established that treating epistemic uncertainty as if it were aleatory can result in erroneous risk assessments. I also made the point that this was not a conspiracy or a politically driven practice, nor was it done for personal financial gain. It was simply the natural result of an understandable need to comply with what is deemed to be current best practice, no matter how sub-optimal that practice may be [1].
However, in concentrating upon the social dynamics operating at the science-policy interface, the article somewhat downplayed other factors that may be playing a role in the emergence of a sub-optimal but normative practice, i.e. the practice of pretending that the epistemic uncertainties involved can be legitimately analysed as if they were aleatory. That there are other factors was hinted at when I wrote that it was only “to a large extent” that the practice was designed to appease the policy makers. In fact, it is likely that some climate scientists do it because they just don’t know any better.
In order to pursue this idea, I now draw heavily upon the essay ‘Uncertainty in climate science and climate policy’, co-written by statistician Jonathan Rougier of the University of Bristol, UK, and climate scientist Michel Crucifix of the Université Catholique de Louvain, Belgium. It is an essay that explores the manner in which scientists serve policy-makers who are attempting to make high-stakes decisions under uncertainty. However, the focus of their essay lies in the traditions and assumptions that underpin the scientific endeavour, and how these may not ideally equip the climate scientist when it comes to informing policy. As Rougier and Crucifix put it:
In a nutshell, we do not think that academic climate science equips climate scientists to be as helpful as they might be, when involved in climate policy assessment. Partly, we attribute this to an over-investment in high resolution climate simulators, and partly to a culture that is uncomfortable with the inherently subjective nature of climate uncertainty.
Their paper expands upon the issue, highlighting a number of problems, two of which in particular may have a bearing upon the predisposition for climate scientists to accept, uncritically, the treatment of multi-model ensemble uncertainty as if it were aleatory.
Problem number 1: The converted meteorologist
In their paper, Rougier and Crucifix are at pains to point out that the current practice of using multi-model ensembles to explore the epistemic uncertainties inherent in long-term climate projections was preceded by years of experience using computer models to address short-term meteorological forecasts. In that sense, they refer to the modern-day climatologist as a ‘converted meteorologist’. The point made is that the challenge that had faced the meteorologist is quite different to that now faced by the climatologist. Whilst both challenges demand a good understanding of the physics, the meteorologist has the primary problem of grappling with the stochasticity and chaos that characterise the variability of the system under study, whereas the climate scientist is predominantly confronted with epistemic uncertainties that obstruct an understanding of long-term trends [2]. As Rougier and Crucifix put it:
Internal variability, part of the natural variability of the climate system, can be estimated from high-resolution simulators, but it is only a tiny part of total uncertainty. Over centurial scales, it is negligible compared to our combined uncertainty of the behaviour of the ice-sheets, and the marine and terrestrial biosphere.
As a consequence, the road to improved forecasting is markedly different for the two challenges. For meteorology, improvement is traditionally gained by using simulations of greater fidelity and granularity, powered by extra compute and physical perturbation ensembles of increasing size. For climate science, the solution lies more in gaining a greater understanding of the complexities and feedbacks involved and how they operate in the long term. The distinction is essentially that which exists between variability and incertitude, and hence between the aleatory and epistemic approaches. However, there is always a danger that an individual steeped in the traditions of the former may not be able to fully adjust when engaged in the latter. If Monte Carlo simulations and similar stochastic sampling techniques have proved immensely effective within their realm, the temptation might be to believe that the realm of the epistemic could be similarly tamed.
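For readers who prefer to see the distinction in concrete terms, the following is a minimal sketch in Python (all numbers are invented for illustration). Stochastic sampling pins down aleatory variability ever more precisely as the sample grows, but no amount of re-running a single model can reduce the spread that arises from not knowing which model structure is the right one.

```python
# Minimal sketch (illustrative numbers only): Monte Carlo sampling tames
# aleatory variability, but it cannot shrink epistemic uncertainty.
import numpy as np

rng = np.random.default_rng(42)

# Aleatory case: a known stochastic process; more samples give a better
# estimate of its spread, converging on the true value of 1.0.
for n in (100, 10_000, 1_000_000):
    samples = rng.normal(loc=0.0, scale=1.0, size=n)
    print(f"n={n:>9}: estimated std = {samples.std():.4f}")

# Epistemic case: five hypothetical, structurally different models each give
# a single deterministic answer for the same quantity (values invented).
model_estimates = np.array([2.1, 2.8, 3.4, 4.5, 5.1])

# Re-running any one model simply returns the same number; the spread across
# models reflects ignorance about structure, not randomness, and cannot be
# reduced by drawing more samples from within any single model.
print("spread across model structures:", model_estimates.max() - model_estimates.min())
```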
One individual who may fit this description is Professor Tim Palmer, an academic who has done as much as anyone in pioneering and finessing the techniques that underpin modern-day weather forecasting. In his book, The Primacy of Doubt [3], these successes are described in chapters such as ‘Chaos, Chaos Everywhere’, ‘The Geometry of Chaos’, ‘Noisy, Million-Dollar Butterflies’ and ‘The Two Roads to Monte Carlo’. However, in the chapter ‘Climate Change’, he states the following when discussing climate change multi-model ensembles:
This provides a natural ‘ensemble of opportunity’ to study climate change. It is an example of the so-called multi-model ensemble mentioned in Chapter 5. Each model differs from the others in the precise computational techniques used to solve the Navier-Stokes equation, and, more importantly, in the parameterisation formulae for unresolved processes.
There is nothing in the above to suggest that the aleatory sampling techniques he had described in Chapter 5 (Monte Carlo simulation) are anything other than fully applicable to the ensembles he now describes. And yet, Chapter 5 was predominantly about the modelling of variability, whereas now we have moved on to matters of incertitude, as characterised by models of varied structure. Just to reinforce the suspicion that Palmer sees nothing wrong with the adoption of aleatory methods to treat such epistemic uncertainty, on page 121 he reproduces a histogram showing the various estimates of climate sensitivity resulting from an ensemble of differently structured models. Over this he fits a probability distribution curve, seemingly unconcerned that this takes no account of the histogram’s decidedly non-stochastic representation of the space of possible model structures. This is precisely the normative practice to which I have referred above, and yet there is no hint in his book that he sees it as a pragmatic (albeit sub-optimal) expedient, or as a probabilistic treatment adopted only for the benefit of the policy-makers. Indeed, he goes on to justify its legitimacy by reference to the concept of the ‘wisdom of the crowds’. But this is a concept that only applies when the crowd acts entirely independently and is statistically representative. Of course, neither of these conditions apply to the currently available ‘crowd’ of climate models.
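For what it is worth, the practice in question, and the reason why the ‘wisdom of the crowds’ defence does not rescue it, can both be illustrated in a few lines of code. The sketch below uses invented sensitivity values (they are not taken from Palmer’s book or from any real ensemble): it fits a normal curve to a handful of model estimates and reads a probability off it, and then shows that averaging a ‘crowd’ leaves any error shared across the crowd untouched.

```python
# Minimal sketch of the normative practice discussed above.  The 'sensitivity'
# values are invented for illustration; they are not taken from any real
# multi-model ensemble.
import numpy as np
from scipy import stats

sensitivities = np.array([2.1, 2.8, 3.0, 3.4, 4.5, 5.1])  # hypothetical ECS values (K)

# Fit a Gaussian to the histogram of model estimates and read it as a
# probability distribution -- as if the models were independent random draws
# from a well-defined population of possible model structures.
mu, sigma = sensitivities.mean(), sensitivities.std(ddof=1)
print(f"'Probability' that ECS exceeds 4.5 K: {stats.norm(mu, sigma).sf(4.5):.2f}")

# Why the 'wisdom of the crowds' defence fails: averaging only cancels the
# independent errors.  Any error shared across the crowd (common code,
# parameterisations, tuning targets) survives, no matter how large the crowd.
rng = np.random.default_rng(0)
truth = 3.0
shared_error = rng.normal(0.0, 0.8)                  # error common to every model
individual_errors = rng.normal(0.0, 0.8, size=1000)  # model-specific errors
crowd = truth + shared_error + individual_errors
print(f"error of the crowd mean: {abs(crowd.mean() - truth):.2f}")  # ~ |shared_error|
```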
In fact, as one reads further into Palmer’s book, covering as it does a number of modelling applications in a variety of fields, one gets the distinct impression that there isn’t an epistemic challenge that Palmer feels couldn’t be met by Monte Carlo sampling of a sufficiently large ensemble, crunched using a sufficiently powerful computer. For example, when later discussing how one might create a ‘model of global society’ in order to forecast how the socioeconomic impacts of climate change may pan out, he says:
As we have discussed, a weather forecast model has some billions of degrees of freedom, so doubling this to incorporate degrees of freedom of individuals is not completely out of the question. Of course, individual agents will have to be treated as having some inherent stochasticity. But as I have explained, we have to do this anyway for weather variables.
Armed with an ensemble of such models, Palmer sees great potential in tackling the epistemic uncertainties appertaining to our shared future:
Of course, such a digital ensemble twin would not only be able to tackle the socioeconomic problems of climate geoengineering, it should be able to provide credible estimates of future migration, future conflict, future health risks, future food supplies, the future health of the oceans and so on.
Of course? It is far from obvious to me that a reliable ‘model of global society’ can be created by extending a stochastic model designed to forecast the outcome of a system’s variability.
Problem number 2: The denial of subjectivity
In their paper, Rougier and Crucifix address a philosophical issue that lies at the heart of much of the controversy regarding uncertainty. The problem is that it is difficult to settle upon a single notion of uncertainty when there isn’t even a universally agreed understanding of what probability is and what it signifies. As they put it:
In this paper we confine our discussion of climate uncertainty quantification to the assessment of probabilities. There are, of course, several interpretations of probability. L.J. Savage wrote of “dozens” of different interpretations of probability, Savage (1954, p. 2), and he focused on three main strands: the Objective (or Frequentist), the Personalistic, and the Necessary.
Having drawn attention to this troubling ambiguity, they then declare their own position:
Of all of these interpretations, however, we contend that only the Personalistic interpretation can capture the ‘total uncertainty’ inherent in the assessment of climate policy. Our uncertainty about future climate is predominantly epistemic uncertainty—the uncertainty that follows from limitations in knowledge and resources.
The Personalistic interpretation has the benefit of being clear cut and pragmatic, in so far as it refers to the confidence one has in one’s own beliefs, leading to a willingness to place a bet upon an outcome [4]. As they put it:
Not everyone will find the Personalistic definition of probability compelling. But at least it provides a very clear answer to the question ‘What do You mean when You state that Pr(A) = p?’.
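To spell out what that clear answer amounts to, here is a minimal worked example of the betting reading (the numbers are mine, not Rougier and Crucifix’s): to state Pr(A) = 0.7 is to say that You regard 70 units as a fair price for a ticket paying 100 units if A occurs and nothing otherwise, and that You would be prepared to take either side of that bet.

```python
# Minimal sketch of the betting reading of Pr(A) = p (numbers invented).
p = 0.7         # You state Pr(A) = 0.7
payout = 100.0  # a ticket pays this if A occurs, nothing otherwise

fair_price = p * payout  # the price at which You would buy or sell the ticket

# If A occurs You gain (payout - fair_price); if not, You lose fair_price.
# At Your own probability p the expected gain is zero -- the bet is 'fair'.
expected_gain = p * (payout - fair_price) + (1 - p) * (0.0 - fair_price)
print(fair_price, round(expected_gain, 10))  # 70.0, 0.0
```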
That said, they recognise that the highly subjective and stakeholder-based nature of the Personalistic interpretation does not go down well with some:
However, many physical scientists seem to be very uncomfortable with the twin notions that uncertainty is subjective (i.e. it is a property of the mind), and that probabilities are expressions of personal inclinations to act in certain ways. At least part of the problem concerns the use of the word ‘subjective’, about which the first author has written before (Rougier, 2007, sec. 2). This word is clearly inflammatory.
This is very true. There does indeed appear to be a notion of ‘scientific uncertainty’ that is held above the notions filling the head of the hapless non-scientist. This view that there exists such a pristine notion of uncertainty would seem to be evident in the hydrologist and science communicator Peter Gleick’s review of a book by climate sceptic Michael Shellenberger. In that review, Gleick wrote:
Shellenberger misunderstands the concept of ‘uncertainty’ in science, making the classic mistake of thinking about uncertainty in the colloquial sense of ‘We don’t know’ rather than the way scientists use it to present ‘a range of possibilities’.
It is quite difficult to understand exactly what Gleick is driving at, but he certainly seems to be rejecting the idea of subjective uncertainty being legitimately scientific. It appears instead that the objective notions of uncertainty that underpin the likes of measurement theory, with its decidedly objective interpretation of probability, are preferred to the supposedly unscientific notions based upon Shellenberger’s subjective interpretation. It does rather look like a claim for the scientific credentials of aleatory uncertainty, in which a probability distribution curve would indeed capture ‘a range of possibilities’. It is easy to imagine that someone who is so squeamish regarding the ‘colloquial’ epistemics of ‘we don’t know’ would be more than comfortable to see the instruments of aleatory analysis brought to bear when addressing a multi-model ensemble of differently structured models – if only to ensure that the analysis could be called scientific!
Those who adhere to the idea that uncertainty has to be addressed objectively have been accused of a so-called ‘mind projection fallacy’ in which personal incertitude is projected upon the real world. Rougier and Crucifix believe that climate scientists have been particularly guilty of this. First, they point out how the IPCC fails to define its terms in a way that would pin the issue down:
Consider the uncertainty assessment guidelines for the forthcoming IPCC report (Mastrandrea et al., 2010). Nowhere in the guidelines was it thought necessary to define ‘probability’. Either the authors of the guidelines were not aware that this concept was amenable to several different interpretations, or that they were aware of this, and decided against bringing it out into the open.
After which, they make their accusations quite explicit:
We can hardly suppose that the omission of a definition for the key concept in such an important and high-profile document was made in ignorance. And yet the mind projection fallacy is in evidence throughout. It looks as though the authors have deliberately chosen not to acknowledge the essential subjectivity of climate uncertainty, and to suppress linguistic usage that would indicate otherwise. This should be termed ‘monster denial’ in the taxonomy of Curry and Webster (2011). Choosing not to rock the boat is convenient for academic climate scientists.
Whatever one may feel regarding the above accusation, it is hardly reassuring that such a crucial matter as the meaning of probability has been left hanging in the air by the IPCC. After all, as Rougier and Crucifix point out:
For policymakers, the meaning of ‘Pr(A) = p’ is of paramount importance, and they need to know if ten different climate scientists mean it ten different ways.
Such ambiguity suits nobody’s purpose [5].
The nature of the dilemma
There may be a number of factors that have motivated climate scientists to fall in line with the normative practice for quantifying uncertainty, but they essentially boil down to just two: either the individual joins others in adopting a pragmatic, albeit flawed, approach, or they adopt the approach because they genuinely believe it to be technically sound. Unfortunately for those in the latter category, there is the problematic nature of uncertainty to be contended with. This does not just lie in the sheer scale and complexity of the challenge that confronts us; it is much more profound than that. It lies in the absence of a universal understanding of what uncertainty actually means as a concept, combined with the commonsense assumption that we do. It lies in the fact that uncertainty laughs in the face of quantification whilst simultaneously demanding it. It lurks in the gap between p and 1-p; a gap that exists because we can’t even agree what p means. Uncertainty is a profoundly destabilising state of mind that distorts perception of risk, to the extent that one can no longer be sure whether it is uncertainty or risk to which we have become averse. And it is an enchanter that fools us into thinking it exists in the real world rather than in our own heads. I have spent much of my professional life and all of my retirement trying to come to terms with it conceptually, and I fear I have only just scratched the surface. Those who refer to the ‘uncertainty monster’ do so with much justification.
Ultimately, tackling climate change is a risk management problem in which decisions are being made under uncertainty. One cannot do this confidently until the ‘uncertainty monster’ has been tamed, and this is true irrespective of the position one takes. That said, I am certain of one thing. Anyone who believes that the current state of play in climate science is sufficiently mature and settled as to justify committing (with high confidence both in the pressing need and the probable outcome) to a set of profoundly transformative interventions, is engaging in a significant act of faith. Throughout the world, policy makers are demanding that we keep that faith, and the scientists assure us that it is justified. That’s all very well, but as someone with a risk management background, I would feel much more comfortable if I didn’t know just how many conceptual and methodological weaknesses lay behind the proselytising.
Footnotes:
[1] When I recently asked Grok to provide a synopsis of the article it said:
“The post does not allege financial corruption, political coercion, or sinister motives. Instead, it describes a systemic, institutional pressure rooted in the established norms of climate science practice—specifically, the expectation to deliver probabilistic, policy-actionable outputs using methods (like multi-model ensembles) that treat epistemic uncertainties as if they were aleatory.”
[2] When I say ‘system under study’, I refer here to the weather system. And when I say that climate scientists are more concerned with epistemic uncertainties, that isn’t to deny the existence of internal variabilities that may operate on climatic timescales, only to say that an understanding of them is predominantly an epistemic challenge.
[3] The Primacy of Doubt, Tim Palmer, Oxford University Press, ISBN 978-0-19-284359-3.
[4] Actually, so-called personalistic probability is just subjective probability by another name (it was Savage’s original terminology). Rougier and Crucifix are just adding the widely-held philosophical viewpoint that one can measure strength of personal belief by asking what the individual is prepared to wager. I don’t happen to agree with that proposition since there are many factors that can affect one’s appetite for risk, i.e. cause an individual to act in ways that seem inconsistent with their strength of belief. However, I do agree that from a decision theoretic viewpoint, the stakeholder-based subjective probability makes a lot of sense, not because it captures the full uncertainty, but because it captures the full subjectivity of decision-making.
[5] In fact, the definitional deficiencies in the Mastrandrea guidelines extend beyond the concept of probability. As risk scientist Professor Terje Aven said when reviewing the guidelines, ‘The important concepts of confidence and likelihood used in the IPCC documents remain too vague to be used consistently and meaningfully in practice.’ I might add that guidelines for the consistent treatment of uncertainties are hardly helped by a failure to define what is meant by ‘uncertainty’.
John,
Thank you for your continuing efforts to educate us about the important distinction between epistemic and aleatory uncertainty. I’m getting there!
But the paragraph that disturbed me most was this one:
Consider the uncertainty assessment guidelines for the forthcoming IPCC report (Mastrandrea et al., 2010). Nowhere in the guidelines was it thought necessary to define ‘probability’. Either the authors of the guidelines were not aware that this concept was amenable to several different interpretations, or that they were aware of this, and decided against bringing it out into the open.
I wonder what we can expect next time out from the IPCC? From what I have read, I fear it’s becoming more alarmist and political, and less scientific.
We need to select the best model, and bin the others. The trouble is, how do we know which is the best model? By the time we find out how much too hot it’s running, it’s already obsolete, and has been replaced by another, better model, with a smaller grid or something (I think they are now at 50 km * 50 km, down from the previous generation of 100 km * 100 km – please correct me if I’m wrong).
This trouble must also affect the subjective probability, since no-one has to take the model outputs remotely seriously. And yet governments around the world are willing to decide policy based on these simulated Earths. I also abhor the way they make themselves look good by tuning themselves on the training period. And I’m sure if you asked, the important people would be sure that the models were independent of one another – meaning that some sort of statistical analysis would be less dumb than it is now.
I’ve used the word “cyber-deference” before, and I think it is a serious problem. A computer, being emotionless, has no baggage and no axe to grind. If it tells us we’re going to fry, we believe it. But not many of us have an inkling of what lies behind the front panel.
Mark,
The IPCC’s AR6 marked a departure from previous reports in that it wasn’t entirely focussed upon probabilistic assessments with their lack of epistemological validity. Those who are not happy with them have been arguing behind the scenes and advocating for a different approach called ‘storylines’. Those who pushed for storylines to be so prominent in AR6 did so for a number of reasons, but primary amongst them was the fear that the probabilistic risk assessments were underplaying the risk. Storyline approaches are said to be better when dealing with low probability, high impact risks, but what is really meant is that they overplay the risk due to their ‘probability blindness’. And that, I suspect is why they appealed so much to the IPCC.
I’m not sure what we are going to get in AR7, but I suspect it will feature a bit of everything to keep everyone happy. We will have more storyline alarmism but also a hefty dose of good old-fashioned Otto-style extreme weather event attribution, complete with all of its epistemological dodginess.
https://cliscep.com/2021/09/15/ar6-telling-stories-and-selling-ideas/
Ah yes, your friend, Ms Otto:
https://eidclimate.org/climate-experts-raising-the-alarm-about-a-controversial-professors-involvement-in-upcoming-ipcc-report/
John – great post, much to think about.
Mark – great Otto catch, thanks for the link proving her activist leanings.
Mark,
As your linked article says, Otto has never kept her political motivations secret, and her appointment as a lead author doesn’t bode well. However, I would argue that it isn’t upon such an appointment that the IPCC loses its purely scientific credentials. The exploitation of extreme weather event attribution to encourage compliance with climate change policy has a tradition that goes back at least to AR5, WG3, Chapter 2. As I said in the concluding part of my five-part critique of that document, regarding the proposal to “Characterize the likelihood of extreme events and examine their impact on the design of climate change policies”:
https://cliscep.com/2021/02/21/the-ipcc-on-risk-part-5-leaving-no-room-for-doubt/
John – thanks for the link back to your 2021 post/posts at the time, well worth a reread to refresh my memory. Most MSM “with salient tales of natural disaster” have certainly delivered the desired message to the UK public, with Otto & other selective climate scientists given airtime to push “if you think it’s bad now, just wait. We have to act now, before it’s too late”.