When attributing an extreme weather event to human-induced climate change, most climate scientists know better than to attempt a categorical statement of causation. The best that can be supported in the current state of understanding is a probabilistic attribution such as a Fraction of Attributable Risk (FAR) or a Risk Ratio (RR). The stock statement issued by journalists, until recently, had been along the lines of:
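To make those measures concrete: if p1 is the probability of the event in the factual (forced) climate and p0 its probability in a counterfactual climate without anthropogenic forcing, then RR = p1/p0 and FAR = 1 - p0/p1. A minimal sketch with invented numbers, not drawn from any real attribution study:

```python
# Probabilistic event attribution metrics. The probabilities below are
# invented for illustration, not taken from any actual study.

def risk_ratio(p1: float, p0: float) -> float:
    """RR: how many times more likely the event is in the forced climate."""
    return p1 / p0

def fraction_attributable_risk(p1: float, p0: float) -> float:
    """FAR = 1 - p0/p1: fraction of the event's probability attributable
    to the forcing (equivalently, 1 - 1/RR)."""
    return 1.0 - p0 / p1

# Hypothetical: a heatwave estimated as a 1-in-50-year event today,
# but a 1-in-200-year event in the counterfactual climate.
p1, p0 = 1 / 50, 1 / 200
print(risk_ratio(p1, p0))                  # 4.0
print(fraction_attributable_risk(p1, p0))  # 0.75
```

A FAR of 0.75 would be read as ‘three quarters of the event’s risk is attributable to the forcing’ – a probabilistic, rather than categorical, statement of the kind the attribution literature supports.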
“Experts say climate change is expected to increase the frequency of extreme weather events, such as heatwaves. However, linking any single event to global warming is complicated.”
Complicated indeed, because the problem is that these extreme weather events are regional, and whilst the effects of anthropogenic heating on global thermodynamics are claimed to be understood, the effects on regional atmospheric dynamics are still subject to deep uncertainty. Such uncertainty has been something of an impediment to progress, and so it should not come as a surprise to the sceptical to see the IPCC declaring in its assertive Sixth Assessment Report (AR6) that it has finally slain the monster. But has it?
The nature of the beast
AR6, WGI has this to say about deep uncertainty:
“A situation of deep uncertainty exists when experts or stakeholders do not know or cannot agree on: (1) appropriate conceptual models that describe relationships among key driving forces in a system; (2) the probability distributions used to represent uncertainty about key variables and parameters; and/or (3) how to weigh and value desirable alternative outcomes (Abram et al., 2019).” 
I find little to disagree with there, though the succinct way I would prefer to put it is that deep uncertainty is a second order uncertainty in which one cannot be certain just how uncertain one is. Either way, the report goes on to explain that deep uncertainty is a particular issue when dealing with low likelihood, high impact (LLHI) events. According to AR6, the same goes for projections of sea level rise and ‘processes related to Marine Ice Sheet Instability and Marine Ice Cliff Instability’.
The fact that deep uncertainty exists at the heart of the science of climate change is something that most activists don’t like to talk about very much, and those that do seek to turn the problem into an opportunity by invoking the idea of uncertainty being ‘actionable knowledge’. However, the authors of AR6 appear to believe that they have finally discovered an effective solution to the problem of deep uncertainty, which would be fantastic news for them since that would enable them to speak much more confidently about the LLHI end of the risk spectrum — the end within which extreme weather events ply their destructive trade. The wonderful new idea is referred to as the ‘storylines’ or ‘narrative’ approach to climate change risk assessment, and its addition to the risk assessment toolbox is deemed by AR6 to exorcise the ghost of deep uncertainty, by building:
“…a cohesive overall picture of potential climate change pathways that moves beyond the presentation of data and figures.” 
Who could resist a method that moves beyond data and figures? And the hype doesn’t stop there, since we are told that storylines:
“…can also help in assessing risks associated with [Low Likelihood High Impact] LLHI events (Weitzman, 2011; Sutton, 2018), because they consider the ‘physically self-consistent unfolding of past events, or of plausible future events or pathways’ (Shepherd et al., 2018b), which would be masked in a probabilistic approach.”
Better still, given that story-telling plays a starring role in AR6, you should be confident that it is an idea that receives the full backing of the climate change community, since the IPCC reassures everyone that the ‘openness’ and ‘transparency’ of their world-beating review process ensures that controversial and ill-supported approaches are rejected.
I’d love to believe that, but there are a couple of reasons not to. Firstly, the storyline approach does not constitute a risk assessment, let alone one that addresses deep uncertainty. Secondly, just about everyone in the field of Detection & Attribution (D&A) would tell you so.
So how come the storylines approach has become the cause célèbre of AR6? And how, even whilst the infighting is still raging behind the scenes, does a fringe idea of contested merit come to effect such a loudly announced change of direction for the IPCC? These are important questions, so before explaining what the storylines approach is, and outlining its strengths and weaknesses, I will need to dwell upon the elephant poo in the room.
Trouble at t’mill
Each time the IPCC produces an Assessment Report, a different tone and direction can be discerned. Ostensibly, this reflects developments in the science and the extent to which policy has been revised as a result of greater understanding. However, it is equally true that every report is compiled by a different team to the previous one – or, at least, the same people adopting a different hierarchy. Consequently, changes in tone and direction are also influenced by the personal views of those who were the most successful in grabbing the keyboard and those who attained editorial control. The result, nevertheless, is always presented as the outcome of the IPCC’s unimpeachable review process, intended to ensure that any shift in focus legitimately represents a shifting consensus within the relevant expert community. This time around, the new pitch is introduced with:
“The AR6 has adopted a unified framework of climate risk, supported by an increased focus in WGI on low-likelihood, high-impact events.” 
Under which one is informed:
“AR6 also makes use of the ‘storylines’ approach, which contributes to building a robust and comprehensive picture of climate information, allows a more flexible consideration and communication of risk, and can explicitly address low-likelihood, high-impact events.”
There is no hint of any internal dissent there. There is no statement to the effect that the storylines approach remains controversial or is criticised by the majority of the D&A community for contributing little to a ‘robust and comprehensive picture’. And yet, not that long ago, Naomi Oreskes, that most famous of eminent climate scientists and vocal advocate of the storylines approach, had observed:
“…the majority of D&A scientists reacting in a very negative and even personal manner.”
Such was the adverse reaction from the mainstream D&A community that a certain Professor Eric Winsberg teamed up with Oreskes to write a paper intended to defend the storylines approach and put the D&A crew back in its box. There was clearly no love to be found in the room, and certainly nothing to suggest that the storylines approach had been universally accepted as the best way forward. This situation prompted Dr Judith Curry to remark at the time:
“In any event, using such storylines, and claiming (even implicitly) that they are part of the AGW ‘consensus’ is scientifically dishonest.” 
And yet that is precisely the scientific dishonesty we now see in AR6 WGI. So has there been a change of heart? Has the D&A community seen the error of its ways and rolled over? One would not have thought so, given the nature of their concerns. For example, in dismissing Winsberg’s accusation that the debate boiled down to a question of ethics and the D&A community’s rejection of Bayesianism, Stott, Karoly and Zwiers wrote:
“The question of ethics and its relation to the question about how to formulate the null hypothesis for testing is not fundamentally a question of a choice between Bayesian and frequentist approaches. Instead, whether posed in a Bayesian or frequentist manner, we return to the point that the event attribution problem is an estimation problem. Given that changes locally can be very different to global expectations, as a result for example of dynamically induced changes over-coming thermodynamically induced ones, great care must be taken in using prior expectations derived from global considerations. In some cases, the inappropriate use of such prior information could reach too liberal conclusions. In other cases, the neglect of relevant prior information could lead to overly conservative conclusions.” 
There have been no recent developments in climate science that render the above statement invalid. Presumably, therefore, the objections must still stand. Basically, the D&A community distrusts the storyline approach, not because it is a wrong approach, but because claims that it deals with deep uncertainties do not stand up to close scrutiny. The uncertainties are just swept under the carpet by using dodgy priors.
The real problem with D&A
When appraising the value of the storylines approach to climate change risk assessment, it is only fair that one first looks at the strengths and weaknesses of the approach favoured by the D&A community. So let us start by looking at what the advocates of the storyline approach say regarding the limitations of probabilistic event attribution. If it isn’t a problem regarding ethics and rejection of Bayesianism, then what could it be?
Actually, Theodore G. Shepherd, a much-cited advocate of storylines, puts the case for distrusting the probabilistic approach very well. It is all down to an issue that I have been trying to explain on this website for three years now. Contrasting the treatment of epistemic uncertainty with the legitimate use of probabilistic techniques for dealing with aleatory uncertainty (i.e. physical variability), Shepherd says:
“The uncertainty in the climate response to forcing is conceptually very different. It is not a property of the physical climate system; rather, it is a property of a state of knowledge, or degree of belief, and it can be reduced as knowledge improves. In contrast with aleatoric uncertainty, which is objective, such epistemic uncertainty is subjective. Therefore, treating epistemic uncertainty as if it were aleatoric, with a focus on the multi-model mean as a best estimate, has no epistemological justification. This has been recognized for some time [21,27,28], but the practice continues to be normative (e.g. as in figure 1). It is interesting to consider why this is so, since, in most areas of science, the essential distinction between systematic and random sources of uncertainty is well recognized.” 
Well, maybe not for ‘most areas of science’, but certainly most engineering and risk management professionals could tell you what is wrong with the climate scientists’ handling of uncertainty. Even so, coming from the guy who helped give AR6 its latest big idea, that’s quite a damning indictment. Elsewhere, Shepherd puts it even more bluntly:
“Since epistemic uncertainty is deterministic and inherently subjective, it follows that there is no objective basis for a probabilistic approach, and no such thing as objective climate information.” 
What Shepherd actually meant here was that there is no basis for an objective probabilistic approach — epistemic uncertainty can still be dealt with probabilistically but by using Bayesian techniques. The real sin committed by climate science is not in trying to treat epistemic uncertainty probabilistically, but in failing to separately propagate the two types of uncertainty in order that each may be individually addressed using the appropriate branch of probability theory (i.e. frequentist or Bayesian). It’s this that leads to epistemic uncertainty being mistreated as though aleatory. Even so, Shepherd puts a mighty fine case for distrusting the conventional approach to D&A, and hence distrusting the confident attributions made by its practitioners.
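The separation argued for above can be illustrated with a toy nested Monte Carlo, in which the epistemic (Bayesian) layer is sampled separately from the aleatory (frequentist) layer. Everything here is invented for illustration (the threshold, the posterior, the noise model); it is a sketch of the bookkeeping, not of any climate calculation:

```python
# Toy sketch of propagating the two kinds of uncertainty separately.
# All numbers are invented; this is not a climate model.
import random

random.seed(42)
THRESHOLD = 2.0  # hypothetical threshold defining an "extreme" outcome

def exceedance_prob(shift: float, n: int = 2000) -> float:
    """Aleatory layer (frequentist): natural variability for one fixed
    state of knowledge, modelled as Gaussian noise about a given shift."""
    return sum(random.gauss(shift, 1.0) > THRESHOLD for _ in range(n)) / n

# Epistemic layer (Bayesian): uncertain belief about the forced shift
# itself, represented by samples from a notional posterior.
shift_samples = [random.gauss(1.0, 0.5) for _ in range(200)]

# Keeping the layers separate yields a distribution of probabilities;
# averaging it away (as a multi-model mean does) hides the epistemic spread.
probs = sorted(exceedance_prob(s) for s in shift_samples)
print("epistemic 5%-95% band of event probability:",
      round(probs[10], 3), "to", round(probs[190], 3))
```

The output is a spread of event probabilities, one per epistemic sample; collapsing that spread into a single best-estimate probability is exactly the move Shepherd objects to.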
A shallow solution to a deep problem?
It’s all very well criticizing the ‘probabilistic’ approach, but what has AR6 got to say about storylines and how they reach the parts that so-called probabilistic techniques cannot? This is how they are described:
“In the broader IPCC context, the term ‘scenario storyline’ refers to a narrative description of one or more scenarios, highlighting their main characteristics, relationships between key driving forces and the dynamics of their evolution…WGI is mainly concerned with ‘physical climate storylines’. These are self-consistent and [a] possible unfolding of a physical trajectory of the climate system or a weather or climate event on timescales of hours to multiple decades.” 
You will note that probability and uncertainty are not an issue here. All that is required to legitimize a storyline is that it should be self-consistent and deemed possible. As described above, a storyline is not an estimate. Instead, storylines are designed to answer the following type of question: ‘Since this event has happened, what do we know regarding the physical processes involved that leads us to suspect that anthropogenic global warming could have been a factor?’ As such, storylines are akin to an accident investigation. They are ‘just so’ stories that determine what may have been the sequence of events that led to an outcome.
Safety analysts have been employing such narrative techniques for years, so it has to be said that there is nothing particularly novel about the idea (heaven knows why climatologists thought they needed to invent a new term). The difference is that nobody within the safety community has ever deluded themselves that these techniques address deep uncertainty or are preferable to probabilistic assessment. They are good at identifying hazards and their possible causes, but they have nothing to say regarding assessment of risk. For that one needs a quantification of scale, and to do that probabilistic techniques, or their equivalent, are unavoidable. As Stott et al. said, attribution is an estimation problem.
And yet, AR6 has this to say regarding storylines:
“Since AR5, ‘storylines’ or ‘narratives’ approaches have been used to better inform risk assessment and decision making, to assist understanding of regional processes, and represent and communicate climate projection uncertainties more clearly.” 
Well, maybe to ‘better inform risk assessment’, but certainly not to obviate it. As for representing and communicating uncertainty more clearly, it suffices to say that storylines do neither. Furthermore, there is nothing in this statement to reveal that the advocates of storylines actually consider them to be superior to the D&A community’s probabilistic approach, nor that, as it happens, the D&A community strongly disagrees.
So why is AR6 pushing a technique that its proponents deem preferable to the D&A community’s probabilistic approaches, even though that community says nay? And why is it leading the reader to believe that storylines address deep uncertainty when, at best, they fudge the issue or, at worst, avoid the subject altogether? To answer that, we need to understand what values the IPCC deems important.
It’s all about IPCC’s values
One of the points that the likes of Oreskes and Winsberg have been quite keen to make is that probabilistic event attributions are supposedly prone to underestimate the risk, whilst the storyline approach is prone to heighten risk perception. This they deemed to be very much to the credit of the storylines approach, since an overestimation of risk is ethically preferable (i.e. better safe than sorry). The germ of this argument can be seen in AR6 when it says:
“Recent work also recognizes that choices made throughout the research process can affect the relative likelihood of false alarms (overestimating the probability and/or magnitude of hazards) or missed warnings (underestimating the probability and/or magnitude of hazards), known respectively as Type I and Type II errors. Researchers may choose different methods depending on which type of error they view as most important to avoid, a choice that may reflect social values (Douglas, 2009; Knutti, 2018; Lloyd and Oreskes, 2018).” 
However, nowhere does AR6 WGI reveal that this was a main argument used to attack the so-called probabilistic approach towards event attribution – the argument being that the probabilistic approach reflects the wrong social values by favouring Type II errors. I happen to think that was a bogus argument, but that is immaterial. The point is that AR6, quite predictably, has turned its focus towards a method that prides itself on biasing towards Type I errors. That is why storylines are being pushed, not because a consensus has been established in their favour, but because they are perceived as being politically correct as far as the IPCC is concerned. The IPCC, a long-time advocate of exploiting the availability heuristic to increase the perception of risk, had decided that probabilistic event attributions were not doing a good enough job of it. An alternative was required that placed availability bias at the heart of the methodology. Or, as Shepherd puts it:
“…the conventional approach to climate change risk is semantic (e.g., what is a 1 in 1000 year event?), whereas storyline approaches are episodic (e.g. have we seen this before; and if so, what might the next event be like?). Behavioural psychology shows that humans have difficulty responding rationally to risks from events that are outside their experience…we act as though the probability of a bad outcome is less than it really is if an event of that type has not happened to us recently (or ever), and more probable than it really is if it has. This asymmetrical response is known as the ‘availability bias’ (Kahneman 2011). Essentially, we—even those with quantitative scientific training—are more likely to respond to episodic than to semantic information.” 
In other words, the storyline approach invokes episodic memory, which is seen as a good thing because the resulting availability bias is more likely to provoke the desired response. You see the accident – you cut your speed. You see the extreme weather event – you cut your carbon dioxide output. One does not have to come up with probabilities to achieve this. All that is needed is a real event and a self-consistent story that is physically possible. Once again, the IPCC has chosen to promote techniques not because they have merit (which in fact they do) but because it suits their agenda of framing risk in a way that raises concern. The deep uncertainty has not gone away, but a technique has been found that neatly sidesteps it and points towards action.
I should make it clear that I do not seek to take sides here in the debate between the probabilistic event attribution and storylines approaches – they both have strengths and weaknesses and neither covers the whole territory. I only wish to point out that it is disingenuous for the IPCC not to have drawn attention in AR6 to the controversy that reigns, and I suggest that their motive for promoting storylines is value-driven rather than because the approach enjoys consensus support. The fact is that there currently isn’t an ideal means of dealing with the deep uncertainty associated with regional atmospheric dynamics, and storylines do not so much address the problem of predictive attribution as avoid it altogether by focusing upon a different class of question. It’s a valid class, and there is much merit in the way that Shepherd, in particular, has gone about dealing with it, but that doesn’t alter the fact that the IPCC’s primary objective is simply to avoid Type II errors and go for overestimation of risk as a matter of principle. Despite whatever else they say, that I believe is the true motive and that is what lies behind their new preoccupation with storylines.
 Under conditions of exogeneity and monotonicity, the FAR can be shown to be equivalent to the probability of necessity (PN), as per causal analysis. For reasons that should be apparent after reading footnote 22, this has a broader relevance to the risk assessment approach advocated by AR6.
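For readers who want the footnote’s claim in symbols (my notation, following the standard counterfactual definitions, with X the forcing and Y the event, p1 the factual and p0 the counterfactual event probability):

```latex
% RR and FAR in terms of the factual (p1) and counterfactual (p0)
% event probabilities, and PN as a counterfactual query.
\[
  \mathrm{RR} = \frac{p_1}{p_0}, \qquad
  \mathrm{FAR} = \frac{p_1 - p_0}{p_1} = 1 - \frac{1}{\mathrm{RR}},
\]
\[
  \mathrm{PN} = P\left(Y_{X=0} = 0 \mid X = 1,\, Y = 1\right)
  \overset{\text{exogeneity, monotonicity}}{=} \frac{p_1 - p_0}{p_1}
  = \mathrm{FAR}.
\]
```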
 AR6, WGI, section 126.96.36.199, para 7.
 ‘Actionable knowledge’ is Stephan Lewandowsky’s idea for re-framing ignorance as if it were knowledge. See ‘Uncertainty as knowledge’ from the Philosophical Transactions of the Royal Society. To see what I think about it, take a look at ‘No-one Does Wrong Quite Like Lewandowsky’.
 Ibid, section 188.8.131.52, para 1.
 Ibid, Executive Summary, para 8.
 I jest. Oreskes is a professor of history and philosophy and has no expert authority when it comes to climate science or uncertainty analysis. It seems inappropriate that she and her cronies should be cited so much by AR6.
 See Severe Weather Event Attribution: Why values won’t go away. To see what I thought about it at the time, take a look at ‘When Philosophers Attack’.
 See ‘Extremes’, posted on Climate Etc., June 13, 2019
 See Stott, Karoly and Zwiers, 2017.
 See, for example, Gleick: What’s Not to Like?
 See ‘Storyline approach to the construction of regional climate change information’, T.G. Shepherd, 2019. Emphasis is mine.
 Although I should point out that there is such a thing as Objective Bayesianism. But I digress.
 I do wish that when sceptics, such as I, say these things there would be someone out there taking note. Greta says we must all listen to the climate scientists rather than the likes of me. Have you ever listened to Professor Theodore G. Shepherd, Greta? I thought not.
 AR6, WGI, section 184.108.40.206, para 2.
 There are several existing techniques that share features with the climate science ‘storylines’ approach. These include Fault Tree Analysis (FTA), Failure Mode and Effects Analysis (FMEA) and event trees, all from safety engineering, and event histories from the social sciences – in fact, any technique based upon a cause-effect history.
 There again, only in climate science is probabilistic assessment mishandled so badly by so many.
 Ibid, section 220.127.116.11, para 1.
 Ibid, section 18.104.22.168, para 5.
 See my series of articles on the IPCC’s treatment of risk in AR5 WG3.
 See Storylines: an alternative approach to representing uncertainty in physical aspects of climate change, T. G. Shepherd, 2018. Emphasis is mine.
 I am particularly encouraged to see Shepherd advocate the use of causal networks in order to quantify the storylines approach. These networks enable the calculation of probability of necessity (PN), which is the equivalent of the FAR, as calculated using the D&A approach. Depressingly, it is rare to see climate scientists embrace the latest developments in causal analysis.