I had promised myself that I would waste no further time writing about the misconceptions and controversies surrounding the application of risk management within the climate change context. The evidence would suggest that I have said all I have to say on the subject, and folk are now getting a bit fed up with hearing about it. However, I recently came across a pre-print posted at the PhilSci Archive, which I felt was too important to pass by without comment. It goes by the title, ‘Severe Weather Event Attribution – Why Values Won’t Go Away’ and is co-authored by Eric Winsberg, Elizabeth Lloyd and Naomi Oreskes. If Professor Winsberg’s publications are anything to go by, he is a man after my own heart, sharing many of my interests both within and outwith the climate change arena.1 Elizabeth Lloyd is a professor of History and Philosophy at Indiana University. Naomi Oreskes, of course, needs no introduction. The point to take away here is that the paper represents the views of three professors of philosophy.

I strongly recommend that everyone reads the Winsberg et al paper and forms their own opinion. But, for what it is worth, here is mine.

Listening to the Scientists

Firstly, what comes over with abundant clarity is that Oreskes and Mann (to name but two) are, by their own admission, embroiled in a heated dispute with the vast majority of Detection and Attribution (D&A) experts. Far from representing the mainstream view, they are part of a small group of contrarians whose opinions have been dismissed as being seriously flawed. The criticism made of them by the mainstream is that they advocate an approach to D&A that results in exaggerated and unreliable estimates of the extreme weather risks resulting from climate change. The Winsberg et al paper is a response to such criticism, explicitly defending the contrarians’ narrowly-held counterview that the conventional techniques for event attribution (i.e. model-based calculations of either Risk Ratio or Fraction of Attributable Risk) systematically underestimate the risks.
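For readers unfamiliar with the metrics mentioned above, the Risk Ratio (RR) and Fraction of Attributable Risk (FAR) are both derived from two model-based probabilities of the event in question: one for the world as it is (with anthropogenic forcing) and one for a counterfactual world without it. A minimal sketch, using illustrative probabilities of my own invention:

```python
def risk_ratio(p_forced, p_natural):
    """RR: how many times more probable the event is in the forced world."""
    return p_forced / p_natural

def fraction_attributable_risk(p_forced, p_natural):
    """FAR = 1 - p_natural / p_forced: the fraction of the event's
    probability in the forced world that is attributable to the forcing."""
    return 1.0 - p_natural / p_forced

# Purely illustrative (made-up) event probabilities:
p_forced, p_natural = 0.02, 0.01
print(risk_ratio(p_forced, p_natural))                  # 2.0
print(fraction_attributable_risk(p_forced, p_natural))  # 0.5
```

Note that both metrics stand or fall on the models used to estimate the two probabilities, which is precisely where the dispute begins.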

By supporting the criticism made of the mainstream, the paper’s authors are arguing in defence of a minority of climate scientists who speak out against those experts who actually specialise in the subject. There is, of course, a delicious irony to this, since it is Oreskes who places such great store in the importance of scientific consensus and expert opinion. Despite her shrill warnings against the perils of motivated denialism, it appears that, when it suits, she sees nothing wrong with arguing against the expert consensus – being a merchant of doubt is perfectly acceptable, it seems, as long as it supports her agenda.

Furthermore, the above insight has great relevance to the BBC’s recent proclamation on the subject: “Climate Change – The Facts”. On that programme, both Michael Mann and Peter Stott were filmed confidently making extreme weather event attributions, without a glimmer of acknowledgement that they were on opposing sides of a bitter dispute, in which each side calls into question the reliability and even ethicality of the other’s analytical approach. To be precise, it is Peter Stott who, amongst others, challenges Michael Mann, etc. for using methods that overestimate the risks, and Michael Mann, amongst others, who challenges Peter Stott, etc. for using methods that underestimate them.2

I have it on good authority (a sixteen-year-old girl who’s too cool for school) that I should listen to the scientific consensus. That is all very well, but first I think the media need to provide better advice as to where the consensus exists. And if there are two camps who believe the other is using inadequate methods, the media should at least acknowledge the possibility that both camps are right.

Forget Risk Assessment, Let’s Just Tell Stories

My second point gets to the meat of the matter since it relates to the specifics of the criticisms made of the established D&A experts, and how those criticisms are then defended in the Winsberg et al paper.

There is a lot of discussion within the paper regarding so-called ‘story-telling’ versus risk-based assessment and the classification of scientific questions, but, when it comes down to it, the controversy existing between the D&A establishment and contrarians such as Trenberth, Sherwood, Oreskes and Mann revolves around the legitimacy of the Bayesian approach these contrarians take. At its heart, the dispute is little more than a good old-fashioned frequentist versus Bayesian bun fight.

Specifically, the question asked is how one should approach an attribution that posits anthropogenic influences on both the thermodynamics and atmospheric dynamics of the climate when the former is much better understood than the latter. The contrarians maintain that little can be said regarding the anthropogenic influence on atmospheric dynamics at the regional level and so, when looking at a specific event one can only ask the following conditional question: “Taking the extreme event as a given constraint, to what extent can we expect thermodynamic factors to have worsened it?” Essentially, one has to handle the problem as one of Bayesian updating, noting the extent to which posterior probabilities differ from prior probabilities once thermodynamic factors have been accounted for.
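To make the contrarians’ conditional framing concrete, here is a deliberately simplified sketch of Bayesian updating. The hypotheses, prior and likelihoods are toy numbers of my own invention, not anyone’s published model; the point is only the mechanics of moving from prior to posterior once the event is taken as given:

```python
def bayes_update(prior, likelihood):
    """Return posterior probabilities: posterior is proportional to
    prior x likelihood, then normalised to sum to one."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Toy example: did anthropogenic thermodynamics worsen this (given) event?
prior = {"worsened": 0.5, "not_worsened": 0.5}        # agnostic prior
likelihood = {"worsened": 0.8, "not_worsened": 0.2}   # how well each hypothesis fits the observations
posterior = bayes_update(prior, likelihood)
print(posterior["worsened"])  # approximately 0.8
```

The contrarians’ point is that the shift from prior to posterior is itself informative; the mainstream’s point is that the answer is only as good as the prior and the conditioning.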

Of course, the conditioning of the questions asked is a key issue here, as is the choice of prior belief upon which one’s Bayesian analysis is predicated. As explained in a paper written by Peter Stott, David Karoly and Francis Zwiers:

The choice of approach should focus primarily on the method that is most appropriate for the inference problem at hand. In instances where the prior is not controversial, a Bayesian method may be preferable [to frequentism] from both an estimation and testing perspective. But in other instances where the prior is highly contentious, a Bayesian approach may have little relevance except in those cases where the available evidence overwhelms the choice of prior.

The relevance of the assumed prior information matters when one wishes to start from an understanding of the anthropogenic influence on a global scale in order to then draw conclusions regarding events occurring at specific regional locations – and this applies equally with respect to thermodynamics and atmospheric dynamics. As Stott et al say:

An important point to consider in event attribution is the potentially limited relevance of prior information about the causes of global climate change to the regional event attribution problem. While it is generally accepted that a warmer atmosphere will lead to higher atmospheric moisture content and heavier extreme precipitation globally, there are a number of locations where a prior belief that this expectation applies locally could lead to an incorrect conclusion about anthropogenic influence on climate events at regional scales.3

So the D&A experts object to the contrarian approach, not because they have an in-built aversion to Bayesian inference, but because they cannot see how Bayesianism can be assumed to be a more reliable technique for attribution when it depends so much upon the reliability and relevance of prior understanding. Indeed, when they look at the specifics of the Bayesian models used by their detractors, they see plenty of reason to think that the Bayesian approach is leading to an over-estimation of risk, particularly since it excludes atmospheric dynamics from the explanatory framework altogether.

Winsberg et al like to characterise the controversy as the irrational rejection of Bayesianism by an old guard who are too attached to their beloved models. They even suggest that at the root of the issue is a value-driven preference for avoiding the overestimation of risk, when the correct approach (they maintain) would be to avoid approaches that underestimate it. I’ll move on to that issue shortly but, in the meantime, I hasten to emphasise that the controversy has nothing to do with values. There is nothing wrong with Bayesianism when one gets the science right, and nothing wrong as long as one appreciates that conditional questions can only provide conditional answers.

Winsberg et al argue that the criticisms of the contrarian approach miss the point. The Bayesian models are created to tell the story of how one particular factor (a relatively well understood one) increases risk, and that insight stands on its own. As such, they are the correct tools to address the class of question being asked. The anthropogenic impact on climate thermodynamics cannot, other things being equal, help but increase risk, and the best way of understanding this effect is to perform a Bayesian analysis that focuses upon it. The credibility of the models therefore actually lies in their conditioning. The trouble with this position, of course, rests in the assumption that thermodynamics can be analysed in isolation. The true value of Bayesian models is that, no matter how inchoate they may be, they will always give you something. Knowing how valuable that something is – now that’s the killer question.

Values – That Old Chestnut

Finally, we turn to what the paper has to say on the relevance of value judgement in the D&A controversy. Given that the paper was written by three philosophers of science, one might expect them to devote a great deal of space to this subject – and they don’t disappoint. In fact, they make two main points, one of which should be obvious; the other I find naïve and all too familiar.

The first point is that value judgements are required irrespective of the use of mathematical models. One may think that a quantified metric of attribution, such as a Fraction of Attributable Risk or a Risk Ratio, is more objective and scientific than a qualitative judgement based upon a cause-effect narrative, but this is not the case. There are many respects in which value judgements can affect the direction a risk analysis takes and the conclusions that are subsequently drawn, and this is true for both quantitative and qualitative approaches. In fact, the inevitable subjectivity of risk analysis is a well-known problem amongst risk management practitioners, and one does not need three professors of philosophy to point it out. That said, some elements of the climate science community do seem oblivious to the problem, so the authors may have a point when they suggest that the D&A experts seem to be overly confident regarding the objectivity of their approach.

The second point made is somewhat more contentious, and relates to the question posed above: Is there a moral imperative to overestimate risk rather than underestimate it? The authors deem this a pertinent question because, according to them, the D&A experts are citing the overestimation of risk as the main problem resulting from the approach adopted by the contrarians. Why, ask Winsberg et al, do the D&A experts presuppose this to be a problem? Surely, it is better to overestimate rather than underestimate. Put another way, when looking for risks, false positives are better than false negatives. Is that not the central precept of the precautionary approach?

Well, it is. But there are those who will point out that life is not simple enough for a general principle to be readily applied in all circumstances. The fact is that, once again, this is not a philosophical problem requiring the attention of three professors. It is, instead, merely a question of pragmatics. Any practising risk manager can tell you that risks should never be analysed and managed in isolation.4 Instead, risk managers model the interactions of risks and proposed risk mitigations using techniques such as Risk Response Diagrams and Influence Diagrams. At no stage does one need to apply principles such as, ‘a false positive is always better than a false negative’. All possibilities are considered and assessments are made on a case-by-case basis. Whether or not a false positive is the greater of the two evils very much depends upon what one has in mind to address the risk concerned. It also depends very much upon the stakeholder perspective that was chosen when performing the analysis. The D&A experts are not criticising the contrarians’ approach because they believe in the inherent benefit of avoiding false positives. They criticise it because it actually does result in potentially damaging false positives, and they reject the allegation that their own approach biases towards false negatives.
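To illustrate why neither error type is inherently the greater evil, consider a toy expected-cost comparison (all numbers invented purely for illustration): whether acting on a possible false positive is worse than risking a false negative depends entirely on the cost of the mitigation relative to the cost of the unmitigated event.

```python
def expected_cost(p_event, mitigate, c_mitigation, c_damage):
    """Expected cost of a strategy, assuming (for simplicity)
    that mitigation fully averts the damage."""
    return c_mitigation if mitigate else p_event * c_damage

# Scenario 1: cheap mitigation, severe damage.
# Acting on a possible false positive is the lesser evil.
print(expected_cost(0.1, True, 1, 100) < expected_cost(0.1, False, 1, 100))   # True

# Scenario 2: expensive mitigation, modest damage.
# Now the false positive is the greater evil.
print(expected_cost(0.1, True, 20, 50) > expected_cost(0.1, False, 20, 50))   # True
```

The ranking flips between scenarios, which is the pragmatist’s whole point: no general precept about false positives versus false negatives survives contact with the costings.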

So Why Have I Bothered?

As explained in my introduction, it was somewhat against my better judgment that I devoted so much of my time to discussing the issues raised by the Winsberg et al paper. Nevertheless, I have done so because I think it is a paper that, whether it intended to or not, draws attention to three important points:

  • Behind the façade of settled consensus within D&A, there lies a bitter dispute that goes so deep that even the scientific legitimacy and ethicality of the techniques being used is called into question by the adversaries.
  • Whilst both sides do a good job of seeing the weaknesses in their opponent’s approach, the media seem incapable of seeing it in either. As long as both sides are saying things that satisfy the journalists’ confirmation bias, no further questions will be asked.
  • Against the backdrop provided by the above, there exist individuals with ideologies and philosophical positions that predispose them to a precautionary stance. It is tempting to conjecture that, with a closer view of the science and a better understanding of the pragmatics of risk management, this is a stance that they would be less inclined to assume.

As to the question of who has the more legitimate approach, I think this is probably the wrong question to ask. It is akin to the question, “Which is better, Bayesianism or frequentism?”

To which the answer is, “It depends. Why not try both?”

Notes:

[1] Anyone who publishes on the uncertainties associated with climate modelling before turning his attention to the philosophical implications of Hawking Radiation is someone I would gladly share a pint with.

[2] The two teams that are pre-eminent within the field of D&A are the Climate Monitoring and Attribution Group at the U.K.’s Hadley Centre in Exeter, headed by Peter Stott, and the Environmental Change Institute of Oxford University, headed by Myles Allen and Friederike Otto. The scientific minority to which I refer comprises: Kevin Trenberth, John Fasullo, Theodore Shepherd, Alexis Hannart and everyone’s favourite magician of statistics, Michael Mann. Oreskes is holding the contrarians’ coats and is clearly on their side. For example, in an earlier paper, Lloyd and Oreskes wrote of, “…the majority of D&A scientists reacting in a very negative and even personal manner.”

[3] Of course, that isn’t what Stott said when the BBC came calling.

[4] I have mentioned before now that safety analysts will often apply the Globally At Least Equivalent (GALE) principle. This requires that the net system safety risk resulting from a system modification should never increase. This applies to all modifications, including those ostensibly intended to reduce a specific risk.

 

60 Comments

  1. It seems to me that the Bayesian ‘storytellers’ are relying upon a prior belief which is itself rooted in formal detection and attribution at the global level, which they then apply with blissful unawareness of atmospheric dynamics and natural variability to the regional level. Further, the D&A approach to attribution of global warming really only yields high confidence after 1950, whereas extreme weather events must necessarily be viewed in the context of hundreds of years of regional weather data (if available). Seems to me like the Bayesians are tying themselves up in Gordian knots by trying to claim that their method is complementary to formal D&A.


  2. Ken,

    I would try to avoid expressions such as ‘broadly agree’ in this case. For a paper that touches on so many subjects it would be surprising that anyone would agree with everything it says and it is difficult to be categorical about when ‘broad agreement’ kicks in. If the paper is seen as nothing more than a reminder that, in statistics, both Bayesian and frequentist approaches are possible, and it may be instructive to see what each has to offer for a given problem, then it is making an uncontroversial statement that I can agree with. However, I do not feel that the paper is stopping there. It seems to be accusing the frequentist camp of failing to appreciate this point and of being overly critical of the Bayesian approach in general, and I do not see the evidence for that. Just as one can warn against being over-confident in the objectivity of frequentism, one can warn against being too incautious with respect to the subjectivity and conditional nature of Bayesianism. I think that is all that is happening.

    Notwithstanding the above, I maintain that the discussion regarding values (a major part of the paper) is a red herring and in that respect I have been critical of the paper.

    Does all of this mean I am in broad agreement? I don’t know and I don’t care because I’m not anxious to form such an executive summary. It’s complicated, and I’m just trying to pick over the bones of the issues raised in as fair and honest a manner as I am capable of. Also, as an aside, I should point out that some of the criticism in my article isn’t actually aimed at the authors but is aimed at journalists and how they have approached the issue. Perhaps this has contributed to the mood of disagreeability that you have picked up on.


  3. John,

    Notwithstanding the above, I maintain that the discussion regarding values (a major part of the paper) is a red herring and in that respect I have been critical of the paper.

    Why a red herring? The point here is that neither approach is truly value-free. If all we were interested in was quantifying the anthropogenic influence on extreme weather events, then we could simply wait until we have sufficient data, and better models, to carry out a definitive analysis. However, that isn’t all we want to know. We’d also like to know if maybe we should be doing something (mitigation, or adaptation) to account for possible changes to these extreme weather events. If the formal D&A analysis means that we can’t definitively answer this question now, then there is some merit in considering an alternative approach. Deciding to do so, or not, is going to involve some type of value judgement. We shouldn’t be claiming that there is some value free way to make such a decision.


  4. John, thanks very much for this. A fine job of balancing on a seesaw between the parties in conflict. I appreciated Hulme’s similar attempt to overview the X-Weather issue, though I guess you would see him on the classical side. He does note the array of motivations at play, and then four categories of attribution methods:
    Physical Reasoning
    Classical Statistics
    Fractional Attributable Risk
    Eco-Systems Philosophy
    Hulme’s articles are here: http://www.mikehulme.org/2014/06/attributing-weather-extremes-to-climate-change/
    My synopsis is https://rclutz.wordpress.com/2016/03/14/x-weathermen-are-back/


  5. Ken,

    Both sides may be engaging in an analytical approach that cannot be said to be value-free, but that does not mean that either side’s preference is the result of a value-driven decision, at least not the value that the paper chooses to concentrate upon (the value of false negatives over false positives). In fact, as far as false negatives and positives are concerned, there is no inherent value one way or another, and this point is already understood and dealt with within the wider risk management community. Instead, those who, on this occasion, have chosen a non-Bayesian approach, have done so because they think this results in more accurate assessments. As I have said, this is a straightforward frequentist versus Bayesian bun fight. As such, it is a technical dispute over the nature of the uncertainties and how these bear upon the applicability of statistical methods. It might look like an argument over values, but I’m afraid that’s how bun fights often develop.


  6. Ron,

    Thanks for the links. I guess the Mike Hulme review of methods pre-dates the efforts of Shepherd, Trenberth, etc., hence the failure to mention ‘story-telling’. I guess it is the ‘Classical Statistics’ and FAR approaches that are supposed to be complemented by the story-telling.


  7. Seems to me that the storyline idea is really pseudoscience, i.e., vague verbal formulations without quantification. In reality thermodynamics and dynamics interact to determine the flow fields in the atmosphere and the oceans. Ignoring one of them is an invitation to systematic errors. This I think therefore must be more about a political storyline designed to achieve a political result.

    Vitamin and nutritional supplement salesmen love to push such storylines. Here’s a really good one: Antioxidants prevent oxidative stress for cells and DNA oxidation and therefore are really good for you and can prevent cancer. Unfortunately, recent large studies show no benefit from vitamin supplementation in people with a well balanced diet. But the storyline makes billions for those pushing it and gives people a false sense of control over their health. It plays to well known weaknesses in human reasoning and character.


  8. Can there be anything more pathetic than the image of a party host crying into an untouched bowl of twiglets because no-one turned up? In the meantime, over at ATTP the strains of revelry go on deep into the night. I’d love to join in but there is always the risk that I’ll bump into Willard, drunkenly delivering his ‘who the fuck are you?’ speech. So I’ll leave my plaintive message to the ATTP crew below, where no-one is likely to trip over it and spill their beer:

    All this talk of complementarity is very interesting but it almost completely misses the point of the dispute that the Winsberg paper actually relates to. The story-telling approach is a branch of Bayesianism. Any Bayesian analysis gets its epistemic legitimacy from the conditioning of the questions asked, the choice of priors and the evidential basis for the Bayesian updating. These are the details that are being questioned by the likes of Stott and Zwiers. The rest is pretty much bullshit.


  9. John, I guess if you want more people to turn up to your party, you need to encourage a few more people to gate-crash the proceedings who aren’t too interested in going straight to the heart of the problem, but enjoy dancing around the edges to the tune of a few more interesting (to them) records. The party over at ATTP was swinging for a while, even though it to’d and fro’d and mostly missed the point, but Willard the party pooper then turned up and now it’s gone totally dead. I don’t suppose any of them will turn up here because they know the host will blow them out.


  10. Presumably there’s plenty of beer left then. I haven’t much to offer. But maybe it’s worth pointing out that while all science is temporarily (albeit temporary can sometimes mean years or decades or generations) corruptible via cultural hi-jack, the broader concept of the Bayesian approach relative to the frequentist approach, and especially the subjective judgements about prior probabilities and the assignment of probabilities to uncertainty, seems to me likely to lend itself more readily to cultural hi-jack. Certainly by the time you have ‘story-lines’ postulating such assignments, you are deep into the territory where such narratives can all too easily slide into the narrative evolution that comes from deeply buried emotive instincts in us. What I know about these two approaches could probably be written on a postage stamp, and I presume that they are both valid and different but not better or worse, *if* rigorously followed. But if one lends itself much more easily to cultural hi-jack, for sure that opportunity will not be passed up where a cultural conflict is already in play. Where’s the beer?


  11. Andy,

    I lied about the beer, and I’m going out now. But I will be back in tomorrow, and I will then endeavour to be the perfect host. I’ve got some things to say about Bayesianism and culture, so I’ll catch you later. In the meantime I can confirm that “they are both valid and different but not better or worse.”

    Jaime,

    Likewise. I have some observations to make regarding how and why the point has been missed, but they will have to wait now until tomorrow. Sorry.


  12. At the AGW Party, a question… Given that it hasn’t been warming for the last 15 years, tho’ CO2 continues to rise, given that the 1990’s warming was not unprecedented and that hockey sticks abound in the historical temperature record, given the improbability of the model simulations matching nature and they don’t, https://curryja.files.wordpress.com/2015/12/christy_dec8.jpg
    serfs ask, ‘Where’s the evidence for anthropological Global Warming on which probability assessment must draw?’


  13. Jaime,

    The conditional question that the ATTP revellers seemed to have asked themselves is this:

    Given that the Winsberg et al paper is an accurate and fair characterisation of the nature of the debate, do I find myself agreeing with it?

    The answer they gave to this question is ‘yes’. Unfortunately, this is a prime example of the dangers of accepting a prior belief and then failing to collate the evidence required to update it. Had they taken the trouble to read what the D&A experts have actually said on the subject (see, for example, the two quotations provided in my article), rather than just accepting the selective representation given in the Winsberg paper, they would have understood that this is not an ideological issue, ideally suited for the attentions of three professors of philosophy. It is, instead, just a dispute over which of the two statistical approaches available is most likely to provide the most accurate attributional advice, given the levels of uncertainties involved in this instance. I do not pretend to have a firm understanding of the uncertainties and how these inform such a decision, but I’m willing to accept that those who do D&A for a living will have a firmer grasp of the science involved than three professors of philosophy and a paleoclimatologist who already has a track record of abusing statistics.

    To elaborate:

    The first quote given in my article demonstrates that the D&A experts are not ideologically opposed to Bayesianism and appreciate that it approaches the problem by asking a different category of question.

    The second quote demonstrates that the D&A experts believe that it is the application of a global understanding to a regional attribution that is the fundamental problem.

    To the above I could have added the following quotes taken from Stott, Karoly and Zwiers:

    “The question of ethics and its relation to the question about how to formulate the null hypothesis for testing is not fundamentally a question of a choice between Bayesian and frequentist approaches. Instead, whether posed in a Bayesian or frequentist manner, we return to the point that the event attribution problem is an estimation problem. Given that changes locally can be very different to global expectations, as a result for example of dynamically induced changes over-coming thermodynamically induced ones, great care must be taken in using prior expectations derived from global considerations. In some cases, the inappropriate use of such prior information could reach too liberal conclusions. In other cases, the neglect of relevant prior information could lead to overly conservative conclusions.”

    “Thus, the null hypothesis of human influence is not inherently a preferable alternative to the usual null hypothesis of no human influence.”

    “Finally, we make a remark about ethical practice as it relates to event attribution. Ethical practice should include such considerations as being clear about methods and assumptions (including priors), rigorously assessing tools and uncertainties, and being clear on which hypotheses are being tested and why a particular testing formulation is suitable for the circumstances being considered. This view of what constitutes ethical practice for a practitioner should not be controversial and should be kept distinct from considerations of what constitutes ethical practice for policy makers, business leaders, and politicians.”

    My party may be a quiet party, but I like to think it is better informed.


  14. Andy,

    Bayes’ Rule enables one to combine empirical evidence with prior beliefs, no matter how shaky they may be. Superstitions, dogmas, established facts – they are all the same to Bayes. Just feed in new evidence, turn the handle and see how far that gets you along the road towards enlightenment. That is what lies behind its strength and utility. It is also why many feel it lacks scientific rigour; it’s those pesky priors that, to some, “are as horrid as piss”.
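    As a toy illustration of both the strength and the ‘pesky priors’ complaint: two people starting from very different priors, fed the same stream of evidence, will converge – which is exactly the ‘evidence overwhelms the choice of prior’ condition that Stott et al flag. (All numbers below are invented, purely for illustration.)

```python
def update(p, likelihood_ratio):
    """One Bayesian update of P(H), given the likelihood ratio
    LR = P(evidence | H) / P(evidence | not H)."""
    odds = (p / (1.0 - p)) * likelihood_ratio
    return odds / (1.0 + odds)

p_sceptic, p_convinced = 0.1, 0.9   # wildly different priors
for _ in range(10):                 # ten pieces of evidence, each favouring H three-to-one
    p_sceptic = update(p_sceptic, 3.0)
    p_convinced = update(p_convinced, 3.0)

print(abs(p_sceptic - p_convinced) < 0.01)  # True: the priors have been washed out
```

    When the evidence is that strong, the prior hardly matters; when it isn’t, the prior is doing most of the work – and that is the nub of the dispute.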

    Incidentally, the story of how Bayesianism’s popularity has ebbed and flowed down the years is itself a classic example of how culture and social dynamics influence the development and acceptance of scientific understanding. I have recommended the following book before on this website and I do so again now:

    “The Theory That Would Not Die” by Sharon Bertsch McGrayne.


  15. I’ll munch some Twiglets and sup some ale any day with you John, in preference to being jostled in the increasingly crowded and drunkenly debauched fancy dress party over at ATTP. I’ll give some careful thought to what you said and post later. Meanwhile, I popped back to ATTP out of curiosity and noticed they were attempting some character assassination in my absence, so left this comment (which will likely disappear):

    “I gave up commenting here yesterday, because it turned personal and my responses were deleted (but not the insults) and mostly, the issues I raised failed to be addressed adequately. I see it’s got even more personal in my absence which is entirely true to form and Tom really should know better, being a frequent visitor and contributor over at that ‘other site’. If Mosher or any other advocate of the ‘new way’ wants to throw their hat in the ring at John Ridgway’s post, please do, and if you want to try to continue the character assault upon me personally there, please also do, where I shall be free to respond without the threat of Willard erasing my comments.”


  16. Ken,

    I am responding to your comment posted here:

    https://cliscep.com/2019/06/16/rice-and-mosher-in-defence-of-happy-slappy-severe-weather-attribution-at-attp/

    Taking your first point:

    “Except, one of the reasons for preferring the D&A approach appears to be because it reduces the risk of reputational harm if someone were to claim a positive result that subsequently turns out to be false. This is a perfectly valid concern and I can see why many scientists might have this preference.”

    Why do you say one of the reasons “appears to be”? Do you have evidence that fear of reputational harm has motivated the D&A consensus? According to the D&A experts, they have taken their decisions based purely upon the merits of the tools on offer and the context in which the tools are to be used. To reject their testimony and, instead, suggest that they are guilty of intellectual cowardice can only poison the debate. For that reason, I did not mention this element of the Winsberg et al paper in my own article – quite frankly, I thought it was not worthy of consideration. Remember, Winsberg et al are non-experts (as far as climate science is concerned) adjudicating upon a climate science technicality (the effects of uncertainties relating to regional attributions) and choosing to go against the consensus view held within the D&A community. This is hardly a suitable platform from which to cast professional slurs. If I may repeat what Stott et al have said on this matter:

    “Ethical practice should include such considerations as being clear about methods and assumptions (including priors), rigorously assessing tools and uncertainties, and being clear on which hypotheses are being tested and why a particular testing formulation is suitable for the circumstances being considered. This view of what constitutes ethical practice for a practitioner should not be controversial…”

    In respect of your second point:

    “However, it doesn’t change that the formal D&A approach then runs the risk of suggesting false negatives. Hence – in my view – there is merit in considering an alternative approach that may fill in some of the holes.”

    There is always merit in considering alternative approaches. I happen to have worked in the field from which Shepherd and Trenberth take their analogy and so I can see where they are coming from; a variety of techniques would be employed to analyze the safety of a system, including Fault Tree Analysis, Event Trees, Failure Mode Effects Analysis, Hazard Operability Studies, etc. – each technique designed to answer a different class of question. In this way, one could often “fill in some holes” by taking the differing perspectives offered by the various techniques. But this is not what is happening here. There are circumstances when some tools are more suited to the occasion than others, and the decision then becomes one of choosing the optimal tool. The D&A community claim to have undergone this evaluation and decided that FAR and RR provide more reliable attributions (allegations of false negatives notwithstanding). Once again, re-quoting Stott et al:

    “…whether posed in a Bayesian or frequentist manner, we return to the point that the event attribution problem is an estimation problem. Given that changes locally can be very different to global expectations, as a result for example of dynamically induced changes over-coming thermodynamically induced ones, great care must be taken in using prior expectations derived from global considerations.”

    So it seems to me that they are advocating their own approach over the ‘story-telling’ approach because they think the latter is taking too much for granted. It may be offering a different perspective by answering a different class of question, but those who have the expertise in this area say they are concerned by the manner in which it provides its answers.

    Also note that Stott et al do not see this as a simple case of one technique providing false negatives whilst the other provides false positives:

    “In some cases, the inappropriate use of such prior information could reach too liberal conclusions. In other cases, the neglect of relevant prior information could lead to overly conservative conclusions.”

    Regarding your final point:

    “Of course, I’m not suggesting that the storyline approach should/would always suggest a link between climate change and extreme events. I do, however, think there may well be cases when there is a link, and that it’s possible to infer such a link based on our understanding of the underlying physics, and that may not be evident from a D&A approach. Hence, a storyline approach may provide more relevant information.”

    I don’t think the existence of a link is in question here – it is the strength of the link that everyone is trying to determine. The D&A experts are trying to use the climate models as the basis of this assessment. Love them or loathe them, these models represent our best understanding of how the underlying physics can explain what is observed and make predictions. Why would you think that an approach that disregards a major element of that physical model could provide more relevant information? As I’ve already said, the story-telling approach is a branch of Bayesianism. Any Bayesian analysis gets its epistemic legitimacy from the conditioning of the questions asked, the choice of priors and the evidential basis for the Bayesian updating. These are the details that are being questioned by the likes of Stott and Zwiers. The rest is pretty much bullshit; by which I mean it is missing the point.
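    For concreteness, the Risk Ratio and Fraction of Attributable Risk mentioned above are simple functions of two modelled event probabilities. A minimal sketch, using invented probabilities rather than values from any real attribution study:

```python
# Illustrative sketch only: p1 and p0 below are invented probabilities,
# not values taken from any real attribution study.

def risk_ratio(p1: float, p0: float) -> float:
    """RR = P(event | with anthropogenic forcing) / P(event | natural only)."""
    return p1 / p0

def fraction_of_attributable_risk(p1: float, p0: float) -> float:
    """FAR = 1 - p0/p1: the fraction of the event's risk attributed to forcing."""
    return 1.0 - p0 / p1

p1, p0 = 0.02, 0.005  # assumed per-year event probabilities (hypothetical)
print(risk_ratio(p1, p0))                     # 4.0
print(fraction_of_attributable_risk(p1, p0))  # 0.75
```

    Under these assumed numbers, forcing quadruples the event’s probability; the dispute in this thread is over how reliably such probabilities can be estimated at the regional level.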

  17. John,

    Why do you say one of the reasons “appears to be”? Do you have evidence that fear of reputational harm has motivated the D&A consensus?

    There’s a quote on page 12 that specifically mentions reputational harm.

    According to the D&A experts, they have taken their decisions based purely upon the merits of the tools on offer and the context in which the tools are to be used. To reject their testimony and, instead, suggest that they are guilty of intellectual cowardice can only poison the debate.

    No one is suggesting “intellectual cowardice”. I think people are entitled to have these discussions and to, potentially, disagree. Similarly – as I understand it – the argument is not against D&A; it’s in favour of considering an alternative approach in some circumstances. There’s no suggestion that those who favour D&A should suddenly do something different. The suggestion is that there may be cases when a storyline-type approach may be suitable. In some sense, they’re asking slightly different questions. The D&A approach tends to ask whether or not climate change has made some type of extreme weather event more frequent, while the storyline approach tends to ask whether or not climate change has affected the characteristics of an event that has already happened.

    Remember, Winsberg et al are non-experts (as far as climate science is concerned) adjudicating upon a climate science technicality (the effects of uncertainties relating to regional attributions) and choosing to go against the consensus view held within the D&A community.

    FWIW, I think highlighting the lack of direct expertise is a slippery slope. Also, there are those directly involved in D&A who make arguments in favour of an alternative. Bear in mind that expertise at some particular method doesn’t give someone the right to define what methods are acceptable, and what are not.

    This is hardly a suitable platform from which to cast professional slurs.

    No one is casting professional slurs.

    If I may repeat what Stott et al have said on this matter:

    What Stott et al. said about ethics is entirely correct and uncontroversial. No one in favour of considering the storyline approach is suggesting behaving unethically.

    It seems to me that you are interpreting the storyline approach in a very uncharitable way. The suggestion is not that it be used to make stronger claims than are warranted, or that inappropriate priors are used. It is very simply that there may be cases when some storyline approach may allow us to infer something about the impact of climate change on an extreme weather event, when (maybe) a D&A approach is unsuitable, or when we’re asking a slightly different question.

    Whether you like it or not, any kind of analysis is going to require some kind of statistical judgement. The standard frequentist D&A approach can provide strong support for a hypothesis, but can also lead to the rejection of a hypothesis that turns out to be correct. Similarly, a Bayesian approach could suggest a link between climate change and extreme weather events that does not actually exist. Firstly, many people know this and hence use these methods with due caution. Secondly, there is no fundamental reason why one of these issues outweighs the other (which was one of the points in Winsberg et al.). A preference for one doesn’t immediately mean that the other should never be used.

    The argument in favour of a storyline approach is not (as far as I’m aware) an argument against carrying out D&A analyses. It’s simply an argument in favour of an alternative that could be used in some circumstances. This seems completely unobjectionable to me and I find it slightly odd that some seem so worked up about it.

  18. It seems to me that ATTP is missing the point by such a massive amount that you find yourself speculating about his motives. If you accept that he understands John’s argument, then why would he evade it so badly, if he is arguing in good faith? To conclude that he is a bit thick is uncharitable, so I am forced to decide that he is a bit dishonest.

  19. MIAB,

    I appreciate your support but I would like everyone to refrain from speculating upon anyone’s honesty. I have a ‘no moderation’ policy and I am hoping that I can get to a point of mutual understanding with Ken (if not agreement) before anyone is tempted to take advantage of that policy. Yes, it does feel like Ken is evading my point, but I’m sure he feels the same way about me.

    Fear ye not, I do not intend going around this circle indefinitely. There will come a point (sooner rather than later) when it will become evident that no convergence of understanding is possible, and I promise you I’ll give up at that point.

  20. John,
    Your point seems to be that there is a fundamental problem with the Bayesian-like approach that would be used by the storyline method, which you seem to evidence by quoting some climate scientists. Correct me if this isn’t your point.

    My position is that I disagree with this, at least in the sense of it being *more* problematic than the standard, frequentist D&A approach. All approaches will have strengths and weaknesses, and it’s important to understand this and to take this into account. No one – as far as I’m aware – is suggesting using a storyline approach in some inappropriate way.

    There will come a point (sooner rather than later) when it will become evident that no convergence of understanding is possible, and I promise you I’ll give up at that point.

    There is a difference between a convergence of understanding, and agreement, as I hope you realise.

    I’ll comment on this point you made in your earlier comment.

    Why would you think that an approach that disregards a major element of that physical model could provide more relevant information?

    The point is not to disregard a major element of the physical model. The point is that there will be cases where the data are sufficiently sparse (i.e., an extremely rare event) that it isn’t easy to consider the impact of this other element of the physical model (i.e., the dynamics). In such a case, it may still be possible to comment on how the thermodynamic conditions may have been influenced by anthropogenically-driven warming and, hence, how this may have influenced this event.

    As has been pointed out before, this is asking a somewhat different question from standard D&A. Instead of asking how AGW might be influencing the frequency of such extreme events, it’s asking how it might have influenced this event, given that it happened. As long as one understands the context, there should be no issue with addressing either of these scientific questions.

  21. Why would one call a storyline approach a Bayesian approach? The storyline approach seems to me to not be a formal quantitative approach at all and that’s the problem with it.

  22. John,

    re. Your comment 16th June 9.02am.

    You quote:

    “The question of ethics and its relation to the question about how to formulate the null hypothesis for testing is not fundamentally a question of a choice between Bayesian and frequentist approaches. Instead, whether posed in a Bayesian or frequentist manner, we return to the point that the event attribution problem is an estimation problem. Given that changes locally can be very different to global expectations, as a result for example of dynamically induced changes over-coming thermodynamically induced ones, great care must be taken in using prior expectations derived from global considerations.”

    This was basically what I was trying to argue at ATTP – though the above is more succinctly and economically stated than I managed. Attribution is an inexact ‘science’ – like they say, it is primarily an estimation problem. Formal attribution provides a more robust and inclusive estimation of anthropogenic influence but its ‘drawback’ (according to Oreskes et al) is that it can give false negatives – ‘Heavens above! We might have missed the chance to blame climate change for the rain in Spain which fell mainly on the plain!’ So they are advocating the use of a less robust attribution method which is far more likely to give false positives, and the reason it is likely to give false positives is that it ignores a huge and vital component of extreme weather causation – regional atmospheric dynamics, plus prior knowledge of past similar weather events where available. Their prior knowledge basically consists of thermodynamics in a world warmed by GHG emissions, scaled down to the regional level and applied inexpertly to the magnitude and severity of the event in question.

    “Finally, we make a remark about ethical practice as it relates to event attribution. Ethical practice should include such considerations as being clear about methods and assumptions (including priors), rigorously assessing tools and uncertainties, and being clear on which hypotheses are being tested and why a particular testing formulation is suitable for the circumstances being considered. This view of what constitutes ethical practice for a practitioner should not be controversial and should be kept distinct from considerations of what constitutes ethical practice for policy makers, business leaders, and politicians.”

    That sounds like a roundabout way of saying what I did: that the motivation for this ‘storyline’ method may be primarily political, that policy considerations may cloud the judgement of ‘practitioners’ in their search for ‘actionable information’.

  23. DPY,

    Why would one call a storyline approach a Bayesian approach? The storyline approach seems to me to not be a formal quantitative approach at all and that’s the problem with it.

    What’s being called the storyline approach here, is more formally a Bayesian conditional approach. For example, this paper by Kevin Trenberth says

    Past attribution studies of climate change have assumed a null hypothesis of no role of human activities. The challenge, then, is to prove that there is an anthropogenic component. I argue that because global warming is “unequivocal” and ‘very likely’ caused by human activities, the reverse should now be the case. The task, then, could be to prove there is no anthropogenic component to a particular observed change in climate, although a more useful task is to determine what it is. In Bayesian statistics, this change might be thought of as adding a ‘prior’.

  24. Ken,

    I had prepared a detailed response to your previous comment and was on the point of posting it when your latest comment arrived. Upon reading it, it seems even less likely that we will ever reach a point of mutual understanding (and, yes, surprisingly enough, I do know the difference). So I’ll keep this brief:

    “Your point seems to be that there is a fundamental problem with the Bayesian-like approach that would be used by the storyline method, which you seem to evidence by quoting some climate scientists. Correct me if this isn’t your point.”

    This isn’t my point, and I’ve already stated my point as clearly as I am capable of – as indeed have the ‘some climate scientists’ to whom you dismissively refer. Time to give up. I must admit, this moment has arrived earlier than I had expected.

    I’ll finish, if I may, with the following observation: The fact that you can’t see how anyone (which includes the majority of D&A specialists) could get so worked up about it should be your biggest clue. If none of the reasons you can think of make any sense to you, it is probably because you have misunderstood what “it” is.

  25. John hits the nail on the head when he says: “I don’t think the existence of a link is in question here – it is the strength of the link that everyone is trying to determine.”

    This is not complicated. Ignoring dynamics gives in many cases the wrong sign of the answer. In any case it’s a qualitative answer and that’s not really science. Vague verbal formulations that ATTP and others like to call “scientific understanding” are not “science.” Without quantification, they are more like theological explanations. It’s not surprising that philosophers would like them, but they are not science.

  26. Ken, that quote from Trenberth is bordering on the unhinged in my opinion. It’s so far removed from conventional notions of scientific inquiry and the real world that I fear for Trenberth’s state of mind.

  27. Jaime,

    Yes, that is the greatest irony of all of this. You and I are simply respecting the D&A scientific consensus, and we are being challenged by a group that either fails to appreciate this or has decided, judiciously, that this would be a good time to jump off the consensus bandwagon because it isn’t headed in the direction they wanted after all.

  28. John, I got told off by Mosher for ‘divining’ such motivations. Ken will no doubt divine that your motivation for agreeing with the consensus attribution scientists is that you object to mitigation policy.

  29. Meanwhile, poor science is not limited to storyline attribution methods. Seems that climate scientists should spend more time making sure their published papers are of high quality and actually correct. An egregious recent example of a “scary” storyline with very weak “science” behind it.

    https://cliffmass.blogspot.com

  30. DPY,

    Another example of media promotion of bad science. By the time rational bloggers and trained scientists like Cliff Mass get around to dissecting the faults with the study, the damage is done and the gullible public is filled with dread at the prospect of more frequent killer heatwaves spawned by man-made climate change. It happens so often that it’s like trying to hold back the tide.

  31. DPY,

    I’m glad you came along, because you have raised a point that I should have made much more of before now.

    In my critique of the Winsberg paper I have tended to treat the storyline approach as an alternative means of quantifying attribution (which is what the D&A experts seem to have concentrated upon). However, when one looks at the storyline approach advocated by Trenberth, it is obvious that one is dealing here with an entirely qualitative approach. As such, it doesn’t really stand as an alternative to FAR and RR. Nor can it be seen as complementary, any more than a HAZOP complements risk assessment. It is one thing to investigate causality by identifying only that which is plausible and physically consistent, and quite another thing to quantify the causal links. The former is a precursor to the latter; it doesn’t complement it. By the same token, the qualitative identification of causal links does not complement D&A risk assessment.

  32. Thanks John. It continues to amaze me to what lengths people with strong political convictions will go to justify their scary stories. Fear has always been a strong motivator of human behavior and one that is easily exploited in the modern era of mass media (and increasingly corrupt and unreliable media I might add). Fear usually generates inappropriate responses however and has no place in science.

    The silence of climate scientists in the face of the last decade of increasingly shrill propaganda on harm from climate change is not only unethical, it’s unscientific.

  33. Paul: Indeed, but actually a very small, likely tiny, subset of climate scientists. Albeit hampered by searching only in English (although this is the main IPCC language), out of the 831 scientists directly involved in AR5, I could find only about a dozen who propagate catastrophe narrative, but these were very prolific and came up time and again. Even if I’m out by an order of magnitude, this is still a small minority. The few catastrophists own the megaphone, and it appears that the far larger mainstream do not dare to shout back, let alone take it from their hand. And the catastrophists who are actual climate scientists are joined by other scientists who are not, but trust the conclusions probably through faith in the fraternity of science, which opens the door to their own emotive conviction in catastrophe, and so then propagation in their turn. See here for catastrophe narrative quotes from 50 scientists, about half climate scientists and half other. The same names crop up repeatedly, and they are very few indeed compared to the enterprise of climate science, and absolutely minuscule compared to the enterprise of generic science.

    Liked by 2 people

  34. Ah… I just read the other post and saw you made the exact same point about small numbers and silence of the great majority of the field 0:

  35. Jaime,

    It occurs to me that much of the debate so far has been focused upon an issue that is not uppermost in the minds of the proponents of the story-telling approach, since we have been following the D&A community’s lead, which holds that the debate should be about the pros and cons of the two basic approaches to estimation. Namely, there are two classes of question:

    How probable is it that I would see data (events) like this given some hypothesis about the world?

    How probable is this hypothesis given the data (event) I have observed?

    Essentially, this is the frequentist / Bayesian dichotomy.
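    The two classes of question can be made concrete with a toy calculation. Every number below is invented for illustration, and the 50/50 prior is precisely the kind of assumption the dispute is about:

```python
# Toy contrast between the two questions, using invented numbers.
# H0: per-year event probability p = 0.01 (no change);
# H1: p = 0.04 (raised by warming). Neither value comes from any study.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """P(exactly k events in n years | per-year probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 50, 4  # hypothetical: 4 extreme events observed in 50 years

# Frequentist question: how probable is data at least this extreme under H0?
p_value = sum(binom_pmf(j, n, 0.01) for j in range(k, n + 1))

# Bayesian question: how probable is H1 given the data, starting from an
# assumed 50/50 prior over {H0, H1}? The answer moves with the prior.
prior_h1 = 0.5
l0, l1 = binom_pmf(k, n, 0.01), binom_pmf(k, n, 0.04)
posterior_h1 = l1 * prior_h1 / (l1 * prior_h1 + l0 * (1 - prior_h1))
```

    With these assumed numbers both routes favour an anthropogenic influence, but the Bayesian answer depends explicitly on the prior, which is where the choice-of-priors criticism bites.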

    You and I have been trying to explain that the D&A consensus favours their frequentist approach because, whatever its drawbacks, it is using all the available information. The Bayesian approach should do the same but, on this occasion, it appears that its proponents are not doing so, ostensibly because they see no explanatory value in one particular class of information (our understanding of atmospheric dynamics).

    Meanwhile, Trenberth, if he is observing this debate, must be wondering what we have all been smoking. To his thinking, the two classes of question posed above are just two subclasses of the wrong class of question. Believing the uncertainties to be so profound as to preclude any estimation of probabilities, he rules out any statistically based argument, Bayesian or frequentist. As far as he is concerned, all one needs to ask is whether it is plausible, and physically consistent with the evidence, to conclude that AGW will have made some contribution to the occurrence or severity of a given weather event. Quantifying the degree of attribution is a mug’s game. Hence, the narrative is all that matters.

    If this logic sounds familiar to you, then it should. It’s nothing more than another enunciation of the precautionary principle. When questions regarding probability let you down, resort to questions of plausibility.

    So when it boils down to it, the disagreement is far more fundamental than a debate regarding the correct statistical approach. One group is saying that attribution is a question of estimation, and another group is saying that estimation is impossible, just focus upon the plausibility.

    No wonder the D&A community is up in arms. From their perspective, there is nothing complementary or complimentary to be found in Trenberth’s advocacy of a storyline approach.

  36. John: when uncertainties are profound, what seems plausible (or not) will depend more upon bias than on anything else. And so this is fertile ground for our (deeply embedded) mechanisms of emotive group bias, to achieve dominance. If this dominance gets its grip before there are means / experience to bound uncertainties better, or at least progress down that path, the affected group will actively resist the process of better bounding, and indeed their very perceptions of uncertainties will continue to be driven by group bias, which in turn is tied to the core group narrative. Hence indeed ‘the narrative is all that matters’.

    Liked by 1 person

  37. John,

    This isn’t my point, and I’ve already stated my point as clearly as I am capable of – as indeed have the ‘some climate scientists’ to whom you dismissively refer. Time to give up. I must admit, this moment has arrived earlier than I had expected.

    I genuinely am not trying to misrepresent your point. If it isn’t what I thought it was, I really can’t work out what it is. I’m also not “dismissively referring” to “some climate scientists”, and it’s somewhat bizarre that you think I am. I have to admit that I find it odd that you’re criticising my apparent lack of agreement with what some climate scientists have said given your association with a site that specialises in doing so. Maybe you can explain when we should be accepting what climate scientists say and when it’s okay to challenge it?

  38. Ref: storyline vs. reality- physics is illiterate and doesn’t give a fig about the eloquent compelling stories offered up in sacrifice to the climate.
    Ref: ATTP’s straight face trolling-
    Why waste time on such banality.

  39. Ken,

    “I genuinely am not trying to misrepresent your point.”

    And I never said you were. For Christ’s sake, you can’t even get that right! That’s why I am giving up on you. I find you too obtuse for words. But don’t worry about it. That’s just my opinion and, as you said in the safety of your own blog, I’m just some guy on the internet.

  40. John,
    Okay, apologies, I’m not trying to misunderstand your point.

    That’s just my opinion and, as you said in the safety of your own blog, I’m just some guy on the internet.

    I don’t think that’s quite what I said, but maybe it is. However, it was that kind of sentiment that made me suggest that pointing out that Winsberg et al. weren’t domain experts might be a bit of a slippery slope.

  41. “Given that Koonin has no climate expertise, presumably he thinks that his status as a physics Professor gives him the credibility to speak about the topic.”

    Welcome to the slippery slope Ken!

  42. I know Jaime it is really wearing thin with me too. I thought Koonin made some good points. Schmidt and his echo had to find a few fine points to quibble over to appear to be good consensus enforcers. The problem is that climate science is not very advanced as a science and relies on crude models.

    Liked by 2 people

  43. DPY,

    Ken (and Schmidt) will have to do better than that if they are going to dismiss the ‘contrarian’ arguments of Dr Christy et al. They seem to think they have an exclusive monopoly on good science and that observations are secondary to that science. Alas, good scientists and a world which refuses to warm at the rate required to declare a climate emergency (forcing alarmists to have to rely instead upon the weather to make such claims) may soon disabuse them of that notion.

    https://www.thegwpf.com/putting-climate-change-claims-to-the-test/

  44. Ken,

    “Bear in mind that expertise at some particular method doesn’t give someone the right to define what methods are acceptable, and what are not.”

    Spoken like a true climate science sceptic. Remind me, which side did you say you were on?

    Actually, I thought you were all for respecting the views of the expert consensus. Isn’t it you who once wrote:

    “You can’t expect the scientific community to keep revisiting scientific ideas just because a small fraction want to do so. This doesn’t stop them from continuing to investigate, on the off chance that the scientific community is wrong, but the community overall doesn’t need to continue engaging with these ideas if they regard them as almost certainly wrong.”

    The point here is that the D&A community’s consensus view is that the storyline approach, by attempting regional level event attribution based purely upon an understanding of climate thermodynamics at the global level, is the wrong thing to do. Enjoying the consensus doesn’t make them right, of course, but, as you say, you can’t expect the scientific community to keep revisiting scientific ideas just because a small fraction want to do so.

    Liked by 1 person

  45. John,
    You suggest that there’s a consensus that it’s wrong to consider a storyline-type approach. Here, however, is a recent paper with a large author list that seems to explicitly consider both approaches. For example, the abstract says

    A number of specific published conclusions (case studies) about possible detectable anthropogenic influence on TCs were assessed using the conventional approach of preferentially avoiding Type I errors (i.e., overstating anthropogenic influence or detection). ……
    ……
    The issue was then reframed by assessing evidence for detectable anthropogenic influence while seeking to reduce the chance of Type II errors (i.e., missing or understating anthropogenic influence or detection). For this purpose, we used a much weaker “balance of evidence” criterion for assessment. This leads to a number of more speculative TC detection and/or attribution statements, which we recognize have substantial potential for being false alarms (i.e., overstating anthropogenic influence or detection) but which may be useful for risk assessment.

  46. Ken,

    It is not I who claims that the storyline approach does not form part of the consensus, it is Winsberg et al. When referring to the probabilistic, risk-based approach, they say:

    “This is the now-conventional technique for attributing climate change to extreme events.”

    That is to say, ‘conventional’, as in the dictionary definition: ‘Based on or in accordance with general agreement, use, or practice’.

    As Judith Curry has said, “…using such storylines, and claiming (even implicitly) that they are part of the AGW ‘consensus’ is scientifically dishonest.”

    I also note that you subtly changed my proposition. Consideration is not the issue; it is adoption — specifically when, “attempting regional level event attribution based purely upon an understanding of climate thermodynamics at the global level.”

  47. MIAB,

    Yes, I think you will find that over at ATTP the resident experts always ‘win’. On 11th June they won against those who tried to argue that the climate models are not the failures that some climate scientists say they are. A couple of weeks prior to that, they won against those who tried to argue that the climate models actually are the failures that some climate scientists say they are.

    Incidentally, I finally got around to reading the paper that ATTP cited in his comment at 19th June 1:29pm. What I found is that it had nothing to do with the debate raised by the Winsberg et al paper. Story telling simply didn’t come into it. All the authors did was take the conventional, risk-based approach, based upon analysis of climate models, and asked what would happen if they relaxed the threshold for statistical significance, i.e. look for p<0.1 instead of the traditional p<0.05. Furthermore, the authorship list only looks so impressive because it includes all of the scientists who were involved in the experiment, i.e. these were the scientists who were asked how making such a change altered their view regarding the success of the particular hypothesis tests chosen for the experiment. It would be churlish of me, however, to suggest that this was a ploy to make the paper’s thesis look well-supported.
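    The threshold change described above can be sketched in a couple of lines. The p-values here are invented, not taken from the paper:

```python
# Toy illustration of relaxing the significance level from 0.05 to 0.1:
# some "not detected" results become "detected". All p-values are invented.
p_values = [0.002, 0.03, 0.07, 0.09, 0.12]

detected_strict = [p for p in p_values if p < 0.05]   # [0.002, 0.03]
detected_relaxed = [p for p in p_values if p < 0.10]  # adds 0.07 and 0.09

# Fewer potential false negatives, at the cost of more potential false
# positives: the Type I / Type II trade-off under discussion.
```

    Note that nothing in this adjustment involves story-telling; it is still the conventional hypothesis-testing framework, just with a weaker evidence criterion.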

    Rather than get side-tracked by experiments exploring the effects of differing levels of uncertainty aversion, we should stick to the question in hand, which relates to the choice between two competing statistical paradigms and the bearing such a choice has on the attribution of extreme weather events at the regional level when applying one’s understanding of climate thermodynamics at the global level.
