I had promised myself that I would waste no further time writing about the misconceptions and controversies surrounding the application of risk management within the climate change context. The evidence would suggest that I have said all I have to say on the subject, and folk are now getting a bit fed up with hearing about it. However, I recently came across a pre-print posted at the PhilSci Archive, which I felt was too important to pass by without comment. It goes by the title, ‘Severe Weather Event Attribution – Why Values Won’t Go Away’ and is co-authored by Eric Winsberg, Elizabeth Lloyd and Naomi Oreskes. If Professor Winsberg’s publications are anything to go by, he is a man after my own heart, sharing many of my interests both within and outwith the climate change arena.1 Elizabeth Lloyd is a professor of History and Philosophy at Indiana University. Naomi Oreskes, of course, needs no introduction. The point to take away here is that the paper represents the views of three professors of philosophy.
I strongly recommend that everyone reads the Winsberg et al paper and forms their own opinion. But, for what it is worth, here is mine.
Listening to the Scientists
Firstly, what comes over with abundant clarity is that Oreskes and Mann (to name but two) are, by their own admission, embroiled in a heated dispute with the vast majority of Detection and Attribution (D&A) experts. Far from representing the mainstream view, they are part of a small group of contrarians whose opinions have been dismissed as being seriously flawed. The criticism made of them by the mainstream is that they advocate an approach to D&A that results in exaggerated and unreliable estimates of the extreme weather risks resulting from climate change. The Winsberg et al paper is a response to such criticism, explicitly defending the contrarians’ narrowly-held counterview that the conventional techniques for event attribution (i.e. model-based calculations of either Risk Ratio or Fraction of Attributable Risk) systematically underestimate the risks.
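The conventional metrics mentioned here are simple to state: if P1 is the probability of exceeding the event threshold in the factual (forced) climate and P0 the probability in the counterfactual climate, the Risk Ratio is P1/P0 and the Fraction of Attributable Risk is 1 − P0/P1. A minimal sketch, with invented probabilities purely for illustration:

```python
# Illustrative only -- the probabilities below are invented, not drawn
# from any attribution study.
# Conventional event attribution compares the probability of exceeding an
# event threshold in the factual world (P1, with anthropogenic forcing)
# against the counterfactual world (P0, without it).

p1 = 0.02   # hypothetical factual exceedance probability
p0 = 0.005  # hypothetical counterfactual exceedance probability

risk_ratio = p1 / p0   # RR: how many times more likely the event became
far = 1 - p0 / p1      # Fraction of Attributable Risk

print(risk_ratio, round(far, 2))  # 4.0 0.75
```

Both metrics are model-based: the estimates of P1 and P0 come from ensembles of climate model runs, which is precisely where the dispute over reliability begins.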
By supporting the criticism made of the mainstream, the paper’s authors are arguing in defence of a minority of climate scientists who speak out against those experts who actually specialise in the subject. There is, of course, a delicious irony to this, since it is Oreskes who sets such great store by the importance of scientific consensus and expert opinion. Despite her shrill warnings against the perils of motivated denialism, it appears that, when it suits, she sees nothing wrong with arguing against the expert consensus – being a merchant of doubt is perfectly acceptable, it seems, as long as it supports her agenda.
Furthermore, the above insight has great relevance to the BBC’s recent proclamation on the subject: “Climate Change – The Facts”. On that programme, both Michael Mann and Peter Stott were filmed confidently making extreme weather event attributions, without a glimmer of acknowledgement that they were on opposing sides of a bitter dispute, in which each side calls into question the reliability and even ethicality of the other’s analytical approach. To be precise, it is Peter Stott who, amongst others, challenges Michael Mann, etc. for using methods that overestimate the risks, and Michael Mann, amongst others, who challenges Peter Stott, etc. for using methods that underestimate them.2
I have it on good authority (a sixteen-year-old girl who’s too cool for school) that I should listen to the scientific consensus. That is all very well, but first I think the media need to provide better advice as to where the consensus exists. And if there are two camps who believe the other is using inadequate methods, the media should at least acknowledge the possibility that both camps are right.
Forget Risk Assessment, Let’s Just Tell Stories
My second point gets to the meat of the matter since it relates to the specifics of the criticisms made of the established D&A experts, and how those criticisms are then defended in the Winsberg et al paper.
There is a lot of discussion within the paper regarding so-called ‘story-telling’ versus risk-based assessment and the classification of scientific questions, but, when it comes down to it, the controversy existing between the D&A establishment and contrarians such as Trenberth, Sherwood, Oreskes and Mann revolves around the legitimacy of the Bayesian approach these contrarians take. At its heart, the dispute is little more than a good old-fashioned frequentist versus Bayesian bun fight.
Specifically, the question asked is how one should approach an attribution that posits anthropogenic influences on both the thermodynamics and atmospheric dynamics of the climate when the former is much better understood than the latter. The contrarians maintain that little can be said regarding the anthropogenic influence on atmospheric dynamics at the regional level and so, when looking at a specific event, one can only ask the following conditional question: “Taking the extreme event as a given constraint, to what extent can we expect thermodynamic factors to have worsened it?” Essentially, one has to handle the problem as one of Bayesian updating, noting the extent to which posterior probabilities differ from prior probabilities once thermodynamic factors have been accounted for.
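As a sketch of what such conditional updating looks like in the simplest discrete case (all numbers here are invented for illustration and are not taken from the paper):

```python
# Hypothetical numbers purely for illustration -- not from Winsberg et al.
# The conditional ("story-telling") framing takes the extreme event as
# given and asks how much a well-understood thermodynamic factor shifts
# our belief that anthropogenic influence worsened it.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability via Bayes' rule."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Prior belief that anthropogenic thermodynamics worsened this event.
prior = 0.5
# Hypothetical likelihoods of observing the event's moisture anomaly
# under each hypothesis (Clausius-Clapeyron-style reasoning).
posterior = bayes_update(prior, likelihood_if_true=0.8, likelihood_if_false=0.4)
print(round(posterior, 3))  # 0.667 -- the posterior exceeds the prior
```

The whole argument, of course, turns on where those likelihoods and that prior come from, which is the nub of the dispute that follows.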
Of course, the conditioning of the questions asked is a key issue here, as is the choice of prior belief upon which one’s Bayesian analysis is predicated. As explained in a paper written by Peter Stott, David Karoly and Francis Zwiers:
“The choice of approach should focus primarily on the method that is most appropriate for the inference problem at hand. In instances where the prior is not controversial, a Bayesian method may be preferable [to frequentism] from both an estimation and testing perspective. But in other instances where the prior is highly contentious, a Bayesian approach may have little relevance except in those cases where the available evidence overwhelms the choice of prior.”
The relevance of the assumed prior information matters when one wishes to start from an understanding of the anthropogenic influence on a global scale in order to then draw conclusions regarding events occurring at specific regional locations – and this applies equally with respect to thermodynamics and atmospheric dynamics. As Stott et al say:
“An important point to consider in event attribution is the potentially limited relevance of prior information about the causes of global climate change to the regional event attribution problem. While it is generally accepted that a warmer atmosphere will lead to higher atmospheric moisture content and heavier extreme precipitation globally, there are a number of locations where a prior belief that this expectation applies locally could lead to an incorrect conclusion about anthropogenic influence on climate events at regional scales”.3
So the D&A experts object to the contrarian approach, not because they have an in-built aversion to Bayesian inference, but because they cannot see how Bayesianism can be assumed to be a more reliable technique for attribution when it depends so much upon the reliability and relevance of prior understanding. Indeed, when they look at the specifics of the Bayesian models used by their detractors, they see plenty of reason to think that the Bayesian approach is leading to an overestimation of risk, particularly since it disallows that atmospheric dynamics may have any role in the explanatory framework.
Winsberg et al like to characterise the controversy as the irrational rejection of Bayesianism by an old guard who are too attached to their beloved models. They even suggest that at the root of the issue is a value-driven preference for avoiding the overestimation of risk, when the correct approach (they maintain) would be to avoid approaches that underestimate it. I’ll move on to that issue shortly but, in the meantime, I hasten to emphasise that the controversy has nothing to do with values. There is nothing wrong with Bayesianism when one gets the science right, and nothing wrong as long as one appreciates that conditional questions can only provide conditional answers.
Winsberg et al argue that the criticisms of the contrarian approach miss the point. The Bayesian models are created to tell the story of how one particular factor (a relatively well understood one) increases risk, and that insight stands on its own. As such, they are the correct tools to address the class of question being asked. The anthropogenic impact on climate thermodynamics cannot, ceteris paribus, help but increase risk and the best way of understanding this effect is to perform a Bayesian analysis that focuses upon it. The credibility of the models therefore actually lies in their conditioning. The trouble with this position, of course, rests in the assumption that thermodynamics can be analysed ceteris paribus. The true value of Bayesian models is that, no matter how inchoate they may be, they will always give you something. Knowing how valuable that something is – now that’s the killer question.
Values – That Old Chestnut
Finally, we turn to what the paper has to say on the relevance of value judgment in the D&A controversy. Given that the paper was written by three philosophers of science, one might expect that they should devote a great deal of space to this subject – and they don’t disappoint. In fact, they make two main points, one of which should be obvious, the other I find naïve and all too familiar.
The first point is that value judgements are required irrespective of the use of mathematical models. One may think that a quantified metric of attribution, such as a Fraction of Attributable Risk or a Risk Ratio, is more objective and scientific than a qualitative judgement based upon a cause-effect narrative, but this is not the case. There are many respects in which value judgements can affect the direction a risk analysis takes and the conclusions that are subsequently drawn, and this is true for both quantitative and qualitative approaches. In fact, the inevitable subjectivity of risk analysis is a well-known problem amongst risk management practitioners, and one does not need three professors of philosophy to point it out. That said, some elements of the climate science community do seem oblivious to the problem, so the authors may have a point when they suggest that the D&A experts seem to be overly confident regarding the objectivity of their approach.
The second point made is somewhat more contentious, and relates to the question posed above: Is there a moral imperative to overestimate risk rather than underestimate it? The authors deem this a pertinent question because, according to them, the D&A experts are citing the overestimation of risk as the main problem resulting from the approach adopted by the contrarians. Why, ask Winsberg et al, do the D&A experts presuppose this to be a problem? Surely, it is better to overestimate rather than underestimate. Put another way, when looking for risks, false positives are better than false negatives. Is that not the central precept of the precautionary approach?
Well, it is. But there are those who will point out that life is not simple enough to pretend that a general principle can be readily applied in all circumstances. The fact is that, once again, this is not a philosophical problem requiring the attention of three professors. It is, instead, merely a question of pragmatics. Any practising risk manager can tell you that risks should never be analysed and managed in isolation.4 Instead, risk managers model the interactions of risks and proposed risk mitigations using techniques such as Risk Response Diagrams and Influence Diagrams. At no stage does one need to apply principles such as, ‘a false positive is always better than a false negative’. All possibilities are considered and assessments are made on a case-by-case basis. Whether or not a false positive is the greater of the two evils very much depends upon what one has in mind to address the risk concerned. It also depends very much upon the stakeholder perspective that was chosen when performing the analysis. The D&A experts are not criticising the contrarians’ approach because they believe in the inherent benefit of avoiding false positives. They criticise it because it actually does result in potentially damaging false positives, and they reject the allegation that their own approach biases towards false negatives.
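A toy expected-cost comparison illustrates the case-by-case point. All figures below are hypothetical, chosen only to show that whether a false positive or a false negative is the greater evil depends on the cost of the proposed mitigation relative to the damage it averts:

```python
# Hypothetical figures -- a toy expected-cost sketch, not real data.

def expected_cost(p_event, mitigation_cost, damage, residual, act):
    """Expected cost of acting (or not) on an assessed risk.

    Acting incurs the mitigation cost plus residual damage if the event
    still occurs; not acting risks the full damage.
    """
    return mitigation_cost + p_event * residual if act else p_event * damage

# Scenario A: cheap mitigation, severe damage -- err towards false positives.
a_act = expected_cost(0.1, mitigation_cost=1, damage=100, residual=10, act=True)
a_skip = expected_cost(0.1, mitigation_cost=1, damage=100, residual=10, act=False)

# Scenario B: costly mitigation, mild damage -- false positives hurt more.
b_act = expected_cost(0.1, mitigation_cost=50, damage=20, residual=5, act=True)
b_skip = expected_cost(0.1, mitigation_cost=50, damage=20, residual=5, act=False)

print(a_act < a_skip, b_act < b_skip)  # True False
```

No universal precept about false positives decides either scenario; only the particulars of the mitigation and the stakeholder bearing the cost do.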
So Why Have I Bothered?
As explained in my introduction, it was somewhat against my better judgment that I should devote so much of my time towards discussing the issues raised by the Winsberg et al paper. Nevertheless, I have done so because I think it is a paper that, whether it intended to or not, draws attention to three important points:
- Behind the façade of settled consensus within D&A, there lies a bitter dispute that goes so deep that even the scientific legitimacy and ethicality of the techniques being used are called into question by the adversaries.
- Whilst both sides do a good job of seeing the weaknesses in their opponent’s approach, the media seem incapable of seeing them in either. As long as both sides are saying things that satisfy the journalists’ confirmation bias, no further questions will be asked.
- Against the backdrop provided by the above, there exist individuals with ideologies and philosophical positions that predispose them to a precautionary stance. It is tempting to conjecture that with a less-distanced view of the science, and a better understanding of the pragmatics of risk management, this would be a stance that they would be less inclined to assume.
As to the question of who has the more legitimate approach, I think this is probably the wrong question to ask. It is akin to the question, “Which is better, Bayesianism or frequentism?”
To which the answer is, “It depends. Why not try both?”
1. Anyone who publishes on the uncertainties associated with climate modelling before turning his attention to the philosophical implications of Hawking Radiation is someone I would gladly share a pint with.
2. The two teams that are pre-eminent within the field of D&A are the Climate Monitoring and Attribution Group at the U.K.’s Hadley Centre in Exeter, headed by Peter Stott, and the Environmental Change Institute of Oxford University, headed by Myles Allen and Friederike Otto. The scientific minority to which I refer comprises: Kevin Trenberth, John Fasullo, Theodore Shepherd, Alexis Hannart and everyone’s favourite magician of statistics, Michael Mann. Oreskes is holding the contrarians’ coats and is clearly on their side. For example, in an earlier paper, Lloyd and Oreskes wrote of, “…the majority of D&A scientists reacting in a very negative and even personal manner.”
3. Of course, that isn’t what Stott said when the BBC came calling.
4. I have mentioned before now that safety analysts will often apply the Globally At Least Equivalent (GALE) principle. This requires that the net system safety risk resulting from a system modification should never increase. This applies to all modifications, including those ostensibly intended to reduce a specific risk.