I would like to share with you a few words of wisdom provided by a gentleman I had the privilege to work with back in my days as a safety risk analyst. His name is Felix Redmill and, prior to his recent retirement, he was an internationally recognised expert within the field.[1] All of the following quotes are taken from his paper titled “Exploring Subjectivity in Hazard Analysis”. I believe the advice provided by the paper exemplifies the humility and pragmatism that one often encountered within the safety-related systems arena, and yet all too rarely encounters within the confident proclamations of many climate scientists and activists. Outside the world of climate science, the paper’s content would be accepted, by most, as sage advice. But if it were offered instead as a warning against placing too much trust in the reliability of climate risk assessment, it would probably be dismissed as the ramblings of a merchant of doubt. As you read Felix Redmill’s advice,[2] you are invited to consider why such received wisdom is considered uncontroversial as far as ensuring public safety is concerned, and yet would be construed as a denialist’s charter by those who profess to have the planet’s welfare at heart:

“At first glance, evaluating consequences may seem objective, but what we evaluate depends on where we look, and this is determined by a number of decisions.”

“It must also be decided, at the scope-definition stage of the [risk] analysis, whether estimations should be based on the worst possible consequence, the worst credible, or the most likely, and the risk values are influenced by the choice. Further, each scenario is not clearly defined and waiting to be ‘measured’, but is a potential outcome whose parameters must be subjectively defined – perhaps in line with the goals or mind-set of a particular industry sector.”

“There is always subjectivity in the estimation of consequences, and more so when there is little or no experience of the hazardous event. Not only is there a degree of uncertainty about a potential future event, there is also error, inaccuracy, and the use of discretion and judgement in the description and valuation of what might occur.”

“Qualitative analysis is by definition approximate, but quantitative analysis is often assumed to be wholly objective. Yet there is considerable subjectivity in the analysis process. In spite of the appearance of accuracy, quantitative analysis is subject to assumptions that are not always made explicit.”

“Subjective assumptions and omissions through human judgement or negligence can have enormous implications on the accuracy and relevance of probabilistic calculations.”

“Precision is not the same as accuracy and should not be assumed to imply it.”

“A computing acronym, GIGO (garbage in, garbage out) is also appropriate to mathematical risk models, but the results of risk analyses are often taken to be accurate and the assumptions and inaccuracies in their derivation unrecorded and forgotten.”

“Even high-pedigree data sources may lead to false probabilities… crucial conditions during data collection and the assumptions made in deriving results are often not recorded, and the conditions under which the derived probabilities are used predictively may be very different from those under which the data were collected.”

“But confidence levels are not commonly assigned to probabilities… In any case, how can confidence be derived in very low probabilities? In most instances adequate statistical data are not available for the assessment of rare events, and it could take years to discover if the assumptions on which estimates are based are valid, or even reasonable.”

“Not only are subjective omissions and inaccuracies almost inevitable, but they can also be of great significance to the result of a risk analysis.”

Finally, in a plea for improvement in the teaching syllabus for safety risk analysts, Felix Redmill offered the following recommendations:

“It would also be useful to revise the risk-analysis syllabus to cover the ways in which subjectivity is introduced and the effects that it has, and to make the process’s assumptions more explicit to analysts. Thus, analysts would be taught to understand not only the mathematical assumptions but also their own human biases. There would then be an increased chance that subjectivity would be considered, and partly neutralised, during the analysis and management of risk, and that evidence would exist for placing confidence figures on results.”

Climate scientists need to reflect more carefully upon the basis for their predictions and accept that any attempt at the assessment of risk is highly subjective, requiring a great deal of humility and caution. If this understanding exists outside the world of climate science, then I see no legitimate reason for it not being accepted within it. These are my final words on the subject.

Notes:

[1] Felix Redmill, CEng, FIET, FBCS spent 25 years as a consultant in project management and in risk management for safety-critical systems. He has presented and published widely on aspects of safety and risk management, and, on behalf of the European Workshop on Industrial Computer Systems Reliability, Safety and Security, he edited two volumes of guidelines, which influenced the development of the international safety management standard, IEC 61508.

[2] Advice that, I further emphasise, was not actually directed at climate scientists undertaking risk assessment. I alone choose to draw attention to its applicability. In fact, I have no idea what Felix’s views are on climate change.


Comments:

  1. “It must also be decided, at the scope-definition stage of the [risk] analysis, whether estimations should be based on the worst possible consequence, the worst credible, or the most likely, and the risk values are influenced by the choice. Further, each scenario is not clearly defined and waiting to be ‘measured’, but is a potential outcome whose parameters must be subjectively defined – perhaps in line with the goals or mind-set of a particular industry sector.”

    “There is always subjectivity in the estimation of consequences, and more so when there is little or no experience of the hazardous event. Not only is there a degree of uncertainty about a potential future event, there is also error, inaccuracy, and the use of discretion and judgement in the description and valuation of what might occur.”

    In climate science, it now seems routine to base risk analysis on the worst possible consequence, regardless of whether it is credible or not. Subjectivity is virtually a given in that risk assessors have no experience of the hazardous event except the knowledge that it will be existential and world-threatening.


  2. Great quotes. I think you’re right that this is widely accepted, theoretically speaking, outside of the climate domain. But the expression is partly necessary (especially the education quote), because in practice the implementation, and an ingrained ethos based on this understanding throughout the relevant organisations, often still lags behind the theory. Hence, even outside of CC, folks still stumble into some of the pitfalls, although I’ve personally never observed institutional resistance to improvement within industry, and indeed there may often be some understanding of where to steer on that, plus of potential sources of bias. In the context of widely applied public policies (i.e. outside of a bounded safety-critical system), it’s probably worth noting that the risk analysis should still be fully inclusive – i.e. it should also be performed for the potential risks / downsides of all the remedial / mitigation policies themselves, many of which indeed have life-threatening / life-shortening or negative well-being potentials, as well as benefit potentials. The above text implies this (e.g. ‘subjective omissions’), but is not very explicit within these short and generic quotes. Green advocates frequently miss this angle out.


  3. Jaime,

    As I’m sure you appreciate, this isn’t just a case of dismissing fat tails out of hand. As Felix points out, basing policy upon worst possible or worst imaginable outcomes is a potentially legitimate approach. However, a robust rationale for such a policy has to be established, and simply saying that it is justified because risk analysis typically focuses upon the extreme outcomes (as claimed by Ken Rice) doesn’t cut it – it simply isn’t true. Nor is it sufficient to claim that it is necessary for robust decision making (as claimed by Jonathan Bamber) because that is just a misunderstanding of what robust decision making is all about (RDM is about building flexibility and adaptability into risk management, not just gearing up for the worst case). I suppose we can always fall back on the good old precautionary principle, supported by arguments for the existence of concave loss functions, but these arguments lose their appeal once matters of risk efficiency are taken into account.

    So one is left with a case that still has to be made. I have tried pointing this out at Ken’s web site but that did not go down well with a group who seemed more concerned with challenging my credentials for having an informed opinion on the matter. Furthermore, on the evidence of the lack of interest shown in my last three articles here, I have to recognise that there is little appetite within this forum to discuss the matter any further. Perhaps it is felt that everything that has to be said on the matter has already been said. I’m going to proceed upon that assumption and hang my boots up.


  4. Andy,

    Yes, the very fact that Felix Redmill felt the need to write his paper indicates that subjectivity is underestimated within safety-related areas just as much as it may be within climate science risk management. My experience, however, is similar to yours. A call for a greater understanding of the effects of subjectivity will always find welcome within safety-related applications because emphasising the importance of self-awareness is all part and parcel of safety culture. Within CC, however, the same words of warning would be treated very differently, because we are dealing with a very different culture methinks.


  5. The climate consensus is the antithesis of the high integrity reasonable approach to dealing with risk.


  6. This recent Guardian article made me wonder if the dreaded precautionary principle is infecting the sceptical side of the debate on how to deal with our changing climates – is it even logical to think that the whole planet has one climate?
    Conservatives should change how they think about global warming. I did

    https://www.theguardian.com/commentisfree/2019/jun/10/conservatives-should-change-how-they-think-about-global-warming-i-did?CMP=Share_AndroidApp_Copy_to_clipboard


  7. MIAB,

    I’d like to tell you a personal story:

    After experiencing episodes of choking, both during and shortly after eating, I had been referred to an ear, nose and throat consultant. The consultant couldn’t provide an explanation, but he did diagnose acid reflux and booked me in for an urgent endoscopy and micro-laryngoscopy. This is important because acid reflux can cause all sorts of complications that may need diagnosis and treatment. “I’m not overly concerned”, explained my consultant, “but as a precautionary measure, we need to investigate further”. That said, every procedure carries with it risks, and so these were duly discussed before I signed to give consent.

    Then, only two days before the due date, my dinner was to be cut short by the sudden appearance of a large blood blister on the roof of my mouth. This, together with other evidence, led me to conclude that I was suffering from a rare condition known as Angina Bullosa Hemorrhagica (ABH). In English, this means I can suffer large blood blisters in the mouth or throat following just mild trauma, such as swallowing hot food.

    ABH could certainly explain my choking episodes; in fact, that is its most characteristic symptom. But here is the rub: people with ABH have been known to suffer post-operative respiratory obstruction resulting from pharyngeal bullae. And what operation are we talking about here? Yes, you guessed it – endoscopy. So the precautionary procedure used to investigate the risks associated with acid reflux could conceivably result in me choking to death in the recovery room whilst still under anaesthetic. So much for precaution!

    My consultant had referred to the operative risks as ‘hypothetical’, based upon the fact that he had never personally witnessed any of the commonly recognised endoscopy complications. So, it seems, data gained by observing a sample of the general population indicated a very low risk. But this assessment was based upon the wrong baseline. One has to update the a priori probability in view of the new evidence, i.e. the presence of ABH.

    So the relevant question isn’t what percentage of my consultant’s patients had experienced problems during endoscopy (zero); it is what percentage of his ABH patients had done so. ABH is very rare. For all I know, an ABH patient might be highly likely to experience asphyxiation following endoscopy, and yet this would still be a very rare event due almost entirely to ABH’s rarity. Furthermore, this rarity may mean that there is insufficient data to reliably assess the strength of the link between endoscopy and post-operative respiratory obstruction in ABH patients. If so, the consultant would have no reliable way of assessing my risk. I knew it was a plausible risk (I had read that it had happened to at least one person with ABH), but my uncertainty regarding the probability couldn’t be higher. Therefore, I should invoke the Precautionary Principle and refuse the endoscopy, based purely upon the plausibility of a catastrophic outcome.
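    To put rough numbers on that reasoning, here is a minimal sketch in Python. Every figure is invented purely for illustration: neither the prevalence of ABH nor the complication rates are real data, and the caseload is a guess.

        # All figures below are hypothetical, invented purely for illustration.

        p_abh = 1e-5              # assumed prevalence of ABH in the population
        p_comp_given_abh = 0.2    # assumed complication rate for ABH patients
        p_comp_given_not = 1e-4   # assumed complication rate for everyone else

        # Population-wide complication rate: dominated by the non-ABH majority,
        # so it looks reassuringly tiny...
        p_comp = p_comp_given_abh * p_abh + p_comp_given_not * (1 - p_abh)
        print(f"Population baseline risk: {p_comp:.6f}")   # ~0.000102

        # ...but for a patient known to have ABH, that baseline is the wrong number:
        print(f"Risk given ABH: {p_comp_given_abh:.2f}")   # 0.20

        # 'Zero complications in my patients' is also weaker evidence than it
        # sounds: with zero events in n observations, a standard 95% upper bound
        # on the underlying rate is roughly 3/n (the 'rule of three').
        n_patients = 500          # hypothetical caseload
        print(f"95% upper bound despite zero events: {3 / n_patients:.3f}")   # 0.006

    The point is only that a reassuringly small population-wide figure can coexist with a large conditional risk, and that never having witnessed an event bounds its probability far more loosely than intuition suggests.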

    Now let’s look at the reflux risk. There are several medical hazards associated with acid reflux, but only by accepting a biopsy can I know which of them, if any, apply to me; and this requires the endoscopy. The range of potential hazards encompasses cancer, with the concomitant risk of an untimely death. Therefore, I should invoke the Precautionary Principle and insist on the endoscopy, based purely upon the plausibility of a catastrophic outcome.

    This is a perfect example of the decision-making paralysis that can result from relying upon the precautionary principle: frustratingly, it can be its own worst enemy.

    In fact, it is not uncommon to encounter a problem requiring a risk-based decision, for which none of the probabilities concerned is readily quantifiable (at least not objectively) and, for which, even one’s aversion to uncertainty can’t be used as an arbiter. One is then left to make a decision based entirely upon consideration of the posited impacts. This is basically an irrational neglect of probability. But it turns out that, in the real world, irrationality can sometimes be the only basis upon which one can proceed. Furthermore, when the posited impacts are equally catastrophic, the irrationality behind your decision will be compounded by arbitrariness. You shouldn’t feel like you have abandoned rationality, so much as rationality has abandoned you.
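    As a toy illustration of that last point (my own construction, not anyone’s proposed method), here is what a probability-free comparison of posited impacts looks like when both worst cases are catastrophic. The loss values are invented and deliberately equal.

        # Hypothetical sketch: with no usable probabilities, all one can
        # compare are the posited worst-case impacts of each option.

        options = {
            "refuse the endoscopy": 1.0,   # worst case: undetected cancer
            "accept the endoscopy": 1.0,   # worst case: post-operative asphyxiation
        }

        # Maximin: choose the option whose worst case is least bad.
        choice = min(options, key=options.get)
        print(choice)
        # With equal losses, min() simply returns whichever option was listed
        # first: the 'decision' is an artefact of ordering, i.e. arbitrary.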

    The author of the Guardian article should try lying awake at two in the morning with acid reflux and Angina Bullosa Hemorrhagica before claiming to understand how uncertainty and risk management work in the real world.


  8. JOHN
    Most interesting, and my deepest sympathies. Is this the article I’ve been looking for for ages, the one which unlocks the secrets of Bayesian statistics?

    MAN IN A BARREL
    The Guardian article you link to is most interesting. At “the real debate in climate science…” it links to an article which sides with sceptics to some degree versus Cook and the Guardian’s own Nuccitelli, and it quotes Richard Tol and Matt Ridley. It also gets its knickers in a philosophical twist on a number of subjects, which John may wish to comment on. I’ll take this one:

    “scientists believe that the chance of a nasty surprise is much greater than the chance of a pleasant surprise”

    Given that the definition of a surprise is that it is something we know nothing about, what gives scientists the right to decide that the distribution of unknown future events along a scale of nastiness / pleasantness is skewed towards the nasty? Isn’t this just another version of the old climate chestnut: “We think it’s going to be worse than we think it’s going to be”?


  9. Scientists BELIEVE that we are in for a surprise and that it’s more likely to be a nasty one than a nice one. Fine, but that’s not scientists doing science, so we should ignore them, because scientists don’t believe, they calculate, they pore over the evidence and make scientific judgements based solidly on that evidence and known science. They don’t paradoxically anticipate a ‘surprise’ and even pronounce opinion on the character of what that ‘surprise’ will be. The surprise happens when the unexpected and unanticipated happens, i.e. scientists’ beliefs are totally blown out of the water by events taking place which they could not even have guessed might happen – like they are 100% wrong.


  10. Regarding the author of that Guardian piece:

    “Jerry Taylor is president and co-founder of the Niskanen Center, a public policy thinktank in Washington DC.”

    Very helpfully, they are transparent regarding their donors:

    (Donation history: Niskanen-Center-donation-history-05.01.2019.pdf)

    Their biggest donor appears to be the William and Flora Hewlett Foundation. The “environment” section of that Foundation’s website says:

    “The Environment Program makes grants to protect people and places threatened by a warming planet by addressing climate change globally, expanding clean energy, and conserving the North American West.”

    “Climate change is the defining issue of our day. It is an urgent global crisis that affects every problem philanthropy seeks to solve, whether it’s improving health, alleviating poverty, reducing famine, promoting peace, or advancing social justice. To safeguard human health and the environment, the Hewlett Foundation supports work to ensure that energy sources are clean and efficient, and that global average temperature rise is kept well below 2° Celsius.

    “The Hewlett Foundation has been investing for a number of years in various strategies to avoid the worst effects of climate change and spare human suffering by reducing greenhouse gas (GHG) emissions. Our grants focus on cleaning up power production, using less oil, using energy more efficiently, preserving forests, addressing non-CO2 greenhouse gases, and financing climate-friendly investments. Our grantmaking is focused in developed countries with high energy demand and developing countries with fast-growing energy demand or high deforestation rates.”

    https://hewlett.org/strategy/climate-and-energy/

    Not wishing to involve your site, or me, in litigation, I make no comment regarding the fact that Jerry Taylor in that Guardian piece says he has now changed his views regarding climate change and what, if anything, should be done about it.


  11. “scientists don’t believe, they calculate, they pore over the evidence and make scientific judgements based solidly on that evidence and known science”

    Don’t believe a word of it. Scientists guess and believe their guesses (for a time, and often for too long). All look for supportive evidence; few deliberately seek opposing evidence. Poor science occurs when contrary evidence is ignored, explained away or deliberately altered. Guesses become cherished even in the face of contrary evidence. Good science manages to incorporate apparently opposing evidence and in so doing makes the “guesses” stronger. Almost all published science obscures the guesswork stage, supporting the outside view of the sacredness of the scientific method, which implies no guesswork, only a rigid obedience to the evidence. Anyone who has ever tried to bring understanding to plain facts (or even to recognize them as factual) knows just how little the concept of the scientific method actually fits reality or the way human brains operate.


  12. Alan, some science might be done like that. Your prognosis is bleak indeed. I would hope that a lot of science does still proceed along the lines of the scientific method. Of course, intuition does play a necessary and vital part in scientific discovery and research. Scientists ‘guess’ at what might be happening and they then seek out the data which might confirm their intuitive ‘hunch’. I think this is an entirely legitimate way of doing science but it does rely upon scientists having the integrity to abandon their hunch and their search for confirmatory data when conflicting data starts to emerge in significant quantities. Some of the very best scientists throughout history have been driven by intuitive insights. I’m always reminded of Captain Kirk remarking to somebody who seemed rather doubtful that Spock could get them out of a crisis by guessing some figures to input into the Enterprise computer – “Spock’s guesses are better than most people’s facts,” he said!

    Most science, I think, is nose-to-the-grindstone, mundane, routine collection of data combined with number-crunching in the effort to strengthen the framework of the existing scientific consensus. It’s certainly not evidence-lite speculation about catastrophic outcomes driven primarily by lack of knowledge and data.


  13. Geoff,

    Thank you for your concerns, but things aren’t as bad as they seem. The story is an old one and there is an update.

    I decided in the end that this was one of those circumstances where uncertainty had to be reduced before I could make a reliable, risk-based decision. So I cancelled the endoscopy and made a new appointment with my consultant so that I could ask some pertinent questions (like: have you heard of ABH, and have you seen it in any of your patients?). As it happens, the original consultant was unavailable, but the consultant I did get to see had some interesting news. Apparently, when the first consultant told me he was not worried, he was lying. He had been very worried because, when he examined me, he thought he had seen a throat tumour. Had I been told that at the time, my risk calculation would, of course, have been very different. Anyway, the new consultant quickly re-examined me and declared that there was nothing there! Whatever the first bloke thought he had seen was no longer present. Having never heard of ABH, the first consultant had probably seen a temporary bulla and jumped to the wrong conclusion. However, had the first consultant been right, postponing action until I had better information would have been the worst thing I could do.

    So all’s well that ends well. I still have ABH and I would still need a bloody good reason to agree to undergo an endoscopy, but I think my encounter with the medical profession has a lot to say regarding the perils of making critical decisions under conditions of profound uncertainty. It also shows the dangers introduced when experts try to manipulate the risk perceptions of the general public, no matter how well-intentioned.

    As for the Bayesian thing – yes you are right in concluding that the diagnosis problem was classic Bayesianism in action. As it happens, my next article will cover the controversy surrounding the use of Bayesian statistics in extreme weather event attribution.


  14. John, what a compelling and frightening story of modern-day hubris and complacency.
    The implications of medical endangerment are significant, and it makes a very good comparison with what the climate consensus is doing to us.
    I am so pleased to hear of the excellent outcome you experienced.


  15. John:
    ‘But it turns out that, in the real world, irrationality can sometimes be the only basis upon which one can proceed. ‘

    Indeed, and because as a species we have faced multiple simultaneous unquantifiable risks essentially forever, a system evolved to ensure that at least *the same* irrationality would dominate, in order that we could proceed as a group (because innumerable *different* irrationalities only produce gridlock). It is called culture. This appears to have been a big net survival advantage, likely because the vast leverage gained by acting together easily absorbed the costs of the physical and mental paraphernalia of arbitrary belief (e.g. religious infrastructure and habits), and even despite occasional calamitous outcomes caused by the belief itself. A dominant irrationality in a group (which in fact is not constant but evolves over time) will continue unless toppled by the rise of another belief that competes within the same domain, or unless bulk instinctive skepticism is triggered to challenge / reject it (instinctive skepticism requires no domain knowledge, serving as a balance against culture taking far too many liberties, as well as helping to reject any competing cultures). The roots of the system are simple in-group / out-group signalling and reinforcement, but in more recent times complex world views, taking specific positions on how to live and so how to approach the risks of living, have been incorporated. We can support (or oppose) several cultures at once, as to some extent they are domain-bounded, although multiple memberships tend to be (locally) compatible.

    Alan: ‘Scientists guess and believe their guesses (for a time, and often for too long). All look for supportive evidence, few deliberately seek opposing evidence. Poor science occurs when contrary evidence is ignored, explained away or deliberately altered. Guesses become cherished even in the face of contrary evidence.’

    Which makes the enterprise of science extremely fragile to bias, or to complete hijack by the system above; and this has happened frequently, whether via culture writ small (just some group think) or writ large (major religion, political ideology, or indeed catastrophic climate culture).


  16. HUNTERSON7,

    ‘Complacent’ is not necessarily the word I would use, since everyone was acting out of concern and with an urgency that they felt was justified under the circumstances. The problem is that the consultants were faced with ontological uncertainty, i.e. they didn’t know that they didn’t know about ABH. The second consultant said that he had worked in ENT for over 20 years and, not only had he never encountered it, he hadn’t even heard of it – such is its rarity. In fact, the only professional who had heard of it was my dentist, and it was he who confirmed my self-diagnosis.

    Experts do their best, I suppose.

    Andy,

    “…as a species we have faced multiple simultaneous unquantifiable risks essentially forever, a system evolved to ensure that at least *the same* irrationality would dominate, in order that we could proceed as a group (because innumerable *different* irrationalities only produce gridlock). It is called culture.”

    That’s a very neat way of summing it up.


  17. JJ, for the classic example of scientists making errors and indulging in group think, just look at the history of determining the charge on the electron. Millikan made a small error in his calculation, but no one either spotted that error or had the temerity to point it out when their own experimental results showed that the accepted figure was inaccurate. They just came up with slightly different numbers, converging by small steps ever so slightly towards the figure accepted today. I forget how long it took for someone to point out the error, but it was at least 10 years.


  18. JR, thanks for your response, because it clarifies what worries me most about the likes of ATTP, Annan and the egregious Lew. If we do not really know what risks we are facing, precautionary efforts can easily turn out to be misplaced. The way that building flood defences along the Mississippi to prevent floods in one area led to much more severe flooding elsewhere in the 1920s (depicted in Bessie Smith’s ‘Backwater Blues’) is just one example that comes to mind.


  19. Jaime. I make a distinction between those working in science and scientists. Most real scientists I have met get rather bored quickly. They don’t themselves collect or even massage piles of data; they use what others (or machines) have collected. Those collecting such data are commonly identified as being scientists, but usually are not: they are science-trained technicians. A scientific mind seeks relief from boredom by seeking links in data (imagined or not). That’s why I admire people like Roy Spencer and Willis Eschenbach in the climate field: they commonly use data and manipulate it in new ways, coming up with new relationships. Commonly their work is what I call elegant.

    I have spent a good proportion of my life being a technician – using my experience to interpret new data, as when asked to describe and interpret rock cores for others. Oil companies and geological surveys usually allow such people time to conduct original research on material they have studied and found to be of scientific interest, but whereas publication confers some prestige on the institution you work for, it is not the reason for your employment. Sometimes, in fact, it’s not exactly clear why your activities confer financial benefit on the company (why not just employ consultants?), and I spent much time in a company-wide investigation trying to place a value upon science-technician and scientific activities for the company.

    There are naturally exceptions to my distinction, one obvious example being the discovery and isolation of radium. This involved the arduous processing and recrystallization of a tonne of pitchblende ore by the Curies to produce a tiny amount of radium (chloride). That work could have been done by technicians.


