I would like to share with you a few words of wisdom provided by a gentleman I had the privilege to work with back in my days as a safety risk analyst. His name is Felix Redmill and, prior to his recent retirement, he was an internationally recognised expert in the field.1 All of the following quotes are taken from his paper titled Exploring Subjectivity in Hazard Analysis. I believe the advice provided by the paper exemplifies the humility and pragmatism that one often encounters in the safety-related systems arena, yet which is all too rarely found in the confident proclamations that many climate scientists and activists make. Outside the world of climate science, the paper’s content would be accepted, by most, as sage advice. But if it were offered instead as a warning against placing too much trust in the reliability of climate risk assessment, it would probably be dismissed as the ramblings of a merchant of doubt. As you read Felix Redmill’s advice,2 you are invited to consider why such received wisdom is considered uncontroversial where ensuring public safety is concerned, and yet would be construed as a denialist’s charter by those who profess to have the planet’s welfare at heart:
“At first glance, evaluating consequences may seem objective, but what we evaluate depends on where we look, and this is determined by a number of decisions.”
“It must also be decided, at the scope-definition stage of the [risk] analysis, whether estimations should be based on the worst possible consequence, the worst credible, or the most likely, and the risk values are influenced by the choice. Further, each scenario is not clearly defined and waiting to be ‘measured’, but is a potential outcome whose parameters must be subjectively defined – perhaps in line with the goals or mind-set of a particular industry sector.”
“There is always subjectivity in the estimation of consequences, and more so when there is little or no experience of the hazardous event. Not only is there a degree of uncertainty about a potential future event, there is also error, inaccuracy, and the use of discretion and judgement in the description and valuation of what might occur.”
“Qualitative analysis is by definition approximate, but quantitative analysis is often assumed to be wholly objective. Yet there is considerable subjectivity in the analysis process. In spite of the appearance of accuracy, quantitative analysis is subject to assumptions that are not always made explicit.”
“Subjective assumptions and omissions through human judgement or negligence can have enormous implications on the accuracy and relevance of probabilistic calculations.”
“Precision is not the same as accuracy and should not be assumed to imply it.”
“A computing acronym, GIGO (garbage in, garbage out) is also appropriate to mathematical risk models, but the results of risk analyses are often taken to be accurate and the assumptions and inaccuracies in their derivation unrecorded and forgotten.”
“Even high-pedigree data sources may lead to false probabilities… crucial conditions during data collection and the assumptions made in deriving results are often not recorded, and the conditions under which the derived probabilities are used predictively may be very different from those under which the data were collected.”
“But confidence levels are not commonly assigned to probabilities… In any case, how can confidence be derived in very low probabilities? In most instances adequate statistical data are not available for the assessment of rare events, and it could take years to discover if the assumptions on which estimates are based are valid, or even reasonable.”
“Not only are subjective omissions and inaccuracies almost inevitable, but they can also be of great significance to the result of a risk analysis.”
Finally, in a plea for improvement in the teaching syllabus for safety risk analysts, Felix Redmill offered the following recommendations:
“It would also be useful to revise the risk-analysis syllabus to cover the ways in which subjectivity is introduced and the effects that it has, and to make the process’s assumptions more explicit to analysts. Thus, analysts would be taught to understand not only the mathematical assumptions but also their own human biases. There would then be an increased chance that subjectivity would be considered, and partly neutralised, during the analysis and management of risk, and that evidence would exist for placing confidence figures on results.”
Climate scientists need to reflect more carefully upon the basis for their predictions and accept that any attempt at the assessment of risk is highly subjective, requiring a great deal of humility and caution. If this understanding is accepted outside the world of climate science, then I see no legitimate reason why it should not be accepted within it. These are my final words on the subject.
1. Felix Redmill, CEng, FIET, FBCS, spent 25 years as a consultant in project management and in risk management for safety-critical systems. He has presented and published widely on aspects of safety and risk management, and, on behalf of the European Workshop on Industrial Computer Systems Reliability, Safety and Security, he edited two volumes of guidelines, which influenced the development of the international safety management standard, IEC 61508.
2. I should emphasise that this advice was not actually directed at climate scientists undertaking risk assessment; I alone have chosen to draw attention to its applicability. In fact, I have no idea what Felix’s views on climate change are.