Two unavoidable facts have confronted humanity throughout history: the future is uncertain and decisions have to be made. This means that mankind has had to develop a thorough understanding of the mathematical and philosophical frameworks for risk and uncertainty, together with a grasp of domain-specific details, whether it be with respect to security, safety, environmental, health, financial or programme risk. The professionals we entrust with properly assessing and managing such risks are credentialed and highly trained. One would not want amateurs or the opinionated layman to be in charge of such an important undertaking. It is therefore quite remarkable that when it comes to perhaps the most critical of risk-based decisions currently facing mankind, we decided that these professionals should be sidelined whilst scientists with very little risk management expertise and psychologists with none whatsoever should be allowed to dictate the narrative. It is strange indeed, but that is exactly the situation we find ourselves in with respect to climate change risk – and the impact of this mistake could not be more damaging.
For example, it is quite characteristic of the scientific mindset to believe that all uncertainty can be captured by the shape of a probability distribution. And yet this isn’t true, as becomes evident once the probabilities become too difficult to calculate. What one does in these circumstances is very telling, but rest assured that whilst a professional risk manager will ensure that all possibilities are catered for, including the worst case, the risks and uncertainties entailed in a proposed solution will not be overlooked either. Professional risk managers do not obsess over the uncertainties that exist in the problem domain to the exclusion of all other considerations. They are just as concerned with the solution domain and the potential damage that can be done by over-committing to a potentially disastrous risk mitigation (e.g. by incurring economic destabilisation, systemic energy poverty or the collapse of industry). Furthermore, they know their job is not to reduce risk to its lowest possible level but to seek instead a path forward that maximises risk efficiency, i.e. the greatest possible risk reduction for a given outlay. Nothing remotely like that comes across when climate risk is discussed and global efforts to achieve net zero are proposed.
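The notion of risk efficiency can be made concrete with a small sketch. The mitigation options, costs and risk-reduction figures below are entirely hypothetical, chosen only to illustrate the difference between minimising risk at any price and maximising reduction per unit of outlay:

```python
# Hypothetical mitigation options: (name, cost, expected risk reduction).
# All figures are illustrative only.
options = [
    ("Option A", 10.0, 8.0),   # cheap, modest absolute reduction
    ("Option B", 50.0, 20.0),  # expensive, largest absolute reduction
    ("Option C", 25.0, 15.0),
]

def risk_efficiency(cost, reduction):
    """Risk reduction achieved per unit of outlay."""
    return reduction / cost

# A risk-efficient choice maximises reduction per unit spent...
most_efficient = max(options, key=lambda o: risk_efficiency(o[1], o[2]))

# ...whereas minimising risk at any price simply picks the biggest reduction.
lowest_risk = max(options, key=lambda o: o[2])

print(most_efficient[0])  # Option A (0.8 units of reduction per unit spent)
print(lowest_risk[0])     # Option B (largest reduction, worst efficiency)
```

The point is not the arithmetic but the criterion: the professional asks which option buys the most risk reduction for the money spent, not which option drives risk lowest regardless of cost.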
A prime example of an academic who has been allowed to publicly denigrate the orthodox and responsible approach to risk management is psychologist Stephan Lewandowsky. To be perfectly clear, when it comes to risk management, Lewandowsky is the archetypal opinionated layman who possesses no professional risk management credibility whatsoever. His notoriety rests on pushing dodgy psychology in an attempt to discredit those who actually do know what they are talking about. So, whilst risk professionals understand that uncertainties demand a satisficing strategy that performs well enough across a vast range of plausible future scenarios, he sneers at such discretion, insisting instead that we follow a strategy optimised to avoid a single, identified worst-case outcome. To him, uncertainty is ‘actionable knowledge’, but to every respectable risk manager, uncertainty renders the best path a moving target requiring a strategy that is constantly stress-tested against an ever-changing myriad of potential futures. Lewandowsky – our amateur – boasts that optimising to avoid the worst case is a robust approach, but that just goes to show how little of the required background he has. Any real risk manager could tell him that this is the very opposite of robustness, since the robust approach lies in the ability to easily change direction as the future reveals its true intent. Good risk management addresses the structural uncertainty of the world rather than just the statistical uncertainty of a model.
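The contrast between optimising for one assumed worst case and satisficing across many plausible futures can likewise be sketched in a few lines. The strategies, scenarios and loss figures below are hypothetical illustrations, not a model of any real climate policy:

```python
# Hypothetical losses for three strategies across four plausible futures.
# "optimised_for_S1" is tuned to one assumed worst case (scenario S1);
# "satisficing" is merely acceptable everywhere. Figures are illustrative.
losses = {
    "optimised_for_S1": {"S1": 1, "S2": 9, "S3": 8, "S4": 10},
    "satisficing":      {"S1": 4, "S2": 5, "S3": 4, "S4": 5},
    "do_nothing":       {"S1": 10, "S2": 2, "S3": 9, "S4": 3},
}

def worst_case_loss(strategy):
    """Loss under the least favourable scenario: a simple robustness measure."""
    return max(losses[strategy].values())

# Stress-test each strategy against the full range of futures, not just S1.
most_robust = min(losses, key=worst_case_loss)

print(most_robust)                          # satisficing
print(worst_case_loss("optimised_for_S1"))  # 10, i.e. fragile outside S1
```

The strategy tuned to S1 wins handsomely if S1 materialises, but it is the satisficing strategy, acceptable everywhere and optimal nowhere, that carries the smallest worst-case loss once every plausible future is allowed onto the table.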
Lewandowsky is not alone, of course. The profound ignorance that is driving the dominant narrative can be found across the board. The accepted narrative tells you that urgent and uncompromising action is required, and that any attempt to introduce some degree of standard risk management’s anti-fragility is actually underhand ‘delayism’. This mischaracterisation lies at the heart of the mistake being made when we put the amateurs in charge. By granting a vocal but ill-informed group of academics the privilege of constructing the moral framework within which our risks are to be addressed, we have managed not only to marginalise but also demonise those who would wish to apply nothing more than risk management basics. Legitimate intent is denied; you are either fully on board with the climate crisis thesis, or you are a bad faith actor espousing anti-scientific rhetoric. Rejection of the psychology expert’s cherished, optimised solution is attributed to cognitive biases like motivated reasoning or personality traits like conspiracist ideation. This effectively closes the door on robust risk management pragmatism. Saying that a solution is too challenging for a fragile economy might actually be a valid technical observation; but by adopting the cognitive science perspective it becomes a ‘discourse of delay’.
So, according to Lewandowsky the psychologist, those who base their strategy upon the need to reduce epistemic uncertainty are guilty of a SCAM [1]. And according to Peter Gleick the hydrologist, focusing upon the epistemic is a classic layman failure to understand what scientific uncertainty really is. Challenging the case for urgent action is a culture war, cries Jennie King of the Institute for Strategic Dialogue, with her BA in Asian and Middle Eastern Studies. But let us not lose sight of the fact that, when it comes to risk management and its practical application, Lewandowsky, Gleick and King are the laymen here. They are the ones lacking the required credentials. They are the ones who don’t actually understand uncertainty and think it begins and ends with measurement theory. They are the ones who have no real notion of the satisficing and efficiency-seeking that lies at the heart of practical risk management.
And yet none of this is anywhere near as damaging as the mistake that has been made by giving the whip hand to a profession that believes that risk is just a state of mind. You might think that risks are the consequence of physical reality, but not so for psychologists such as Sander van der Linden. According to him:
A “risk” is not something that exists “out there”, independent of our minds and culture (Slovic, 1992, p. 119). Indeed, unlike a physical threat or danger, the human notion of risk is a mental construct (Sjöberg, 1979), it cannot be sensed—it is only perceived (Fischhoff, Slovic, & Lichtenstein, 1983).
This view has serious consequences, since it means that for the psychology profession (i.e. the profession that seems to be in charge of dictating the narrative for managing climate risk) managing a risk is not about reducing likelihoods or mitigating impacts, it’s about bringing about the right state of mind. For the credentialed risk manager there is an important distinction to be made between a risk and its perception, but for the clueless psychologists that distinction appears to have disappeared, no doubt in order to leave the subject conveniently within their domain of expertise. Which is great for them, but not so good for those of us who have to deal with the consequences of a reality that refuses to bend to our carefully cultivated mental constructs. Try telling the person whose house burnt down in a fire that the fire risk had been a mental construct all along.
If there were any doubts that the psychologists are only interested in managing risk by managing attitudes, these are dispelled by reading what the IPCC, acting under their influence, has decreed to be the necessary ‘framework for the management of risk’. This is carefully explained in their AR5 WG3 Chapter 2:
The choice of climate policies can thus be viewed as an exercise in risk management (Kunreuther et al., 2013a). Figure 2.1 suggests a risk management framework that serves as the structure of the chapter.
They call Figure 2.1 a risk management framework, but it isn’t one as I understand the term. It’s certainly not one of the recognised international standard frameworks that could have been chosen. [2] In fact, it is just the section headers taken from chapter 2, placed in boxes, and connected by lines in a way that is vaguely suggestive of a process. And if the accompanying text is anything to go by, the process is not describing the lifecycle of a risk from identification through to resolution, but is instead providing an iterative decision-cycle designed to deal with the practicalities of climate policy implementation. As such, it is heavily focused upon behavioural science insights that can improve the chances of gaining public acceptance. As the IPCC document puts it:
We review what is known about public support or opposition to climate policy, climate-related infrastructure, and climate science. In all three cases, a critical issue is the role that perceptions of risks and uncertainties play in shaping support or opposition.
First there is the acceptance of the risk assessment:
One of the major determinants of popular support for climate policy is whether people have an underlying belief that climate change is dangerous. This concern can be influenced by both cultural factors and the methods of communication (Smith, 2005; Pidgeon and Fischhoff, 2011)…The language used to describe climate change — such as the distinction between ‘climate change’ and ‘global warming’ — plays a role in influencing perceptions of risk, as well as considerations of immediate and local impacts (Lorenzoni et al., 2006).
Of course, nowadays only ‘climate crisis’ and ‘global boiling’ will do.
Then there is acceptance of the measures deemed necessary to manage the risk:
Descriptive models not only help explain behaviours that deviate from the predictions of normative models of choice but also provide entry points for the design of decision aids and interventions collectively referred to as choice architecture, indicating that people’s choices depend in part on the ways that possible outcomes of different options are framed and presented (Thaler and Sunstein, 2008).
That would be Thaler and Sunstein of ‘nudge theory’ fame. They call it ‘libertarian paternalism’ and if you need to know more then just consult the UK Behavioural Insights Team’s MINDSPACE framework.
So, as you can see, it is psychology all the way down. Risk is just something that occupies a mind space. Get the mindset right and the problem is solved!
Except of course it isn’t. The substance of the risk has been totally overlooked, and what we are left with is a battleground between the psychologists defending an in-group of experts, and an out-group comprising the witless and wilful who are deemed to have the wrong risk assessment in their minds. It is the latter who are supposed to be in need of nudging out of their motivated complacency. That’s why we can have the likes of John Cook, Sander van der Linden and Stephan Lewandowsky pushing toxic ideas such as ‘critical ignoring’ and ‘prebunking’ in an effort to gain mass compliance, rather than encouraging open-mindedness. There are plenty of people with genuine and legitimate concerns regarding the solutions on offer, but by dismissing the uncertainty of the solution as a mere delayer’s narrative that has to be prebunked, we risk losing the means to build policies that can actually withstand the complexities of the real world. The irony is that by manipulating the out-group, the in-group makes its own decision-making less robust. You need the doubters to help identify the scenarios where your plan might fail. By silencing or pathologising the critics, the in-group creates a feedback loop that reinforces its own optimised but potentially fragile plan. That is what I mean by the mistake of the century. The bottom line is that the cognitive science lens isn’t just an analytical tool, as the IPCC would have you believe; it has become a boundary-maintenance tool that allows the expert class to maintain authority while avoiding the messy pragmatism of real-world risk management.
The psychology profession is fond of accusing climate change sceptics of having an inflated opinion of themselves. Sceptics are deemed to suffer the delusion that they, the opinionated laymen, know more than the climate science experts. The irony is that those who are making this accusation seem unaware that sceptics are not assuming a superior understanding but are instead proposing a robust approach to risk management; and the reason why the accusers can’t appreciate this is because they are themselves the opinionated laymen when it comes to risk management concepts. The shame of it all is that any hope of there being a robust approach to climate risk management is lost, and what we are left with are scientists who have no experience of the pragmatics of political decision-making under deep uncertainty, supported by psychologists who can’t see beyond the psychology of risk perception and how to manipulate it. This, to my mind, is where the truly damaging effect of amateurism lies. The amateur has not only stepped ahead of the professional; the amateur has poisoned the arena to such an extent that the professionals can no longer even operate.
Footnotes:
[1] This is the accusation of the so-called Scientific Certainty Argumentation Method. It is a term first coined by sociologist William Freudenburg, and is supposed to be the preferred method of people who demand scientific certainty before proceeding. Sociologists – that’s another group who know nothing about risk management technicalities.
[2] The IPCC could, for example, have adopted and adapted ISO 31000 with its emphasis on robust decision-making, but that would have made them look like they were being guided by the professionals, and we all know that the opinionated layman doesn’t need guidance.