I was doing a failure mode and effects analysis on my past life the other day and I can tell you the results were not reassuring. They seemed to be suggesting that my curriculum vitae owed a lot to a faulty career choice in my formative years. Instead of plumping for the obvious attractions of an education in high-integrity systems development, I opted to take a degree in physics; and the rest, as they say, is a chronologically ordered causal narrative. I assure you physics seemed a good decision at the time, but the virtually non-existent impact of my academic efforts appears to be a grim and fossilising testimony to my lack of good judgement. Indeed, it was only relatively late in my career that I finally happened upon my true calling. At last I discovered safety-systems engineering and, in a desperate bid for temporal reparation, I found myself saying, ‘Cut! Can we start that again from the top please, but this time can we leave out the physics bit? Oh, and try putting a bit more fame and fortune into it this time’.

Alas, it is all a bit late now and I’m long since retired. If you find me at all, you will find me musing that my life has been a movie for which I hadn’t even passed the audition.

So what can I do now to fashion the myth of filmic success from the dismal reality? How about if I were to start by telling you what my safety-systems engineering career taught me about the eminently more successful Dr Daniel Ellsberg, and what his discoveries mean for climate change policy?

The quite interesting Daniel Ellsberg

Daniel Ellsberg, Ph.D., is quite an interesting character who, despite being an influential thinker in the fields of decision theory and economics, is better known for being the Julian Assange of the 1970s. By releasing the so-called ‘Pentagon Papers’1 to the New York Times, he revealed to the American public that the Johnson administration had systematically lied about the motives for taking the country to war against North Vietnam. However, it is not Ellsberg’s whistle-blowing that demands the attention of the good readers of CliScep – it is his studies regarding the psychology of decision-making. These studies2 posed a paradox that, in its simplest form, can be expressed thus:

Having been told there are an equal number of red and black balls in a bag, you are asked to stake £100 on the colour of a single ball to be drawn from it (you win £100 if the ball is your chosen colour but lose your stake if it is not). Then the exercise is repeated; however, this time you are told only that there are red and black balls in the bag, with no indication of how many of each. Are you as keen to take the gamble?

In repeated experiments,3 the answer obtained when people were asked was a resounding ‘No!’ It seems that the average Joe is freaked out by not knowing the ratio of red to black balls, and takes this to represent a bigger gamble. Yet, logically, the subjects had been presented with the same decision. Faced with total uncertainty regarding the ratio, they should take the gamble as a fifty-fifty choice, which is exactly what they had when they knew that there were equal numbers of each colour. In fact, it is the ambiguity implicit in the set-up of the second gamble that spooks people, and this leads to what is known as ambiguity aversion.4
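
For anyone who would rather see that claim checked than take my word for it, here is a minimal simulation sketch (mine, not Ellsberg’s). It models total ignorance of the ratio as a uniform prior over every possible bag, which is an assumption on my part; the bettor who always calls ‘red’ wins half the time in both versions of the game:

```python
import random

TRIALS = 100_000
BAG_SIZE = 100  # assumed bag size, purely for illustration

def bet_on_red(known_ratio: bool) -> bool:
    """Draw one ball and bet on red; return True if the bet wins."""
    if known_ratio:
        reds = BAG_SIZE // 2  # version 1: equal numbers of red and black
    else:
        # Version 2: the ratio is withheld, so model total ignorance
        # as a uniform prior over every possible number of red balls.
        reds = random.randint(0, BAG_SIZE)
    return random.randrange(BAG_SIZE) < reds  # wins with probability reds/BAG_SIZE

for known in (True, False):
    wins = sum(bet_on_red(known) for _ in range(TRIALS))
    print(f"ratio known: {known}, win rate: {wins / TRIALS:.3f}")  # both come out at ~0.5
```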

I tried this out on my wife last night by asking her what she thought was the bigger gamble (choosing with or choosing without knowledge of the number of balls). She knew ‘for a fact’ it was the latter and took my insistence that they were logically equivalent as further evidence that she had married the village idiot. My counter proposal that she had, in fact, married a genius led to an interesting debate, but the matter remains unresolved.

But why is this a paradox?

Thus far, you may be wondering why I am referring to this phenomenon as a paradox (if you are not, you can safely skip this section).

Well, with a slightly more elaborate version of the game, you can set up a situation in which the individual’s aversion to ambiguity can lead to the acceptance of a lesser pay-out, even though it would only require the sense that God gave geese to work out what would have been the better decision. Even more puzzling, it leads people to express self-contradictory preferences that violate utility theory (and the economists’ faith in the truth of utility theory, it appears, is strong enough for them to see any violation as a paradox). Consider the following:

You have a bag containing 30 red balls and 60 other balls that are either black or yellow (you don’t know how many of each there are, but you do know that the total number of black and yellow balls equals 60). You are now given a choice between two gambles that a gambler should treat as equivalent:

Gamble A: You win if you draw a red ball

Gamble B: You win if you draw a black ball

Subsequently, you are given the choice between a further two gambles that should also be treated as equivalent:

Gamble C: You win if you draw a red or yellow ball

Gamble D: You win if you draw a black or yellow ball

As with Gambles A and B, there is nothing in the set-up that should suggest to the gambler that Gambles C and D are anything other than equivalent (notwithstanding the gambler’s personal theory). However, when asked to state a preference, most people strictly prefer Gamble A to Gamble B whilst then preferring Gamble D to Gamble C. This, once again, demonstrates the average individual’s aversion to those gambles with the greater component of epistemic uncertainty. The paradox lies in the fact that having a consistent personal theory of the expected utilities for each gamble (which would at least be a logical position to take) means that a preference for Gamble A over Gamble B should actually be accompanied by a preference for Gamble C over Gamble D.5 After all, preferring A to B implies a suspicion that there are fewer than 30 black balls, hence more than 30 yellow balls, and hence that Gamble C offers the better odds of the second pair.
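
For the numerically inclined, the inconsistency can be checked by brute force. The sketch below (my own illustration, assuming nothing beyond the rules stated above) enumerates every possible number of black balls and confirms that no belief about the bag can make Gamble A beat Gamble B without also making Gamble C beat Gamble D:

```python
from fractions import Fraction

RED, UNKNOWN = 30, 60  # 30 red balls plus 60 that are black or yellow
TOTAL = RED + UNKNOWN

def win_prob(black: int) -> dict:
    """Win probability of each gamble, given a hypothesised number of black balls."""
    yellow = UNKNOWN - black
    return {
        "A": Fraction(RED, TOTAL),             # draw a red ball
        "B": Fraction(black, TOTAL),           # draw a black ball
        "C": Fraction(RED + yellow, TOTAL),    # draw a red or yellow ball
        "D": Fraction(black + yellow, TOTAL),  # draw a black or yellow ball (always 60/90)
    }

# Whatever you suspect about the number of black balls, A beats B exactly
# when C beats D -- so preferring both A and D is inconsistent.
for black in range(UNKNOWN + 1):
    p = win_prob(black)
    assert (p["A"] > p["B"]) == (p["C"] > p["D"])

# Averaged over a uniform prior on the black count (again, my assumption),
# the gambles give the textbook values: A = B = 1/3 and C = D = 2/3.
average = {g: sum(win_prob(b)[g] for b in range(UNKNOWN + 1)) / (UNKNOWN + 1)
           for g in "ABCD"}
print(average)
```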

So what has Ellsberg’s Paradox got to do with climate change?

If Ellsberg’s Paradox teaches us anything, it is that our aversion to uncertainty makes us do stupid things. We like to think we are rational creatures, deciding upon courses of action that promise the greatest benefit with the minimum of risk. But sadly this is not true. Most people who claim to be making a risk-based decision are doing nothing of the sort – they are, instead, being ambiguity averse. They seek the course of action that involves the least ambiguity and they wouldn’t have a clue what the risks truly are.

Ambiguity aversion is what we sceptics stand accused of when we insist on the removal of uncertainties before committing to climate change policies. In contrast, the presupposition made by those demanding action is that the delays incurred will heighten the risk, and therefore a precautionary approach is advocated.6 However, those who advocate such precaution are just as much running scared from ambiguity as the sceptics are. They would much rather go for the devil they know than accept the psychological torment that comes with the uncertainty of catastrophe. At the end of the day, this is just Pascal’s wager.

Removing the ambiguity

Of course, the other way of avoiding having to deal with ambiguity is to convince oneself it doesn’t exist – a strategy that seems to lie behind much of the rhetoric that increasingly dominates the climate change debate. It seems that the possibility of future catastrophe is no longer motive enough to take evasive action; what we need instead is the certainty of such catastrophe. Surely, according to the rhetoric du jour, there is no uncertainty and it is not a matter of risk. In fact, we are not even talking about the future. Just look at what is happening right now! And yet, in the midst of all of this, we still have the conspiracist merchants of doubt, seemingly unaware that there is no longer any room for their merchandise. Why aren’t these people in jail already?

Well, this might be the sort of self-assured narrative that suits those who do not like ambiguity, but it certainly doesn’t help when the reality is that we are attempting a risk-based decision in the face of deep uncertainties that obscure the levels of risk. One can speak of climate change deniers, but there is nothing more denialist than the construction of a fantasy world bereft of ambiguity, just because one desperately seeks a gamble that is free from it.

There is no denying that the ambiguity of the climate change evidence is a bad thing, basically because it can lead to the miscalculation of risk on both sides of the debate. Both sides are averse to the ambiguity and both sides have their strategies for dealing with it: the advocates for action seek to exploit ambiguity or to ignore it; the sceptics want to see its reduction. There are potential consequences either way, but if the sceptics prove to be misguided, then at least it will be an honest mistake. Those who deny that the ambiguity exists have no need of such honesty because they have God on their side.

Robust decisions in a fragile world

The potential effects of failure can indeed be difficult to analyse, and history can be a very harsh judge. When I look back upon my career-defining decisions it is tempting to regret those that appeared to delay any form of ultimate fulfilment. But this misses the point. My decision to study physics was taken at a time in my life when experience was at a premium and the evidence for the best route forward was replete with ambiguity. In such circumstances it made sense not to commit too early to too focused a course of action. The decision to study physics was a robust one because it remained valid for the widest range of possible futures. It is easy for me in my dotage to scoff at my younger self but, by doing so, I give myself no credit for pragmatism and adaptability.

As with career decisions, so with climate policy. The aim should not be to minimise, avoid or deny ambiguity, it should be to embrace it by employing a robust decision-making strategy. Fittingly, it is the RAND Corporation that is leading the way here with its Robust Decision-Making (RDM) tools and methodologies. Speaking on behalf of the corporation, Professor R. J. Lempert writes:

We agree that significant uncertainty exists regarding the future impacts of climate change and the costs of avoiding those impacts, that it is dangerous to ignore or downplay that uncertainty, and that acknowledging these uncertainties can provide a strong foundation for dialogue… To help decision makers, planners and investors make decisions in uncertainty, various methods and tools exist to identify typically robust or adaptive plans.

Indeed, various methods and tools do exist to identify robust and adaptive plans, but you wouldn’t think so listening to those who prefer the unambiguous pathway. Which is a shame, because the powers that be, clamouring for solutions in a fit of Torschlusspanik, run the risk of killing the patient to cure the disease. This is not a recipe for success. It is a recipe for regret.

Footnotes:

[1] These were top secret documents to which he had access as a member of the RAND Corporation.

[2] Ellsberg, D. (1961). “Risk, Ambiguity, and the Savage Axioms”, Quarterly Journal of Economics 75 (4): 643–669.

[3] Ellsberg’s Paradox was originally conceived as a thought experiment but it has since been confirmed empirically many times over.

[4] Another name for this is uncertainty aversion. Whatever you call it, it does appear to be a real phenomenon; fMRI scanning has demonstrated that the human brain does indeed behave very differently when forced to make decisions in the face of ambiguity (ref. Hsu, M. and C.F. Camerer (2004), Ambiguity Aversion in the Brain, Academy of Behavioral Finance & Economics).

[5] For those of you who are interested in the technicalities of decision theory, the preference for Gamble D over Gamble C involves a violation of a dominance principle known as L. J. Savage’s Sure-thing principle.

[6] Actually, if the actions proposed were not so obviously self-destructive, I might have some sympathy with such logic.

 

Comments

  1. All this risk and ambiguity analysis is fine, but it’s also moot as far as climate goes. It appears to be based on a premise that there is some costly action that will mitigate the risk. The real debate should be about how impractical, infeasible, stupid, idiotic, futile and useless any action is. Pat Michaels has pointed out that “not doing something is something”. He alleges that it frees up resources to be better utilized. He cites hybrid cars and fracking as examples. I heard him say all this in a video somewhere, so if I’ve misquoted him, he can show up here and chew me out.


  2. Canman,

    “All this risk and ambiguity analysis is fine, but it’s also moot as far as climate goes. It appears to be based on a premise that there is some costly action that will mitigate the risk.”

    Actually, I don’t think that it is so moot. The point of robust decision-making is to seek the pragmatic and adaptable course of action, i.e. to avoid making an irreversible commitment to a costly solution that may be unnecessary. Accepting the existence of ambiguity may be a necessary element of such an approach since the alternative of avoiding ambiguity often comes with irreversible commitment and high cost. You are right, of course, that high risk is posited, and this makes the debate surrounding uncertainty levels all the more pertinent. Even so, costly action is surely the very thing we want to avoid if at all possible.


  3. your paradox reminds me of a somewhat similar paradox which I proposed at the time of the Bre-X gold fraud. Assays from Bre-X were crooked but no one knew at the time. The stock traded up to a market cap of over $1 billion and then down again. 99.99% of the trading appears to have been among people who were equally uninformed of the fraud. The question that I asked of people at the time was: if you and I are betting on the flip of a two-headed coin – but neither of us knows that it’s two-headed – is it a fair bet? My answer was that it was fair. The Bre-X crooks didn’t appear to have ended up with much of the spoils. Say it was 5%; even if it was 20%, the analysis applies. So public investors didn’t, in aggregate, lose $1 billion in the eventual collapse. Other than the “house” take, profits and losses by public investors more or less cancelled out. Indeed, the “house” take by the crooked promoters was much less than that of public lotteries. I don’t recall anyone agreeing with the paradox.


  4. Yes, the Bre-X scandal is very relevant here. Some have suggested that the ambiguity aversion in Ellsberg’s paradox can be explained by the natural distrust that kicks in when a game-setter withholds information. However, to my mind this does not explain the irrationality. Even if there are no red balls in the bag, the gambler is still being offered a fair gamble; the no-red-ball scenario is still encompassed within the fifty-fifty odds. After all, who’s to say that the game-setter was not cheating by including no black balls?


  5. “The aim should not be to minimise, avoid or deny ambiguity, it should be to embrace it by employing a robust decision-making strategy.”

    Absolutely. Unfortunately, the good practice of Lempert whom you quote and many other sources of great advice have not thus far been collectively acted upon, because the main biases undermining such advice are far stronger than only ‘ordinary’ ambiguity aversion (albeit it may play its part*), especially as they act in concert. The very ‘purpose’ of an emergent cultural consensus is to create a certainty that tolerates no ambiguity (at an instinctive / emotive level that subverts reason), because such ambiguity represents an out-group signal, a risk (in this case) to the cultural consensus on climate catastrophe. While all bias is unreasonable to a greater or lesser extent, the more powerful and deeply embedded the bias, the more difficult it is to challenge via the application of corrective reasoning, because the biases undermine the very act of reasoning (indeed cultural bias can make reason a slave). And strong cultural bias can create an unshakeable belief in what are essentially fairy stories, a far more challenging situation to correct.

    We can tell what kind of aversion to uncertainty dominates, because cultural bias is about identity, to which we would expect no particular alignment for normal ambiguity aversion**. In countries where the identity of adherence to (or rejection of) climate catastrophe culture has aligned to older political cultural boundaries, such as the US, it’s very obvious that the main bias (and support of ‘certainty’) is indeed identity related. In countries where this has not occurred, or only weakly like the UK, there are still other major indicators that identity conflict is nevertheless the main driver, for instance the whole ‘denier’ demonization thing of those who raise legitimate questions about uncertainties.

    “…run the risk of killing the patient to cure the disease.”

    Indeed, which itself is suggestive that ‘the disease’ is not actually the issue. It’s just a proxy for identity.

    *especially for those much nearer to reality constraints, e.g. within the science, for whom despite the presence of some cultural bias the majority cannot buy into a fairy tale that doesn’t sit with the evidence; maybe other / indirect biases / justifications emerge.
    **though one bias may be enabled by another; strong identity related bias possibly enables a whole range.

    P.S. I’ve heard about ambiguity bias possibly being explainable by essentially distrust in the gamesetter, but have no idea of the validity. This is presumably testable, i.e. when the gamesetter is nature itself or at least some process that players are convinced has no human intervention. I don’t know of any such test though.


  6. Andy,

    “Unfortunately, the good practice of Lempert whom you quote and many other sources of great advice have not thus far been collectively acted upon…”

    If by that you are referring to the need to accept uncertainty and yet still act, then I’m not sure that Lempert would agree with you. As he says, in his capacity as a RAND Corporation expert: “A vast literature exists on uncertainty and climate change. Most of it suggests that uncertainty is a reason for action”.

    “The very ‘purpose’ of an emergent cultural consensus is to create a certainty that tolerates no ambiguity…”

    I couldn’t agree more.

    “…cultural bias is about identity, to which we would expect no particular alignment for normal ambiguity aversion…”

    The question regarding the level of influence that culture has upon ambiguity aversion is an interesting one that has received some attention. One of the influential interpretations of the causes of ambiguity aversion goes under the name ‘competence hypothesis’, as proposed by Heath and Tversky (1991). According to this hypothesis, ambiguity aversion disappears when the subject has an expert competence that can be used to help them determine subjective probabilities. The IPCC gleefully highlights this in Chapter 2 of AR5, in which they proclaim that ambiguity aversion in climate-related decision-making is only a problem for those who lack the expertise of the climate scientists.

    However, they have got this very wrong. The ‘competence hypothesis’ should really be called the ‘confidence hypothesis’, since it actually refers to the confidence that experts have in subjective probabilities (ambiguity aversion can be re-expressed as a preference for objective probability over subjective probability). And this is where the cultural differences come in. Individuals from Western societies are well-known for their analytical or essentialist mode of cognitive processing, as opposed to the Eastern society preference for holistic or relational cognition. In the former, analysis focuses upon essence and individual properties, whilst the latter focuses upon shared properties and relationships. These two modes of cognition have been shown to correlate strongly with social orientation in which the Western emphasis upon individualism contrasts with the Eastern emphasis upon collective identity and responsibility. Accordingly, studies have shown that subjects from a Western culture are much more likely to place credence in their subjective probabilities, whilst Eastern subjects are much more inclined to favour decision-making that relies upon objective probabilities. This has been shown in studies of ambiguity aversion – the aversion is much stronger in subjects from an Eastern culture. No one is suggesting that Easterners are less competent at determining trustworthy subjective probabilities – it’s just that they are cognitively disinclined to indulge in such trust.

    “P.S. I’ve heard about ambiguity bias possibly being explainable by essentially distrust in the gamesetter…”

    Yes, I’ve heard that one too. I wouldn’t want to place too much credence in it though. See my comment at 5th Jan 10:35am.


  7. John:

    “If by that you are referring to the need to accept uncertainty and yet still act…”

    I’m referring to what your quote of him says, i.e. ‘…that it is dangerous to ignore or downplay that uncertainty, and that acknowledging these uncertainties can provide a strong foundation for dialogue…’

    “…And this is where the cultural differences come in…”

    Wrong cultural differences. Differences in, say, Western / Eastern approaches to decision-making, or similar magnitude effects, are eclipsed by the much stronger bias effects on a par with, say, zealous religious belief (this is the order of widespread belief in catastrophic climate change). The former differences are in any case ruled out in data from both public and academic surveys in the West, or indeed a single Western nation such as the US, which both show the high level of identity investment in ‘certainty’ (or rejection thereof) of anthropogenic climate change (as driven by the dominant biases / identification with cultural values), in which surveys there are not Eastern populations present. Hence the main ‘flavour’ of the bias, i.e. the aversion to uncertainty, is revealed, and is owed to cultural / identity related issues not to normal ambiguity aversion (notwithstanding the asterisked lines above); the US population alone doesn’t have a Western / Eastern split that might introduce another variable, be it second order or not.

    [It’s also the case that Kahan’s work in particular shows that the more domain knowledgeable and cognitively capable folks are, the *more* polarised they are on climate change, not less. I.e. their capabilities / knowledge are in service to their cultural identity on the issue. (This doesn’t necessarily extrapolate to true ‘experts’, as the survey work didn’t include them)].


  8. In my comment above, I expressed skepticism about actions taken, and that may be outside the theoretical bounds of the discussion. The discussion, as well as I understand it, is interesting. I never realized that Daniel Ellsberg was such an influential decision theorist. I wonder how well any of this uncertainty and ambiguity can be applied to proposed actions to be taken, because there’s certainly going to be lots of disagreement about the merits of these various actions.


  9. The fallacy John describes has been used in the climate debate – the less we know about climate change, the more scared we should be about it.

    Can anyone recall who or where?


  10. Paul,

    I don’t know if this is what you have in mind, but Nassim Taleb, Rupert Read and others have written on the need to treat uncertainty as a hazard in its own right. For example, take the following paper:

    https://docs.google.com/file/d/0B8nhAlfIk3QIbGFzOXF5UUN3N2c/edit?pli=1

    Its title refers to GM crops but the paper also addresses climate change. It’s not exactly Ellsberg’s paradox in action, but it is a formal argument for ambiguity aversion, nevertheless.


  11. John, great article. Very thought provoking.

    “Subsequently, you are given the choice between a further two gambles that should also be treated as equivalent:

    Gamble C: You win if you draw a red or yellow ball

    Gamble D: You win if you draw a black or yellow ball”

    At the risk of appearing to be somewhat dim (which I am naturally averse to), I can’t see that C and D are equivalent. There are 90 balls in total – 30 red, 60 yellow and black. Therefore the chance of drawing a black or yellow ball is 2/3, correct? However, the chance of drawing a red or yellow ball is 1/3 + P(yellow). But P(yellow) – which is unknown – may be as little as 1/60 or as great as 59/60. So Gamble D involves a quantifiable probability (risk), whereas with Gamble C the probability (risk) is unknowable within a certain well-defined range (i.e. (30+1)/90 < P(red or yellow) < (30+59)/90).

    In essence, Gamble D means the probability of not winning is just 1/3 (good odds). However, the probability of not winning with Gamble C goes from just under 2/3 (bad odds) to almost zero (extremely good odds). So they're not statistically equivalent and they are certainly not psychologically equivalent. I think most people would opt for Gamble D, where the risk of losing is well defined – and acceptable – rather than C, where the risk of losing is ill defined (ambiguous).


  12. Jaime:

    If say you’re a professional gambler and do this game often, plus the game-setter is known to be 100% trustworthy (which per above may be a potential source of bias if this was not believed), it is easier to see that C and D will work out to be equivalent over time. Yet even for a 1-off bet, they are in principle still the same anyhow, because all you are doing is adding more chance of losing on one side of what is already a probability function not a certainty, plus equally more chance of winning on the other side of the same function, as indeed you noted (…I’m sure Steve M or John could phrase this in much more appropriate terms than me).

    “…they are certainly not psychologically equivalent.” Indeed, with the difference being ambiguity aversion. Most people prefer what they perceive to be a more bounded possibility of a bird in the hand, but as long as the ‘additive’ risks are symmetrical it shouldn’t be any different.


  13. Jaime,

    Thank you for the feedback. It is most appreciated.

    I think the best way to look at the paradox is to focus upon the expected utilities to see where the subject is being inconsistent. The preference for Gamble A over Gamble B implies suspicion that there are fewer than 30 black balls. However, a preference for Gamble D over Gamble C suggests a suspicion that there are more than 30 black balls (i.e. fewer than 60 red and yellow balls and hence fewer than 30 yellow balls). Therefore, purely from the perspective of expectation and suspicions, the preference for Gamble D over Gamble C is inconsistent with a preference for Gamble A over Gamble B. Where the subject is being consistent, however, is in their preference for the gamble that can be calculated using objective rather than subjective probabilities.

    Gambles C and D are mathematically equivalent if one accepts that objective and subjective probabilities are equivalent (i.e. flipping a coin is equivalent to just guessing between two possibilities in the presence of total subjective uncertainty – for example, guessing who is the taller of two people you have never heard of). The legitimacy of this equivalence lies behind a lot of philosophical debate amongst probability theorists. However, as you have clearly concluded, Gambles C and D are not psychologically equivalent, any more than are Gambles A and B.

    Perhaps I should have made it clearer in the article that the gambler should be concerned about mathematical equivalence.


  14. Barry, Andy

    Indeed, Lewandowsky et al make essentially the same argument as Taleb et al. However, no sensible reading of either paper could understand them to be turning “uncertainty on it’s [sic] head to produce a certainty of doom”.

    Most of what Lewandowsky has to say about his paper, I have no trouble with – greater uncertainty can mean greater risk. However, I had to do a double-take when he said: “We have also seen that greater uncertainty means that the expected damages from climate change will necessarily be greater than anticipated.” This sounds at first hearing as if he has abandoned his senses, but (giving him the benefit of the doubt) he may just be using the term ‘expected’ in the sense employed by utility theory, in which ‘expected value’ is the long-run, averaged value of a repeated, uncertain speculation. If one widens the probability distribution to include more high value possibilities, this will increase the expected value. The same goes for risks and the expectation for negative impacts. But maybe I’m giving him too much credit for alluding to a mathematical construct when, in fact, he just hasn’t read properly the paper he put his name to.
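
    To illustrate the charitable reading with a toy model (my own construction, nothing from the paper itself): hold the ‘most likely’ damage fixed and simply widen the spread of a skewed, non-negative damage distribution, and the long-run expected value duly climbs:

    ```python
    import random
    import statistics

    # Toy damages: non-negative and right-skewed (lognormal), median held at 1.
    # Widening the distribution (larger sigma) raises the mean -- the 'expected'
    # damage in the utility-theory sense -- even though the median never moves.
    random.seed(1)
    for sigma in (0.5, 1.0, 2.0):
        damages = [random.lognormvariate(0.0, sigma) for _ in range(100_000)]
        print(f"sigma={sigma}: median={statistics.median(damages):.2f}, "
              f"mean={statistics.mean(damages):.2f}")
    ```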

    My main problem with the reasoning of Taleb, Lewandowsky, etc., is not that they turn uncertainty on its head – it is that, when discussing uncertainty and its relationship to risk, they rely too much on the frequentist’s armoury. It is as if they think that all climate change risk can be modelled as if it were aleatory when, in fact, it is broadly epistemic (at best) or ontological (at worst).


  15. Thanks Andy and John. Still confused though. My statistics is rudimentary but C and D still don’t look mathematically equivalent to me. If the probabilities I very simply calculated are technically wrong, I’m happy to concede. Granted, over many, many bets, if the number of yellow and black balls varies randomly, then all is probably equal, where the probability function (black/yellow) averages out to 1/3 for a very large number of gambles, but I was just looking at the probabilities involved in just one gamble.


  16. Jaime,

    John’s way of perceiving the issue is no doubt better than mine (and also aligned to the literature), but your statement above is a step to how I think about it. So if I just tweak this…

    ‘Granted, over many, many bets, if the number of yellow and black balls varies randomly [yes it will, because we trust the game-setter], then all is probably [actually] equal, where the probability function (black/yellow) averages out to 1/3 for a very large number of gambles, but I was just looking at the probabilities involved in just one gamble [for which the odds must therefore be identical]…

    i.e. if the real result over many gambles is the same, the odds for any single gamble must perforce be the same also. If this were not so, the result for many gambles would be different.


  17. John,

    granted, my statement is harsh and somewhat of a caricature. However, you may indeed be giving him too much credit. A wider picture can be gained from following the links at the Climate Etc post noted above. As you note, the situation is characterized by substantial uncertainty that is not statistical in nature. So a statistical approach, in the specific guise of analysis of a fat-tail distribution, gives rise to claims of more than just a rise in ‘expected damage’ per the context of utility theory (as far as I can see anyhow). So for instance a claim of ‘a 200-fold increase in the likelihood of a catastrophic outcome when uncertainty [which is equated to climate sensitivity] increases from .49 to 2.6’. Yet this conclusion is wholly dependent on having already defined a very specific character to the nature of ‘the uncertainty’.


  18. slight addition…

    ‘Yet this conclusion is wholly dependent on having already defined a very specific character to the nature of ‘the uncertainty’, and the system it is relevant to.’


  19. Jaime,

    I think Andy’s response is pretty much what I would have given. The concept of probability is pretty much anchored to the assumption that a single gamble is just one opportunity that can, in principle, be repeated many times, and that the single outcome’s probability is determined by the aggregated outcomes of the long run. At least, that is the frequentist view. Mind you, the devoted frequentist would not even allow the term ‘probability’ to be used when there is no potential for a repetition of the gamble. But let us not go there…

    Andy,

    I still need to get back to you on the subject of ambiguity aversion and cultural differences. However, I have been having one of those days, so I have yet to find the time. Maybe tomorrow.


  20. John,

    “…I have yet to find the time”

    No worries. Such things are only worth pursuing when one has time and gathered thoughts. I hope the still-unresolved debate with your wife isn’t part of what’s been harrying you; I thought that paragraph was a real gem 🙂


  21. I must be dumb. At the odds offered I wouldn’t bet on A or B or C at all, but I would bet on D all day, because that would be a winning strategy. I see the point that if I bet on A, that implies I suspect a lack of black, and that maintaining that expectation makes C a reasonable bet; but this would only be so if my suspicion proved to be correct. Faced with the choice of C and D, I am being given new information and the odds so greatly favour D that there is no way anyone in their right mind would bet on C.


  22. Shaun,

    No, you are not being dumb, but you may be assuming that the results of previous gambles can be used to inform the odds for subsequent gambles. This wasn’t actually the nature of the experiment. People were being asked to state a preference for gambles, but they were not actually taking them (or, if they were, then the bags used for gambles A and B were different to those used for C and D). There was no opportunity for Bayesian updating.

    If that is not actually an assumption you had been making, then may I be so presumptuous as to suggest that you are, instead, illustrating just how keenly ambiguity aversion can be experienced? You say you would go for gamble D all day long and that “there is no way anyone in their right mind would bet on C”. You also say that the odds so greatly favour gamble D, but I wonder where that view is coming from. Based upon what the gambler knows about the number of balls, he or she can categorically state that the odds are 2 to 1 in their favour for gamble D. But these are exactly the same odds as for gamble C. The only difference is that in gamble C one has to apply Laplace’s principle of insufficient reason and (in the absence of any information to the contrary) assume that there are the same number of black balls as there are yellow balls (or if there are not, then the odds of an imbalance in favour of black are just the same as those for an imbalance in favour of yellow). The only thing that causes further angst with respect to gamble C is the (dare I say) irrational reluctance to apply the principle of insufficient reason and assume equal priors (remember, there is no Bayesian update so you are stuck with those equal priors).
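
    If it helps, here is a toy calculation of that last point (my own sketch, not part of Ellsberg’s experiment). Under the principle of insufficient reason, every black/yellow split is treated as equally likely, and gamble C then carries exactly the same 2-to-1 odds as gamble D:

    ```python
    from fractions import Fraction

    # Principle of insufficient reason: with no information about the split,
    # treat every black/yellow mix from 0/60 to 60/0 as equally likely.
    splits = range(61)
    p_yellow = sum(Fraction(60 - black, 90) for black in splits) / len(splits)
    p_gamble_c = Fraction(30, 90) + p_yellow
    print(p_gamble_c)  # 2/3 -- the same odds the gambler already has for gamble D
    ```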

    If I have still failed to appreciate your concerns, Shaun, then please accept my apologies. Maybe I am the village idiot after all!

    Jaime,

    You’re another one who isn’t being dumb. Disputes amongst statisticians regarding the true nature of probability have been so intractable through the ages that it has been suggested that the collective noun for statisticians should be a ‘quarrel’.


  23. I also do not understand. I will take the first choice of bets – A and B. I am told that “Faced with total uncertainty regarding the ratio [in bet B], they should take the gamble as a fifty-fifty choice”. But why? I believe the correct behaviour for bet A is to not change your bet. On balance you should win as many times as you lose. However, this behaviour will not work for bet B. If there are more black balls than red and you repeatedly choose red you will lose until you recognize the reason for your losses and change to choosing black.

    However, there is no guarantee that the person offering bet B is playing fair, if they can alter the odds as they see fit. Professional gamblers observe traits in their opponents and make appropriate changes to their own behaviour to decrease the chances of the person taking the bets being successful. The ratio of black and red balls can be changed at whim. Thus I would not take bet A (no point), nor bet B (odds can be changed to favour the person manipulating the odds).
    If the odds cannot be changed, then I would definitely take bet B, because early results would indicate the red-black ratio, enabling greater success in later wagers.
    So what’s wrong with my logic?


  24. I guess, John, that you’ve provided an answer to my questions in your response to Shaun – but have you? You wrote “People were being asked to state a preference for gambles, but they were not actually taking them”. But I haven’t taken any gambles; all I did was (I hope) logically analyse bets A and B and from that work out a preference for the different bets: bet A (no point); bet B (fixed ratio) preferred if you can learn from past results, avoided if the ratio can be changed without your knowledge.
    I agree that if you don’t know the black-red ratio, there still is a 50:50 chance of you choosing the most numerous ball colour. But these are not the odds of your successfully drawing the winning ball colour. My feeble mind fries as I attempt this intricate task.


  25. John, I may have misunderstood. Am I playing against a person, or are the numbers of yellow and black balls generated randomly? Is it a new bag of balls for the second bet or the same bag?


  26. Shaun,

    It is not normally stipulated whether the game-setter is a person or not, and this has led to speculation regarding the cause of the ambiguity aversion — some think that distrust plays a major role. I am not convinced of this. I think the same effect would be present in both cases but I am not aware of any research in this direction.

    As for new bag or not — I think, given the point Ellsberg was trying to make, new bags can be assumed.


  27. Thanks John, I have just read up on Ellsberg, very briefly, and I see that in his version you do not lose a stake, therefore the cost of losing is zero. I have to think about this to see if it makes a difference, but I think it does. My point was that a professional gambler will always bet on a situation where the known odds are in his favour, and D provides this always. C does not inevitably provide this. But if there is no stake I think it changes things. Ellsberg appears to use the zero cost of attempting to win in his utility function, which demonstrates the paradox.


  28. Should have added that C is an equally good bet if the balls are chosen randomly, but not if playing a climate scientist.


  29. I am pleasantly surprised by the lack of Ricean snark in this thread. It probably means he recognises that he is outgunned here, which is not normally the case. By which I mean that he rarely recognises his limitations, rather than that he is rarely outgunned.


  30. Alan,

    I think speculating about repeated gambles and behind-the-scenes manipulation is irrelevant to the basic set-up for Ellsberg’s paradox. The problem is this: if you are offered a single gamble with known, objective probabilities, or a single gamble where some of the uncertainty is due to gaps in knowledge (thereby causing the gambler to resort to subjective probability), the human brain seems to be wired to believe that the latter presents a greater risk, even when there is insufficient reason to believe so. Mathematically, the risk calculation is the same, but the average brain refuses to accept it.

    Would it be too cheeky of me to suggest that the ingenious arguments that have been offered on this thread to justify considering the gambles to be unequal are testimony to that fact? Probably.


  31. John, I am kind of coming around to your way of thinking about Gamble C. The issue I raised was one of whether Gambles C and D are mathematically equivalent, and I bow to your greater knowledge of probability theory to assure me that they indeed are. However, I’m still not certain that it is irrational to go for Gamble D, where the odds are good and easily proven without recourse to rather more complex laws of probability theory. Perhaps instead we should be debating what constitutes a rational or irrational decision, which takes us into the minefield of human psychology!


  32. Jaime,

    You are quite right. There is a rationale for preferring Gamble D over Gamble C, and Gamble A over B. However, by sticking, in both cases, to that rationale, one is lured into contradicting oneself regarding the calculated utility. Economists have historically placed a great deal of faith in utility theory and its assumption of the rational decision-maker (i.e. someone who always acts logically, consistently, and in their own best interests), but then along came prospect theory, and everyone realised that real people perceive things quite differently to mathematicians and economists.

    If you still think there is something fishy about all of this, don’t worry, you are in good company. There has been a great deal of debate regarding the nature and causes of the irrationality exposed by Ellsberg’s paradox, some of it inspiring some very heavy analysis. Take, for example, the following paper, which suggests that only a quantum-like, non-Kolmogorovian approach to probability calculation can explain the phenomenon:

    Click to access 1105.1814.pdf

    Read it and weep!


  33. John, you’ve piqued my curiosity here. I’ve skimmed the surface re. Utility Theory and it just seems a bit iffy to me, so, when I get the time, I think I’ll delve a little deeper, starting with the paper you linked to. Maybe the comparisons with quantum reality are more than just casual. It did strike me that there was a whiff of something Schroedinger-wave-function-like about the uncertainty involved with Gamble C.


  34. Jaime:

    “…without recourse to rather more complex laws of probability theory”

    See below where ‘complexity avoidance’ was investigated as a cause, by having a more complex form of the same problem. I’m not sure this was a good way to find that out, but anyhow their finding was that as long as the form of the problem is not so complex that people fail to recognise the ambiguity even exists in the first place, they will still avoid ambiguity.

    You can compensate yourself with this from the same link:

    ‘To our surprise, we find that “ambiguity-minded” subjects [i.e. those who most exhibit the issue] tended to be more reflective, educated and analytic than “probability-minded” subjects [who least exhibit the issue] who most resembled Bayesian decision makers. This runs counter to the intuition that sophistication and a Bayesian approach to decision-making are positively correlated among “high-comprehension” individuals.’

    Maybe you’re just in a higher league than John and I 😉

    Plus this is relevant too:

    ‘…instead of considering ambiguity aversion as a uniform phenomenon, it may be more appropriately viewed as an aggregation of both task-level variation in subjects’ understanding of the choice problem, and individual-level variation in how subjects reason about probability. Any evaluation of the normative appropriateness of a response must be done relative to a particular individual-level interpretation of the task.’

    http://blogs.lse.ac.uk/businessreview/2018/12/12/the-ellsberg-paradox-and-the-ambiguity-and-complexity-of-decision-making/


  35. John, the statement of the red/black/yellow ball problem is missing some vital information, and that omission causes the ambiguity that makes the choices non-equivalent.
    Choices A and B, and C and D, are only equivalent if the black/yellow split was itself chosen randomly.
    From the description, we know that there are 30 red balls, but there is no indication of how the black and yellow ones have been chosen – there is no statement that precludes the case that the setter put one (or no) yellow balls in, and the remainder are black.
    Without this clarification, it would seem reasonable to avoid the possibility that the setter has chosen the yellow/black mix to minimise the payout.


  36. Is it not just human nature to make choices that avoid having to think deeply or make even simple calculations?
    When deciding between different bets you normally require more information than you are prepared to accept is necessary to demonstrate the paradox.
    Consider another wrinkle to the first choice. Suppose you are asked to choose between Bets A and B, but this time you must choose black.


  37. Hayden, if you’re worried about that we could add in the option of Gamble B2 to the first game – you win if you choose yellow.

    And for game 2, introduce Gamble D2 – you win if you choose a red or a black.

    I think most people would still go for A in the first game and D in the second game.


  38. John. Upon re-reading I realize that I have made a significant error. Where I write about bet A and bet B in earlier posts I was actually discussing the first bets mentioned – those discussed between you and your good wife. So I was discussing the need for additional information in order to evaluate the merits of the bet where you do not know the black-red ratio. So my variant is where you do not know the ratio but are forced to bet that you will choose a black ball. Now it is essential to know exactly how the person set up the bet. Clearly where the ratio is 50:50 you have a 50% chance of being successful. Where you don’t know the ratio, your chances of success are controlled in large part by the ratio (determined by the person setting the bet). So what exactly is the difference between the case where you have a choice of colour and that where the choice is forced upon you?


  39. Alan,

    I’m not sure what can be learned by changing the rules to the extent you suggest. If the game-setter is dictating how you bet, then you are no longer betting upon the draw of the balls, you are betting upon the honesty of the game-setter. In the game defined by Ellsberg the honesty of the game-setter is an irrelevance, since there is nothing the game-setter can do to improve their chances of winning (presupposing a zero-sum game). In the set-up you propose everything changes and it is all about the game-setter’s motives.

    That said, if I felt a compulsion to bet a particular way because, let’s say, I have a phobia against certain colours, it would not affect the way I felt about the gamble. That’s not because I am immune to ambiguity aversion; it’s because I spent a lot of time in my professional career thinking about uncertainty and its relationship to risk, and I can call upon that experience to help me overcome my instincts. I hope that did not come across as being too arrogant.

    I trust you’ll let me know if I have misunderstood your point.


  40. Andy,

    At last, I have the time to respond to your comments regarding ambiguity aversion and cultural difference (rest assured my distractions have owed more to the travails associated with the construction of garden fencing than to domestic strife).

    Firstly, just to get it out of the way: When you used the expression ‘good practices’, I had assumed (not unreasonably, I would protest) that you were referring to the practical aspects of Lempert’s advice, i.e. the application of robust decision-making, with its recognition of ambiguity forming part of the imperative for action. Certainly, those who maintain their shrill warnings of certain doom are taking no notice of this advice, but there is plenty of evidence that many (at least within the scientific mainstream) are not looking to deny ambiguity for the sake of the cause. When it comes to the matter of ambiguity and how to deal with it, I suspect you and I still have very different views regarding the relative strengths of the deny, exploit or embrace it camps. Or it could be that, despite our best efforts, we still fail to properly understand each other’s position. Either way, I’d rather not resurrect that debate.

    Of more interest to me are your comments regarding the ‘high level of identity investment in “certainty”’. You say that “the main ‘flavour’ of the bias, i.e. the aversion to uncertainty, is revealed, and is owed to cultural / identity related issues not to normal ambiguity aversion…”. Unfortunately, I do not fully appreciate what you mean by ‘normal’ ambiguity aversion in this context. The only ambiguity aversion that I am aware of is the one that is the subject of this post, i.e. a reluctance to tolerate ambiguity as it appertains to decisions to be made. I see this as potentially being a significant motivator that would drive the individual towards membership of a group for which aversion to ambiguity is its leitmotif. I am not saying that such motivation is sufficient for such a group to emerge, or that it even provides the principal social pressure, but surely it provides an identity that constitutes the germane characteristic for such a group, i.e. the common trait that separates it from groups characterized by ambiguity tolerance. Critically, I see the certainty-mongers, and those who advocate precaution because of ambiguity, as the two principal manifestations of the same group, i.e. the ambiguity averse group.

    I think that members from cultures that have been shown to be characterized by ambiguity averse thinking would show a proclivity for joining the ambiguity averse camp of climate scientists, but whether they would join the certainty-mongers or those advocating precaution requires further factors to be taken into account (both sub-groups would form as a result of additional cultural/identity related issues). Also, I think the statistic one should be interested in is not the East/West cultural split amongst ambiguity averse climate scientists but the averse/tolerant split within each of those cultures. I very much doubt that the latter split has been studied, principally because it would be a monumental waste of someone’s time. If I am wrong about this, and such studies have taken place, I’m confident that you will be able to provide the appropriate references, for which I thank you in advance.

    Finally, when talking purely about the certainty-mongers, I think perhaps we should be careful not to ascribe their rhetoric to genuinely held beliefs. Are they really ambiguity averse, or are they just keenly aware of what to say to influence those who are?

    I look forward to hearing your further views on this.


  41. John,

    I don’t know Lempert so my comment referred only to the limited context of your quote of him, i.e. that it’s good practice not to ignore uncertainty and indeed potentially dangerous to do so. While indeed there are many that do heed such advice (to greater or lesser extents), and my recent Catastrophe Narrative post at Climate Etc acknowledged this in pointing out that mainstream science / AR5 does not actually support the catastrophe narrative (i.e. it does *not* support a certainty of catastrophe), you wouldn’t know this from listening to the narrative from primary (and very many other) authorities over many years; the collective temperature (no pun intended), so to speak, of global societal leadership on the issue is still a certainty of catastrophe (absent dramatic action), and so this is still driving main policy worldwide. In other words, the advice has not yet been heeded enough to halt the juggernaut.

    “The only ambiguity aversion that I am aware of is the one that is the subject of this post…”

    There are other biases that lead to an inappropriate acceptance / promotion of certainty over uncertainty, of which subconscious group co-ordination is a very old and very deeply embedded behaviour (dating from before we were even human), driving the modern incarnation of (narrative-based) cultural consensuses. But some biases may enable others (see the asterisked lines above), so by ‘normal’ I do indeed mean strictly and only the context as presented here in your post, and not either other biases, or indeed the same bias as you describe when it is *not* enabled by, or entangled with, another primary behavioural driver.

    “I see this as potentially being a significant motivator that would drive the individual towards membership of a group…”

    Notwithstanding my asterisked lines about acknowledging a potential part played, the evidence of social data simply does not match this as a primary driver (or possibly, any driver). As noted in the LSE article, individuals within the same society have greater / lesser extents of susceptibility to this effect, and across wider geographic separation the average of whole societies can even be skewed (as you note regarding Eastern versus Western). But there is no evidence that these differences in any way match the identities of the groups who actually do support / oppose the above social consensus on catastrophic climate change. So taking the US as the most researched example (and easier to navigate for other reasons too), both polarised groups facing off on the issue of climate change each contain all of the low comprehension / high comprehension, ambiguity minded / probability minded individuals. And the more systemic differences that could potentially come into play with say an Eastern population versus a Western, don’t even figure in this strong polarisation of a country that doesn’t actually have a (major / coherent) Eastern population.

    Further, the creation of certainty by a cultural consensus will occur in any kind of context, so it does not need a specific numbers / probability scenario, and indeed this is likely not how most folks perceive climate change anyway (the ‘task interpretation’ element in the LSE article is somewhat relevant here too) – powerful emotive belief driven by membership of a cultural group can occur on any arbitrary topic / narrative, and cultural belief is also one of the most powerful mechanisms by which our reason can be, and frequently is, subverted.

    So here’s a half thought experiment / half observation. Regarding the latter, there is a major skew between Rep / Con and Dem / Lib adherents on belief in climate change (this itself is due to a cultural alliance effect, but in any case it is about cultural identity). Regarding the former, I doubt that there’s any survey testing whether Rep / Cons and Dem / Libs exhibit different levels of ambiguity avoidance via strictly the standard test / context you note here, but I’m willing to bet (heh) that experiment would find it to be statistically the same. If so, this means that ambiguity avoidance is not playing a primary role in the support / opposition to climate change in the US population. [Note: given that ~92% of US social psychologists are Dem/Libs and only about ~3% Rep/Cons, there have been various attempts to show that ‘conservative brains’ are different, which is to say less capable in some way, but these have been very weak and are typically torn to shreds by liberal colleagues who point out why the F*** are we trying to torture the data into this if not because of our own bias, so any difference in ambiguity avoidance strictly via the test here is highly unlikely].

    “I am not saying that such motivation is sufficient for such a group to emerge, or that it even provides the principal social pressure…”

    Indeed.

    “…but surely it provides an identity that constitutes the germane characteristic for such a group, i.e. the common trait that separates it from groups characterized by ambiguity tolerance.”

    Well unless the above thought experiment is carried out for sure, we can’t tell. But we do know that the social data regarding identities matches the much more generic and deeply embedded behaviour associated with membership of a cultural group (or associated behaviours like innate resistance to over-culturalisation, and notwithstanding that in some other countries like the UK, the mapping is less obvious because the climate cultural boundary is much less strongly aligned to older politico-cultural boundaries).

    “Critically, I see the certainty-mongers, and those who advocate precaution because of ambiguity, as the two principal manifestations of the same group, i.e. the ambiguity averse group.”

    Then you will need some social data to support this case. And it would need to be strong and comprehensive to outweigh the many public and academic surveys over the years that support the primary case (in fact you note yourself that this effect does not likely provide the principal social pressure): as Kahan puts it, belief in climate change (and other socially conflicted topics) is not about ‘what you know’ but ‘who you are’. Hence also the absence / presence of doubt, aka uncertainty, on the issue. Ambiguity avoidance, whether particular individuals are subject to it or not, is about what these folks think they know, so it falls into the former bracket. And the certainty created by cultural identity can be unshakeable (very early MRI explorations suggest that religious belief, for instance, utilises the same area of the brain that hypnotists leverage, which raises the so far entirely speculative possibility that this is why that brain function exists: for necessary adherence to the group narrative / consensus).

    “I think that members from cultures that have been shown to be characterized by ambiguity averse thinking would show a proclivity for joining the ambiguity averse camp of climate scientists but whether they would join the certainty-mongers or those advocating precaution requires further factors to be taken into account (both sub-groups would form as a result of additional cultural/identity related issues).”

    Well, there’s far more social data for the US than anywhere else. What data do you have that says ambiguity-averse folks, in the strict context of your post here, are anything but distributed evenly across the stark polarisation on the issue in that country, whether for the public or for scientists? Of course there is far less social data on scientists in any country, but as even Lewandowsky (an avid climate-consensus supporter, so not biased towards skeptic positions on CC) notes: “Nonetheless, being human, scientists’ operate with the same cognitive apparatus and limitations as every other person”. So we’d expect the same cultural / identity behaviours as a default. Further, Kahan’s data shows that those who are more knowledgeable about the climate domain, and more cognitively capable, are even *more* polarised on CC, not less; i.e. their capabilities are in service to their cultural identity (and again, there is no evidence that this cultural identity maps in any way onto being ambiguity averse per this post). While this doesn’t necessarily extrapolate to true experts (not measured in the survey), it is another barrier to your proposal above being borne out in reality.

    “Also, I think the statistic one should be interested in is not the East/West cultural split amongst ambiguity averse climate scientists but the averse/tolerant split within each of those cultures.”

    As noted above, there is very little direct social data on climate scientists. But we have much social data for the public, in various categorisations too, and we have less, but still highly useful, academic data on subtleties like belief variance with cognitive skill etc. None of it (that I know of, anyhow) matches your proposition, but it does match cultural identities. Of course no one (well, I assume) was actually looking for your proposition (although for sure they started by looking for anything), and there could be tests more suited to it. Plus, as noted above, cultural bias can likely enable a raft of other biases, yet also *disable* them in the service of resistance to a cultural group (in which case ambiguity avoidance could be switched on / off like the others, but isn’t prime anyhow).

    “I very much doubt that the latter split has been studied, principally because it would be a monumental waste of someone’s time. If I am wrong about this, and such studies have taken place, I’m confident that you will be able to provide the appropriate references, for which I thank you in advance.”

    To my knowledge, indeed, nothing has looked for ambiguity avoidance as a specific and isolated bias in the CC domain, or at how it plays, if it plays at all. Which for sure leaves a door open. But from what I know of the material that does exist (and indeed I’m far from an expert), it doesn’t look good: for the public, at least in the US (or indeed other countries, albeit with more inference required), the map appears to conflict with your proposition. Climate scientists don’t tend to subject themselves to social data collection, but looking at the long list of catastrophe quotes from them (and other scientists), it’s not hard to see an extremely high emotive conviction to the catastrophic, which is a powerful effect in its own right that would likely steamroller the (one presumes, relatively minor) emotive state of ambiguity avoidance.

    “Finally, when talking purely about the certainty mongers, I think perhaps we should be careful not to ascribe their rhetoric to genuinely held beliefs. Are they really ambiguity averse, or are they just keenly aware of what to say to influence those who are?”

    I don’t buy this (for the great majority). The level of emotive conviction is manifest in their words. This indeed implies belief, on a par with religion. A big enough barrel will always have some bad apples, and avid belief produces a fringe of noble-cause corruption too. But to propose that the majority are touting the certainty of catastrophe knowing that it is wrong is essentially a conspiracy theory, which would also require a massive co-ordination of lying. Cultural membership provides instinctive and subconscious co-ordination, so no lying is needed for all adherents to sing from the same hymn-sheet (a saying that arose precisely in the context of religious cultural alignment).

    I wouldn’t regard this as a mature area, and the heavy bias due to political alignment in social psychology noted above is hampering most work anyhow (much of the discipline wastes endless years just trying to find out what’s wrong with ‘deniers’ and a resistant public, without of course ever being able to reach any firm conclusions). So the door is open, but imho only open the tiniest crack for any primary role, though a wider crack for some secondary role.


  42. P.S.

    missing ‘of’: it should read ‘so to speak of global societal leadership’, of which the current US administration over the last two years (or at least parts of it) is an exception, albeit probably not for reasons of technical advice on uncertainty.

    re ‘the long list of catastrophe quotes’: obviously this applies only to the minority of (not mainstream) scientists who propagate the same, per the many examples in the Catastrophe Narrative post.


  43. Andy,

    Well, I’m glad that Geoff liked your last comment, but from my perspective there was far too much misrepresentation of my views.

    Firstly, let me make it clear that I have never disagreed with your assertion that there is a vocal and influential set of individuals who maintain a narrative of certain climate catastrophe, and who either falsely believe that the mainstream science supports their view or falsely claim that it does. This has never been in dispute.

    Secondly, by focussing upon the group that professes certainty of impending doom you overlook the main point of the article. Ambiguity aversion is a distrust of epistemic uncertainty and a preference for aleatory uncertainty. In Ellsberg’s paradox this manifests as a preference for gambles that can be assessed purely using objective probabilities. In climate science, there are very few objective probabilities to be found – the uncertainty is almost entirely epistemic. Therefore, an aversion to subjective probability is tantamount to an aversion to probability, and the ambiguity aversion morphs into probability blindness. There are two categories of climate policy advocate who indulge in probability blindness: those who set all their probabilities to unity (i.e. the certainty-mongers), and those who argue that it is in the nature of the risk that one should proceed as if the probabilities were unity (i.e. the precautionary folk). The only people not represented by these two categories are those who embrace the ambiguity in the climate science evidence and therefore advocate robust decision-making.
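    To make the distinction concrete, here is a toy sketch of how the three stances can diverge on the same evidence. It is purely illustrative: the loss and cost figures are assumptions of mine, and minimax regret is used merely as a stand-in for robust decision-making, which in practice covers a family of methods.

        # Toy decision problem: catastrophe occurs with uncertain probability p.
        # LOSS and COST are illustrative assumptions, not estimates of anything real.
        LOSS = 100.0   # loss incurred if catastrophe strikes and no action was taken
        COST = 60.0    # certain cost of drastic mitigating action

        def expected_loss(act: bool, p: float) -> float:
            """Expected loss under a subjective probability p of catastrophe."""
            return COST if act else p * LOSS

        # Certainty-mongers set p = 1; the precautionary folk proceed *as if* p = 1.
        # Both rules erase the probability, and drastic action always wins:
        assert expected_loss(True, 1.0) < expected_loss(False, 1.0)

        # A robust decision-maker instead embraces the ambiguity, scanning the
        # plausible range of p and minimising the worst-case regret.
        grid = [i / 10 for i in range(11)]

        def worst_regret(act: bool) -> float:
            return max(
                expected_loss(act, p) - min(expected_loss(True, p), expected_loss(False, p))
                for p in grid
            )

        best = min((True, False), key=worst_regret)
        print("robust choice:", "act" if best else "wait")  # 'wait' for these numbers

    With these particular numbers, the two probability-blind rules both mandate action whilst the regret-minimising rule does not; the point is not the numbers, but that collapsing the ambiguity to certainty changes the decision rule itself.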

    I don’t need social data to make the above argument. Besides which, any social studies that focus upon the first category (those proponents for action who are probability-blind) whilst ignoring the second (the advocates of robust decision-making) are not studying what I am talking about. If you want to talk about relevant social studies, they should be more focused upon the dichotomy existing between advocates of precaution and advocates of robust decision-making. Why? Because the precautionary folk are quite explicit in saying that the precautionary principle applies only when objective probabilities are not available. Robust decision-making, on the other hand, makes no presuppositions regarding the nature of the probabilities.

    Thirdly, I don’t recall at what point I was supposed to have failed to recognise that all group emergence is driven by the overarching need to gain subjective certainty through belonging to a consensus. This is how people gain confidence in their confidence. My previous article was all about that effect, so why do you feel the need to lecture me on it now? Remember, there are three groups that enjoy certitude here. The first is certain that there is no uncertainty; the second is certain that there is uncertainty but doesn’t think it should be used as an excuse to avoid drastic action; the third is certain that there is uncertainty and thinks that it should influence the type of action to be taken, i.e. that action should be pragmatic and adaptable. All I have said is that one’s tolerance of ambiguity should influence the group one chooses to belong to – not an unreasonable assumption, I would have thought, given that the groups in question can be characterised in terms of their tolerance towards ambiguity.

    Finally, even when I speculate that not all of those who profess to be in the first category of the probability-blind (i.e. the certainty-mongers) can be taken at face value, you manage to completely misrepresent my position by introducing the word ‘majority’. This is your word, not mine.

    As for all your stuff about social studies into climate change attitudes, I fail to see how any of it matters when it is clear that you and I have very different ideas about what an ambiguity averse climate policy advocate would look like. You say that “Ambiguity avoidance, whether particular individuals are subject to it or not, is about what these folks think they know”, and you imply that it is not about “who they are”. So how do you explain the studies demonstrating that women tend to be more averse to moderate ambiguity than men? (I say ‘moderate’ only because the difference lessens as the ambiguity level increases.) Ambiguity aversion is a personality trait, as is risk aversion. As such, it is all about who you are, and studies show that this includes your cultural background.


  44. John,

    “… there was far too much misrepresentation of my views.”

    What misrepresentation? None was intended, I assure you. Can you provide exact quotes? I don’t see any such.

    “This has never been in dispute.”

    Where did I say it was? To make my points, it is necessary for clarity to characterise carefully each time what I’m saying, so that the context is clear in each case. Misunderstanding is generally the biggest problem in debates. While this may seem repetitive, the fact that I each time fill in context that has in some way been stated before does not mean I think you dispute it. In general, I make clear what I specifically disagree with; in the above response, that is the issues / evidence regarding the role of ambiguity avoidance within the CC domain. But the context is still necessary so that it is more obvious precisely why; this avoids misunderstanding.

    “Secondly, by focussing upon the group that professes certainty of impending doom you overlook the main point of the article. Ambiguity aversion is a distrust of epistemic uncertainty and a preference for aleatory uncertainty. In Ellsberg’s paradox this manifests as a preference for gambles that can be assessed purely using objective probabilities.”

    None of my text disputes Ellsberg’s paradox or its characteristics in any way. If you think this is the case, can you please quote me?

    “There are two categories of climate policy advocates who indulge in probability blindness:”

    On each point where I have responded with what I believe are the issues, I first replicated your relevant sentence(s), and as you can see I kept your reference to both groups in each case. To first order my response is identical for both variant cases, so rather than focusing on one, my replies are each relevant to both, which indeed I should have made clearer. It’s a layer more complex, but there are gradations of belief in cultural consensuses (so bias, but not full / core belief, to varying extents), and via alliances there can also be weak belief that even switches allegiance depending upon the context. The ‘precautionary folk’, as you label them, are in here somewhere, but will still owe their bias mainly to cultural identity.

    “I don’t need social data to make the above argument.”

    Say what?? You don’t need social data to make a social-psychological argument? Well, for sure you can make a proposition without data, but it holds no weight without some means of demonstrating it. What if the data disagrees with the proposition? My main point is that such data as we have makes it hard to see how this could be any kind of main effect; more below…

    “Besides which, any social studies that focus upon the first category (those proponents for action who are probability-blind) whilst ignoring the second (the advocates of robust decision-making) are not studying what I am talking about.”

    The various studies and public surveys are worded in very many ways and put many different questions to various demographic categories, both separately and in conjunction with other issues (which yields more information), sometimes with various screenings, etc. But they are essentially all measuring (and in the academic case, specifically targeting) bias. They are not looking specifically for either of your categories, which would simply show up as stronger or weaker bias expressed for anthropogenic / dangerous / whatever climate change (the terminology is also varied), or against it, and for whatever variables are causal for the bias (which turns out to be cultural identity).

    “If you want to talk about relevant social studies, they should be more focused upon the dichotomy existing between advocates of precaution and advocates of robust decision-making. Why? Because the precautionary folk are quite explicit in saying that the precautionary principle applies only when objective probabilities are not available. Robust decision-making, on the other hand, makes no presuppositions regarding the nature of the probabilities.”

    Well then, if you want any confidence in this, you must either design and execute your own studies, or find other academic or public surveys that might contain gems providing some support (some are essentially fishing expeditions, so can come up with puzzling stuff, or at least not what was expected, so such gems may exist). Merely saying what social studies ‘should’ be focused upon is not likely to get you anywhere at all, unless you spend enough time in that domain to feel confident in making such a suggestion to a relevant academic who could carry it out. However, further to your proposition that these two groups are founded upon ambiguity avoidance (as set out in your head post), per the example of the US above, for both variants this is still in conflict with the cultural identities into which support / opposition falls (and is agreed to fall, even by those who believe CC skeptics are wrong), *unless* you can show that Rep / Cons and Dem / Libs have systemically different scores for ambiguity avoidance (in the strict sense of the test you set out here, with no contamination from any socially conflicted issues). Cultural adherence is not black and white, not digital; there are many shades of grey, so the cultural data is certainly consistent with covering both categories (and more). This is not the same as absolute proof via a crafted survey that specifically captures the narrow category you are particularly interested in. Nevertheless, you are essentially proposing a completely different mechanism for one variant than for the other, given that, on current knowledge, the most polarised extremes can’t really be anything other than primarily cultural, and a spectrum too that, unless there’s a very good reason, is highly likely to encompass your category. While it’s not impossible that different main causes are responsible for different parts of the social phenomena, it’s much more likely that there’s a common driver with fan-out effects (though indeed a strong cultural bias may be able to enable others). [The drivers are the same globally, but where climate-cultural values have aligned to older political boundaries it’s easier to see, via the big red and blue flags, and there’s far more social data already in the US.]

    “Thirdly, I don’t recall at what point I was supposed to have failed to recognise that all group emergence is driven by the overarching need to gain subjective certainty through belonging to a consensus.”

    Sorry? Where have I said that you fail to recognise anything? Can you quote me, please?

    “…so why do you feel the need to lecture me on it now?”

    What are you talking about? The specifics of the social evidence that are relevant to, and raise issues regarding, your proposition (that ambiguity avoidance is a founding bias for specific group behaviour) go completely hand in glove with the wider principles. Without that context, they would make no sense to any third-party readers; and, given the always-lurking demon of misunderstanding, they may make no sense to you either unless the particular aspect or angle being followed is always tied to its place in the big picture, which picture therefore necessarily gets multiple mentions. I think some modest repetitiveness is a small price to pay for that clarity, and it is common in such debates.

    “All I have said is that one’s tolerance of ambiguity should influence the group one chooses to belong to – not an unreasonable assumption, I would have thought, given that the groups in question can be characterised in terms of their tolerance towards ambiguity.”

    As noted, not least due to the poor state of the social psychology on the issue (with some notable exceptions), the door is open; but it is my opinion that it is open only the tiniest crack for any primary role, though a wider crack for some secondary role. I never said that this was anything other than my opinion, but I have set out the issues which, from my (limited) knowledge of the domain, I can see are a problem for this being much more than lost in the (cultural) noise. However, you don’t have to take any notice of my opinion, although you did invite it 😉

    “Finally, even when I speculate that not all of those who profess to be in the first category of the probability-blind (i.e. the certainty-mongers) can be taken at face value, you manage to completely misrepresent my position by introducing the word ‘majority’. This is your word, not mine.”

    The only place I mentioned ‘majority’ in my last response was in answer to this question: “Finally, when talking purely about the certainty mongers, I think perhaps we should be careful not to ascribe their rhetoric to genuinely held beliefs. Are they really ambiguity averse, or are they just keenly aware of what to say to influence those who are?”

    How is using the word ‘majority’ misrepresenting you here? I am referring to the majority of the ‘certainty-mongers’, i.e. those whom you specifically reference in this question. Your deployment of ‘their’ / ‘they’ implies inclusiveness. While I didn’t assume you actually meant 100%, for sure you included nothing to indicate that you meant only certain individuals or a minority within that camp, rather than something generic. I apologise if I have misread you; for sure no misrepresentation was intended, and why on Earth would you think it was? Even knowing what you have now said, I still find it hard to read that sentence as anything but a communal characterisation. Maybe that’s just me, but anyhow I request some understanding on that point; and indeed there will for sure be some bad apples, as I noted.

    “As for all your stuff about social studies into climate change attitudes, I fail to see how any of it matters when it is clear that you and I have very different ideas about what an ambiguity averse climate policy advocate would look like.”

    Well, social studies, imperfect as they are, are all that we have regarding what motivates various groups of people to do what they do in this domain (or any other conflicted domain). Indeed, they are only applicable to groups, and only to those groups actually measured, of which climate change policy makers, as a very narrow and specific sub-group, are hardly ever likely to be measured. And in theory any individual can buck a trend. But as noted above for scientists, policy makers are also human in every way, and fully embedded in our society, not somehow separated from it, and so are highly unlikely to be disconnected from the main biases known to be in play, especially culture, because no one is known to be free of this and it is a mechanism that appears to marshal a whole raft of biases to a single end. But above all, no proposition can be confirmed without data, and if you have better data to confirm this one, or indeed any other proposition in the domain, I’d be most happy to see it 🙂 There are many potential bias effects that work separately or in concert; in some posts over at WUWT a few years back, I noted psychologists trying to marshal dozens of them (that you wouldn’t normally see glued together) into a kind of crafted coalition that might explain CC skeptics. Needless to say, they can’t find data to support the case; but what goes for them goes for any other investigation too: no data = no answer.

    “You say that “Ambiguity avoidance, whether particular individuals are subject to it or not, is about what these folks think they know”, and you imply that it is not about “who they are”.”

    ‘Who they are’, in the context of this quote, means what main cultural allegiance they have (relevant to the conflicted domain)…

    “So how do you explain the studies that demonstrate that women have a tendency to be more averse to moderate ambiguity when compared to men?”

    …so this is not in conflict with the above quote in any way.


    This is the way that all debates between us go. I start by making a statement that is borderline tautological. You respond by claiming that my statement is refuted by all the world’s knowledge and that I need to present my own studies to overthrow the wealth of knowledge you imply I have overlooked (even though the point I am making stands or falls on the strength of my logical reasoning and has sod all to do with data). I respond by saying that I sense you must be misunderstanding me or misrepresenting my point. You respond by flatly denying it and demanding specific quotes (whether I have already supplied them or not). I respond by saying “honestly, I’ve got better things to do than to keep this debate going”.

    I don’t know what it is about us, but we seem terminally incapable of understanding each other, and whilst you maintain the default position that all the misunderstanding is on my part, we have no basis for continuing the discussion. I could try to respond to your specific responses, but bitter experience has shown me that it will only lead to more of the same.

    So that’s where it ends.

    And by the way, you don’t have to go “a few years back” on WUWT to find an article that attempts to summarise the many cognitive biases that feature in the climate debate. Take my own tongue-in-cheek contribution posted November 2017:

    https://wattsupwiththat.com/2017/11/23/playing-the-cognitive-game-the-climate-skeptics-guide-to-cognitive-biases/


  46. John,

    “I start by making a statement that is border-line tautological.”

    You say, for instance: “Critically, I see the certainty-mongers, and those who advocate precaution because of ambiguity, as the two principal manifestations of the same group, i.e. the ambiguity averse group.” Notwithstanding the qualifier that they “would form as a result of additional cultural/identity related issues”, this attributes to ambiguity aversion a significant role, one worthy of discussion and consideration in relation to the social data that currently exists.

    “You respond by claiming that my statement is refuted by all the world’s knowledge…”

    Not at all. I responded with what I know of the domain, clearly pointing out that this is my opinion, and indeed adding that I would not consider myself an expert. But all the points I did make are supported by a chain of logic and inference that is there for you to challenge if you think it unsound in any way; and indeed I myself pointed out some limitations regarding what the data doesn’t say as well as what it does.

    “…and I need to present my own studies to overthrow the wealth of knowledge you imply I have overlooked.”

    I didn’t say anything about a ‘wealth of knowledge’ or what you may or may not have overlooked. In response to your specific request regarding relevant social studies, i.e. that “they should be more focused upon the dichotomy existing between advocates of precaution and advocates of robust decision-making”, I pointed out that as far as I know these don’t exist, so they would indeed need creating; or, as I indicated, relevant data could potentially be filtered out of studies made for different purposes. The latter approach tends to mean many hours of tedious searching, but who knows, it could bear fruit.

    “(even though the point I am making rests and falls upon the strength of my logical reasoning and has sod all to do with data).”

    My response essentially being that your reasoning is fine and dandy, but that in the end all reasoning must be confirmed or denied by relevant domain data. Which isn’t to say that the current data rules out the proposition; it survives until more data appears (indeed, per the above about the poor state of the discipline, much could yet emerge). But in my opinion (which I made clear in summarising), the existing data does present significant difficulties for the proposition. I know you would never suggest that we proceed without data in any science; what I’m not grasping is why this case would be an exception. I presume indeed that it’s my misunderstanding that it would be, but in any case all that I have contributed here is of course challengeable; it does, though, need challenge to rule it out.

    “I respond by saying that I sense you must be misunderstanding me or misrepresenting my point.”

    I may well be misunderstanding you, and indeed I pointed out that misunderstanding is generally the biggest problem in debates. I didn’t allocate any sides to this statement. And you didn’t use the word ‘misunderstanding’; you said “…but from my perspective there was far too much misrepresentation of my views.” No way would I knowingly misrepresent you, hence it’s not unreasonable to ask for clarification, since by misunderstanding one can appear to misrepresent, and focusing on the actual text in question may hopefully reveal the misunderstanding.

    “…you maintain the default position that all the misunderstanding is on my part…”

    I do not. However, when I raise what I perceive to be issues, you neither challenge those issues with counter-points (e.g. by highlighting flaws or potential flaws in my chain of logic and / or inferences), nor explain why my application is inappropriate, nor indeed why it might be a feature of misunderstanding. I would be most happy for you to make my misunderstandings clear, because my store of knowledge would thereby be corrected and improved.

    “And by the way, you don’t have to go “a few years back” on WUWT to find an article that attempts to summarise the many cognitive biases that feature in the climate debate.”

    It’s a fine post, which I recall from the time. I was being more specific about the fact that the issue remains highly puzzling to them, which I noted back then, because the data doesn’t match their various propositions (though they doggedly pursue them anyhow). But in any case, not mentioning your post does not in any way imply any bias against it, or against yourself.


  47. Andy,

    The way I want to leave it is this:

    I have great respect for your understanding of the important role that cultural dynamics play in explaining why people hold the views they espouse on climate change policy. But whenever we engage in debate on issues related to that topic, the more we try to reconcile our differences, the more we become mired in dispute. Normally, when I am in debate with someone, the discussion starts out with a good deal of mutual misunderstanding; then, through a process of exchange and clarification, the misunderstanding is finally ironed out and a reconciliation or compromise emerges. However, I never find this happens when debating with you. It’s not your fault; it’s not my fault. There just seems to be a personality clash getting in the way. When I look at your most recent comments, it is obvious that you are sincerely flummoxed by the remarks I am making, but please rest assured that the feeling is entirely mutual. Regrettably, I do not think any amount of further discussion will resolve this, so I suggest we leave it here, hopefully with mutual respect still intact. All that remains is for me to thank you sincerely for your patience. If you want more from me than those thanks, then I’m afraid I am going to have to let you down.

    All the best.

    P.S. I wasn’t suggesting that your failure to acknowledge my WUWT article was a snub; I know you are above that sort of thing. I just wanted to make sure you appreciated that I was already aware of the issue you had raised.


  48. Here would seem as good a place as any to acknowledge the passing of the father of all whistleblowers, Daniel Ellsberg. When asked if the risks of whistleblowing are worth it, he replied:

    “When we’re facing a pretty ultimate catastrophe. When we’re on the edge of blowing up the world over Crimea or Taiwan or Bakhmut. From the point of view of a civilization and the survival of eight or nine billion people, when everything is at stake, can it be worth even a small chance of having a small effect? The answer is: Of course… You can even say it’s obligatory.”

    https://www.bbc.co.uk/news/world-us-canada-65932944

