When I ran risk assessment workshops during my days in software development, I was always keen to remind those participating that the risk that is most likely to dominate is the one that did not make it onto the risk register. This annoying detail is due to the novelty that often accompanies software development work, and therefore the extent to which epistemic uncertainty features. You can do as much Monte Carlo analysis as you want in order to model schedule risk, but the brute fact is that you cannot account for the impact of your ignorance, and so you are likely wasting your time. Another way of putting this is that the risk profile is dominated by black swans and not the logic of the gaming table.

The question I have posed in the title to this article is whether or not climate change is like that. When we look forward, are the risks primarily determined by the unknown unknowns? Is that where the greatest potential impact lies? And if so, does that justify the drastic approach currently proposed by advocates of an emergency transition to Net Zero? After all, in some important respects, we have never been here before and the placards tell me there is no planet B.

Before I attempt to answer that question, it is very important that we all have the right idea of what a black swan is and what it is not. Yes, they are rare events that have huge impact and were not predicted, but is that all there is to it?

In a word: No.

This is not roulette

The idea of the black swan was first introduced by Professor Nassim Nicholas Taleb in his book, Fooled by Randomness, and further developed in his now famous book, The Black Swan – The Impact of the Highly Improbable. In that book he wastes no time in introducing the reader to the three essential characteristics of a black swan event. On the very first page of the prologue he states:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

From this, one can discern that improbability and extreme impact are necessary but insufficient conditions for an event to be called a black swan. Nor is it sufficient that the event had not been anticipated. The essential feature is that it could not have been expected ‘because nothing in the past can convincingly point to its possibility’. Black swans are therefore epistemic phenomena in so far as a lack of information not only prevents the accurate calculation of probabilities, it even prevents the conception of the possibility. As an illustration, consider the following two examples, neither of which can be termed a black swan.

The first is the outbreak of a nuclear war. Certainly that would be an all or nothing event that, if it were to happen, would rather render irrelevant any other things we might have included in our risk register. It is also very difficult to predict exactly when such an outcome may befall us. However, there is plenty in the past that must surely convince us of the possibility. In fact, many political and military analysts would suggest that a global nuclear war will probably occur at some stage; it’s just a matter of time. The fact that it hasn’t happened yet is of no comfort. As for rationalising after the event – chance would be a fine thing.

The second example is the famous 1913 run on the roulette wheel in the Monte Carlo Casino, in which the ball landed on black 26 times in a row. A moment’s thought should be enough to recognise that, despite this being a most unexpected outcome with potentially extreme significance for anyone who may have bet on it, the event nevertheless fails as a black swan example. This is once again due to the all-important condition that ‘nothing in the past can convincingly point to its possibility.’ This is not the case for roulette because the probabilities are childishly simple to calculate, and the possibility was established the moment the roulette wheel was invented. It just took a long time, and a vast number of spins in casinos across the world, before it happened. The event taken in isolation seems almost miraculous, but within the context of the full history of gaming it was much less so. It was pure dumb luck, and any attempt to portray it as a black swan would be to allow oneself to be fooled by randomness.
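
To see how the arithmetic plays out, here is a back-of-the-envelope sketch in Python. It assumes a single-zero European wheel (18 black pockets out of 37), and the worldwide spin count is purely illustrative rather than historical:

```python
# Probability of landing on black 26 times in a row on a single-zero
# European wheel, and the number of such runs one would expect across
# the vast number of spins played worldwide (spin count illustrative).
p_black = 18 / 37
p_run = p_black ** 26
print(f"P(26 blacks from a given starting spin) = {p_run:.2e}")  # ~7.3e-09

spins_played = 500_000_000  # hypothetical worldwide total
expected_runs = spins_played * p_run
print(f"Expected 26-black runs in {spins_played:,} spins: {expected_runs:.1f}")  # ~3.7
```

On those illustrative numbers, a handful of such runs is roughly what one should expect; the apparent miracle is an artefact of viewing the event in isolation.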

The roulette example is also important because it is a timely reminder not to look to the casino for examples that illustrate how uncertainty usually works in the real world. Taleb was so anxious to warn against making such a mistake that he dedicated a full chapter to it in his book (chapter 9, The Ludic Fallacy, or The Uncertainty of the Nerd) and coined a term for it: the ludic fallacy. Casino games, both in theory and in practice, are a misleading model for the rest of real life, and the reason for this is that the uncertainty in games such as roulette is hugely dominated by the aleatory, whereas in real life the epistemic basis for the uncertainty is usually significant. If one insists on using games as a model for real life, one has to allow for a significant level of uncertainty as to whether the game has been rigged in some unknown fashion. As the Forbes magazine article, Black Swan Bets, puts it:

“He [Taleb] fumes at finance professors who have treated investing like a roulette wheel where we know the odds. We don’t.”

Clarifying the question

Now that we have a better idea regarding the full range of conditions that have to be met in order for an event to qualify as a black swan, we can return to the headline question with a better understanding of what it means exactly. Stated fully, the question is as follows:

Knowing what we know of the processes that drive climate change and the relevance of man’s emission of carbon dioxide, are there any gaps in our knowledge that may lead us to suspect that a future black swan event may radically change our understanding and lead to significant, unpredicted impact?

I think that is a very valid question and, to be honest, I think the answer has to be yes. I just don’t think our current level of understanding can rule out such an event. In fact, black swans, by definition, cannot be ruled out in advance, since even a belief that there are no significant gaps in our knowledge is not a provable belief. However, whilst one must accept the possibility of future black swans in principle, the problem comes in trying to provide plausible examples, since as soon as a specific possibility has been conceived it is no longer a black swan. The best that such an example could then be is a so-called grey swan, i.e. a low probability, high impact event for which the possibility has been conceived, but for which reliable probabilities may or may not have been calculated. If such examples entail ruin scenarios, there may be reason to apply the precautionary principle, but calling them black swans would still be a misnomer.

Take, for example, the tipping points that have been posited for climate change, such as speculation of a Greenland ice sheet collapse. Even where these may be legitimate scientific concerns, they are still not potential black swans, if only because we have had the foresight to anticipate the possibility. For each given tipping point example, there has been something in the past that has convinced someone of its possibility, albeit perhaps not its probability. The true black swan would be the tipping point that no one has yet thought of. It is the risk that did not make it onto the risk register.

On the other hand…

The spectre of the black swan may be something that keeps many climate change activists awake at night but it actually works both ways. One might, for example, rephrase the question thus:

Knowing what we know of the processes that drive climate change and the relevance of man’s emission of carbon dioxide, are there any gaps in our knowledge that may lead us to suspect that a future black swan event may radically change our understanding and lead to far less impact being attributable to man?

Having honestly answered in the affirmative to the original question, I would have to ask myself why I should not do so again. And if one argued that the current gaps in knowledge render such a black swan much less likely, that would be to fail to understand what a black swan is. We can’t talk in advance about one being more likely than the other. We can only assess in hindsight whether one should have been more predictable. And that retrospective will just be a story we tell ourselves.

Handling the truth

The climate change debate is essentially all about determining the approach to be taken when making decisions under uncertainty. To engage in that debate, one must have a good grasp of uncertainty’s conceptual framework and how uncertainty influences the perception of risk. Failing to understand the important distinction between aleatory and epistemic uncertainty can lead to the ludic fallacy; failing to understand how epistemic uncertainty works in real life can lead to a failure to appreciate the equally important distinction between black and grey swans. The truth is, however, that all this talk of black swans and grey swans somewhat misses the point. In life there will always be the event that comes out of the blue and changes everything, such as 9/11. And there are already plenty of known threats, such as pandemics, that have the potential to bring things crashing down. Quibbling over the correct term to use for such events is not nearly as important as knowing what to do about them.

It is often said that uncertainty is not the sceptic’s friend since it allows for the possibility of ruin scenarios that cannot be ruled out. The less one can be certain, the more one has to take the possibility seriously, since it dominates the risk profile. The reality, however, is that uncertainty is nobody’s friend, since the prospect of unforeseen ruin may reside in both the problem and the solution domains. The issue, if you ask a climate sceptic, is that the accelerated transition to Net Zero does not look like a straightforward proposition, and swans of all shades can be expected on that road. In fact, many of the swans look decidedly white. Whilst Net Zero advocates talk of the costs necessarily incurred to remove risk, it looks to the sceptic more like a case of swapping one set of potentially ruinous scenarios for others that are just as ruinous and much more likely. Fear is dictating the speed of the Net Zero transition, and unrealistic optimism seems to be giving it a free pass when weighing up the pros and cons. And then, of course, there is the ultimate concern:

What if the ruinous climate change black swans turn out to have been flightless birds all along?

94 Comments

  1. Thanks for clarifying what a Black Swan is and isn’t. Basically, if it falls way outside the statistical distribution (determined by past events/observations or, in the case of the Monte Carlo wheel, simple calculations of probabilities which predict that the event is possible given the required number of iterations) then it is a black swan. Clearly, in the case of the Monte Carlo wheel, the 26 blacks do NOT fall outside of the known statistical distribution, therefore it’s not a Black Swan. I think I’ve interpreted that correctly.

    So, to return to climate and weather, it is often said that the Pacific Northwest heatwave was a 4.5 sigma Black Swan event which could not have been predicted by looking at the past. It fell way outside the statistical distribution determined by past observations. Extreme weather attribution ‘scientists’ tried to make it fit a very dubious statistical distribution, meaning it was still a very rare event, but one that could be expected to happen given enough time. That didn’t really convince anybody. Others have tried to explain the event in terms of its proximate meteorological (dynamical) causes, in addition to other factors, and they have come up with reasonable hypotheses which, again, tend to explain the heatwave as an extremely rare combination of physical occurrences, therefore meaning it was not a true Black Swan.

    In both the aforementioned cases we come up against the practical limits of knowledge and observation. The coupled ocean-atmosphere system is extremely complex and difficult to predict accurately more than a few hours ahead. In that sense it is ‘quasi-chaotic’, i.e. it might SEEM to be chaotic and unpredictable, but if we could know all there is to know about the physics and chemistry of the oceans and atmosphere, and if we had access to all the data, then we could predict what the system would do days, months, even years ahead. That’s never going to happen. So black swans ARE going to happen; events will occur which we cannot possibly predict because they lie outside our current expertise and our available knowledge of the past.

  2. Yes, as Jaime said, thanks for the clarification. It’s very helpful to have concepts such as these set out in terms that even I can understand (even if, perhaps, Willard can’t). 😉

  3. Jaime,

    Black swans are epistemic phenomena and weather systems are inherently variable. Put the two together and one can be left with a puzzle. When an extreme weather event occurs it may be an outlier that represents the extent allowed by the variability as currently understood, or it could lie beyond what we thought possible, suggesting gaps in our knowledge regarding the processes that cause the variability. The latter would be a black swan but the former would be a grey swan. The problem is that the two cases are not necessarily clear cut. An extreme weather event may be caused by circumstances that could not have been anticipated given current knowledge and yet still fall within the range that had previously been thought possible. Severity and rarity cannot by themselves be used to define the black swan. I know that claims have been made for extreme weather events being black swans, but I suspect most of them are grey. It can be very difficult to tell when dealing with an incomplete understanding of such an inherently variable system.

  4. Mark,

    This article was obviously inspired by recent discussions with Willard, but I have no particular interest in re-engaging. Such exchanges have never proven constructive. He has already made it clear that he thinks I am ‘full of it’ and that I talk ‘pompous crap’. I can live with that opinion much more easily than he would like me to, I’m sure.

  5. It seems to me that the definition of a black swan may depend very much upon the experience and/or imagination of the person evaluating their existence. In other words, one man’s black swan is another man’s grey one. If a person with lesser knowledge evaluates a potential black swan event they are more likely to misidentify one.
    Furthermore if my new understanding of a black swan is now correct they are ephemeral. As soon as one is identified it can have no siblings because others of its type can now be conceived and could be grey at best (=worst).

  6. Alan,

    Indeed, black swans, by dint of being epistemic phenomena, are necessarily subjective. The discovery of swans with black plumage came as something of a shock only to those in the northern hemisphere. Also, black swans can only be discerned in hindsight. As Soren Kierkegaard said:

    “Life can only be understood backwards; but it must be lived forwards.”

  7. So “Black Swans can only be recognised backwards, but the same are then no longer possible forwards”. Good grief!

  8. Black swan, nothing from the past can point
    to its possibility…

    Black swan ebony gleaming,
    Gliding artlessly on
    A mirrored lake, unaware
    That you’re an oddity exposed
    By northern ornithologist.
    Glossy bird, you’d be surprised
    To learn you are compared
    To Hume’s thanksgiving turkey,
    Symbol of the out-liar event,
    The single observation that exposes
    How fragile is our human knowledge.
    Black swan, you have become
    A symbol too – so much
    Less and more than
    Mere blackbird – you.

  9. In the 1930s design floods for dam spillways were sometimes exceeded, and the concept of the Probable Maximum Flood (PMF) was developed as a design objective. Currently the appropriate design flood for the spillway of a large high hazard dam is the PMF, which is the largest flood that can be “imagined”. The worst possible combination of factors is considered in developing the PMF, including an estimate of the Probable Maximum Precipitation (PMP). The PMP estimate is based on maximizing the meteorological factors that make up an extreme storm event. Essentially a very large storm is analyzed and the analyst considers “how big could it have been?” PMP estimates have rarely been exceeded anywhere in the world and, of course, if they are, the PMP for that location is revised upwards.

    So based on John’s analysis, if a PMP event occurred it would not be a Black Swan. But the event would be so extreme (way beyond normal flood frequency) and the flood damage so significant, that climate change would, of course, be the culprit.

  10. >”So based on John’s analysis, if a PMP event occurred it would not be a Black Swan.”

    If analysts thought it possible but highly unlikely that a PMP would be exceeded, then you’re right, it would not be a black swan. But you also spoke of a largest flood that could be imagined. If something subsequently exceeds such imagination, then that would be a black swan.

    That said, in the world of safety engineering most people try to avoid using the word ‘safe’, preferring to refer instead to ‘acceptably safe’. This is when there is a threshold which has a non-zero probability of being exceeded but for which the risk is ‘broadly acceptable’ and the effort to reduce the probability further is considered to be impracticable. The risk is then said to be ALARP – As Low As Reasonably Practicable. Exceeding a threshold that was set in accordance with the ALARP principle would then be a grey swan, i.e. it was thought possible but sufficiently unlikely.

    The other interesting issue that your example introduces is that of non-ergodicity. The breaching of a dam is an all or nothing event, in which a storm will either pass by without disastrous consequences or will cause catastrophic damage. With non-ergodicity, a single person gambling repeatedly is not the same as many people gambling simultaneously, because the serial gambler faces ultimate ruin. If a dam height is built according to ALARP, then non-ergodicity will be a concern. If it has been built to be absolutely safe for all imaginable circumstances, then it is the black swan that you need to worry about.
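
    Non-ergodicity is easy to demonstrate with a minimal simulation. The 50% gain / 40% loss gamble below is a standard textbook illustration rather than anything from the dam literature, and the numbers are purely illustrative:

    ```python
    import random

    # Non-ergodicity in a nutshell: a 50/50 gamble that multiplies wealth
    # by 1.5 or by 0.6. The ensemble average grows by 5% per round, yet
    # the time-average growth factor is sqrt(1.5 * 0.6) ~ 0.949 < 1, so
    # the typical individual gambler is progressively ruined.
    random.seed(1)

    def play(rounds: int, wealth: float = 1.0) -> float:
        for _ in range(rounds):
            wealth *= 1.5 if random.random() < 0.5 else 0.6
        return wealth

    outcomes = sorted(play(100) for _ in range(100_000))
    print(f"theoretical ensemble mean: {1.05 ** 100:.1f}")       # ~131.5
    print(f"median individual outcome: {outcomes[50_000]:.4f}")  # ~0.005
    print(f"fraction left with <1% of their stake: "
          f"{sum(w < 0.01 for w in outcomes) / len(outcomes):.0%}")
    ```

    The many-gamblers average looks healthy even while the typical single gambler is wiped out, which is why a dam that must survive storm after storm is playing the serial gambler’s game.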

    To summarise: In all such discussions, there are two things to consider:

    a) The variability of the system and how this affects the probabilities of thresholds being exceeded (the territory of the grey swan)

    b) Any potential gaps in knowledge there may be regarding one’s understanding of the variability (the territory of the black swan).

  11. The PMF is not the maximum possible flood but the maximum probable flood. So the combination of extreme factors used in the analysis is somewhat arbitrary. Thus when a PMF is updated it is often increased because the next hydrometeorologist tends to be a bit more conservative. When used in a quantitative risk assessment a flood frequency of 1 in 10,000 years is sometimes applied to a PMF but that has no real basis, just a need for a number.

    On the basis of John’s useful comments above, the derivation of the PMP and PMF follows the ALARP principle and thus the occurrence of a larger PMP or PMF would indeed be a grey swan.

  12. Just as an aside, when you wrote of the largest flood that could be imagined, you got me thinking. A lot is made of the epistemic nature of black swans, i.e. the unknown unknowns. However, I wonder whether it is imagination rather than information that is sometimes lacking, such that the event could have been conceived of in principle but wasn’t in practice. We could all sit down and invent highly unlikely combinations of factors leading to a potential catastrophe, but we can’t and shouldn’t do so endlessly.

  13. John: As you probably know, engineers use Failure Modes and Effects Analysis (FMEA) to identify potential design flaws. Basically using your imagination! I am fascinated by unusual failure modes for engineering structures. The Vajont Dam in Italy is an interesting case: in 1963 a landslide into the reservoir caused a huge wave to overtop the dam and incredibly destructive flooding downstream. The dam remained intact. Surprisingly, the failure mode had been identified earlier but was not adequately addressed. Nowadays the landslide would have been attributed to climate change.
    https://en.wikipedia.org/wiki/Vajont_Dam

  14. Potentilla,

    Yes, I had exercises such as FMEA in mind, though in my field (transportation) we had our own techniques for garnering expert opinion. I tried to explore the issues encountered during such exercises here:

    When More is Less

    At the end of the day, it was all about opinion, which is where the epistemic uncertainty comes in, I guess. The one thing I will say is this: when one prowls the internet, it becomes all too apparent that there is no shortage of imagination when looking for things climate change could be blamed for.

  15. I anchored in the St Lucie river, off Stuart, Florida. While there I was visited several times by a black swan. The first time was a black swan event but the others were not.

  16. In case you were wondering what the guys on Wall Street might be saying about black and grey swans, there is this:

    https://www.wallstreetoasis.com/resources/skills/finance/black-swan-event

    On the subject of what makes an event a black swan, they say:

    “Black swans are not unpredictable because they are random. They are unpredictable because they lie outside our current scope of reasoning. This is why the term ‘black swan’ is used – these swans existed outside what we thought was possible.”

    On the subject of the futility of complex modelling based upon standard probabilistic measures, they say:

    “One key implication is that complex models relying on mathematical probabilities may be pointless. This is because normal distribution and other standard probabilistic measures do not necessarily apply to these [black swan] events, as they rely on several assumptions that do not hold.”

    On the subject of grey swans, they say:

    “Grey swan events differ from black swans in a few key areas. Primarily, the event is known as a possibility. However, this probability is seen as extremely small. The difference, therefore, is that the outcome is known and predictable.”

    On the relevance of grey swans to extreme weather events, they say:

    “There is still the similarity that a grey swan event has the potential to have a significant and widespread impact. For example, natural disasters are predictable outcomes with a small perceived percentage of occurring.”

    And last, but not least, on the ludic fallacy and roulette, they say:

    “The fallacy is related to basing our decisions on probability as if we were playing a game like roulette.”

    Like roulette, indeed.

  17. The fact that we are here today, debating what constitutes a black swan event and what doesn’t, might all be down to the occurrence of a black swan event way back in our evolutionary history:

    “Like treasured recipes passed down from generation to generation, there are just some regions of DNA that evolution doesn’t dare tweak. Mammals far and wide share a variety of such encoded sequences, for example, which have remained untouched for millions of years.

    Humans are a strange exception to this club. For some reason, recipes long preserved by our ancient ancestors were suddenly ‘spiced up’ within a short evolutionary period of time.

    Because we’re the only species in which these regions have been rewritten so rapidly, they are called ‘human accelerated regions’ (or HARs). What’s more, scientists think at least some HARs could be behind many of the qualities that set humans apart from their close relatives, like chimpanzees and bonobos.

    Many HARs play a role in embryo development, especially in forming neural pathways associated with intelligence, reading, social skills, memory, attention and focus – traits we know are distinctly different in humans than other animals.

    In HARs, these enhancer genes, unchanged for millions of years, may have had to adapt to their different target genes and regulatory domains.

    “Imagine you’re an enhancer controlling blood hormone levels, and then the DNA folds in a new way and suddenly, you’re sitting next to a neurotransmitter gene and need to regulate chemical levels in the brain instead of in the blood,” Pollard said.

    “Something big happens like this massive change in genome folding, and our cells have to quickly fix it to avoid an evolutionary disadvantage.”

    We don’t yet understand exactly how these changes have impacted specific aspects of our brain development, and how they became an integral part of our species’ DNA. Though Pollard and her team are already planning to delve into these questions.

    But their research so far does show just how unique – and unlikely – the evolution of the human brain really is.”

    https://www.sciencealert.com/a-chance-event-1-million-years-ago-changed-human-brains-forever

    Exactly how unlikely is ‘unlikely’? There appears to be no precedent in nature to account for the evolution of the human brain. Are we ourselves the ultimate black swans?

  18. The invocation of the normal distribution is interesting, because I think using a normal distribution as a model for your variable precludes black swan events. In such a case there is no value of the measurement variable that has zero probability, so that any result has some probability of occurring, including absurd ones like humans ten feet tall. The probability estimate asymptotes to zero but never reaches it, if you have enough digits on your calculator.

    What I am trying to get at is that if you based your prediction of what is possible on a normal distribution, then anything is possible. So you would have to truncate it by saying that anything more than say 6 standard deviations from the mean is “effectively” impossible.

  19. Jit,

    Yes, that’s what you get when you try to model epistemic uncertainty with tools from the aleatory toolbox. It’s that alluring curve again.

  20. Jit: You say “…..anything more than say 6 standard deviations from the mean is “effectively” impossible”

    The Hershfield method of estimating PMP was derived by analysing maximum annual 24-hour rainfall from 2700 stations around the world. Hershfield found that the maximum recorded 24-hour precipitation value at each station could be “enveloped” by the mean plus 15 standard deviations calculated from the total record at each station. For shorter durations than 24 hours, the maximum recorded value could be as much as the mean plus 20 standard deviations.

    This demonstrates the extreme natural variability in meteorological parameters. “Normal” rainfall is meaningless when you have such incredible variability. Climate catastrophists take note.

    For more info see Chapter 4 in https://library.wmo.int/doc_num.php?explnum_id=7706

  21. potentilla, those are extraordinary numbers.

    For those who are unfamiliar with standard deviations/probability frequencies, this page on wiki shows the probability of obtaining an observation higher than a particular number of standard deviations (= Z) from the mean of a normal distribution. For example, if Z = 2, then the probability of exceeding it is 0.023 (this gives rise to basic confidence limits and the fabled p<0.05 benchmark, which sums the probability on both sides of the normal curve, i.e. for Z less than -2 and Z more than +2, for a two-tailed test).

    I've done a horrible job of summarising that, but it's difficult to make the explanation both readable and precise!

    Anyway, for 15 standard deviations from the mean, i.e. Z = 15, the probability of exceeding it comes out at about 3.7 E-51, i.e. a decimal point followed by 50 zeroes before the first significant digit.

    The probability of exceeding a Z-score of 20 is smaller still, at about 2.8 E-89 (88 zeroes).

    This appears to be about a billion times less likely than looking for and finding a particular atom in the universe (wiki says there are 10^80 atoms ish).

    I don't know how the Z-scores I have noted compare with such horribly skewed data as rainfall, but they are informative nonetheless. What it says to me is that the benchmark is impossible to exceed, black swan or black hippogriff notwithstanding, unless your understanding of the distribution is hopelessly incomplete.

    In other words I would not recommend taking out insurance against such occurrences!
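
    For anyone wanting to check figures of this kind (including the ‘effectively impossible’ 6 sigma cutoff I suggested above), the upper tail of a standard normal is 0.5 * erfc(Z / sqrt(2)), which a few lines of Python will evaluate:

    ```python
    from math import erfc, sqrt

    # Upper-tail probability P(Z > z) of a standard normal distribution.
    # Double precision holds out to astonishingly small values, so even
    # the Z = 20 tail is still representable.
    def upper_tail(z: float) -> float:
        return 0.5 * erfc(z / sqrt(2))

    for z in (2, 6, 15, 20):
        print(f"P(Z > {z:2d}) = {upper_tail(z):.3e}")
    # P(Z >  2) = 2.275e-02
    # P(Z >  6) = 9.866e-10
    # P(Z > 15) = 3.671e-51
    # P(Z > 20) = 2.754e-89
    ```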

  22. Jit: Your calculations are for a normal distribution. As noted in the WMO manual for PMP estimation: “The mean and standard deviation of the annual series tend to increase with length of record, because the frequency distribution of rainfall extremes is skewed to the right”. This is why rainfall events keep occurring that are greater than anything previously recorded, feeding the catastrophic climate change narrative.

  23. What do you call a group of Black Swans? Last year’s 40.3C recorded at RAF Coningsby would appear to have satisfied all the conditions for being a Black Swan:

    “It’s quite clear that the record set yesterday, being over 5C in excess of the previous record maximum temperature, is a hugely anomalous outlier. Of course, data which only goes back to 1973 is not really adequate to test this hypothesis, but let’s for argument’s sake assume that the hypothesis is correct.

    Yesterday’s brief heatwave as measured at RAF Coningsby then becomes a Black Swan event. In this respect, it is very similar to the anomalous two-day heatwave which occurred in the NW Pacific region on June 27-29, 2021.”

    https://jaimejessop.substack.com/p/uks-40c-one-day-heatwave-is-a-black

    Now apparently, because it’s happened before and because the odds of a warmer than average summer this year have increased considerably, the Met Office is ‘refusing to rule out’ 40C temperatures being recorded again this year. So another ‘black swan’ is a possibility, but technically I suppose it would not be a black swan because it’s happened before and is no longer unprecedented. But then again, 40C being recorded in two summers in a row when prior to that 40C was unheard of: surely that makes it even MORE unlikely? Two black swans in a row – a super black swan. My brain aches. This is getting ridiculous. All I can say is that what seems to be happening now is that alarmists are organising a Hunt for the Black Swan; hence the pressure to find one is all the more intense.

    Met Office Refuse To Rule Out 40C

  24. Jaime,

    You raise some challenging questions. I think the key is to focus on the fact that a black swan is an epistemic phenomenon (in this case it is epistemic uncertainty regarding aleatory uncertainty). But what was the basis for the ignorance? If we rely solely upon a statistical record to determine what can be expected, then we can say that Coningsby was a black swan purely on that basis. But surely there is more to it than that. Conception of the possible is determined by our scientific understanding of the dynamics of climate and the physics of the forcing to which it is being subjected. If Coningsby defied that understanding then we have a more fundamental epistemic basis for calling it a black swan. I have to say that Coningsby was such an extreme outlier that it looks to me like we did not understand the physics after all. Do we now know better, or would we be even more shocked by a recurrence because we are sticking to our previous understanding of the science behind climatic variability? I don’t think statistics alone can help us understand anything. Saying it has happened once so it may happen again does not help us calculate the probabilities. Improving our theories is the road to a better understanding, and if that means recognising that Coningsby was not driven by anthropogenic climate change but by something else instead, then so be it.

  25. Jaime,

    I am reminded of the twitter spat that happened between Schmidt and Cliff Mass over the North West heat dome. Mass had a ‘golden rule’ that Schmidt said was pants. It seemed to me, however, that Schmidt was misunderstanding that the rule alluded to causal sufficiency and not causal necessity. I’m sure I covered this on Cliscep at the time but I can’t remember where exactly.

    https://www.seattletimes.com/seattle-news/seattle-meteorologist-cliff-mass-sparks-controversy-by-diving-into-heat-wave-climate-science/

  26. I’ve also written about that specific spat quite recently, John. Schmidt made the absurd claim that the more extreme the temperatures became, the greater the attribution to climate change. It turns out that Cliff Mass was correct and that atmospheric dynamics was by far the most significant driver of the extraordinary heatwave, with climate change relegated to a very minor supporting role. Cliff and others have been trying to get a paper published on the Pacific NW heatwave, but are having difficulty, of course, finding a publisher. Hopefully, it will be out soon.

  27. Jaime,
    I think you have Gavin’s argument somewhat the wrong way around. He is pointing out that Cliff Mass’s argument is that the bigger the extreme, the smaller the contribution due to global warming. However, this would imply that an extreme that essentially could not happen in the absence of climate change would have a smaller global warming contribution than an event that was more common. This doesn’t really make sense.

  28. ATTP,

    Before we can take this debate any further, you are going to have to go away and learn the basics of causal theory. I suggest that you start by reading Judea Pearl’s ‘The Book of Why’. Once you have done that, you can return to the fray, although I would hope that by then you could already see for yourself why your argument is too ill-formed to be wrong.

  29. ATTP,

    No, I didn’t say that. I said come back when you have done the groundwork. Coming back was the important bit. What is it that you don’t fancy? Is it the coming back or the learning of the basics?

  30. No Ken, I was quoting him directly. This tweet is unambiguous:

  31. Jaime,

    I’ve just noticed the following statement in the Realclimate article that Schmidt references in his tweet:

    “What this shows first of all is that extreme heat waves, like the ones mentioned, are not just ‘black swans’ – i.e. extremely rare events that happened by ‘bad luck’.”

    For Christ’s sake!

  32. Jaime: You note: “It’s quite clear that the record set yesterday, being over 5C in excess of the previous record maximum temperature, is a hugely anomalous outlier.”
    Well, yes it is, but outliers occur remarkably frequently in meteorological and hydrological records. This is basically because records are relatively short. We are lulled into thinking that what has been experienced since, say, 1973 is “normal”. Outliers are caused by physical processes and combinations of processes that haven’t occurred in the short record. A classic example is Hurricane Hazel in Southern Ontario in 1954. Maximum annual flood peaks in that area always resulted from spring snowmelt and the statistics of flood frequency analyses reflected that process. Then along comes a different process that wasn’t expected, causing an extreme outlier in the flood records. I suppose Hurricane Hazel was a black swan, but only because we don’t have 1000 years of records. Then we might have known how frequently a hurricane reaches southern Ontario. If we had 1000 years of records at Coningsby with a stationary climate, it would put that 40.3C temperature into a better perspective.

    I really like John’s observation: “Conception of the possible is determined by our scientific understanding of the dynamics of climate and the physics of the forcing to which it is being subjected…..Coningsby was such an extreme outlier that it looks to me like we did not understand the physics after all. Do we now know better, or would we be even more shocked by a recurrence because we are sticking to our previous understanding of the science behind climatic variability?”

    As they say about financial investments: Past performance is not necessarily indicative of future results.

  33. Yes indeed potentilla, such a short observational record is not a reliable yardstick to assess whether the event is a true black swan or not, which is why I followed up with the sentence:

    “Of course, data which only goes back to 1973 is not really adequate to test this hypothesis, but let’s for argument’s sake assume that the hypothesis is correct.”

    It’s probably the case that most of our observational weather records, on their own, are not sufficient to conclusively determine whether a particular extreme event is a black swan, or not, for that very reason. So we look to other determinants like our existing knowledge of the physics, as John rightly points out. In such circumstances, the designation ‘black swan’ must by necessity almost always be provisional.

  34. Very good Jit, that puts all the 40C+ records into perspective.

    Interestingly, a Met Office paper in 2019, authored by Christidis, McCarthy and Stott following the dubious 38.7C ‘record’ measured at Cambridge Botanical Gardens, found that the chances of reaching or exceeding the 40C threshold anywhere in the UK were once in every 100-300 years in the current climate, and would be once every 15 years by 2100 under the RCP4.5 scenario and once every 3.5 years under RCP8.5.

    https://www.nature.com/articles/s41467-020-16834-0

    Three years later, we got 40C plus at three airports and two other locations. What are the chances of that happening? And now, rather than saying that 40C temperatures are still extremely unlikely, the Met Office is saying they ‘can’t rule out’ 40C AGAIN this year because it’s predicted to be a warmer than average summer. Something is definitely off here. The Met Office are telling us we could be seeing 40C+ two years in a row, in the current climate, when just three years ago they told us that we could expect to see 40C+ every 3.5 years in 2100 under the ridiculously unrealistic RCP8.5 emissions scenario. The Hunt for Black Swans has begun in earnest.
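
    As a naive sanity check in Python (naive because it treats successive years as independent and takes the paper’s present-climate odds at face value):

    ```python
    # If 40C+ summers really were 1-in-100 to 1-in-300 events in the
    # current climate, the chance of seeing them in two consecutive
    # years would be vanishingly small (assuming independent years).
    for return_period in (100, 300):
        p_year = 1 / return_period
        print(f"1-in-{return_period}: two years running = {p_year ** 2:.1e}")
    # 1-in-100: two years running = 1.0e-04
    # 1-in-300: two years running = 1.1e-05
    ```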

  35. Jaime,

    Judging from ATTP’s response, it looks like my somewhat brusque advice was taken as a simple ‘sod off!’ It wasn’t meant that way. On the contrary, I would be more than happy to discuss the issues with Dr Rice, but first there has to be an understanding on both sides that precise causal answers about climate events require precise causal questions. In saying this, I am directly quoting Professor Friederike Otto:

    “The answer to such an open question as have CO2 emissions caused the 2003 European heatwave is thus dramatically affected by (i) how one defines the event 2003 European heat­wave and (ii) whether causality is understood in a necessary or sufficient sense. Precise causal answers about climate events thus require precise causal questions.”

    The reason Cliff Mass and Gavin Schmidt got into a spat is that they were attempting to settle the debate without clarifying the distinction between necessity and sufficiency. Dr Rice’s attempt to support Schmidt was just adding to that confusion. I wouldn’t mind, but I’ve been through this with him before and he hasn’t shown the slightest inclination to take on board anything I’ve said. The presupposition that a climate sceptic has nothing to teach anyone regarding uncertainty seems very strong within those who enjoy the comfort of consensus. This makes it very difficult to engage in a good-faith debate. Instead, one gets the sort of bluster, bile and bullshit that Willard recently left on this blog. I’m done with it.

  36. Andy,

    Thanks for the plug, but reading back through the comments for my Brief Primer has only reminded me of just how painful it was for me when I tried to take a causal theory approach in a debate over at ATTP. To cut a long story short, the response was hostile and dismissive, culminating in Willard deleting my last comment and Dr Rice then coming on here at Cliscep to assert that Willard was merely upholding the high moderation standards they have over at ATTP.

    So I’m guessing no one from ATTP will be following your link, and even if they did, they would have nothing sensible to add.

  37. I have a confession to make about that post of John’s on 14th March 2020. I suggested it should be our WordPress-anointed top post for the foreseeable future. (This was under our old interface/theme designed by Ian Woolley, who was pretty new to custom CSS. But that’s a whole other story.) Anyway, I thought John’s post on causation could stand there as we all went very quiet, quite understandably, during the pandemic. What did we know about viruses, antivirals, lockdowns, masks, vaccines etc that was worth talking about? We would surely go very quiet and John’s post would stand triumphant for however long it took.

    As failed predictions go, I judge it to be one of my best. The X, Y and Z, their arrows and their numerous siblings, in my model of Cliscep, were sadly lacking.

  38. Richard,

    I’ve always thought of we Cliscepers as being rebels without a causal model, so I wouldn’t beat yourself up over any failed predictions. I just remember that my ego was eternally grateful for the publicity resulting from your decision.

    On a related issue, you will have noted that Professor Fenton has recently been cancelled, by having a proposed presentation at an NHS conference rejected after it had initially been accepted:

    https://wherearethenumbers.substack.com/p/blasphemers-begone

    Why is that particularly relevant here? Because his talk was to be on the Bayesian foundation of structural causal modelling and its application in the diagnosis of chronic conditions.

  39. John, you quote Otto who talks about “climate events”. What is a climate event? I have no idea. I know what climate is, and I know what a weather event is. Otto calling an episode of extreme weather a ‘climate event’ to my mind biases the question a priori by conflating weather with climate, thus presupposing that there is some ‘causative’ link between climate and weather.
    As I understand it, it is impossible anyway to establish any causative link between an individual weather event and a changing climate – in this case a rise in mean surface temperature. ‘Attribution science’ merely establishes the existence of an enhanced probability of the event occurring in the present warmer climate vs. a hypothetical cooler climate without GHG forcing. That’s the best they can do. The rise in ambient temperature ’causes’ the probability to increase but it does not directly cause the event as we understand causation traditionally. OTOH, atmospheric conditions at the time of the event can be said to directly cause the weather event in question, through direct physical causative processes.
    It is also the case that the outcome of the attribution analysis depends very much upon how you choose to specifically define the event, e.g. maximum daytime temperatures over x number of days, occurring with a specified area etc.
    I don’t know if Ken will come back and answer my response to his initial comment but Schmidt’s tweet is not really open to interpretation – he really did say that the hotter it gets, the greater the attribution to climate change. That to me is utterly absurd.

  40. Viewing my local weather forecast I see numerous future hours characterized by predicted lightning strikes. But when those hours come and go, not a hint of any precipitation appears and my garden withers. This cannot be an Anatid of any hue, so could we refer to these false predictions as flamingos, preferably of the pink variety?

  41. Jaime,

    I think you are quite right in picking Otto up on the use of the term ‘climate event’. They are weather events that are, through the magic of attribution, suggestive of climatic trends. It’s another example of the abuse of language that Mark identified in his latest post. And, of course, it is all part of the exploitation of the availability heuristic that I accused the IPCC of encouraging in AR5.

    You can also discuss this in terms of causal theory. I have noted that Otto and her colleagues always emphasise the probability of necessity (thereby focusing upon the climate change signal) when reporting upon extreme events and never the probability of sufficiency (which would remind everyone of the natural variability signal). Both are required for a full analysis, and Otto knows this:

    Friederike Otto, What’s Your Game?

  42. Alan,

    Good question. What do we call the event that everyone predicted but did not happen? As in, no one saw that one not happening. Could that be the Mute Swan?

  43. John,

    I’m not that familiar with using the ‘necessity’ and ‘sufficiency’ tags and prefer to look at attribution analyses in terms of the feasibility of assigning climate change as a significant, even dominant contributor to the occurrence of the event vs. other physical/meteorological/geographical factors which cannot be linked to a long term increase in global/regional mean surface temperature ostensibly attributable to GHG forcing. Certainly, with the Pacific NW heatwave, there was a huge lack of sufficiency and it’s arguable whether there was even the necessity to invoke climate change. For instance, a study found that recent warming contributed only 6.2% to the observed extreme temperatures:

    “So, atmospheric circulation (dynamics) accounts for 82.4% (>13.4C) of the observed anomalous high temperatures during June 26-30, soil moisture deficits account for 10.2% and recent regional warming (concurrent with the global warming trend) accounts for just 6.2% (0.9C).”

    https://jaimejessop.substack.com/p/revisiting-the-exceptional-pacific

    Then another research paper (subsequently published in a peer-reviewed journal) demolished the WWA rapid attribution study. The authors seem to be arguing that the attribution study did not even demonstrate necessity:

    “Given that an in-sample GEV distribution is a poor fit to the GHCN data and that the combined effects of the atmospheric blocking pattern and anomalous AR were likely very rare if not unique, we conclude that there should be little confidence in attribution statements based on in-sample GEV formulations. Philip et al. (2021) argued that the temperatures reached during the PNW heatwave were “virtually impossible” without climate change. However, this conclusion is not supported from a purely Granger causal inference perspective (Ebert-Uphoff & Deng, 2012; Hannart et al., 2016). Granger causality in this sense means that knowledge of greenhouse gas concentrations would inform about the probability of the 2021 heatwave temperatures. But due to the failure of the non-stationary GEV methodology to construct a well-fit in-sample distribution that includes the 2021 temperature values, and the fact that the out-of-sample distribution does not reach the magnitude of the 2021 event, no statement about the role of greenhouse gases should be made from this technique. The statistical analysis presented here only supports an attribution statement that these temperatures were virtually impossible under any previously experienced meteorological conditions, with or without global warming.”

    https://jaimejessop.substack.com/p/revisiting-the-exceptional-pacific-084

  44. Jaime,

    Interesting stuff. I was also interested to see the name Hannart cited. He has co-authored with Judea Pearl. In fact, if memory serves, he may be the guy who demonstrated that a PN (probability of necessity) is equivalent to an FAR (fraction of attributable risk) if certain reasonable assumptions are made. That’s why I keep saying that Otto’s FARs are tantamount to calculations of the probability of necessity.

  45. Jaime,

    Actually, I’ve just realised that the Hannart et al 2016 paper cited in your second quote is the one co-authored with Judea Pearl and Friederike Otto:

    Click to access r451-reprint.pdf

    In fact, I took the Otto quote that I gave earlier from that paper. Another prize quote I could have given is this:

    “We have shown, with simple examples, that it is important to distinguish between necessary and sufficient causality. Such a distinction is, at present, lacking in the conventional event attribution framework. Any time a causal statement is being made about a weather or climate-related event, part of the audience understands it in a necessary causation sense, while another part understands it in a sufficient causation sense, which can give rise to many potential misunderstandings. Introducing the clear distinction may thus clarify discussions.”

    Are you still there Dr Rice? Do you see what I’m getting at now?

  46. John:

    I’ve always thought of we Cliscepers as being rebels without a causal model

    Haha. With you as our James Dean huh?

    my ego was eternally grateful

    CSS emergency wrangling and Ridgway ego boosting. It’s what I do.

    The point about the latest Fenton cancellation is interesting and infuriating in equal measure.

    I did read that with further incredulity. I thought Jaime might be right about it being a set-up from the start. But that’s another causal model that can’t be reasonably tested. I decided to keep my putative arrows in their quiver.

  47. John, if Mute Swans are things or events that no one (or few) saw not happening could Climate Armageddon turn out to be a prime example? Then we happy band of sceptics will have to be.

  48. Alan,

    The problem is that, if it fails to happen, there will be those who will say it was only because of what we did to prevent it. The millennium bug is a favourite example in this regard. Many say that it proved to be a false alarm. Others scoff at such a suggestion by pointing to all of the effort expended in preventing it.

    Except that there is oodles of evidence that Y2K proved to be a fuss over nothing, and no evidence to the contrary.

  49. John, I suspect that Robin may have something to say about Y2K (I am agnostic on the subject).

  50. Mark,

    Given his expert authority, anything Robin might want to say here on the subject would be most welcome. In the meantime, there is this Guardian article on the subject in which Robin’s key role at the national level is made clear:

    https://www.theguardian.com/technology/2000/jan/04/y2k

    My own rant on the subject can be found in the comments section for the following:

    Another Miracle Just Happened

    All I will say here is that I took the problem every bit as seriously as Robin would have wanted me to and took personal charge of the Y2K project within the software development company I worked for. My experience left me with the conviction that the problem had been hyped, and assertions that disaster had been averted due to the collective efforts of the software development community were placing far too much faith in the efficacy of the average software test regime.

  51. Robin,

    Thank you for the two papers. Both made for very interesting reading.

    I guess my issue is that, given the degree to which the world is computerised and the vast scale of the Y2K test programme, it is difficult to judge the significance of the volume of faults found, as the law of truly large numbers will apply. Our company, quite rightly, took the problem very seriously, but found nothing. How typical was that experience? I doubt that is a question that can be easily answered. What was the expected test failure rate, and what was it in practice? To what extent, therefore, was an expected disaster averted by corrective action?

    I am not one of those claiming it was a waste of time. I simply wonder to what extent a great deal of necessary effort was expended in quality assurance rather than quality control.

  52. Just to add to the above, I have re-read some of the comments I have previously posted on this subject and I see that I have suggested the Y2K efforts proved to be a waste of money. This is a ridiculous assertion. A precautionary approach was always necessary and was always going to cost. However, I still suspect most of the expenditure was on proving the problem wasn’t as extensive as feared. That was my experience, at least.

  53. John: I’ve got nothing to add to the paper I wrote in 2011. Frankly, I’m bored by Y2K.

  54. Robin,

    That’s perfectly understandable. We’ll leave it there then.

  55. Mark,

    Just for your benefit, the two papers provided by Robin demonstrated that ‘fuss over nothing’ is definitely the wrong language to use. I should learn to be more careful. What I had meant to say is that the problem proved not to be as extensive as was first feared and that was the main reason that the millennium was a relative non-event. The evidence is the volume of negative test results returned, but that isn’t to deny the importance of the positive results.

  56. No, the reason the outcome was not as bad as some commentators (mainly the media) had suggested was that warnings were heeded and a vast amount of investigative and remedial work was carried out.

    (I’d hoped to avoid another Y2K discussion.)

  57. Robin,

    I thought you had already declared your innings. It wasn’t my intention to cause you to bat again. My aside to Mark was intended as a courtesy in his direction. I am happy to let you have the last say, if only out of respect for your obvious expertise and evident frustration.

    And yet… 🙂

  58. Robin,

    And yet, looking at this from a software quality assurance metrics viewpoint, it seems there may still be questions to be answered before claims regarding scale of risk reduction can be made. Normally when approaching a software test programme one may have a view as to what the likely defect density is prior to testing. Given certain assumptions regarding adequacy of test coverage one may use the numbers of defects found as an indication of how many are likely to remain (alternatively, one may turn this around and use the numbers of defects found as an indicator of the adequacy of the test regime). So as far as Y2K is concerned, the questions I have are:

    a) What was the anticipated defect density prior to testing?

    b) What was the test failure rate in practice?

    Without metrics, the arguments regarding whether or not the outcome was attributable to testing effort or the scale of the initial risk will never amount to anything more than anecdotal. If I read your paper correctly (a big ‘if’) you have demonstrated that the problem was real and that the outcome would have been worse without the considerable efforts expended, but that alone does not settle the issue.
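
    To illustrate the sort of metric I have in mind, here is a toy capture-recapture (Lincoln-Petersen) calculation, one standard way of estimating residual defects from defects found. The figures are entirely hypothetical:

    ```python
    # Capture-recapture (Lincoln-Petersen) estimate of total defects:
    # two independent test teams examine the same software, and the
    # overlap in their findings lets us estimate the full population.
    def estimate_total_defects(found_by_a: int, found_by_b: int,
                               found_by_both: int) -> float:
        # Lincoln-Petersen: N ~ (n_A * n_B) / n_AB
        return found_by_a * found_by_b / found_by_both

    n_a, n_b, n_both = 30, 24, 18  # hypothetical test results
    total = estimate_total_defects(n_a, n_b, n_both)
    found = n_a + n_b - n_both
    print(f"estimated total defects: {total:.0f}, still latent: {total - found:.0f}")
    ```

    With numbers of that kind in hand, the question of how much risk the testing actually retired becomes answerable rather than merely anecdotal.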

    As far as I am concerned, there was never any question regarding the existence of a threat – the problem was very well defined. Furthermore, software quality assurance is not just about finding bugs; it is about demonstrating the fitness for purpose of the end product. That fitness for purpose had to be demonstrated rather than assumed and, in that respect, the number of defects found isn’t the only measure of the usefulness of the exercise. I have been guilty of lazy rhetoric in discussing this before now on this website, and for that I apologise. Even so, I suspect that the causation here is still an open question and it would probably require a good Bayesian network supported by better data in order to answer it.

    I’m gratified that you think so, Robin. I wouldn’t want you to think that I was mindlessly repeating the fallacious argument that the lack of drama on the day proves it was never an issue, although I concede that I may have let myself come across that way. The truth is that I took it all very seriously. As soon as I read about Y2K, I went back to my desk and drew up a detailed proposal to present to senior management. They just ignored it, so I used what authority I had as the quality manager to proceed anyway. I personally saw to it that all project managers developed detailed test plans, compliant with government advice, to address the full family of related problems. I then oversaw their implementation and audited the test results.

    At no stage before, during or since did I question the validity of the exercise, although the null results did lead me to question whether the scale of the problem had been overestimated. No doubt, if my company’s software products had entailed more date-handling, or had been based on more legacy code, I might have come away with a different impression. Either way, I would still just be peddling anecdote and, in the absence of better metrics, I’m afraid that might be where we all must now stand.

    To get back on topic, I would like to draw the parallel with climate change. If the warming does not prove as dramatic as predicted by the models, do we conclude that the models were wrong or put it down to Net Zero efforts? If it comes to that, I’m sure we would struggle to provide a conclusive answer, but I can’t imagine that an admission that the models were wrong would be on the cards.

  60. Just to qualify what I have said above regarding tipping points as grey rather than black swans: if an identified tipping point were to happen much sooner than was thought possible, it would then qualify as a black swan. However, even that possibility of surprise has now been ruled out by scientists at Bangor University:

    “Catastrophic climate ‘doom loops’ could start in just 15 years, new study warns”

    https://www.livescience.com/planet-earth/climate-change/catastrophic-climate-doom-loops-could-start-in-just-15-years-new-study-warns

    More modelling, I’m afraid. It’s difficult nowadays to conceive of a doom that hasn’t already been thought of and modelled.

  61. John,

    ‘Tipping points’ is becoming so passé. It can’t be long now before the Guardian informs us that, in order to be more scientifically accurate, it is updating its style guide to recommend that ‘climate tipping points’ be referred to more appropriately as ‘catastrophic climate doom-loops’.

  62. Or perhaps they’re destined just to become a tasty snack? How long before we see packets of extra hot and spicy ‘Catastrophic Climate Doom Loops’ go on sale in the UK?

  63. Jaime,

    Thank you for the link. I found it a very interesting read. The closing paragraph seemed particularly germane to my article since it highlights the importance of current gaps in knowledge:

    “Dirk Sachse from the GFZ adds, ‘Although there is increasing evidence that sudden climatic changes have occurred in the past, current climate models cannot reproduce such abrupt shifts in the mean state in the tropical Pacific. This highlights that the understanding of the underlying mechanisms is still limited. In the context of anthropogenic climate change, a better understanding of the drivers and consequences of the complex dynamics of the mean state of the tropical Pacific is of great importance. For this, the integration of paleoclimatological data into modern climate models plays a crucial role’.”

  64. That’s misinformation John, maybe even disinformation, which has the potential for real societal harm. Man-made climate change is settled science, as we know. Thankfully, the Online Harms Bill is being drafted so as to put an end once and for all to such climate denier monkey business.

  65. Jaime/John – from that paper –
    “Against the backdrop of global warming, El Niño is expected to bring record-breaking high temperatures and various extreme climate events globally such as droughts, floods and wildfires, which will significantly affect the lives and well-being of millions of people.”

  66. dfhunter,

    Against a backdrop of global cooling during the Little Ice Age, THE most severe drought and heatwave ever to affect northern Europe occurred in 1540. It’s unclear whether El Niño or La Niña conditions were prevalent at the time, but undoubtedly ENSO must have played a part.

    “Occurring during a stretch of unusually warm summers in the midst of Europe’s “Little Ice Age,” a period of global cooling and extreme weather that affected the continent between the 14th and 19th centuries, the 1540 drought’s heat was so extreme that even state-of-the-art climate models could not predict it when fed nearly 1,200 years of climate data.”

    https://www.smithsonianmag.com/history/this-summers-drought-is-europes-worst-in-500-years-what-happened-last-time-180980711/

  67. From your link – “The resulting “late harvest,” or Spätlese, wine was deliciously sweet. A bottle of the 1540 vintage became so coveted that Swedish soldiers tore apart the German city of Würzburg looking for a barrel of it almost 100 years later; the previous inhabitants had hidden the wine in a wall. In 1540, however, the drink was cheap and abundant. Chronicler Hermann von Weinsberg, writing in Cologne, described people all over the city lying in the gutters, dead drunk, “like pigs.””

    Sounds familiar for some reason – “Happy Hour”?

  68. “This highlights that the understanding of the underlying mechanisms is still limited. In the context of anthropogenic climate change, a better understanding of the drivers and consequences of the complex dynamics of the mean state of the tropical Pacific is of great importance. For this, the integration of paleoclimatological data into modern climate models plays a crucial role.”

    For “mean state”, should I read “natural climate change”?

  69. Would a revelation that the USA hoards alien spacecraft be a swan of black or grey hue (though no longer mute)? For me, the unlikelihood of this is shown by the absence of any bragging from Trump about the matter, or of documents being shown to favoured guests.

  70. Alan,

    The US military has been playing up this captured alien stuff since the early fifties. No one else is, which is why alien craft only ever crash near US military installations.

  71. Black swans just keep happening nowadays. I can only assume that hens have started developing molars. An interesting article from Steve Kirsch:

    “Jay Bonnar saw a lot more black swans than is humanly possible if the CDC is telling the truth
    Basically, Jay Bonnar saw 15 black swans recently even though the CDC claimed that black swans (people who died unexpectedly from the COVID vaccine) are really rare.

    There is no way that the CDC can gaslight this by pointing to studies claiming that there are a small number of black swans out in the wild.

    It happened and it’s verifiable: Jay saw 15 black swans.

    In fact, it’s even worse. Four of the people who died were double-black swans (died on the same day as the vaccine); these are 180X rarer than just black swans!

    The CDC essentially said the vaccines are safe which means they kill fewer than 1 person per million doses. So Jay’s vaccinated friends, with 14,000 doses, should have experienced .014 deaths according to the CDC, but instead ended up with over 1,000 times that number of deaths. The chance of that happening just by random “bad luck” to Jay is 1.2e-40 for Jay to see 15 or more black swans (this is given by the survival function poisson.sf(15-1, .014)).

    Jay’s story is basically unexplainable if the CDC is telling the truth and the COVID vaccines are perfectly safe.

    The CDC is lying. The COVID vaccines are unsafe. It’s a mathematical certainty.

    In short, Jay saw way too many black swans for us to believe the CDC that black swans are really as rare as they claim.

    Anyone can verify his anecdote (because he lists all the names) and anyone can do the math.

    Conversely, based on my estimates of 1 death per 1,000 doses, we get poisson.sf(14, 14)=0.42956328717262765 which means my explanation is perfectly reasonable whereas the FDA’s is statistically impossible.

    If I’m wrong, simply explain how Jay could see so many black swans among his 7,500 friends.”

    https://kirschsubstack.com/p/jay-bonnars-anecdote-is-statistically

    Apart from the (to me) astounding fact that someone can have 7500 friends, this sounds quite reasonable. I can count mine on two hands!
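
    For what it’s worth, the two survival-function figures in the quote do reproduce. Here is a quick check using scipy; the inputs (14,000 doses, the two posited death rates, 15 observed deaths) are all Kirsch’s assumptions, not established facts.

    from scipy.stats import poisson

    doses = 14_000
    mu_cdc = doses / 1_000_000   # expected deaths at the CDC-implied 1 per million doses
    mu_kirsch = doses / 1_000    # expected deaths at Kirsch's posited 1 per 1,000 doses

    # P(X >= 15) under each assumed rate:
    print(poisson.sf(15 - 1, mu_cdc))     # ~1.2e-40, as quoted
    print(poisson.sf(15 - 1, mu_kirsch))  # ~0.43, as quoted

    The calculation itself is mechanical; everything turns on whether the inputs are sound.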

  72. Jaime,

    Obviously, the credibility of the 7,500 figure is crucial to the significance of the probabilities. So we have to ask ourselves which is the more credible:

    a) That someone knows 7,500 people with whom he is in direct contact, and is sufficiently able to monitor their status that sudden deaths will readily be drawn to his attention.

    b) That someone has ready access to news of sudden deaths within a much wider community (i.e. via various news outlets) and can create plausible narratives that portray these people as ‘friends’ (six degrees of separation and all that). Once one has collated such a group of deaths reported in the news, one can work backwards from any presumed level of vaccine lethality to calculate what the sample size would have to be (see the sketch below). And, hey presto, the sample size that fits the narrative happens to be 7,500.

    In the second scenario the actual death rate is very low, because the deaths are drawn from a very large population, but it has been made to look high by the posited sample size of 7,500.

    The second scenario is obviously very cynical, but which of the two theories actually applies would require further detective work to determine; no amount of statistical analysis could decide between them. All I will say is this: one of my relatives died four years ago from sudden death syndrome. He was his father’s only son, and yet it wasn’t until two years after the death that the father learnt of the tragedy. In real life, keeping tabs on even the closest of blood relatives can be difficult when estrangement is involved, so keeping tabs on 7,500 loosely-known people via the internet seems a tall order to me.
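
    The back-calculation in scenario (b) is trivial, which is rather the point. A sketch in Python, with every number chosen purely for illustration:

    deaths_collated = 15        # sudden deaths gathered from news reports - illustrative
    presumed_vdfr = 1 / 1_000   # posited deaths per dose - illustrative
    doses_per_person = 2        # assumed doses per 'friend'

    # Expected deaths = friends x doses_per_person x presumed_vdfr, so the
    # sample size that makes the collated deaths look consistent is:
    friends_required = deaths_collated / (doses_per_person * presumed_vdfr)
    print(friends_required)     # 7500.0 - and, hey presto, there is our figure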

  73. John. A tall order indeed. Contributors to Cliscep have disappeared. What has happened to them? Tired of commenting? Covid? Who knows?

    John, yes, it stretches credulity that somebody could be monitoring the life events of so many ‘friends’, which does call into question the validity of Kirsch’s anecdote. But we do have studies which independently corroborate a figure for vaccine dose fatality rate (vDFR) of approximately 1 per 1,000, so maybe Kirsch’s anecdote is legitimate, or maybe it isn’t and it just happened to coincide with independent analyses. Who knows.

    “On the global scale, given the 3.7 million fatalities in India alone, having vDFR = 1 % (Rancourt, 2022), and given the age-stratified vDFR results presented in this work, it is not unreasonable to assume an all-population global value of vDFR = 0.1 %. Based on the global number of COVID-19 vaccine doses administered to date (13.25 billion doses, up to 24 January 2023, Our World in Data), this would correspond to 13 million deaths from the COVID-19 vaccines worldwide.”

    https://denisrancourt.ca/entries.php?id=126&name=2023_02_09_age_stratified_covid_19_vaccine_dose_fatality_rate_for_israel_and_australia
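
    (The quoted global figure is just the straight multiplication: 13.25 billion doses × 0.001 deaths per dose ≈ 13.25 million, hence the “13 million deaths”.)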

  75. Jaime,

    If the Jay anecdote isn’t legit, then I am quite sure that correlation with independent calculations of vDFR wouldn’t be a coincidence. 🙂

    As for those independent calculations, I have to admit I haven’t spent a lot of time thinking about them. However, what struck me between the eyes was Fenton’s analysis of how such calculations are critically dependent upon one’s definition of ‘vaccinated’. It is quite an obvious problem when you think about it, and the fact that authorities were making bold claims for vDFR, in the absence of an agreed definition, became a huge red flag to me. It’s at that point that you realise that they are either being grossly incompetent in failing to nail down their definitions, or they’re bullshitting. It’s not that I accept the very high vDFR values (I would have to do a lot more reading before I got to that point), it’s just that I already have no confidence in the very low values.
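
    To see why the definition matters so much, consider a toy model. This is my own construction for illustration, not Fenton’s actual analysis; it uses one common definitional convention (not counted as ‘vaccinated’ until 14 days after the dose) and invented numbers.

    n_vaccinated = 1_000_000
    true_rate = 1 / 1_000     # true deaths per vaccinated person - invented
    early_fraction = 0.5      # fraction of those deaths within 14 days of a dose - invented

    true_deaths = n_vaccinated * true_rate         # 1,000 deaths in total
    misclassified = true_deaths * early_fraction   # recorded as 'unvaccinated' deaths
    apparent_rate = (true_deaths - misclassified) / n_vaccinated

    print(f"true: {true_rate:.3%}, apparent: {apparent_rate:.3%}")
    # true: 0.100%, apparent: 0.050% - same deaths, different definition

    Same deaths, half the apparent rate; and the choice of definition did all the work.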

  76. Alan,

    Yes, I have to admit that there isn’t the breadth of contribution that there used to be when I first joined Cliscep. Looking at other blogs, that seems to be a general problem. Meanwhile, I see my latest offering on the National Risk Register has attracted very little interest, judging by the viewing statistics. Perhaps I shouldn’t have put the words ‘Risk Register’ in the title. Maybe the rest of the world is not as fascinated by them as I obviously am 🙂
