I doubt that there are many aficionados of this website who have not heard the name ‘Oreskes’. This is because Professor Naomi Oreskes is an academic who specialises in debunking climate change sceptics. Yes, she takes people like you and cruelly exposes your naïve acceptance of the supposed obfuscation, distortions and downright lies issued by Big Oil. You can read all about it in her infamous exposé, Merchants of Doubt, where she documents in forensic detail how Big Oil employed a panoply of propaganda techniques taken straight from the tobacco industry ‘playbook’. And one reason why she is deemed qualified to do this is because she has a firm grounding in the history of statistics and how it should be employed.

Except the truth is that she has neither. In fact, such is her profound lack of understanding in those two vital areas that one has to wonder why anyone listens to her at all. And it isn’t as though this critical shortcoming has hitherto been hidden from the public gaze. For many years she has been writing articles that others, who are far better qualified to comment, have been quick to destroy. And yet she is still here, exhibiting the same levels of pseudo-expertise that she claims exist only within the ranks of the climate change sceptics. How does that work?

Today, and purely for your entertainment[1], I wish to take you back to 2015, to provide you with a prime example of her seriously flawed understanding of how statistics works. Furthermore, I will demonstrate how, even then, it wasn’t difficult to find people who were able to expose her junk wisdom. So prepare for a masterclass in debunking, not from me but from a certain Nathan Schachtman, Esq., PC, who for over 40 years has specialised in the application of statistics and causal analysis to scientific and medical legal issues.

In typical fashion, Oreskes set out her intention to deprecate the climate change sceptic by giving her article[2] the title, ‘Playing Dumb on Climate Change’. However, the statistically trained lawyer Schachtman was having none of it and responded with his own article, ‘Playing Dumb on Statistical Significance’. So what is so dumb about the Oreskes article? I’ll let Schachtman explain:

Oreskes wants her readers to believe that those who are resisting her conclusions about climate change are hiding behind an unreasonably high burden of proof, which follows from the conventional standard of significance in significance probability. In presenting her argument, Oreskes consistently misrepresents the meaning of statistical significance and confidence intervals to be about the overall burden of proof for a scientific claim.

To illustrate Oreskes’ intentions, Schachtman provides the following quote from her article:

Typically, scientists apply a 95 percent confidence limit, meaning that they will accept a causal claim only if they can show that the odds of the relationship’s occurring by chance are no more than one in 20. But it also means that if there’s more than even a scant 5 percent possibility that an event occurred by chance, scientists will reject the causal claim. It’s like not gambling in Las Vegas even though you had a nearly 95 percent chance of winning.

And to explain why this misrepresents the meaning of statistical significance and confidence limits, he points out the following:

Although the confidence interval is related to the pre-specified Type I error rate, alpha, and so a conventional alpha of 5% does lead to a coefficient of confidence of 95%, Oreskes has misstated the confidence interval to be a burden of proof consisting of a 95% posterior probability. The “relationship” is either true or not; the p-value or confidence interval provides a probability for the sample statistic, or one more extreme, on the assumption that the null hypothesis is correct. The 95% probability of confidence intervals derives from the long-term frequency that 95% of all confidence intervals, based upon samples of the same size, will contain the true parameter of interest.

To add to this, Schachtman points out:

[A]lthough statisticians have debated the meaning of the confidence interval, they have not wandered from its essential use as an estimation of the parameter (based upon the use of an unbiased, consistent sample statistic) and a measure of random error (not systematic error) about the sample statistic.
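
That long-run frequency reading is easy to check numerically. Below is a minimal sketch of my own (not Schachtman’s): it draws repeated samples from a population with a known mean and counts how often the standard 95% interval actually captures that mean. The population parameters and sample size are arbitrary illustrations.

```python
import random
import statistics

# Simulate the frequentist meaning of a 95% confidence interval:
# across many repeated samples, roughly 95% of the intervals computed
# this way should contain the true (here, known) parameter.
TRUE_MEAN, TRUE_SD = 10.0, 2.0   # arbitrary illustrative population
N, TRIALS = 50, 10_000           # sample size and number of repetitions

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / N ** 0.5
    lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
    if lo <= TRUE_MEAN <= hi:
        covered += 1

print(f"Coverage: {covered / TRIALS:.3f}")  # comes out close to 0.95
```

Note that the 95% attaches to the procedure, not to any single interval: each interval either contains the true mean or it does not, which is exactly why it cannot serve as a posterior probability for a hypothesis.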

All of this might seem a bit convoluted but the essence is this: Oreskes has taken a concept that represents the likelihood of the data (i.e. a measure of random error about the sample statistic) and interpreted it as a posterior probability that the hypothesis is true. It’s a classic transposition of the conditional, i.e. treating P(E|H) as if it were P(H|E). As such, it’s the same gaffe that Professor Fenton pointed out when the IPCC stated in the AR5 Summary for Policymakers that there was at least a 95% degree of certainty that more than half the recent warming is man-made. In fact, what the body of the report had actually said was that the probability of observing the recent warming was only 5% if AGW was not deemed to be contributing over half. That is a very different statement. To turn the latter into the former requires one to take into account the a priori probability that the climate models are a faithful and accurate representation of the warming processes. And that’s a very open question, despite what Oreskes would have you believe.
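
To see in numbers how far apart P(E|H) and P(H|E) can be, here is a minimal Bayesian sketch. The 5% figure mirrors the report’s actual statement; the likelihood under the rival hypothesis and, crucially, the prior are purely illustrative assumptions, which is rather the point: without committing to a prior, the 5% licenses no posterior probability at all.

```python
# Transposition of the conditional: P(E|H0) is not 1 - P(H0|E).
# H0: AGW contributes less than half the warming; E: the observed warming.
p_E_given_H0 = 0.05   # the report's actual claim: P(E | H0) = 5%
p_E_given_H1 = 0.90   # illustrative assumption: good model fit under H1
p_H0 = 0.50           # illustrative prior: the 'very open question'

# Bayes' theorem: P(H0|E) = P(E|H0) * P(H0) / P(E)
p_E = p_E_given_H0 * p_H0 + p_E_given_H1 * (1 - p_H0)
p_H0_given_E = p_E_given_H0 * p_H0 / p_E

print(f"P(H0|E) = {p_H0_given_E:.3f}")   # ~0.053 with these inputs
# With a sceptical prior of p_H0 = 0.9 the posterior becomes ~0.333:
# the '95% certainty' only follows once a prior has been assumed.
```

With a 50:50 prior the numbers happen to land near the IPCC’s headline figure; shift the prior and the posterior shifts with it. The quoted certainty is doing the work of an unstated prior.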

However, the catalogue of Oreskean error does not end there, since she dares to venture further into the expert domain of the statistical lawyer, with the following:

But the 95 percent level has no actual basis in nature. It is a convention, a value judgment. The value it reflects is one that says that the worst mistake a scientist can make is to think an effect is real when it is not. This is the familiar “Type 1 error”…The fear of the Type 1 [false positive] error asks us to play dumb; in effect, to start from scratch and act as if we know nothing. That makes sense when we really don’t know what’s going on, as in the early stages of a scientific investigation. It also makes sense in a court of law, where we presume innocence to protect ourselves from government tyranny and overzealous prosecutors — but there are no doubt prosecutors who would argue for a lower standard to protect society from crime.

Once again, the lawyer with the statistics background has to remind Oreskes that you cannot equate the 95% coefficient of confidence in statistical theory with the legal standard known as “beyond a reasonable doubt”:

The truth of climate change opinions do not turn on sampling error, but rather on the desire to draw an inference from messy, incomplete, non-random, and inaccurate measurements, fed into models of uncertain validity. Oreskes suggests that significance probability is keeping us from acknowledging a scientific fact, but the climate change data sets are amply large to rule out sampling error if that were a problem. And Oreskes’ suggestion that somehow statistical significance is placing a burden upon the “victim,” is simply assuming what she hopes to prove; namely, that there is a victim (and a perpetrator).

The bottom line is that if you want to talk about Type I errors and burdens of proof, you need a much better grasp of statistical concepts than Oreskes appears to possess. She goes on to try to rescue the situation with arguments that look Bayesian in nature but even then she falters badly by choosing passive smoking as her example. Schachtman has had a long and successful career handling passive smoking claims in the courts, and he takes no prisoners in dismantling her ‘scientific’ arguments – but I’ll let you read that part for yourselves. What I will do here instead is leave you with Schachtman’s own closing statement:

I will leave substance of the climate change issue to others, but Oreskes’ methodological misidentification of the 95% coefficient of confidence with burden of proof is wrong. Regardless of motive, the error obscures the real debate, which is about data quality. More disturbing is that Oreskes’ error confuses significance and posterior probabilities, and distorts the meaning of burden of proof. To be sure, the article by Oreskes is labeled opinion, and Oreskes is entitled to her opinions about climate change and whatever.  To the extent that her opinions, however, are based upon obvious factual errors about statistical methodology, they are entitled to no weight at all.

Ooof! That’s gonna hurt in the morning.

Epilogue

On his blog, dated 28th February 2023, Schachtman recounts how our intrepid expert, Professor Oreskes, sought to provide expert testimony in support of Michael E. Mann’s defamation case against National Review magazine, the Competitive Enterprise Institute (CEI), and Mark Steyn. Despite her professorship in the History of Science, Oreskes’ hopes of throwing her full weight behind Mann were dashed when Judge Alfred S. Irving, Jr. ruled that she had no relevant expertise to offer. Oreskes’ opinions, at issue in the Mann case, were on:

  • the general basis for finding scientific research to be reliable, and
  • that “think-tanks” (including the defendant CEI) “ignore, misrepresent, or reject” principled scientific thought on environmental issues.

On the first issue, Irving ruled that her opinions were redundant, given that she is a historian and not a climate scientist. On the second issue, Irving asked what her expert methodology was for deciding whether principled scientific thought had been ignored, misrepresented or rejected. She described for Irving’s consideration something she referred to as a ‘content analysis’ she had performed when investigating Exxon:

We applied a well-established method in social science, which is broadly accepted as being, you know, a reputable method of analyzing something, content analysis, in order to show that there was this fairly substantial disparity between what the company [Exxon] scientists were saying in their private reports and publishing in peer-reviewed scientific literature which was essentially consistent with what other scientists were saying versus what the company was saying in public in advertisements that were aimed at the general public.

Except that, in the Mann case, Oreskes had to admit that she hadn’t actually used ‘content analysis’. Candidly, she conceded:

If you want me to tell you what my method is, it’s reading and thinking. We read. We read documents. And we think about them.

On the basis that the jury members could be assumed to have already mastered the concepts of reading and thinking for themselves, Oreskes’ undoubted talents were politely declined by the judge, leaving Mann to do his own bullshitting. At least the court was spared Oreskes’ botched explanations of ‘statistical significance’, and was spared listening to her explain why the scientists’ fear of Type I errors is making them far too conservative for the public good.

Footnotes:

[1] I say just for your entertainment because nothing I write here about Oreskes will have the slightest impact on her reputation outside of Cliscep.

[2] I’m afraid that the Oreskes article was published in the New York Times and so it is behind a paywall. However, fortunately for the impoverished readers of Cliscep, Schachtman’s takedown includes extensive quotes from the NYT article, so you needn’t worry about redirecting funds from your jealously protected heat pump savings account.

20 Comments

  1. Did she make the error of including that statistically delusional opinion piece in her academic CV? That’s what she did in relation to the Merchants movie, on which she admitted being a consultant in all technical matters. The problem is that the consensus fib in that film put her in violation of Harvard’s ethics guidelines. A chance to familiarise a wider audience with Oreskes’ routine, obligate mendacity?

    Liked by 4 people

  2. John,

    Do the observations contained in your epilogue mean that Michael Mann’s case against Steyn and others is finally – after more than a decade, I believe – getting close to trial? I confess I haven’t read anything about that recently.

    Liked by 1 person

  3. It was fun when Oreskes flunked the Daubert standard, which disqualifies would-be expert witnesses whose only contribution to the trial is an opinion any reasonable juror could reach on their own by “reading and thinking.” Under what is also known as the Half a Brain Test, the judge simply had to ask: would anyone with half a brain have figured this out for themselves? Since the answer, with Oreskes, is invariably, “No, nobody with half a brain would agree with her,” her generous offer of assistance was declined, to the surprise of the aforementioned nobody.

    Liked by 3 people

  4. Brad,

    It doesn’t bode well when, in a case that hinges on one’s ability to demonstrate scientific credibility, the prime expert witness in your defence is rejected because they failed a test for scientific credibility.

    Liked by 3 people

  5. I heartily recommend reading Brad Keyes’ WUWT article, as linked to in his first comment above. Quite apart from its pertinence and excellence, it is remarkable for two specific reasons:

    a) It asks one of the great scientific questions of our time: How come Oreskes can suffer herself so gladly?

    b) It includes an anecdote regarding an anti-consensus victim called Dan Shechtman, whilst my article’s star expert witness goes by the name Schachtman. What are the odds?

    Liked by 1 person

  6. What has been done to poor Oreskes? In previous incarnations, brought to our attention by Brad, she moved repeatedly (cyclically) and was covered with oil. Now she suffers rigidity and lacks any fluidity. Like her ideas, perhaps she has seized up.

    Like

  7. John, thanks for posting on the critique against Oreskes. I was aware of Nathan Schachtman’s legal acumen some years ago, but not of these unsparing remarks.

    Back in 2015 climatists were despairing of convincing enough voters to enact their agenda, and were turning instead to legal arguments aimed at winning over individual judges. Instances of that litigation have multiplied since then. In that context I came across Schachtman’s posts regarding how courts should consider scientific evidence, in his case drawing on experience with medical products liability, where epidemiological studies are brought by experts employing statistical analyses.

    I can no longer find his post at the time, but a recent one gets the gist of his methodology:
    https://schachtmanlaw.com/category/reference-manual-on-scientific-evidence/

    My post relating to scientific evidence for global warming/climate change is

    Legal Test of Global Warming

    Liked by 1 person

  8. Ron,

    Thank you for the links. Reading the Schachtman link reminds me of just how problematic causal arguments can be and how easy it is to be misled by epidemiological arguments. I am in the process of re-reading Judea Pearl’s ‘The Book of Why’ and, as it happens, I am on the chapter that covers Bradford Hill. All I will say is that, if Bradford Hill still represents best practice when looking at causality in the legal world, then it is a world that has a lot of catching up to do. The science of causal inference has moved on a great deal since the days of the Bradford Hill protocol.

    On the other hand, Mann just needs to be thankful that Steyn hasn’t called upon Schachtman as his expert witness.

    Like

  9. John, apparently the next step in Mann v. Steyn (or the other way around), more than two years later, would be a jury trial. It also seems that Steyn wants that and Mann is stalling.

    This whole issue of causation does remind me of the attribution of extreme weather, which also has the spectre of liability for damages and reparations as motivation for the emergency ambulance chasers.

    Like

  10. Thank you, Ron. It’s still dragging on interminably, then. For Mark Steyn it looks very much as though the (expensive) process is the punishment.

    Like

  11. One can hope Mann v Steyn etc. doesn’t come down to the question of temperatures during the Middle Ages, a question on which victory for Mann’s critics would be both forensically difficult and pointless, since even if Mann was wrong about the MWP it wouldn’t follow that he was fraudulent, and even if he was right it wouldn’t follow that he was honest.

    Thanks, John, for the thankless work of writing this, and for your praise in the comments. Thanks also to Ron for his link. Someone had to do it, but nobody had to do it so well.

    Liked by 3 people

  12. Brad,

    Thank you for the offer of a new GIF. However, I will politely decline since I am still undergoing expensive therapy from having seen your previous offering.

    Liked by 1 person

  13. “The Prosecutor’s Fallacy and the IPCC Report”

    [PDF: Fenton-Bayes.pdf]

    A new paper from the Global Warming Policy Foundation reveals that the IPCC’s 2013 report contained a remarkable logical fallacy.

    The author, Professor Norman Fenton, shows that the authors of the Summary for Policymakers claimed, with 95% certainty, that more than half of the warming observed since 1950 had been caused by man. But as Professor Fenton explains, their logic in reaching this conclusion was fatally flawed.

    “Given the observed temperature increase, and the output from their computer simulations of the climate system, the IPCC rejected the idea that less than half the warming was man-made. They said there was less than a 5% chance that this was true.”

    “But they then turned this around and concluded that there was a 95% chance that more than half of observed warming was man-made.”

    This is an example of what is known as the Prosecutor’s Fallacy, in which the probability of a hypothesis given certain evidence is mistakenly taken to be the same as the probability of the evidence given the hypothesis.

    As Professor Fenton explains:

    “If an animal is a cat, there is a very high probability that it has four legs. However, if an animal has four legs, we cannot conclude that it is a cat. It’s a classic error, and is precisely what the IPCC has done.”

    Liked by 2 people

  14. The standard explanation of the Prosecutor’s Fallacy goes over a lot of people’s heads, understandably so. I prefer to make the example more personal, like:

    Assuming you’re not psychic or anything, ha ha, what are the chances that you, a mere neurotypical, can correctly predict two coin tosses in a row? Answer: 25%.

    So far so good.

    But look, that guy over there DID predict two coin tosses in a row. And we’ve agreed that there’s only a 25% chance of a non-psychic person doing that… so it’s 75% certain that he DOES have psychic powers.
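
    For anyone who wants to see where the missing ingredient lies, here is a rough Bayes calculation of my own; the 1-in-1,000 prior for psychics is obviously just a made-up illustration:

    ```python
    # The fallacy: P(2 correct calls | not psychic) = 0.25, therefore
    # P(psychic | 2 correct calls) = 0.75. Bayes' theorem says otherwise.
    p_psychic = 0.001            # made-up illustrative prior
    p_hits_given_normal = 0.25   # two fair coin tosses, as above
    p_hits_given_psychic = 1.0   # generously assume psychics never miss

    p_hits = (p_hits_given_psychic * p_psychic
              + p_hits_given_normal * (1 - p_psychic))
    posterior = p_hits_given_psychic * p_psychic / p_hits

    print(f"P(psychic | 2 correct calls) = {posterior:.4f}")  # ~0.004, not 0.75
    ```

    The 75% figure only follows if you ignore how rare psychics are to begin with.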

    Liked by 2 people

  15. I’m afraid I’m going to have to put climate scientist Dana Nuccitelli in the corner occupied by Naomi Oreskes. According to Yale Climate Connections, he is something of an expert:

    “Dana Nuccitelli, research coordinator for the nonprofit Citizens’ Climate Lobby, is an environmental scientist, writer, and author of ‘Climatology versus Pseudoscience,’ published in 2015. He has published 10 peer-reviewed studies related to climate change and has been writing about the subject since 2010 for outlets including Skeptical Science and The Guardian.”

    Unfortunately, he is also quite ignorant when it comes to the use of statistics to measure confidence, as demonstrated in this Guardian article he wrote some time ago:

    https://www.theguardian.com/environment/climate-consensus-97-per-cent/2017/may/03/is-the-climate-consensus-97-999-or-is-plate-tectonics-a-hoax

    Quite apart from the inappropriate fixation on counting papers to measure consensus, as if that would add anything to our scientific understanding, there is this to be found:

    “The latest IPCC report, which summarizes our best current scientific understanding, said with over 95% confidence that humans are responsible for most of the global warming since 1950, and most likely responsible for all of it.”

    Yes, once again, we see the IPCC’s transposition of the conditional quoted uncritically by someone who, if they were a real expert on the subject, would have known better.

    I’m thinking of writing my own book “Pseudostatistics versus Climate Scepticism”, in which Nuccitelli would feature prominently.

    Liked by 2 people
