Hero or Zero?
If I may, I’d like to take you on a trip down memory lane. In fact, all the way back to 2015 to read an article that appeared on The Conversation website. It started with the shocking revelation that:
“The BBC is about to screen its first climate change-dedicated documentary in some years.”
Oh how times have changed. The same article written today would achieve the same shock value if it were to point out that the BBC hadn’t screened a climate change documentary since lunchtime. But that is of peripheral interest here. Indeed, the Conversation article itself is entirely unremarkable save for one thing: The documentary to which it referred was Climate Change by Numbers, and, as far as I was concerned, that documentary was noteworthy because it featured a certain Norman Fenton, Professor of Risk Information Management at Queen Mary University of London, who was explaining why the scientists are 95% certain that more than half of recent warming can be attributed to anthropogenic influences.
This was quite unsettling for me because it was Professor Fenton’s work on the application of Bayesian techniques within software quality assurance that had so strongly influenced my personal approach to that profession and had set me on a road of discovery regarding uncertainty and its quantification. In fact, I would go so far as to say that the insights he had given me lay at the heart of my scepticism regarding the socially accepted climate change narrative. If the person who had made me the sceptic that I am was now saying that I was wrong to be sceptical on such an important point, then maybe it was time for me to reconsider.
Matters were made worse by the fact that I had actually met the man, and my encounter (brief though it was) had left me with the deep impression that if he ever witnessed any bullshit, he wouldn’t be afraid to make his feelings known. The occasion was an object-oriented analysis seminar held at Oxford University. Two gentlemen on stage had spent the last hour pontificating on ontology and other high-brow stuff before opening up the session for questions. I was standing at the back of the room next to Norman when the microphone was handed to him. He spoke only one word:

“Bullshit!”
And with that, he left.
So it was my no-nonsense hero who I was watching on the BBC, acting as a spokesman for the dark side, and giving me such a crisis of confidence.
The Closet Sceptic
Thankfully, however, things were not quite as bad as they seemed. A little rooting around on the internet unearthed an article written by Professor Fenton, giving some background to his contribution and his personal thoughts on the subject. It was full of the sort of cautious thinking that I might have expected from the man, and it was music to my ears. Firstly, there were his concerns regarding the complexity of climate models and a hint that an alternative approach would be beneficial:
“I found the complexity of the climate models and their underlying assumptions to be daunting. The relevant sections in the IPCC report are extremely difficult to understand and they use assumptions and techniques that are very different to the Bayesian approach I am used to. In our Bayesian approach we build causal models that combine prior expert knowledge with data.”
Then there were other issues he would have liked the programme to have covered:
“For example, there has been controversy about the way a method called principal component analysis was used to create the famous hockey stick graph that appeared in previous IPCC reports. Although the problems with that method were recognised it is not obvious how or if they have been avoided in the most recent analyses.”
And one for Climategate aficionados:
“Assumptions about the accuracy of historical temperatures. Much of the climate debate (such as that concerning the exceptionalness of the recent rate of temperature increase) depends on assumptions about historical temperatures dating back thousands of years. There has been some debate about whether sufficiently large ranges were used.”
Then there is model selection bias and the role of systematic error:
“Variety and choice of models. There is no doubt that, although there are variations in the levels of confidence, all of the multiple climate models used in the study support the 95% figure. Indeed most actually have a much higher level of confidence in the impact of human CO2 emissions. However, there are many strong common assumptions in all of the models used and it has been argued that there are alternative models not considered by the IPCC which provide an equally good fit to climate data, but which do not support the same conclusions.”
And finally there is my own beef that the ranks of climate science are too obsessed with a frequentist treatment of uncertainty:
“Although I obviously have a bias, my enduring impression from working on the programme is that the scientific discussion about the statistics of climate change would benefit from a more extensive Bayesian approach. Recently some researchers have started to do this, but it is an area where I feel causal Bayesian network models could shed further light and this is something that I would strongly recommend.”
So it turns out that Professor Fenton is well aware of the issues after all. Much to my relief, my hero hadn’t joined the dark side but was still the sort of hard-nosed, deep-thinking, highly qualified, sceptical expert that I like. Of course, the same cultural developments that make the Conversation article’s opening statement seem so outdated nowadays have also seen the end of any meaningful, open discussion of the issues raised by Professor Fenton. To keep alive a public interest in matters Bayesian, he has had to look elsewhere.
Thank God for Covid-19
In common with many statisticians with an expertise in risk issues, Professor Fenton has turned his attention to the Covid-19 pandemic. And, if anything, he has turned out to be even more of a renegade than I could reasonably have expected. Firstly, there are his findings when he took a Bayesian approach to the analysis of Ivermectin’s efficacy:
“We show that there is strong evidence to support a causal link between ivermectin, Covid-19 severity and mortality, and: i) for severe Covid-19 there is a 90.7% probability the risk ratio favours ivermectin; ii) for mild/moderate Covid-19 there is an 84.1% probability the risk ratio favours ivermectin. Also, from the Bayesian meta analysis for patients with severe Covid-19, the mean probability of death without ivermectin treatment is 22.9%, whilst with the application of ivermectin treatment it is 11.7%.”
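For the uninitiated, a claim such as ‘a 90.7% probability the risk ratio favours ivermectin’ can be made concrete with a few lines of code. What follows is only a sketch: the lognormal posterior and its parameters are assumptions of mine, standing in for the paper’s actual model, and serve purely to show that the quoted figure is the share of posterior probability mass lying below a risk ratio of one.

```python
import numpy as np

# A sketch of what "a 90.7% probability that the risk ratio favours
# ivermectin" means. The posterior below is an assumed lognormal stand-in,
# NOT the model from the actual meta-analysis.

rng = np.random.default_rng(42)

# Illustrative posterior over log(risk ratio); parameters are made up.
log_rr_samples = rng.normal(loc=-0.35, scale=0.27, size=100_000)
rr_samples = np.exp(log_rr_samples)

# The headline figure is simply the posterior mass below RR = 1.
p_favours = np.mean(rr_samples < 1.0)
print(f"P(risk ratio < 1) = {p_favours:.1%}")  # ~90% with these invented numbers
```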
Then there is the statistical basis for believing that 1 in 3 Covid-19 cases are asymptomatic. That one crumbles as soon as one views the problem from a Bayesian angle. In this video, the good professor does little more than demonstrate how the mass testing of an asymptomatic population during periods of relatively low disease prevalence can so easily lead to misconceptions. It’s pretty basic mathematics, and yet that didn’t stop YouTube from pulling the plug, simply on the grounds that it undermined a government poster campaign. Apparently, this insurrection placed the basic mathematics into the category of disseminated disinformation. All very Orwellian if you ask me, and just one step away from ‘How many fingers am I holding up?’
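For those who would like to see the arithmetic, here is the Bayesian calculation in miniature. The prevalence, sensitivity and specificity figures are illustrative choices of mine, not numbers taken from the video:

```python
# Bayes' theorem applied to mass testing of an asymptomatic population.
# All numbers are illustrative assumptions, not figures from the video.

prevalence = 0.005    # suppose 0.5% of those tested are actually infected
sensitivity = 0.80    # P(test positive | infected)
specificity = 0.997   # P(test negative | not infected)

# Law of total probability: overall chance of a positive result.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Posterior: chance that a positive result reflects a genuine infection.
p_infected_given_positive = (sensitivity * prevalence) / p_positive

print(f"Share of positives that are real cases: {p_infected_given_positive:.1%}")
# Roughly 57% here; drop prevalence to 0.1% and it falls to about 21%, so a
# large slice of 'asymptomatic cases' can be nothing more than false positives.
```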
The truth (if you think you can handle it) is that the general problem when analysing the Covid-19 pandemic lies in the manner of the data collation, which does not seem to have been performed with all of the necessary analyses in mind. Inconsistencies in categorisation for everything from the vaccinated to the ill have rendered the most basic of post hoc data analyses impossible or meaningless. And nowhere is this more apparent than when trying to answer the most fundamental question: Have the vaccines worked?
Here, a major problem has been the decision as to when someone can be considered to have been vaccinated. Is it the day that the injection takes place, or is it after the 7, 14 or 21 days that have variously been cited as the time needed for full immunity to kick in? In the UK it has been the latter, and this matters because it creates a bias towards attributing deaths to the unvaccinated group, leading to the conclusion that the vaccines have been more effective than they actually were. Fenton presents a statistical analysis appearing to show that, if one readjusts the classification, the vaccine benefit more or less disappears.
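To see how such a classification rule can manufacture an apparent benefit, consider the following toy calculation. It is my own construction rather than Fenton’s actual analysis, and it deliberately assumes a vaccine that does nothing at all:

```python
# A back-of-envelope illustration of the classification bias described above.
# The numbers are invented and the vaccine is assumed to do NOTHING:
# everyone faces the same 0.1% risk of dying during the week.

population   = 1_000_000
newly_jabbed = 500_000    # vaccinated during the week, so all < 14 days ago
weekly_risk  = 0.001      # identical for jabbed and unjabbed alike

never_jabbed = population - newly_jabbed

# Anyone who dies within 14 days of the jab is recorded as 'unvaccinated'...
deaths_booked_unvax = weekly_risk * (never_jabbed + newly_jabbed)  # 1000 deaths
deaths_booked_vax   = 0.0  # nobody has yet been jabbed for 14+ days

# ...while the end-of-week denominators reflect actual jab status.
rate_unvax = deaths_booked_unvax / never_jabbed
rate_vax   = deaths_booked_vax / newly_jabbed

print(f"Apparent unvaccinated death rate: {rate_unvax:.2%}")  # 0.20%
print(f"Apparent vaccinated death rate:   {rate_vax:.2%}")    # 0.00%
# A vaccine with zero effect looks perfectly protective, purely because of
# how the deaths and the denominators have been classified.
```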
There are a few caveats that should be made to the above. Firstly, the evidence for vaccine efficacy, or otherwise, is determined by looking at all-cause mortality and comparing figures between the two groups: vaccinated and unvaccinated. Professor Fenton is keen to point out that this is the only way to avoid many of the causal confounders that could result from a variety of other ill-defined classifications (such as whether someone did or did not die from Covid-19). However, Fenton readily admits that his analysis fails to entirely eradicate the effects of the greatest confounder of them all: age. As a consequence, he stops short of saying that he has proven the vaccines to have been ineffective, and merely states that claims for their efficacy are unsafe given the limitations of the analysis.
Secondly, causality can only truly be determined if one can answer the key counterfactual questions. So even if the death rate amongst the vaccinated is no better than that amongst the unvaccinated (and this claim is highly suspect once one adjusts for age), one still has to ask what would have happened in the absence of vaccination. That’s a big unknown, but I can’t imagine it would have been pretty.
Thirdly, when judging the success of a vaccination programme it isn’t good enough just to do a body count. As I have pointed out in relation to the Smart Motorways controversy, governmental logic on this issue takes economic factors into account. Even if the vaccinated ended up being no better off than the unvaccinated, as long as the government had been able to open up the economy without an accompanying bloodbath, the programme would be deemed a success. To put it into the jargon of risk management, the most successful programme is often not the one that reduces the risk the most, but the one that is most risk efficient – something that most advocates of an expedited transition to net zero seem to completely misunderstand.
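For anyone unfamiliar with the jargon, a toy comparison makes the point (the figures below are invented purely for illustration):

```python
# A toy comparison to illustrate 'risk efficiency' (all numbers invented).
# The most risk-efficient programme is not necessarily the one that reduces
# risk the most, but the one that buys the most risk reduction per pound.

programmes = {
    # name: (lives saved per year, annual cost in millions of pounds)
    "Programme A": (1000, 500.0),  # the bigger absolute risk reduction
    "Programme B": (800, 200.0),   # a smaller reduction, but far cheaper
}

for name, (lives_saved, cost_m) in programmes.items():
    print(f"{name}: {lives_saved} lives saved, "
          f"{lives_saved / cost_m:.1f} lives per £ million")

# Programme A wins the body count, but Programme B (4.0 vs 2.0 lives per
# £ million) is the more risk-efficient choice -- the kind of logic that
# governments actually apply.
```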
Viva the Irascible
Well, that completes my somewhat selective précis of the life and works of Professor Norman Fenton. As I say, he is something of a hero of mine and I heartily recommend that you look him up for yourselves. He’s on Twitter and continues to try his best to challenge the bullshit whenever he sees it. Sadly, however, he has not been immune to the attentions of the censors and the fact checkers, for whom his impeccable qualifications seem to count for nothing. It may not be too long now before he is totally cancelled, so make the most of his work whilst you can. In the meantime, I leave you with perhaps his most notorious demonstration of chasing the herd rather than following it. I believe it’s something he did whilst out jogging in Richmond Park – some time before he was on Climate Change by Numbers, I seem to remember:
Sorry, wrong Fenton.