This is one of those articles where I have to start off by making a confession. Believe it or not (and I will admit I did take quite a lot of persuading) I am not infallible. The evidence for this shocking revelation can be found in an article I posted recently, in which I wrote the following:

There is absolutely nothing in the mathematics of Bayes’ Theory to suggest that there is a limit beyond which the strength of a prior belief can render the holder impervious to new information.

Okay, so that is not strictly true. In fact, it might be fairer to say that it is strictly false. The truth is that it is in the nature of Bayes’ Rule that any a priori probability that is either zero or unity will remain so after Bayesian updating.1 So yes, there is a point at which the strength of a prior belief can render the holder impervious to new evidence, and that is when one starts from the belief that there is zero probability of one’s being wrong. So, whilst it is true to say there is no limit within the bounds of p=0 and p=1 beyond which imperviousness to new information kicks in, it certainly kicks in at those boundaries. I should have realised that this was the point being made.

That said, however, you may recall that what I was really complaining about was the following statement:

What Bayes theorem tells us in these situations is that there is not a single piece of evidence, no matter how strong, that will ever shift these hardliners from their convictions.

Well, perhaps Bayes’ Theorem might be telling us that but, if so, it is obviously wrong. The fact is that people who hold their beliefs with certainty can and do change their minds – in fact, it happens very often. This doesn’t have to be a massive reversal either; even going from absolute certainty to being almost certain is still a change of mind. And this evidence of real-life behaviour is the proof that, wonderful though it may be, Bayes’ Rule clearly fails to describe the full reality of decision-making.

This problem with Bayesian inference has been formalised within Bayesian Decision Theory as Cromwell’s Rule. It states that if you want to use Bayes’ Rule to update beliefs in the light of new evidence, you must never assign a probability of 1 to any hypothesis unless it is logically demonstrable, and never assign a probability of 0 to any hypothesis unless it can be logically shown to be false. Only under such a restriction can one expect Bayes’ Rule to apply, because only under such a restriction can one reasonably expect evidence to sway you. The world of evidence and Bayesian inference is quite different to the world of logical certainty, let alone the world of blind faith.
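Cromwell’s Rule is easy to demonstrate numerically. Here is a minimal sketch in Python (the likelihood figures are illustrative assumptions, not drawn from anything above): a prior of exactly zero survives any run of evidence untouched, whilst even a one-in-a-million prior is driven towards certainty by repeated strong evidence.

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from the prior P(H) and the likelihoods
    P(E|H) and P(E|not H), via Bayes' Rule."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# Illustrative likelihoods: each piece of evidence is 99 times more
# likely if the hypothesis is true than if it is false.
strong, weak = 0.99, 0.01

# A prior of exactly zero is impervious to any amount of evidence...
p = 0.0
for _ in range(5):
    p = bayes_update(p, strong, weak)
print(p)  # still 0.0

# ...whereas Lindley's one-in-a-million prior is driven close to
# certainty by the same five pieces of evidence.
q = 1e-6
for _ in range(5):
    q = bayes_update(q, strong, weak)
print(q)  # close to 1
```

Each update multiplies the odds on the hypothesis by 99, so five updates lift the one-in-a-million prior past 0.999; the zero prior, multiplying by zero every time, never moves at all.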

It was the statistician Dennis Lindley who named the rule, and he named it after Oliver Cromwell on account of Cromwell’s famous letter to the General Assembly of the Church of Scotland, written shortly before the Battle of Dunbar, in which he wrote:

I beseech you, in the bowels of Christ, think it possible that you may be mistaken.

Although the Reverend Bayes was yet to be born,2 Cromwell’s letter was an attempt to persuade the Assembly to think like Bayesians. The prevailing religious doctrine of the day was the Divine Right of Kings, under which the King could do no wrong because he was appointed directly by God. Under the power of such unassailable logic, all the apparent evidence of wrong-doing could, and should, be ignored. In other words, in the case of Charles I, religious certitude was setting the a priori probability to zero, thereby precluding any hope of a judgment based upon evidence-based Bayesian inference. As Lindley was to put it many years later:

Leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.

So, what is happening – if not Bayesian updating – when someone with an absolute belief in their position actually does change their mind in the light of new evidence? If one starts from the position that there is no possibility that the moon is made of green cheese, what has actually happened when the appearance of extraterrestrial cheese does indeed change minds?

Well, that’s a very good question, and I’m not sure I can provide you with a satisfactory answer. However, the first thing to note is that new evidence can sometimes be so dramatic as to completely change the Bayesian model one is working with. It is not a question of updating the probabilities of your hypotheses, but rather the introduction of an entirely new hypothesis and the removal of others that are no longer tenable. It is your classic Black Swan scenario, in which new evidence has forced a complete paradigm shift. However, that still doesn’t explain why somebody so certain of their initial position would accept such a restructuring of their belief system rather than reject it.

I believe that when the acceptance of restructuring happens, it isn’t because a mode of thinking has facilitated a change of mind; it is because one mode of thinking has been abandoned in favour of another. This is not a Bayesian update or restructuring of a belief system; it is a fundamental shift in a way of thinking, one that entails the abandonment of unwavering faith in a presupposed logic and its replacement with an acceptance of the supreme importance of evidence.

This is not so easily done, however. There have been no lunar astronauts recently returning with green cheese, but there was a time, not that long ago, when sailors returning to northern hemisphere ports, bearing fruits grown at the equator, were not believed, simply because Aristotle had decreed that man could not survive at such latitudes – and the logic of the day dictated that Aristotle’s word was final. It took many more years for this faith in Aristotle’s word to be overthrown; in fact, it took the scientific revolution and its insistence upon evidence-led reasoning. Even so, the power of faith, or the power of a compelling logical argument, can still apply even in a scientific context.

To arrive at his General Theory of Relativity, Einstein took a set of profoundly simple and obvious observations and, through a number of seemingly compelling deductive steps, constructed a marvellous explanation of gravity that was supremely elegant and yet far from obvious. It was a work of genius, and one can see why Einstein had an unshakable faith in his thinking. So, in 1919, when asked by the student Ilse Rosenthal-Schneider what he would have done if the Eddington expedition had not confirmed the bending of light by the sun (the key prediction of General Relativity), Einstein famously replied, “Then I would feel sorry for the dear Lord. The theory is correct anyway”.

Note, this doesn’t demonstrate a disdain for evidence so much as a respect for the powers of deduction. In fact, so great was that respect that Einstein wouldn’t even be cowed in the face of a scientific consensus. When told about the 1931 book Hundert Autoren gegen Einstein (“Hundred Authors Against Einstein”), he dismissed the attempt to overturn his theory of relativity through consensus by saying, “If I were wrong, one would be enough”.

Unfortunately, in his later life, that unshakable faith in the power of pure thought would render him a laughing stock in the eyes of much younger and far less capable physicists who, nevertheless, were perfectly willing to place their trust in scientific consensus.3

So, it is legitimate to sometimes take a non-Bayesian attitude and assign a priori probabilities of zero or unity. But you have to be a genius in your prime.

Ed Miliband is no genius and probably never had a prime. He has no profound insights to offer; no unmatched command of pure thought. And yet he is willing to pursue a train of thought with a dogmatic insistence that would put Einstein to shame. He is instead a modern-day Aristotelian; someone who has taken the critical thinking fallacy of arguing from authority to a whole new level. He presides over a policy area that not only requires that the science and engineering realities be well-understood, but one that demands an understanding of the extent to which uncertainty remains, and how that uncertainty should be analysed and dealt with.

At the end of the day, the rights and wrongs of net zero are a question of decision-making under uncertainty. This is not a province where faith in a supposedly compelling logic is all that matters, and so the probabilities assigned to the correctness of a decision should never be zero or unity. Ed Miliband is not Charles I and he does not operate through Divine Right. A complete change is needed in the mode of thinking within government; one that encourages epistemic humility. And for that reason, I think it most appropriate that one invokes the ghost of Cromwell by saying “I beseech you, in the bowels of Christ, think it possible that you may be mistaken”.

Regrettably, I suspect one would be wasting one’s time. It isn’t fear of the ghost of Cromwell or the weight of evidence that will ultimately stop the madness; instead, we must cling to the forlorn hope that a fear of the ballot box will be enough to shift the mindset and allow good old-fashioned, evidence-based Bayesianism once again to play a role in government policy-making.

Footnotes

[1] Bayes’ Rule states that the posterior probability (the probability of the hypothesis in the light of new evidence) is calculated by multiplying the a priori probability (the probability prior to the discovery of the evidence) by a Likelihood (the probability of the evidence given the hypothesis), divided by the probability of the evidence. But if the a priori probability is zero, it doesn’t matter what the Likelihood and the probability of the evidence are; zero multiplied by anything remains zero. Similarly, if the a priori probability is 1, then the probability of the evidence equals the Likelihood; hence the posterior probability is 1 multiplied by (x/x), i.e. 1.
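For those who prefer symbols to words, the rule just described is, with H for the hypothesis and E for the evidence:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

If P(H) = 0, the numerator vanishes and the posterior is zero whatever the evidence; if P(H) = 1, then P(E) = P(E | H) and the fraction reduces to 1.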

[2] Thomas Bayes was born some 50 years after the Battle of Dunbar. However, even if Cromwell and Bayes had been contemporaries, there would have been no way that Cromwell would have been familiar with the work of an obscure Presbyterian minister. In fact, Bayes’ paper, An Essay Towards Solving a Problem in the Doctrine of Chances, was only published posthumously (albeit in a highly modified form) by his friend, Richard Price. Even so, it wasn’t until Pierre-Simon Laplace published Théorie analytique des probabilités in 1812 that the world would be introduced to a form of Bayesian thinking that would be recognised as such today.

[3] Einstein, who never really came to terms with the quantum mechanics he had helped to discover, spent his academic dotage wandering around the gardens of the Institute for Advanced Study in Princeton, desperately trying to formulate a Theory of Everything as an extension of his General Theory of Relativity. The programme was doomed to failure, but he was still to be found, on the day he died, scribbling notes on his death bed.
