Before I start, I must confess that I am no Sherlock Holmes. What is more, my understanding of virology extends no further than can be gleaned from having caught influenza more than once. Nevertheless, such experience alone should be sufficient to instil a healthy fear of what SARS-CoV-2 may do to an ailing and ageing male body – no matter how sceptical that body may be. But when one witnesses and experiences the civic and economic damage that a government is prepared to inflict upon its people in order to manage a pandemic, the fear can become anything but healthy.
Given such mental health challenges, one certainly would not welcome any further distress arising from the simple desire to understand the case statistics upon which governments are basing their decision-making. Unfortunately, that is exactly the position I am in. There are things I think I know for certain, and there are things that have happened that appear to flatly contradict those certainties. This is all very destabilizing. I’ll start, if I may, with the widely understood certainties, after which you are invited to follow me down the rabbit hole.
Firstly, when interpreting a medical diagnostic test result, one has to take into account the possibility of false negatives (i.e. tests that fail to detect the presence of a disease) and false positives (i.e. tests that record the presence of the disease, notwithstanding its absence). The rates at which a test avoids these two errors are referred to, respectively, as its sensitivity and its specificity. RT-PCR testing is no exception to this rule. Indeed, The Lancet has advised that the specificity of RT-PCR testing is such that between 0.8% and 4% of negative cases are likely to register as false positives. When the a priori probability of the disease is high (for example, when testing those who are presenting symptoms or have been in contact with a confirmed case), the number of false positives will be significantly exceeded by true positives, and so a positive test result is highly significant. However, once testing becomes more random, the a priori probability drops and the false positives start to dominate, to the extent that the test results become pretty meaningless. All of this is very uncontroversial; it is just standard Bayesian statistics and a reminder of the dangers of base rate neglect. Indeed, the British Medical Journal has produced an online tool that enables anyone to try various a priori probabilities to see how this affects the reliability of RT-PCR test results.
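For the sceptical reader, the base-rate effect described above can be checked with a few lines of arithmetic. The sensitivity, specificity and prior probabilities below are illustrative assumptions of my own, not published figures for any particular assay:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive test result is a true positive (Bayes' theorem)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Testing a symptomatic contact: high a priori probability (assume 30%).
print(round(positive_predictive_value(0.80, 0.96, 0.30), 3))   # ≈ 0.896

# Random community testing: low a priori probability (assume 0.1%).
print(round(positive_predictive_value(0.80, 0.96, 0.001), 3))  # ≈ 0.02
```

With the same test, a positive result goes from being right roughly nine times in ten to being wrong roughly 49 times in 50, purely because the prior probability has dropped. That is base rate neglect in a nutshell.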
So imagine my surprise when the UK’s Office of National Statistics wrote this about their national COVID-19 Infection Survey:
“We know the specificity of our test must be very close to 100%”
Their logic was impeccable. If, as they claimed, only 159 positive test results were found in a sample of 208,000, then the least that the specificity could be was 99.92% — a false positive rate a full order of magnitude lower than even the most optimistic figure quoted by The Lancet. Given the random nature of the ONS testing, and the relatively low prevalence of Covid-19 within the broader community, the specificity suggested by The Lancet would have meant encountering far more false positive test results than genuine ones, and it seems more than a little convenient to me that this had not proven to be the case with the ONS survey. Even more puzzling was the apparent lack of curiosity within the scientific and journalistic communities. Rather than question these results, everyone seemed happy to assume that the ONS was using some especially accurate test technology, despite there being nothing on the ONS website to justify such an assumption. On the contrary, the ONS academic partners have confirmed there was nothing out of the ordinary about their testing arrangements:
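The ONS deduction, and its tension with the published specificity range, amounts to only a couple of lines of arithmetic. The sample size and positive count are the figures quoted above; the 0.8% false positive rate is the most optimistic end of the range reported in The Lancet:

```python
sample_size = 208_000
positives = 159

# Even if every single one of the 159 positives were false, the
# specificity could be no lower than this:
floor_specificity = 1 - positives / sample_size
print(f"{floor_specificity:.2%}")  # 99.92%

# By contrast, The Lancet's most optimistic false positive rate would,
# on its own, imply roughly this many false positives in the sample:
lancet_false_positive_rate = 0.008
expected_false_positives = lancet_false_positive_rate * sample_size
print(round(expected_false_positives))  # 1664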
“The nose and throat swabs are sent to the National Biosample Centre at Milton Keynes. Here, they are tested for SARS-CoV-2 using reverse transcriptase polymerase chain reaction (RT-PCR). This is an accredited test that is part of the national testing programme.”
On the face of it, a team of top-class statisticians were working back from their data to deduce a test specificity that flew in the face of all of the known science regarding RT-PCR testing, and no one seemed the least bit concerned about this.
Normally, in these circumstances, it is safe to assume that one is missing something very significant. It would only require someone to point out my mistake and I would be able to move on, albeit somewhat chastened and embarrassed. I have tried to resolve the mystery myself, but the best I have come up with is the rather outlandish theory that the ONS sample size of 208,000 was completely misleading. If (let’s say, due to quality control problems) the effective number was nearer to 50,000, then the small number of positive results can still be reconciled with the expected Covid-19 prevalence and a more plausible RT-PCR specificity. But other than to point to the fact that survey participants from 12 years old upwards were allowed to self-administer the swabs, I could think of no credible excuse for assuming that such a catastrophic failure in quality control had taken place. I had no alternative but to live with the prima facie contradiction and get on with life. But then I came across the New Zealand Ministry of Health’s Covid-19 statistics.
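The arithmetic behind this admittedly outlandish theory is easy to check. The 50,000 figure is nothing more than the hypothetical effective sample size floated above; what the calculation shows is how the implied specificity floor relaxes as the effective sample shrinks:

```python
positives = 159  # the ONS positive count

for effective_sample in (208_000, 50_000):
    # Even if every positive were false, specificity could be no lower than:
    floor = 1 - positives / effective_sample
    print(f"{effective_sample:,} effective swabs -> specificity floor {floor:.3%}")
```

A floor of roughly 99.7% for the smaller sample is still demanding, but it sits much closer to plausible RT-PCR performance than the 99.92% implied by the full 208,000 — which is all the theory needed to achieve.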
If New Zealand is to be believed, by early May, only 25 of its 1,138 Covid-19 cases had been asymptomatic. That represents only 2.2% of the cases, and it contrasts sharply with the statistics arising in other countries (e.g. 40% in US nursing homes and 90% at Northumbria University). Just as problematic is the fact that the New Zealand figures were determined as a result of extensive community testing, i.e. circumstances where false positives would be certain to dominate the asymptomatic Covid-19 headcount, and single-handedly account for far more than 25 individuals. Not only does New Zealand owe the world an explanation for its low asymptomatic count, it also needs to explain how, like the UK's ONS, it was able to achieve near 100% specificity with RT-PCR testing. Furthermore, there is this online statement to be accounted for:
“When tests were done on samples without the virus, the tests correctly gave a negative result 96% of the time.”
This is a far from impressive specificity, and one which should result in a significant false positive problem for the NZ Ministry of Health to deal with. But only a couple of paragraphs later they say:
“We expect very few (if any) false positive test results…”
And yet, despite this completely illogical expectation, they are proven correct? This is beginning to make the ONS conundrum look perfectly straightforward in comparison.
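The scale of the problem that a 96% specificity implies can be sketched in two lines. I do not know exactly how many uninfected New Zealanders were swabbed in the community programme, so the figure below is a deliberately conservative placeholder of my own; even so, the expected false positive count dwarfs the 25 asymptomatic cases reported:

```python
specificity = 0.96          # the Ministry of Health's own quoted figure
uninfected_tested = 10_000  # hypothetical, and conservative, community test count

# Expected false positives among uninfected people tested:
expected_false_positives = (1 - specificity) * uninfected_tested
print(round(expected_false_positives))  # ≈ 400
```

At a 4% false positive rate, every 10,000 virus-free swabs should yield around 400 spurious positives — sixteen times the entire asymptomatic headcount — yet the Ministry expected, and apparently found, almost none.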
I trust that you can now see why I should be left so utterly confused. Two organisations that we should presume to be above reproach are making statements that just do not add up. It is no wonder that I am beginning to doubt my own rationality and powers of comprehension. I am hugely sceptical regarding the ONS and New Zealand figures, but I feel obliged to be simultaneously sceptical of my own scepticism. Sir Arthur Conan Doyle famously believed in fairies, so I ought to feel in good company. However, I can't help but suspect that entertaining such cognitive dissonance for any length of time is the sure path to madness. If someone doesn't rush to my rescue soon and point out where I am going wrong, I may end up in an institution listening to the sceptical voices in my head.
Oh yes I will.