Experts with their Heads up their Rs

Coronavirus: What is the R number and how is it calculated?

By James Gallagher, Health and science correspondent, 18 May 2020

… scientists work backwards. Using data – such as the number of people dying, admitted to hospital or testing positive for the virus – allows you to estimate how easily the virus is spreading. Generally this gives a picture of what the R number was two to three weeks ago…

If the reproduction number is higher than one, then the number of cases increases exponentially… but if it is below one then eventually the outbreak stops. The further below one, the faster that happens… The reproduction number is not fixed. Instead, it changes as our behaviour changes, or as immunity develops…

In other words, R is an arcane, ex post facto, constantly varying constant, calculated from hunches based on whatever data is available and fed into computer models to help modellers in their travails. Or possibly extracted from computer models. Who cares? As the BBC’s science correspondent says, “scientists work backwards…” If they find it useful, so be it. Explaining over and over to us mortals what R is is about as useful as explaining the chemical formula of the glue used by modellers who make replicas of the Ark Royal out of matchsticks. We don’t need to know. We look at the results and admire. Or not.
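For anyone who nevertheless wants a sniff of the glue: the growth the BBC describes is plain compound multiplication. A toy sketch of my own, not anything the modellers actually run:

```python
def cases_per_generation(r, generations, initial=1):
    """Expected new cases in each generation if every case infects r others."""
    counts = [initial]
    for _ in range(generations):
        counts.append(counts[-1] * r)
    return counts

print(cases_per_generation(2, 4))    # R above one: 1, 2, 4, 8, 16
print(cases_per_generation(0.5, 4))  # R below one: the outbreak peters out
```

Hold R above one and the numbers run away; hold it below one and they dwindle, faster the further below one you go – which is all the quoted paragraph says.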

The Imperial College study apparently made its R calculations using information from the British census – good, reliable data no doubt – but how useful is census data for estimating the probability of passing on a virus?

Let’s take as a concrete example three of the earliest clusters identified in France.

1) The first one was caused by an Englishman returning from the Far East who stopped off at a French ski resort in the Alps and infected four other English tourists before returning to his home in Majorca. That’s an easy one. R=4. Or, as far as England is concerned, R=0.

The four infected English people then returned home to England without infecting anyone else. R=0. Average R value = (4+0+0+0+0)/5 = 0.8. Less than one, therefore containable.

Two factors are no doubt at play here which were not allowed for by the French authorities who closed down all ski resorts, or in Ferguson’s calculations:

– Englishmen abroad may share the same chalet, while keeping as far as possible away from other human beings. That’s because they’re English, and the others are foreigners.

– A ski slope is the only other place apart from the United Kingdom where a two metre distance from your neighbour is obligatory.

2) The second major outbreak in France occurred at an evangelical meeting at Mulhouse, from where it spread as far as French Guiana. Whether this was caused by one R=200 “super-spreader” or a number of spreaders-lite who just happen to like getting up close, joining hands and singing very loud, is an open question. The point is, there was nothing in the data on which the scientific advice was based that would have helped in determining the government’s response. Lockdown came to happy-clappy congregations and Trappist monasteries alike, to bare ruined choirs run by the National Trust and the kind of sado-maso nightclubs in vogue in certain political circles, independently of the distancing typically practised by aficionados.

Where was I? Barnard Castle? Surely not. Oh yes.

3) The third major outbreak in France (possibly the first chronologically – no-one knows) occurred in a village in the north of France near an airfield. This was where the plane landed that performed the first well-publicised evacuation of French citizens from Wuhan. The evacuees were packed off to a holiday camp in the South of France and no cases of infection were reported. Then cases broke out near the airport. It’s a military airport, which also houses the headquarters of the French secret service. No-one knows why, no-one is asking questions, and R numbers are unlikely to be forthcoming.

That’s enough about France. I recently spent two weeks in England, of which several hours were spent not in lockdown but queueing outside supermarkets. A couple of observations:

1) The English middle classes, despite their expressed love for Europe, have not mastered the metric system. They seem to think that two metres is the length of a cricket pitch. If social distancing were all it’s cracked up to be, we’d be out of the woods by now.

2) Working class men tend to talk loudly and laugh a lot. Working class is cool, as George Orwell didn’t live long enough to say. I noticed this in Tesco’s, where two blokes would stand at opposite ends of an aisle and chat and chuckle to each other over the heads of poor masked expats desperately searching for Bovril. (Where’s it gone? Who’s hoarding it?) They may be taking risks now, but I’ll wager they’ll have developed a healthy herd immunity to all the dire psychological symptoms facing the masked remoaners when the virus is over. (Note to Dominic Cummings – if his team of Nobel prized weirdoes hasn’t already sussed that one out.)

(Note that my observation on the loudness and hilarity of the working class is not only wildly politically incorrect but unsupported by any hard data. How many thousands of hours of sociologists’ time would it take to confirm or refute my assertion – or any similar assertion by Orwell or Mayhew or Marx in his lighter moments? Who cares?)

All this to point out that the data available to Professor Ferguson may have been accurate, but is possibly not the data he needed for the job.

And the same is true, but at a thousand or a million times the cost in research funding, in the case of climate modelling.

I’m sometimes (but not always) a bit overawed by the expertise of some of my colleagues here, in physics, statistics, computer science, risk assessment, etc. If I have any area of expertise, it’s in ancient art history. It’s fascinating to see how people have pictured the world to themselves, and the symbols they have used to attempt to transmit their vision of the world to others, across the ages. The further back you go of course, the more arcane and absurd the symbols seem.

But it helps to see that R, like tau, like alpha and omega, like climate sensitivity, is just a symbol.


  1. The ancient art historian is on fi-R.

    It’s funny, I think in my own rare lighter moments, that R is also the computer language favoured by Steve Mc and other climate dissidents for stats coding from the early 2000s. (Python is now providing stiff competition for its more laconic peer, with Julia coming up fast on the rails.) And R is also the statistical test (normally mentioned in lower case) whose results Steve showed Mike Mann had calculated for his original hockey stick and then hidden away because they were so uncorroborating.


    Yes, I’ve noticed the many mentions of R at ClimateAudit dating back a couple of decades. Nobody there ever felt the need to explain to the ignorati such as me what they were on about, and I didn’t feel the need to know. It was something they used and that was good enough for me.

    It’s when the BBC feel the need to explain what R means that my whatsitmeter leaps into action. As with wet markets, hanging chads and grassy knolls, my natural reaction is to ask: “Why are they telling me this?”


  3. “A ski slope is the only other place apart from the United Kingdom where a two metre distance from your neighbour is obligatory”

    Not in the Alps in high season! Lift queues absolutely crammed like sardines, and lots of time in them too! Not to mention the mountain eateries likewise.


  4. Geoff: You’re right about not needing to know, in both the epidemic modelling and the paleoclimatology coding context. I think the stats test is a more interesting borderline case.


  5. ANDY
    I bow to your obviously experiential knowledge. My theoretical hypothesis was based on the length of the average ski.

    On the time factor, I noted in England how people would step out into the busy A road to avoid spending two seconds within two metres spitting distance of me (not that I’m a big spitter) and that Johnson recently introduced fifteen minutes of face to face contact at two metres as the dangerosity criterion for something or other. That’s a margin of error of 450:1. So who’s right, Johnson or Johnson?

    This is not trivial. I can spend fifteen minutes or five seconds face to face in my florist’s or Chinese takeaway. Millions of people’s livelihoods depend upon apparently arbitrary decisions such as these as to what’s safe or not.


  6. Geoff may I express the shear delight of reading “whot you just rote” My smile muscles got a real workout.


  7. R0 being the initial estimate, and R_effective being the evolving R as some portion of the population is already recovered and immune, and as people take precautions. I googled a number of papers on Reff and there are numerous ways of calculating it.
    What none of those methods is, is anything resembling “the daily growth factor” which was breathlessly reported by ABC Australia and the Graun Australia, with the added wrongformation that the epidemiologists say “the growth factor” must be below one to keep control of the epidemic.
    I did email ABC suggesting they read some papers on Reff but they soldiered on with their growth factor quite regardless.
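    Under the simplest textbook assumption (homogeneous mixing – the papers on Reff do something more elaborate), R_effective is just R0 scaled by the share of the population still susceptible and by any cut in contacts. A sketch of the idea only, not ABC’s “growth factor” and not any agency’s actual estimator:

```python
def r_effective(r0, susceptible_fraction, contact_reduction=0.0):
    """Illustrative only: R_eff = R0 x (fraction still susceptible)
    x (fraction of normal contacts still happening)."""
    return r0 * susceptible_fraction * (1.0 - contact_reduction)

print(r_effective(3.0, 1.0))       # epidemic start: R_eff = R0 = 3.0
print(r_effective(3.0, 0.5))       # half the population immune: 1.5
print(r_effective(3.0, 0.5, 0.4))  # plus 40% fewer contacts: ~0.9
```

    Nothing in that arithmetic resembles a “daily growth factor”, which is the point.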


  8. The nonsense surrounding the R number is perfectly illustrated in this article from the Conversation (where else?)
    It goes off the rails in the first paragraph when it says:

    … thanks to the coronavirus, everyone has heard of it and most people can tell you that it’s the reproduction number, an indicator of whether the number of infected people is increasing or decreasing.

    “Everyone”? “Most people”? Really? When you’re claiming to measure the incidence of a virus and its propagation in society, and you start off by spouting unsupported nonsense about the incidence of knowledge in society, you are in trouble.

    The R number represents the average number of people an infected person goes on to infect.

    …which implies each person has their own personal R number. There’s no indication over what time span infections are measured, so I suppose your number changes day by day.

    If R is larger than one, the number of people with the disease is increasing.

    Or, to put it another way, If the number of people with the disease is increasing, then R is larger than one. Cart, meet horse.

    Rather than assuming that every infected person and every contact they make follows the same pattern (as with the R number), scientists working on epidemic models allow for the number of new cases caused by each infected person to vary randomly.

    I’m not sure that “allow for” is the right term here. “Realised, to their astonishment..” might be nearer the mark.

    Some people might have high viral loads or might simply cough more and hence spread the virus more effectively.

    Right. And some might cough a lot, but stay at home and moan and infect nobody, while others invite their best friend’s wife in for tea and crumpy while criticising the government for not taking their advice about imposing lockdown earlier.

    All this rubbish is leading up to the introduction of a new variable constant, the dispersion parameter K, which measures statistically the absolute uselessness of the reproduction number R.

    The author is a professor of mathematics, so perhaps we should laugh instead of cry. Or vice versa of course.


  9. “I’m not sure that “allow for” is the right term here. “Realised, to their astonishment..” might be nearer the mark”

    Absolutely. I was very surprised to see (from Nick Lewis’ post at Climate Etc) that Ferguson’s model had no accounting for non-social inhomogeneity at all, and only the most primitive accounting for social inhomogeneity (it had half the transmission rate for schools). And it seems other models too. Even discounting wide social differences, it is basic evolutionary knowledge that because there is wide genetic variance in populations, there are differences in reaction to *any* disease. Not only that, part of the reason this variance exists is precisely to protect us from disease! It matters to a different extent depending on the disease type, but it is exactly most useful when the disease is one that evolves rapidly and so also has many strains (it seems covid is towards this end of the spectrum, at least, if maybe not as much so as flu). This is because such a disease, playing constantly with its own formula, so to speak, has a much higher chance of happening across the key that will literally unravel a mono-cultural population. But for a genetically diverse population, there is no such key. And indeed it is not just the genetic variety in the population that will produce different responses (added to which the social differences too), but the very fact that the disease is evolving fast means this will also cause different responses. Some strains may essentially be harmless. The low end of Nick’s model has herd immunity at only about 17% infection, because most of the combinations of individuals and virus that do have a big enough problem for major symptoms and spread are already used up. I’m still struggling to grasp how such basics cannot be in the models 0:
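    For the curious, the arithmetic behind that ~17% figure can be sketched with the gamma-distributed-susceptibility approximation used in this strand of work: HIT ≈ 1 − (1/R0)^(1/(1+CV²)), where CV is the coefficient of variation of susceptibility across the population. A rough illustration with assumed numbers, not a reproduction of Nick Lewis’ actual calculation:

```python
def herd_immunity_threshold(r0, cv):
    """Approximate herd immunity threshold when individual susceptibility
    is gamma-distributed with coefficient of variation cv.
    cv = 0 recovers the classic homogeneous formula 1 - 1/r0."""
    return 1.0 - (1.0 / r0) ** (1.0 / (1.0 + cv * cv))

print(round(herd_immunity_threshold(2.4, 0.0), 2))  # homogeneous: ~0.58
print(round(herd_immunity_threshold(2.4, 2.0), 2))  # heterogeneous: ~0.16
```

    The point being that the same R0 implies wildly different herd-immunity thresholds depending on how much person-to-person variation you admit.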


  10. I had very little background knowledge to call upon prior to the COVID-19 outbreak, but what little I did have provided me with the following small insights.

    Firstly, it is normal with epidemiological modelling to treat populations as being homogeneous with respect to infectivity, insofar as for any two individuals within a community the first is as likely to infect the second as the second is to infect the first. Dropping this assumption leads to too much complexity for the average model to handle.

    Secondly, the assumption of homogeneity has been long understood to be unrealistic since all communities exhibit complex structure on a number of different scales. As a consequence, the passage of infection through a community has as much to do with connectivity and network structure as it does the properties of individuals or the inherent properties of a virus. This is what I had understood to be the importance of the K number (I had not previously heard of the R number before the COVID-19 outbreak but I was conversant with the K number). The article that Geoff cites treats the K number as being a property of the individual (i.e. how inherently infectious they are) but it had been my understanding that it was a lot more to do with their position in the community network. For example, as a group, teachers in a school have large K numbers because they are very active in the infection path, i.e. they receive and transmit infection at a higher rate than the pupils as a result of the pattern of their interaction. Again, as I understand it, the K number is only meaningful within the context of a defined grouping within a community structure. Typically, a different K number is ascertained for intra-group infection and inter-group infection. Within schools, for example, the intra-group (i.e. class) K value for younger children is higher than the inter-group (i.e. inter class) K value, and vice versa for older children. Teachers have a particularly large inter-group K number.

    Incidentally, I understand that for the purposes of determining an inter-group K value, unless there are very good reasons to do otherwise, all groups not forming part of the in-group are treated as a homogeneous whole, or a so-called ‘bath’. This simplifies the mathematics and I believe it is a trick learned from solid-state physics.

    I agree with Geoff that far too much importance has been placed upon the R number and its interpretation. It is something that can be deduced from infection rates rather than something that explains them. However, I had been wondering what had happened to the K number and now I have Geoff to thank for unearthing it for me.



    the R number … is something that can be deduced from infection rates rather than something that explains them.

    Thanks, that’s what I’ve been struggling to say, in my usual role of child in the crowd who can’t see what the Emperor is wearing. What’s missing in the Hans Andersen story is the expert, or at least someone with 20:20 vision, to confirm that the child was right. Without him, the normal reader will naturally go along with the consensus and conclude that Andersen is spreading fake news.


    I was very surprised to see (from Nick Lewis’ post at Climate Etc) that Ferguson’s model had no accounting for non-social inhomogeneity at all, and only the most primitive accounting for social inhomogeneity.

    [Shouldn’t that be heterogeneity? Or are we to describe Ferguson’s extra-lockdown pranks as an example of his inhomosexuality?]

    Defenders of modelling like to cite the examples of aeroplanes and bridges as reasons to trust modellers, to which the standard objection is that we are more complicated than concrete and aluminium. Now it turns out we’re even more complicated than viruses (except perhaps for teachers.)


  12. Geoff,

    “Shouldn’t that be heterogeneity?”

    You would have thought so, and in evolutionary texts it usually is. But I (inadvertently) adopted the term used by Nick Lewis (and seen elsewhere on covid). I don’t really know whether this implies something subtly different to heterogeneity.

    “Now it turns out we’re even more complicated than viruses…”

    Yep, we’re a long way from understanding ourselves biologically, let alone socially (and the interaction between the two).


  13. Here is an interesting paper that I have found. It provides a useful explanation of the role of the Basic Reproduction Ratio, R, and the Dispersion Factor, k. Basically, it is because there is population heterogeneity that one has to take into account the Dispersion Factor:

    A key sentence in the above paper is:

    “Host population heterogeneity (obtained with lower values of k) increases the probability that an outbreak will go extinct, as the pathogen can only really spread via one of the dwindling super-spreading individuals.”

    I am wondering whether the ‘K’ factor that I have come across is different to the Dispersion Factor ‘k’. As I understood it, ‘K’ was a measure of connectivity, whereas ‘k’ seems to be a measure of heterogeneity, i.e. it would reflect the variation of ‘K’.
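    The effect described in that quoted sentence is easy to demonstrate with a toy branching process: draw each case’s secondary infections from a negative binomial with mean R and dispersion k (the standard super-spreading set-up) and count how often the chain dies out on its own. A rough sketch with made-up parameters, not the paper’s actual code:

```python
import numpy as np

def extinction_fraction(R, k, trials=2000, max_gen=20, cap=200, seed=0):
    """Fraction of simulated outbreak chains that die out by themselves,
    when each case infects a negative-binomial(mean=R, dispersion=k)
    number of others. Small k = a few super-spreaders do most of the work."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(trials):
        cases = 1
        for _ in range(max_gen):
            if cases == 0 or cases > cap:
                break
            # numpy's parameterisation of NB(mean=R, dispersion=k): n=k, p=k/(k+R)
            cases = int(rng.negative_binomial(k, k / (k + R), size=cases).sum())
        if cases == 0:
            extinct += 1
    return extinct / trials

print(extinction_fraction(2.5, k=0.1))  # very heterogeneous: most chains fizzle
print(extinction_fraction(2.5, k=10))   # near-homogeneous: most chains take off
```

    Same mean R in both runs; only the dispersion differs, and with it the chance that an outbreak goes extinct.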


  14. By way of further clarification, I note the following passage in the cited paper:

    “Population heterogeneity can either be deterministic, due to differences in immune history among hosts or differences in host behavior, or stochastic, due to sudden environmental or social changes. Spatial structure can also act as a form of heterogeneity, if each region or infected individual is subject to different transmission rates, or degree of contact with other individuals.”

    It is the last-mentioned factor which is captured by a multiplicity of ‘K’ numbers. Such variances will have a bearing on the dispersion factor, ‘k’, but will not exclusively determine it. Also, note that ‘K’ is not the same as ‘R’ – although a high ‘K’ value may explain a high ‘R’ value within a given group or between groups.

    I’m not sure whether it is meaningful to ask what ‘K’ values were used in the Imperial College model; I don’t think it was that sort of model. However, since the dispersion factor is a measure of heterogeneity within a population as a whole, I think it should be meaningful to ask what the assumed ‘k’ value was. I suspect that homogeneity (in effect, a very large k) was assumed, although maybe not explicitly. As has already been observed by Jaime, the only hint at heterogeneity is the special attention given to schools as a sub-community, and this will only have been because the influenza model upon which the COVID-19 model was based would have allowed for the relatively high transmission rates in schools (actually something that is reversed in the case of COVID-19). Famously, there was nothing in the model to take account of nosocomial infection or care home vulnerability.

    All in all, one gets the impression that homogeneity has been the default assumption in SAGE thinking, particularly when one looks at the government’s alert level ‘formula’, in which the dispersion factor is notable by its absence. Surely I must be wrong about this, but that is the impression they give.


  15. As a footnote to my previous comment, I note the following definition given by Wikipedia:

    ” [Basic Reproduction Ratio] of an infection can be thought of as the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection.”

    If correct, that means that the ratio is basically a concept that assumes homogeneity, at least to the extent that the ratio applies.


  16. “I don’t think it was that sort of model…”

    Well indeed not. And yet both for social heterogeneity and inherent heterogeneity (genetic / immune system priming) one can easily reach out and see that these have been well-known factors for yonks (I’ve known of the genetic diversity thing myself for decades). Nor is putting these factors into models, at least in a first order sense, ‘too complex’, as Nick Lewis demonstrated over at Climate Etc. Very many nations went into lock-down, and per previous I think that via fear the publics drove their governments more than the other way around. But if the fear was stoked by various models such as the Ferguson one releasing results into the press, then they may still bear a large share of the burden of responsibility for events. Even discounting code / quality issues, how on Earth can they possibly justify assumptions of (largely) homogeneity? This makes a huge difference to outcomes.


  17. I have tried as best I can to follow the argument in this thread but keep running into a problem that either you’ve not considered or that I’ve failed to understand.
    Consider yourself in the earliest stages of the pandemic and you are tasked with creating a predictive model for the outcome in the UK. You already know that the population is not homogeneous in its reaction to the virus (many show no symptoms, severity is age-related – evidence from cruise ships, China and Italy), but you have little idea about the heterogeneity in the UK to this novel virus. You know from social differences that Italians and Chinese will differ from the British, but how do you determine the significance of those differences? Do you guess them, or for a start do you assume homogeneity? And perhaps you find you are still unable to adequately assess the degree and type of heterogeneity.
    Is it better to assume homogeneity, or guess at the heterogeneity?


  18. Alan:

    “…you are tasked with creating a predictive model for the outcome in the UK.”

    The Imperial College model is not new; it is quite old, i.e. not created for the current pandemic, and hence there has been plenty of time to construct it with generic theory that could easily be parameterized for new pandemics suddenly coming over the horizon. Notwithstanding that this is bound to result in some less-than-ideal approximations, to leave out enormous fundamentals means one will already know that the model has extremely limited value.

    “…but you have little idea about the heterogeneity in the UK to this novel virus…”

    Genetic variability is generic the world over; the average of immune-system priming factors may vary somewhat per nation (although this will be low if humanity is quite used to coronaviruses – four varieties of which circulate as the common cold, and of which, as Matt Ridley theorizes, at least one and possibly all came from prior ‘covid’ pandemics), but a range of immune-system responses (i.e. even over and above the variability they already have from genetic factors) would be expected everywhere.

    Social differences do need some tailoring per nation. E.g. whether old people live alone, or in extended family homes, in care homes, etc. Most countries are reasonably aware of their own societies! Only 1st order is needed.

    The idea of models is not to give the right answer. They won’t, too much is unknown. But they will give an approximate range of answers, and some idea of how the outcomes move depending on the particular factors. As Nick Lewis’ simple model incorporating a range of social and non-social heterogeneity shows, these factors can make a vast difference. Low end of range for herd immunity is 17% rather than up at 70 or 80%. Hence, while we don’t believe the 17% as true, we know that the 80% is likely not to be true either, so indeed we shouldn’t treat it as truth.

    The genetic and resistance factors have been taken into account for decades in looking at protecting mono-cultural agricultural crops from disease (they are far more vulnerable because we’ve taken out genetic diversity), and indeed in some cases figuring out how much diversity we can put back in to create protection without lowering yields too much. Are we not even using the same level of expertise to protect humans as we are using to protect our crops? I’m very surprised that so much knowledge seems not even to be contributing here. I think your point would stand if these factors were already in to a first order, and folks were complaining that we nevertheless still didn’t have a reliable ‘answer’. But they are not even considered in the first place, it seems. Given the summed variability can create result outcomes that are so radically different, 3 or 4 or 5 times different, leaving them out altogether simply isn’t an option.


  19. P.S. the pathogen itself may have more or less variability on the relevant timescales too. For instance this is very high for flu, many strains, which is why vaccines are only good for a year or so. So clearly one would include this as a variable too, and plug in the best guess value as data started to come in. If one had started with a value for a similar disease like flu, it’s looking like that wouldn’t be a bad first approx, although I guess estimates for evolution rate may now be falling. But there are still very many strains, and some idea of effects corresponding to same may already be starting to emerge. Genetic variability of the host population matters more for protection against such fast moving critters.


  20. Alan,

    If we accept maximum epistemic uncertainty with respect to the level of heterogeneity then any value of k is as likely as another. Homogeneity (the large-k limit) therefore would not hold a privileged epistemic position. It does, however, represent the worst-case scenario in terms of pandemic threshold and herd immunity calculations. It therefore appeals to those advocating the precautionary principle. That’s fine as long as things like lockdown don’t come with huge penalties. Also, the results are highly sensitive to levels of heterogeneity and so one ought to use as much insight as one has, no matter how limited.


  21. I think what I’m getting at is that with a novel virus there are initially so many unknowns and unknowables that an initial model must of necessity be crude and perhaps produce a worst case result. Was this used by the government as a working model, or was it considered “good enough for government work” and not upgraded or refined as data came in?


  22. Alan:

    “…what I’m getting at is that with a novel virus there are initially so many unknowns…”

    The level of novelty in the virus, unless it is truly outlandish, i.e. came from Mars (!), does not make much difference wrt the generic theoretical aspects above. ‘Novel’ is a relative term anyhow, and likely we’ve been assailed by waves of coronaviruses since before we were even human, in which case our inherent variability, which in part is specifically intended to protect us from disease(!), is critical to include. And indeed such difference as it does make will be contained within the ranges that a more appropriate model would produce, such that these can then be constrained more by adjusting parameters as the real-world data accumulates. In terms of John’s much neater summary than mine, you will be accumulating more knowledge about which k values and whatever to plug in. But you can’t plug in and progressively constrain when the parameters with which to constrain don’t exist!

    However, I can well believe that ‘the government’ didn’t and doesn’t know any of this. But it is very hard to believe that (at least many of) their experts don’t.


  23. Alan,

    I think part of the problem that Andy and I have is being able to take at face value the professed levels of uncertainty. I’ve mentioned before on this website that Prof Matt Keeling had this to say at the Lords Science Committee:

    “With hindsight, it’s very easy to say we know care homes and hospitals are these huge collections of very vulnerable individuals, so maybe with hindsight we could have modelled those early on and thought about the impacts there.”

    Playing up uncertainties is convenient because it allows for this sort of mealy-mouthed excuse for what actually looks to me like sheer negligence. I’d love to know what is and isn’t achievable in the world of epidemiology but I can’t accept that statements such as the above are contributing to my understanding.


  24. Geoff,

    They’re at it again:

    You would think that someone might question a national R value that can change from 1.06 to 2.88 in the space of 48 hours. Surely they haven’t made the mistake of attempting a calculation when the national infection levels are too low to do so meaningfully. After all, it isn’t rocket science.

    You’re damned right it isn’t – it’s rocket again science!


  25. I was also thinking of giving that article as an example on this thread, John. There are so many things to pick apart in the opening three paragraphs. But never fear, right above them we have the reassuring link: Why you can trust Sky News. Can’t think why I didn’t click on that.

    Having said which, is this number well over one a bogus artifact of increased testing or is there some genuine cause for concern? Or just random effects, as you say, from having too little data with which to work. I’ll assume a nothingburger until a week or more has elapsed.


  26. “… or is there some genuine cause for concern?”

    I honestly don’t know yet. In my house, R=0 and I aim to keep it that way.


  27. As far as I can see, what’s happening in Germany is that a tiny number of new national cases is being skewed by some localised outbreaks and they’re calculating the R from that data, which is highly unrealistic, but the MSM of course leaps on the R value as evidence of a second nationwide peak emerging. The German government will probably deal sensibly with this and just control these isolated pockets where the virus is still thriving, but when this happens in the UK, the media, psycho Hancock, and Rainbow Boris Bungle are going to flip their beanies and shut down all abattoirs and chicken factories (except halal of course) and demand we all go vegan or something. You’ll have to register your name and address at the supermarket and book an appointment beforehand to be able to buy lamb chops.


  28. Jaime,

    I don’t expect the MSM will have the first clue how to recognize the onset of a second wave. They know it will be coming round the mountain when it comes, but they have no idea whether it will be wearing pink pyjamas.


  29. Oh dear, what goes up must come down so they say. This particular Second Coming of Covid wasn’t even wearing pink pyjamas; apparently it was stark naked.


  30. Geoff,

    They’re at it again. I have just finished reading that my region’s ‘health leaders’ are planning to calculate ‘hyper-local’ R numbers for each town individually. Only by that means do they believe they will determine the ‘true’ R numbers.

    And I think that only by calculating the value for a single brain cell will one determine their true IQ.

