Twice as Likely as Not: Attribution of Increased Probability of Discrete Events

We are obsessed by time series, and rightly so. Daily changes in the stock market; monthly movements of unemployment or inflation; annual trends in birthrates or immigration: these graphs are the key to understanding the direction in which our society is heading, and how fast. It is quite impossible to formulate sensible economic or social policies without these undulating icons, mystic symbols of the power of prophecy which recall the sinusoidal snakes coiled around the caduceus of Hermes the Messenger God, patron of scientists and liars, or the knot of fornicating serpents which the seer Tiresias swatted, causing him to be transformed into a her, with all the ensuing embranglement on American campuses, conflict in the courts of Ontario and ebullition on the streets of Brighton.

Where was I?

And climate has its own key graph, the one showing monthly global temperature anomalies, available from GISTEMP, HadCRUT, UAH and another one in California I forget. The last one I looked at was for June 2018 and it showed global temperatures plummeting. In fact the fall in the past eight months was enough to wipe out half the rise of the previous 150 years. (I’m quoting from memory. I can’t be bothered to look up the graph, so I may have got those figures slightly wrong; please correct them if necessary. But you get the idea.)

The point I’m making is how ridiculously easy it is to construct a sentence based on a time function which is literally true, but so misleading that it will suggest to a reasonable person the precise opposite of the truth. (Watch Bloomberg or any of those other business channels. That’s all they do, 24/7.) Of course temperatures are rising (as far as we know, and if you discount the known problems with the data: the urban heat island effect; the fact that the data were never gathered for this purpose, and so would be thrown out by any reasonable scientist working in a field such as epidemiology, where reliable data are needed for saving lives, and not merely for titillating readers of the Environment pages of the Guardian; the deliberate pollution of land surface temperatures with random figures plucked from the ocean in buckets of leather or of oak; and so on). And we sceptics do discount these scientific monstrosities most of the time, because they’re not the most interesting thing. So we ignore them and concentrate on the essential, which is what reasonable people indulging in a science-based discourse habitually do.

(Here’s what we need: a certified scientist to stand up in front of the TV cameras or a Parliamentary Committee brandishing an oaken bucket and say: “Listen, fuckwits. There are a couple of known unknowns in the data that cover three quarters of the planet. The first is that circa 1939 the three quarters of the world’s merchant fleet owned by the United Kingdom stopped recording ocean temperatures, for reasons nothing to do with climate change. And the second is the difference in the temperature of the ocean as measured in water drawn up in oaken buckets as opposed to leather ones. For this and other reasons, the recorded temperature rise of approx. 1°C over the past 150 years is subject to an error of a degree either way, and is therefore entirely useless.”

And then do the Cook and Lewandowsky comic turn with the bucket on the assembled MPs/climate scientists.)

It is hugely significant that climate hysterics hardly ever refer to time series graphs. They prefer absolutes – references to fixed points: “Eleven of the twelve hottest years have all occurred in the past decade”; “Bognor Regis has just recorded the highest temperature since King George III’s wig melted in 1819” – that kind of thing.

At the Guardian and the Conversation, where graphics in comments are forbidden, I used to summarise the situation with a formulation like: “Global temperatures have been zigzagging slowly upwards since about the eighteenth century, with an upward zig in the last three decades of the twentieth century indistinguishable from a similar zig in its first three decades.” These observations hardly ever provoked a response. It’s as if climatists can’t deal with process, movement, change. Everything is fixed points, with lines joining them up to form a picture if necessary, like the numbered puzzle pictures in your old Rupert Annual.

History as praxis, preached Marx, and Heraclitus was already savvy to the flow of things 25 centuries before. But when top climate scientist Chris Rapley produced his monumental one-man bore-in at the Royal Court Theatre, financed by the European Union and the British Ministry of Culture, Sport and Wheelchair Access, he called it 2071, after the year when his granddaughter would be the same age as he was then, Gaia willing. Leaving aside the morbid weirdness of imagining your little grandchild as a wrinkly old crock, consider the extraordinary hubris of positing a future 55 years ahead, without paying the slightest attention to what might occur in the intervening half century. You can’t get there from here, Chris. And anyway, by then you’ll be dead, and so will I.

And that is the dirty dark secret at the heart of climate porn. It is a fantasy that, by definition (or rather, by the workings of the inevitable, indisputable laws of biology), we won’t be there to see realised. The prophecies of climate catastrophe (at least the ones that aren’t evidently false, or soon to be falsified) are placed far in the future, and therefore beyond the ken of us wrinklies. I don’t want to be excessively cruel to Chris Rapley of University College London, formerly of the British Antarctic Survey and the Science Museum, but Vladimir Nabokov got there before him in Lolita when he has his hero Humbert Humbert fantasising about having sex with his yet unborn granddaughter. Humbert, with his illicit passions, necessarily lived entirely in the present, and could only envisage the future in terms of other presents (a gift of the English language, that) yet to come: the birth of his daughter, and then his granddaughter. The future as three data points.

And this, I think, explains to a large extent the climatist’s fixation on the one event, the fixed point; the record temperature; the Big One.

The clearest example I’ve come across of the reification (another Marxist concept) of the Event, the Timeless Fact, is the brief career, circa 2012-2014, of climate doomster Stephen Emmott, which I recorded in a number of angry articles here.

At the Royal Court (again), the Avignon Festival, Penguin Books, on the BBC and at the Science Museum, he preached his message of impending Apocalypse. This Microsoft geek, with a team of forty top scientists and petabytes of free computer access, wrote a program which was supposedly a model of Life, the Universe and Everything. Naturally it burst, exploded, fell at the first hurdle (whatever programs do), which Steve interpreted as evidence for the impending end of the world. He published his supporting graphs on a Microsoft website, and I commented on them at

Some of them had two data points.

But his big thing was not time series, but Facts. Like: Every Big Mac consumes 3000 litres of water.

Every cup of coffee 100 litres. The details are in this article by Alex Cull at

You see, every Big Mac contains a certain percentage of a cow, and each cow requires x acres of grassland, on which y inches of rainfall fall per year. You get the idea. So each Big Mac somehow contains a virtual 3000 litres of water. Which somehow stays in the Big Mac, depriving the rest of the world (which hasn’t been privy to the consumption of this icon – ugh – of the consumer society) of this lifegiving resource.

History as praxis, said Marx, in opposition to Hegel, who claimed, in his Prolegomena to a Study of History or whatever it’s called, that the Swiss are a warlike race because they live in the mountains, which is why they’ve been at peace for four centuries. Or something (based on a time series with two data points).

Things change, said Heraclitus: go with the flow.

Climatists don’t get that. Take the current hysteria around the heatwave in Europe. Jaime Jessop has brilliantly exposed the so-called science behind the attribution of some hot weather to “climate change.”

She did it by dragging out the data from behind the filing cabinet where they were hiding and interrogating them, mercilessly and in detail.

But there’s another objection – not scientific, but grammatical, and therefore philosophical – to the claim that is being made in this kind of “science”: that a certain event (a heatwave) was made x times more likely by a certain circumstance (man-made global warming).

That x% of the current warming is due to man-made climate change is a statement that looks and feels empirical, so let it pass. (That current opinion puts x at 110% looks like a deliberate attempt to rile honest folk, but let it pass as well.)

But climatists are little interested in measurements on a continuum. What they like is Facts. Things. Binary is-or-isn’t Events.

So the current heatwave either is, or isn’t, made more likely by Global Warming. And it is. Oh it is: twice as likely in Scandinavia, where it started a couple of weeks ago, when it was unusually cold in Spain. And also in Spain today, where records are currently being broken, while temperatures in Scandinavia are reverting to normal.

Météo-France proudly announced heat records broken in six places in France tonight, three of which were within ten miles of where I live. They could have made it seven places if they’d stuck a thermometer in my strawberry patch, or eight if they’d taken the temperature of my withering tomatoes. When you’re looking for a thing, you’re sure to find it. Identifying a trend is not so easy.

The word “climate” is derived from the Greek for a slope, or trend.

It is normal that climate hysterics should concentrate on things, like heatwaves or hurricanes. It is difficult to interest citizens in an annual temperature rise of 0.01°C, which might, if the consensus of scientific opinion is to be believed, rise to 0.02 or (woe is us) even 0.03°C. On the other hand, a heatwave or a hurricane is not a barely detectable Trend, but a Thing.

But the problem with Things is precisely that they are discrete. Where you live, there either is, or isn’t a heatwave. And similarly, for any given isle in the Caribbean, there either is, or isn’t a hurricane. And according to the experts, the number of hurricanes in the Caribbean, or heatwaves in Europe, is not increasing.

(Why do I believe the experts? Who are they, anyway? Roger Pielke? Richard Tol? What right have I to value their judgement above that of, say, Rupert Read, lecturer in Environmental Philosophy? Let’s face it – none.)

As Jaime details in her article, climate experts are certain that the current European heatwave was made twice as likely by climate change. But the statistics do not show twice as many heatwaves occurring now as previously (however you define “now” and “previously”). So the “twice as likely” is a provisional estimate to be confirmed or disproved by future events. (A similar event next year would surely convince 97% of the population, though it would just as surely have no statistical value.) How long must we wait? Until Chris Rapley’s granddaughter is a stout matron and he a dribbling idiot? Or longer? We’re talking political decisions of great import, so these apparently arid statistical questions matter.

The scientists that Jaime demolishes in her article think they’ve found the answer to the question (which no sane person has ever asked) “How much more likely was such and such an event made by climate change?” And the answer they come up with in this particular case is “twice as likely.”

To give this odd claim a meaning you have to place it in a context, specifically, a temporal context. The normal context for such a claim would be the moment it was uttered, which was after the event happened. At this moment, the probability of it having happened was 1. It is quite impossible to double the probability to 2. Of course, it might not have happened, in which case the probability of it having happened would have been zero. In that case the fact of climate change making it twice as likely to have happened would be without effect. 2 x 0 = 0.

It follows that the claim that such-and-such an event was made x times more likely by such-and-such a causal factor is meaningless if understood as referring to the moment it is uttered, after the event in question. It can only be given a sense if it is understood as being uttered before the event, i.e. as a prediction. In which case the statement “x was made y times more probable by z” can be more simply reformulated as the prediction: “x will happen y times more often, because of z.”

Which may be true, or maybe not. We’ll have to wait and see, won’t we? That is the only possible sense which can be given to such a statement, however many days of computer time it took to formulate.

And when you’re talking about events like heatwaves or hurricanes, which happen a small number of times per decade, you’re going to have to wait a long time to confirm your hypothesis.
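To put a rough number on that wait, here is a back-of-envelope sketch. All the rates are invented for illustration, and a simple one-sided Poisson test stands in for whatever the attribution teams actually do; the question is how many decades of records it takes before a doubling of a rare event’s rate reliably shows up.

```python
# Back-of-envelope: how long must we watch for rare events before a
# doubling of their rate shows up in the record? All numbers invented.
import math
import random

random.seed(0)

def poisson_sf(k, mu):
    # P(X >= k) for X ~ Poisson(mu), via 1 - P(X <= k - 1)
    term = math.exp(-mu)
    cdf = 0.0
    for i in range(k):
        cdf += term
        term *= mu / (i + 1)
    return max(0.0, 1.0 - cdf)

def sample_poisson(mu):
    # Knuth's method; fine for the modest means used here
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while p > limit:
        p *= random.random()
        k += 1
    return k - 1

def power(rate0, rate1, decades, alpha=0.05, trials=2000):
    # Fraction of simulated records (true rate: rate1) in which a
    # one-sided test rejects the baseline rate rate0
    hits = 0
    for _ in range(trials):
        n = sample_poisson(rate1 * decades)
        if poisson_sf(n, rate0 * decades) < alpha:
            hits += 1
    return hits / trials

# Baseline: one heatwave per two decades; claim: the rate has doubled.
for t in (5, 10, 20, 40):
    print(f"{t:>2} decades of records -> detection probability {power(0.5, 1.0, t):.2f}")
```

On these invented rates, the detection probability only climbs past the conventional 80% after something like twenty decades of observations; the rarer the event, the longer the wait.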

To summarise in the manner of Rupert Read’s beloved Wittgenstein: an event that happened, happened, with a probability of having happened of 1. Making it twice as likely to have happened doesn’t change anything about the fact that it happened.

An event that didn’t happen has a probability of having happened of zero. The fact that there was a “forcing” (climate change) that made it twice as likely to happen doesn’t change the fact that it didn’t happen.

Some supposedly scientific statements are empty of meaning.

23 thoughts on “Twice as Likely as Not: Attribution of Increased Probability of Discrete Events”

  1. The great warming is something they never give details on. Details spoil the narrative. If they said this year is 0.02 ± 0.1°C warmer than 1998, then it is hard to hang the world going to hell in a handcart on that, isn’t it?


  2. The same thing is going on in the Brexit debate: the use of data points without context, commentary or understanding. So the pound fell against the Euro in the months after the vote – with the assistance of Mark Carney’s odd decision to cut interest rates – and this was widely hailed, even by academic economists, as evidence of the disaster that was Brexit, even though Brexit had not of course happened. It was conveniently forgotten, for example, that many experts had been thinking that the pound was overvalued. Few people pointed out that Carney’s action could be construed as a desire to assist the Remainer narrative, etc. All that was needed was a drop in the pound.


  3. You’re correct Geoff that the impetus behind the current trend to attribute single events to an underlying (but boringly, imperceptibly slow-moving) cause is less a result of scientific curiosity or even altruism (forewarned is forearmed, and all that) and much more to do with messaging. Hence the urgency of the attribution. If experts can tell you why it is that, at this very moment, you’re sweltering in your generously insulated high-rise inner-city flat with energy-efficient double-glazed windows which you can only open by 2 inches, or your lawn has turned brown and parched in your delightful cottage garden and you can’t sleep properly at night, then that’s worth a hundred thousand words about some hypothetical Thermageddon projected to happen when your unborn grandchild is old and wrinkly.

    These scientists communicate their findings using the nebulous concept of ‘probability’ which the general public have some intuitive understanding of. However, they actually talk about ‘return times’ of events in their scientific analyses, these being the expected or observed interval between two successive rare events. But obviously, the headline ‘Scientists Find That The Return Time Between Heatwaves Has Halved Due To Climate Change’ is not quite as catchy as ‘Scientists Find That The Probability of Extreme Heatwaves Has More Than Doubled, blah, blah, blah’.

    This is what makes the most recent ‘attribution’ so absurd. The empirical data from the Scandinavian stations did not reveal any definite observed return time for this particular heatwave, on account of the fact that the observed summer weather variability was so large, the actual data so noisy, that there was no discernible trend in the observations! Also, from the stations further north, the return time was not available from the data because basically such extreme temperatures have not been recorded in the last 100 years, so the return time had to be estimated using the models only – which were all over the shop as well! Somehow, magically, all this observational and climate modelling uncertainty translated into a media headline stating ‘Climate Change Made Heatwave More Than Twice As Likely’. That’s bad enough, but then the authors made the additional unsubstantiated assumptions about the lack of decadal variability and the attribution of all global warming to GHGs since 1900 to get their final result. Truly shocking.
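For what it’s worth, the return-time and probability framings contrasted in the comment above are related by simple arithmetic. A sketch, assuming (purely for illustration) independent years and a constant event rate, not the method of any particular attribution study:

```python
# Return time (expected years between events) vs. the probability of at
# least one event in a given year, under a constant-rate model where
# events arrive at rate 1/return_time per year.
import math

def annual_prob(return_time_years):
    # P(at least one event in a year)
    return 1.0 - math.exp(-1.0 / return_time_years)

for rt in (100.0, 50.0, 10.0, 5.0):
    print(f"return time {rt:>5.0f} yr -> annual probability {annual_prob(rt):.4f}")

# For rare events the two headlines are near-synonyms:
# halving the return time roughly doubles the annual probability.
print(f"ratio (50 yr vs 100 yr): {annual_prob(50.0) / annual_prob(100.0):.3f}")
```

For long return times the ratio sits just under 2, so the two headlines are interchangeable; for frequent events the equivalence breaks down, since probabilities saturate at 1 while rates do not.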


  4. Geoff, I admire your language skills (and resent the fact that I lack them), but not all of your logic. It might be possible to attribute after the event, if you only use information from before the event. If you do that, then indeed you are making a prediction. There is no assurance, however, that the attribution “experts” kept to this stricture.


  5. That is the same sort of attribution used by religious fanatics:
    The cats were out during the plague, the plague is evil, so cats are demons working for the devil.
    And if lonely old ladies like cats, they must be witches.


  6. John Snow used the same type of logic and a visual aid (but not a graph) to attribute cholera deaths in Soho to polluted water. Employing a new-fangled and highly controversial germ theory, he used a new method of plotting the locations of deaths on a map to identify a possible contaminated source of water. His prediction that cholera deaths would diminish if use of the well ceased was proven correct when he convinced the authorities to stop the well’s use (pictures of Snow removing the well pump handle are false). People were able to attribute the cause of cholera using Snow’s prediction, but also, after the well had been closed, by comparing the incidence of cholera before and after the event.
    If anyone had predicted this year’s heatwave ahead of time using attribution theory, then they have a right to be listened to. If they can only do it after or during the event, not so much, and even then only if the prediction uses information available prior to the heatwave.


  7. Jaime,
    Thank you for showing that the latest ensemble chosen by the Emperor is the same as the others.
    Thank you for making it so explicitly clear that the Emperor is still walking around as he has been.


  8. Geoff,

    “And when you’re talking about events like heatwaves or hurricanes, which happen a small number of times per decade, you’re going to have to wait a long time to confirm your hypothesis.”

    Spoken like a true frequentist! R.A. Fisher, regarded by many as the father of modern statistics, would be very much in agreement with you. He absolutely abhorred the idea of calculating probabilities for single events. And you will find that there are plenty of modern-day frequentists who agree with you. However, there exists a multitude of Bayesians out there who would disagree. For example, such Bayesians would see absolutely nothing wrong with calculating the odds of a future, unprecedented event, such as the accidental precipitation of a nuclear war.

    I’m not sure how this weather event attribution thing works (I’ve promised myself I will study it in depth when I get the time one day) but I am guessing that Bayesian inference lies at its foundation. This would allow for two basic calculations:

    a) Given a set of circumstances, how likely is it that a future event will happen? (induction)

    b) Given that something has happened, how likely is it that a given set of circumstances pre-appertained? (abduction)

    Induction would enable one to assess the likelihood that heatwaves may become a new norm, assuming AGW is responsible for the present warming. Abduction may allow one to assess how likely AGW is a reality, given the occurrence of a given heatwave. However, to do any of this one has to create a Bayesian Belief Network. And therein lies the rub. You do this only when data is incomplete, and expert opinion will often substitute for empirical data. This is hardly a world in which the science is settled! Furthermore, I get confused as to whether the CAGW pundits are engaging in induction or abduction when they write their headlines. They seem in danger of disappearing up their own backsides in a fit of circular reasoning.

    As I say, I may have this all wrong. I still need to look into the theory behind all of this weather event attribution. In the meantime, Geoff, try to keep out of the frequentist vs Bayesian debate. Those statistician fellows can get awfully rough. And believe me, there is nothing worse than a bit of statistical roughhousing.
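The two directions distinguished in the comment above can be put into a toy Bayes’-rule calculation. Every probability below is invented for illustration; none comes from any climate study.

```python
# Toy Bayes'-rule calculation: "induction" forward from beliefs to
# events, "abduction" backward from an event to beliefs.

p_warming = 0.5              # prior belief that warming is occurring
p_hw_if_warming = 0.2        # P(heatwave this summer | warming)
p_hw_if_not = 0.1            # P(heatwave this summer | no warming)

# (a) "Induction": how likely is a heatwave, given current beliefs?
p_heatwave = p_hw_if_warming * p_warming + p_hw_if_not * (1 - p_warming)

# (b) "Abduction": a heatwave has occurred; update the belief that
# warming is occurring (Bayes' rule)
p_warming_given_hw = p_hw_if_warming * p_warming / p_heatwave

print(f"P(heatwave)           = {p_heatwave:.3f}")
print(f"P(warming | heatwave) = {p_warming_given_hw:.3f}")
```

On these numbers a single heatwave nudges the belief from 0.5 to about 0.67, which is the article’s point in another guise: one discrete event carries only modest evidential weight on its own.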


  9. I should add that abduction has nothing to do with aliens and rectal probes. Perhaps it would have been better had I used one of the alternative terms: Abductive inference or retroduction.


  10. The reverend Bayes to you. He was a vicar after all.

    My thanks to John Ridgway for trying to enlighten me. My efforts to understand Bayesian statistics have been a failure. And I’m afraid your explanation of Read and Taleb’s use of the precautionary principle likewise left me baffled. What’s an infinite result? You can only kill off 7 billion human beings, not an infinite number.


  11. My thanks Geoff for informing me that MiaB was both cleverer and funnier than I first thought. Must admit my mind was mostly upon the latest climate porn associated with the word “Hothouse”.


  12. Geoff,

    The use of the term ‘infinite’ is taken from the Taleb et al paper. When talking of ‘ruin problems’, they say:

    “When the impact of harm extends to all future times, i.e. forever, then the harm is infinite. When the harm is infinite, the product of any non-zero probability and the harm is also infinite, and it cannot be balanced against any potential gains, which are necessarily finite.”

    Of course, when talking of ruin, they are taking a particular perspective. Consequently, the total annihilation of the human race is seen by them as an infinite harm, no matter how indifferent the rest of the universe may be to such an outcome.

    Actually, the only risk I have ever encountered that could be considered to be infinite, irrespective of the perspective taken, was that considered by the LHC Safety Group when deliberating the risks associated with operation of the Large Hadron Collider. Specifically, their concern related to the current vacuum state of the universe. The presupposition is that the vacuum energy that universally appertains is at ground state since, if it weren’t, the state would be unstable and would have decayed before now. But the group asked: what if the vacuum energy state is currently a false minimum, i.e. a meta-stable state? And what if a high-energy event in a particle accelerator is all we need to destabilise the existing vacuum, creating a bubble of ‘true’ vacuum? The bubble would then expand at approaching the speed of light until it eventually engulfed the observable universe. The problem is that such a new order could not hope to support physics and chemistry as we know them, let alone biology. A more absolute catastrophe is impossible to conceive.

    It is a sobering thought that such risks were seriously considered before being dismissed. But Taleb wouldn’t have done so if he were in charge!


  13. For the avoidance of doubt, in my previous comment I had meant that Taleb would not have dismissed the risk. The problem is that any risk that is conceivable will take on a level of plausibility, i.e. even though it may be vanishingly small, the probability remains finite. In Taleb’s arithmetic, this makes the risk infinite (finite probability multiplied by infinite impact equals infinite risk). Using the same logic he applies to AGW, he would never have switched on the LHC.


  14. I’m not convinced that you can ever be justified in using infinity in the calculation of anything involving our finite universe. In fact I’m damned sure you can’t. Even the destruction of the universe by the formation of a perfect vacuum in the Large Hadron Collider doesn’t hack it. What’s infinitely important about that? It’s just seven billion bods popping their clogs all at once instead of one by one over the next century. It’s certainly the biggest disaster imaginable, but it’s not infinite. And anyway they’ve already got a perfect vacuum. Doesn’t Brian Cox work there?

    What bewilders me is how anyone as clever, or crafty, as Taleb could be persuaded that climate change is a candidate for infinite danger status. Nuclear war is a far better bet. The worst we’ve heard so far is that more people will die of excess heat, while fewer will die of excess cold. And some low-lying countries like the Marshall Islands will disappear. But 40% of the population of the Marshall Islands already lives in one town in Kentucky or somewhere. That must be pretty dire, but not infinitely so.


  15. Geoff,

    It may be difficult to convince you that an infinite risk may be associated with a finite loss but let me give it a go. If this doesn’t work, we will probably have to call it a day because I will already have given it my best shot. Also, this is going to take me some time, so please bear with me.

    The first thing to appreciate about risk is that it is always subjective. It involves a threat to a stakeholding and so there must be one or more stakeholders from whose perspective the risk is calculated. Change that perspective and the risk calculation will also change.

    Secondly, when calculating risks, costs, benefits or utilities, analysts will seek to express impacts in a common currency (normally monetarised). So, for example, when deciding what level of financial investment can be justified to prevent a death, the impact of such a death will be expressed in terms of the financial burden it represents to the society that will be footing the bill. So contrary to common wisdom, one can put a price on a life – safety analysts do it all the time. This works well when one is dealing with relatively small death counts, but the calculation starts to distort when one approaches existential levels of mortality. This is because of the first point I made. As the death rate increases, an increasing burden is being borne by a diminishing population. The context and calculation of the risk is ever changing because the stakeholding is ever changing. This can be factored into the calculation by taking into account the changing ratio between the size of loss and the size of the stakeholding community, i.e. multiply by a factor calculated by dividing the number of dead by the number of survivors.

    The logical extension of the above is the possibility of an infinite risk, since the death of the last survivor results in a division by zero. Of course, this is meaningless; infinities are a sure sign that the conceptual basis has broken down. But that is the very point being made by those who advocate the precautionary principle. Conventional treatment of risk will lead to such absurdities as the end point is reached. In fact, the risk and cost/benefit functions will have become ill-behaved long before the point of singularity. Risk calculations are notoriously problematic when dealing with ultra-high impacts combined with ultra-low probabilities. Also, it is worth bearing in mind that the experience of any benefits one might construe from the situation will also be approaching infinity as the body count mounts and the surviving stakeholder community diminishes, so cost/benefit calculations are doubly blighted in such circumstances.

    Note that the above infinities arise precisely because the community of interest is finite. It is not the population size that determines the scale of risk, it is the ratio between numbers of victims and survivors. The same effect will be seen whether one is dealing with a population of seven or 7 billion – it is an artefact of the subjective nature of risk and the mathematics being used to calculate it. Even the possibility of a single death could be perceived as an infinite risk if you are that individual (not that this stops people taking such risks for the most trivial of potential benefits).

    The only way to escape the above conundrum is to change the perspective for calculation of the risk, so one is now seeing it as a member of a broader, unaffected community, e.g. cockroaches. That is why I cited the vacuum bubble as the only truly infinite risk, since there can be no broader community looking in on the fate of the universe (don’t get me started on God’s eye views; even if He existed I’m not sure He would be the sort to worry about risk).

    Another way of introducing infinities into the calculation of risk is to take into account temporal factors. If the impact of a risk is deemed to be proportional to the scale of damage and the temporal extent of damage, then irreversible damage will be viewed as infinite, simply because it introduces a temporal infinity. Personally, I’m less impressed with this ‘you’re a long time dead’ line of reasoning. When something perishes, the potential for a finite lifespan has been lost; one has not gained an infinite non-existence.

    If you are less than impressed with these quantitative arguments, let me try a qualitative one. When calculating costs and benefits, one always has to ask the question ‘Cui bono?’ Such calculations cease to make any sense, however, once there is no longer any cui to bono. Similarly, risk is always subjective, so any calculation of risk that postulates the ultimate absence of subjects may be seen as meaningless. Infinities are not the problem; it is the lack of a sound conceptual logic.

    Anyway, none of this has been offered as an argument for Taleb, the precautionary principle or CAGW. I simply seek to explain why risk analysts can be found talking of infinite risk when dealing with finitely bound scenarios. As a result, I hope you might be more sympathetic to the idea, but I would understand any residual scepticism.
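The blow-up described in the comment above can be seen in a few lines. This is a literal, illustrative reading of the victims-to-survivors ratio the comment proposes, not anyone’s actual risk methodology, and the numbers are invented:

```python
# The victims-to-survivors weighting suggested in the comment: scale
# the raw loss by dead/survivors. The weighted loss diverges as the
# survivor count approaches zero, though the death toll stays finite.

def weighted_loss(dead, population):
    survivors = population - dead
    if survivors == 0:
        return float("inf")  # the division-by-zero limit described above
    return dead * (dead / survivors)

POP = 7_000_000_000
for dead in (1_000_000, 3_500_000_000, 6_900_000_000, 6_999_999_999):
    print(f"{dead:>13,} dead -> weighted loss {weighted_loss(dead, POP):.3e}")

# No survivors left: the weighting declares the risk infinite.
print(weighted_loss(POP, POP))
```

The divergence is an artefact of the chosen weighting, not of the loss itself, which is exactly the comment’s point about infinities signalling a breakdown of the conceptual basis.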


  16. To be honest I don’t see much new in this paper. The positive feedbacks were identified (actually posited) more than a decade ago and all have been questioned and criticised. I looked into one of them – the melting of the permafrost and release of methane. The low thermal conductivity of earth materials means that thawing of the permafrost is likely to proceed from the base upwards (driven by geothermal heat flow), rather than from the top down. So any increase in surface temperatures will be immaterial.
    My wife and I put this to Tim Lenton when all of us were at UEA. The real advantage of working in an Environmental Sciences department is that you have access to people of different expertise. Lenton was edging into our areas of expertise but didn’t want to know. I suspect he hasn’t changed a jot. I have an intuitive suspicion of positive feedbacks working over geological time periods. Lenton believes in them.


  17. Sorry, my last post should have been associated with Ben Pile’s latest. I would be grateful if it could be transferred.

