The Sensitivity of Climate Scientists
Guest post by Clive Best
Every time a new result shows a low climate sensitivity (ECS), climate scientists queue up to rubbish it. God forbid that anyone should imply that CMIP5 models are running too hot! Even Myles Allen received such a battering when referring to his recent paper on the BBC Today programme:
“that a bunch of models developed in the mid 2000s indicate that we should have got to 1.3C by the time emissions reached what they are today (cumulative emissions), whereas we have only reached 1C”.
The predictable outcry from his fellow ‘scientists’ was enough to force him to write a ‘clarification’: see “Guest post: Authors respond to misinterpretations of their 1.5C carbon budget paper”.
The one-third of ESMs that simulate the warmest temperatures for a given cumulative amount of inferred CO2 emissions determine the carbon budget estimated to keep temperatures “likely below” 1.5C. By the time estimated cumulative CO2 emissions in these models reach their observed 2015 level, this subset simulates temperatures 0.3C warmer than estimated human-induced warming in 2015 as defined by the observations used in AR5 (see figure below). It is this level of warming for a given level of cumulative emissions, not warming by a given date, that is relevant to the calculation of carbon budgets.
The remaining carbon budget before we reach 1.5C was found to be ~3 times greater than models predicted! In other words, the sensitivity of temperature to cumulative emissions is lower than the models ‘project’. That simply means that Earth System Models have got the carbon cycle wrong. If the carbon cycle is wrong then the carbon feedbacks must be wrong too, which would also reduce their values of ECS (the warming for a doubling of CO2).
Now a new paper by Kate Marvel, Gavin Schmidt and co. continues this attack on observationally derived values of ECS.
An emerging literature suggests that estimates of equilibrium climate sensitivity (ECS) derived from recent observations and energy balance models are biased low because models project more positive climate feedbacks in the far future. Here, we use simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to show that across models, ECS inferred from the recent historical period (1979-2005) is indeed almost uniformly lower than that inferred from simulations subject to abrupt increases in CO2 radiative forcing. However, ECS inferred from simulations in which sea surface temperatures are prescribed according to observations is lower still. ECS inferred from simulations with prescribed sea surface temperatures is strongly linked to changes to tropical marine low clouds. However, feedbacks from these clouds are a weak constraint on long-term model ECS. One interpretation is that observations of recent climate changes constitute a poor direct proxy for long term sensitivity.
Lewis & Curry and others all base their estimates of ECS on an energy balance model combined with observed temperatures. See also A new measurement of Equilibrium Climate Sensitivity (ECS)
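For readers unfamiliar with the method, here is a minimal sketch of the energy-balance calculation, ECS ≈ F_2x·ΔT/(ΔF − ΔQ), which is the standard formula used in such studies. The input numbers below are illustrative round values of roughly the right magnitude, not the figures from Lewis & Curry.

```python
# Minimal sketch of an energy-balance ECS estimate:
#   ECS ~= F_2x * dT / (dF - dQ), where
#   dT   = observed change in global mean surface temperature (K)
#   dF   = change in effective radiative forcing (W/m^2)
#   dQ   = change in planetary heat uptake, mostly ocean (W/m^2)
#   F_2x = forcing from a doubling of CO2 (~3.7 W/m^2)
# The inputs are illustrative only, not those of any particular study.

def energy_balance_ecs(dT, dF, dQ, F_2x=3.7):
    """Equilibrium climate sensitivity implied by an energy-balance model."""
    return F_2x * dT / (dF - dQ)

if __name__ == "__main__":
    dT = 0.8   # K, warming between base and final periods (illustrative)
    dF = 2.3   # W/m^2, forcing change (illustrative)
    dQ = 0.5   # W/m^2, heat uptake change (illustrative)
    print(f"ECS ~ {energy_balance_ecs(dT, dF, dQ):.2f} K")   # ~1.6 K
```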
Kate Marvel is now saying these are way too simplistic! She implies that the long-term feedback effects only emerge hundreds of years from now. Even if that were true, it is still hard to justify reacting now to some future hypothetical problem which cannot be tested any time soon. The paper is really just stating that the models are right, so there is no point in looking at the temperature record to determine ECS.
This is like a ‘Tablets of Stone’ argument. Only an elite priesthood is qualified to interpret their meaning!
Contrast this to the Millar et al. carbon budget paper, or to the recent Cox et al. sensitivity paper, both of which are also based on CMIP5 models. Cox & co. derived a low value of ECS = 2.8±0.5C by showing that the over-sensitive models predict too much inter-annual variability. Only the few models with low sensitivity came close to the observed variability.
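For context, the Cox et al. constraint is built on a variability metric Ψ = σ_T/√(−ln α₁), computed from detrended annual global temperatures (σ_T the standard deviation, α₁ the lag-1 autocorrelation), which in a simple one-box model scales with ECS. The sketch below shows the arithmetic only; it uses a synthetic temperature series and a plain linear detrend rather than the windowed detrending of the paper.

```python
import numpy as np

# Rough sketch of the Cox et al. (2018) variability metric
#   Psi = sigma_T / sqrt(-ln alpha_1)
# In the paper Psi is regressed against ECS across CMIP5 models; here a
# synthetic red-noise "temperature" series illustrates the calculation.

def psi_metric(annual_T):
    """Cox et al.-style variability metric from an annual temperature series."""
    years = np.arange(len(annual_T))
    # simple linear detrend, standing in for the windowed detrending used
    # in the paper
    trend = np.polyval(np.polyfit(years, annual_T, 1), years)
    resid = annual_T - trend
    sigma = np.std(resid)
    alpha1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    return sigma / np.sqrt(-np.log(alpha1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, x = [], 0.0
    for _ in range(60):              # synthetic series, illustration only
        x = 0.6 * x + 0.1 * rng.standard_normal()
        T.append(x)
    print(f"Psi = {psi_metric(np.array(T)):.3f} K")
```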
So this whole idea that climate models have some sort of status equal to a real physical theory like Relativity needs to be rebutted. What is the probability that every climate model has bugs? Anyone who writes code knows that the answer is 100%: you cannot develop a million lines of code that are bug-free. Then we have Earth System Models (ESMs), where hugely complex biological, geochemical, cloud nucleation, and even human processes are implemented simply as if they were linear parameterisations.
A real physical theory like Quantum Electrodynamics (QED) predicts the value of the electron’s anomalous magnetic moment, and experiments have confirmed QED to a precision of about 1 part in a trillion. Climate models, on the other hand, can only manage to ‘project’ ECS to be in the ‘likely’ range of 1.5 to 4.5C. Yet when observational data yield a value even as low as 2.5C, it is greeted with howls of protest from climate scientists.
They really are a hyper-sensitive bunch !
Rather predictably, Marvel and Schmidt have a new paper in GRL in which they look at emergent ECS values from CMIP5 models and claim that these (much higher) values are more accurate than those indicated from observations and energy balance models.
I think this claim is simply wrong. Even if they have found some issue with the derivation of ECS from observables, that still would not justify a claim that global warming will be greater than we thought.
In case it’s not obvious, I was parodying someone close to all our hearts in the above comment.
ECS is an equilibrium value, which would only be reached if a high CO2 level were maintained for 100 years after 2100: a complete science-fiction scenario which is never mentioned, since the current decay rate of a CO2 pulse in the atmosphere is a constant 38 years without any sign of saturating. This observed increasing sink is the main reason for the upward-adjusted carbon budget.
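As a back-of-envelope illustration of what a constant 38-year e-folding time would imply (the 38-year figure is the comment’s own; mainstream carbon-cycle models instead use a multi-exponential, Bern-type response), a one-line decay calculation:

```python
import math

# Fraction of a CO2 pulse remaining in the atmosphere after t years,
# assuming a single constant e-folding time of 38 years (the figure
# quoted in the comment above). This is only a sketch of that assumption,
# not a consensus description of the carbon cycle.

TAU = 38.0  # years, assumed constant e-folding time

def remaining_fraction(t_years, tau=TAU):
    return math.exp(-t_years / tau)

for t in (10, 38, 100, 200):
    print(f"after {t:3d} years: {remaining_fraction(t):.2f} of the pulse remains")
```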
Agree with Hans. It’s a fictional construct. Doubled (or quadrupled) and held constant for 100s of years. LOL. Useful for people like ATTP.
Kate explains her paper in a series of tweets:
Those ‘weird cool conditions’ in the tropics don’t seem so keen to disappear. UAH shows a very sharp drop in tropical lower tropospheric temperatures during January. What she and Gavin appear to be saying is that natural variability means we have been lucky so far and this luck has translated itself into erroneous low estimates of ECS based on observations!
As pointed out already though, ECS, even estimated using more traditional methods, is not that policy relevant. Transient climate response is rather more so, and that’s lower still. What might happen in a far flung future as the earth system adapts to the CO2 we “squirted” into the atmosphere yesterday and today is not really a pressing concern (apologies to people’s children’s children’s children’s children).
What we should be concerned about is the short term effect of increasing CO2 and the possible short term effect of natural variability – which CO2 obsessed scientists have neglected and very likely underestimated.
But I forgive her for her climate sins. This tweet was very funny.
The Anthropocene is just a different name for the “recent”, or AP (After Present, i.e. after 1950). Geologically speaking, sixty-eight years are totally uninteresting, but hey, the Anthropocene was coined by non-geologists.
If one wants to do any calculation of ECS, in general the longer the period, the better the estimate will be. From the abstract of Harvey et al., just published.
This was written last year, when the full-year average CO2 levels were available for 2016. Mauna Loa data start in 1959. That almost doubles the period, but, more significantly, it more than doubles the percentage rise in CO2 levels, from 12.8% (379.8/336.8) to 27.9% (404.2/316.0). Even then, it is for estimating the very long-term impact of a 100% rise.
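To see why the longer record matters so much, the standard logarithmic forcing approximation ΔF ≈ 5.35 ln(C/C₀) W/m² can be applied to both concentration pairs (the 5.35 coefficient is the usual Myhre et al. value, added here for illustration rather than taken from the comment):

```python
import math

# Forcing implied by the two CO2 intervals mentioned above, using the
# standard approximation dF ~= 5.35 * ln(C/C0) W/m^2 (Myhre et al. value;
# the coefficient is my addition, not part of the comment).

def co2_forcing(C, C0):
    return 5.35 * math.log(C / C0)

F_2X = co2_forcing(2.0, 1.0)  # forcing of a full doubling, ~3.71 W/m^2

for label, C0, C in (("1979-2005", 336.8, 379.8), ("1959-2016", 316.0, 404.2)):
    dF = co2_forcing(C, C0)
    print(f"{label}: rise {100*(C/C0 - 1):.1f}%, "
          f"forcing {dF:.2f} W/m^2 = {dF/F_2X:.0%} of a doubling")
```

On these round numbers the longer record captures roughly twice the fraction of a doubling’s forcing, which is why it gives a much better-constrained estimate.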
The climate is never in equilibrium. I suspect there is now far too much reputation riding on the narrative of a looming climate catastrophe to backtrack any time soon. Although increasing CO2 has caused some moderate warming so far, I suggest that it may well turn out to be the least of our problems in the medium term.
It is hard to keep a scare story going for 50 years while global life expectancy and living standards are continually increasing.
Clive, Science of Doom has some recent posts on simulations of chaotic systems that show some pretty disturbing things for the GCM faithful. The science here is not very supportive of climate scientists’ faith in these models. And it really is faith-based.
@DPY6629
Thanks – Science of Doom is always worth reading. I note he too is becoming a bit of a sceptic!
“which means that the climate of the model is sensitive to the time step”
“which shows that different time steps may not only lead to uncertainty in the predictions after some time, but may also lead to fundamentally different regimes of the solution.”
I think SoD was always a sceptic. Just a very smart one.
Hans,
All the scary predictions of climate disaster depend on the carbon cycle saturating. The Paris agreement was based on Fig. 10 of the AR5 SPM, which purports to show that temperature rise is linear with cumulative emissions. Yet everyone knows CO2 forcing is logarithmic in concentration. In ESMs, however, the airborne fraction (currently ~0.45) is supposed to increase towards 1 over time, thereby cancelling out the logarithmic forcing. The only problem is that this isn’t happening: the observed airborne fraction remains almost constant. That’s essentially what Millar et al. found.
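A crude sketch of the point: with a constant airborne fraction, the forcing (and hence, to first order, the temperature) grows sub-linearly with cumulative emissions, so making temperature linear in cumulative emissions requires the airborne fraction to rise. The numbers below (0.45 airborne fraction, 2.13 GtC per ppm, 280 ppm pre-industrial) are standard round figures inserted for illustration, not quantities taken from Millar et al.

```python
import math

# Crude illustration: forcing vs cumulative emissions with a CONSTANT
# airborne fraction. Shows that constant-fraction forcing grows
# sub-linearly, so a linear temperature-vs-cumulative-emissions relation
# needs a rising airborne fraction.

C0 = 280.0          # ppm, pre-industrial CO2 (round figure)
GTC_PER_PPM = 2.13  # GtC of carbon per ppm of atmospheric CO2 (round figure)
AF = 0.45           # assumed constant airborne fraction

def forcing_from_cumulative(E_gtc):
    """Forcing (W/m^2) from cumulative emissions E_gtc with constant AF."""
    C = C0 + AF * E_gtc / GTC_PER_PPM
    return 5.35 * math.log(C / C0)

for E in (250, 500, 1000, 2000):   # cumulative emissions in GtC
    F = forcing_from_cumulative(E)
    print(f"{E:5d} GtC -> {F:.2f} W/m^2  ({F/E*1000:.2f} W/m^2 per TtC)")
```

The forcing per tonne falls steadily as cumulative emissions grow, which is the sub-linearity the constant airborne fraction cannot undo.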
Clive,
When you’re not using it, could I borrow your time machine?
No need for a time machine ATTP. The mismatch between models and reality is already staring us in the face.
“That simply means that Earth System Models have got the carbon cycle wrong.”
This sentence reminded me of a paper from 2005, “Stabilising climate to avoid dangerous climate change — a summary of relevant research at the Hadley Centre January 2005”
Prepared by Geoff Jenkins, Richard Betts, Mat Collins, Dave Griggs, Jason Lowe, Richard Wood
“What constitutes ‘dangerous’ climate change, in the context of the UN Framework Convention on Climate Change, remains open to debate.
Once we decide what degree of (for example) temperature rise the world can tolerate, we then have to estimate what greenhouse gas concentrations in the atmosphere should be limited to, and how quickly they should be allowed to change.
These are very uncertain because we do not know exactly how the climate system responds to greenhouse gases.
The next stage is to calculate what emissions of greenhouse gases would be allowable, in order to keep below the limit of greenhouse gas concentrations.
This is even more uncertain, thanks to our imperfect understanding of the carbon cycle (and chemical cycles) and how this feeds back into the climate system.”
Yet the science was settled even then…
Not entirely off topic:
Australian Professor pushing back against Green/Climate suppression by his University.
https://www.gofundme.com/peter-ridd-legal-action-fund
Clive hits the nail on the head with his insight into why the climate true believers attack skeptics of all stripes:
“It is hard to keep a scare story going for 50 years while global life expectancy and living standards are continually increasing.”
Consider how much money the climate funding process devotes to “communication” (propaganda).
Professor Allen has had variable results with climate models over time. In 2003 he started a distributed-computing project at ClimatePrediction.net, whereby interested volunteers could download software onto their home computers and run climate simulations, with the results passed back to the project. It was even taken up by the BBC, but has since been abandoned:
“We asked for your spare computing power to predict future climate. Developed for the BBC by climate scientists, led by Oxford University, using the Met Office climate model.”
http://www.bbc.co.uk/sn/hottopics/climatechange/
http://www.bbc.co.uk/sn/hottopics/climatechange/moreaboutexperiment.shtml
In 2005 Professor Allen published the first results of his attempts at distributed computing, in Nature. He had been testing what effect doubling the amount of carbon dioxide in the atmosphere would have on temperature:
“The vast majority of the results showed that doubling CO2 would lead to a temperature rise of about 3C. Such an increase would have a major impact on the planet. However a tiny percentage of the models showed very high levels of warming – the highest result was a startling 11C. ”
“What we found is these models being fed back to us from our participants show, by just varying the things within the ranges of uncertainty, varying certain aspects of the model within the range of uncertainty, these models are giving us warmings to that same increasing carbon dioxide, ranging to up to over 10 degrees centigrade. So this is more than double the upper end of the previously accepted range.”
He was also trying to produce a credible model to support the insurance industry. This was his message in a BBC interview in 2003:
“The vast numbers affected by the effects of climate change, such as flooding, drought and forest fires, mean that potentially people, organisations and even countries could be seeking compensation for the damage caused. “It’s not a question we could stand up and survive in a court of law at the moment, but it’s the sort of question we should be working towards scientifically,” Myles Allen, a physicist at Oxford University, UK, told the BBC World Service’s Discovery programme.”
http://news.bbc.co.uk/1/hi/sci/tech/2910017.stm
“Some of it might be down to things you’d have trouble suing – like the Sun – so you obviously need to work how particularly human influence has contributed to the overall change in risk,” the scientist, who has worked with the UN’s Intergovernmental Panel on Climate Change (IPCC), said.” “But once you’ve done that, then we as scientists can essentially hand the problem over to the lawyers, for them to assess whether the change in risk is enough for the courts to decide that a settlement could be made.”
“This next decade is going to see quite a lot of climate change cases around the world”, said environment lawyer Peter Roderick, who runs the Climate Justice Program for Friends Of The Earth International.
Professor Allen’s contribution to the Carbon Budget idea was his paper in 2009, Towards the Trillionth Tonne: https://www.nature.com/articles/nature08019
“Total anthropogenic emissions of one trillion tonnes of carbon (3.67 trillion tonnes of CO2), about half of which has already been emitted since industrialization began, results in a most likely peak carbon-dioxide-induced warming of 2 °C above pre-industrial temperatures, with a 5–95% confidence interval of 1.3–3.9 °C.”
Such confidence…
This spate of recent papers shows how strong and contradictory prevailing scientific dogmas can be. Previously it was claimed that unforced variability was a minor factor in climate, with forcing changes providing the “control knob” for climate and decadal temperature changes. Now it is claimed that internal variability is causing a strikingly lower response to recent forcing changes than will be the case in the long term. Both dogmas cannot be true. And the latter claim is based on GCMs, ignoring recent negative results showing that they are not skillful in simulating key processes in the climate response such as tropical convection, clouds, and precipitation.
The necessary components for Catastrophic Anthropogenic Global Warming (CAGW) are:
1 An over the top emission scenario like RCP8.5;
2 Sink saturation which keeps this CO2 in the atmosphere;
3 A high value for climate sensitivity.
I would call this a science fiction horror scenario, not science, because all three components are highly unlikely.
Hans,
Yes – that just about sums it up nicely!
There’s a new post by Nic Lewis at Climate Audit on the Marvel et al. paper:
“It appears to me that the novel part of its analysis is faulty, and that the part which isn’t faulty isn’t novel.”
Wait, 2.8 C is a “low” sensitivity now?
It’s less than 10% below the mainstream position: 2.8C is 93% of the canonical 3.0C.
This is practically the same value, with very nearly the same policy implications. We’re still very much on board for blasting past +2 C within a half-century, and probably sooner.
Nic Lewis has done a surgical job. Brilliant!
Another quote, Paul:
“Marvel et al. claim that the low ECS values when models are driven by the observed evolution of SST patterns suggests that the “specific realization of internal variability experienced in recent decades provides an unusually low estimate of ECS.” However, as they admit, this is based on the perfect-model framework, which assumes “that the models as a group provide realistic descriptions of the mechanisms underlying observed climate variability“.
An alternative explanation for the models as a group misestimating the actual temporal evolution of SST change patterns is that the models as a group are imperfect. To my mind that should be the null hypothesis, rather than that internal variability over the last few decades results in an unusually low estimate of ECS. Indeed, the fact that internal variability linked to the Atlantic multidecadal oscillation is thought to have boosted warming over 1979-2005[18] makes it seem even less likely that in the real climate system ECS estimates based on this period would be biased low.”
Ouch.
Very good post, Clive! Thanks!