This is a supplement to Paul’s post on this subject, which I have just noticed on the site. It seems we both got the same idea! I think it’s different enough from what Paul wrote to merit publishing too, so here it is, with apologies to Paul where it overlaps. It’s an important subject to which we will probably return, because CMIP6 models are going to be used to justify the climate emergency narrative, I’m sure.
James Annan wants to #StopBrexit, so he’s my mortal enemy, but he’s spot on about the latest climate model developed by the Met Office for inclusion in the IPCC’s CMIP6 climate model ensemble. That ensemble will feature prominently in the forthcoming major Assessment Report 6 (AR6) Working Group 1 report (The Physical Science Basis), and will no doubt be used to generate projections of even scarier climate impacts in the other working groups using the latest emissions scenarios.
This is what the Met Office boastfully have to say about their pair of all-singing, all-dancing models developed for CMIP6:
Two state-of-the-art models for studying the Earth-system and climate are the result of years of work and feature a host of advances over previous models.
Climate models are essential tools for understanding the past, present and future of the Earth’s climate. They use equations to represent the processes and interactions in our atmosphere, land, oceans and ice – providing a simulation of the entire planet.
The Earth System Model UKESM1, and the physical model (or General Circulation Model) it is based on, HadGEM3-GC3.1 are the result of years of work by scientists and software engineers from the Met Office and wider UK science community.
The sensitivity of these two models is very high:
Analysis shows the climate sensitivity of the models is high. For both models the Transient Climate Response (TCR) is about 2.7 °C, while the Equilibrium Climate Sensitivity (ECS) is about 5.4 °C for UKESM1 and about 5.5 °C for GC3.1. Future projections using the new models are in progress. When these have been analysed, we will have a better understanding of how the climate sensitivity affects future warming and associated impacts.
Compare this with the forerunner of HadGEM3-GC3.1, HadGEM3-GC2. That particular model had a transient climate response of 1.9 °C and an equilibrium climate sensitivity of 3.2 °C – round about the mean climate sensitivity of the CMIP5 model ensemble. These are huge changes in sensitivity: TCR increasing from 1.9 to 2.7 °C, ECS going from 3.2 to 5.5 °C! Using such models to project future warming is certain to produce alarming increases in global mean temperature, and correspondingly dire predictions of sea level rise, heatwave intensity and frequency, glacier melt, catastrophic ocean warming and so on. It is rather difficult not to believe that this is the intention, because there appears to be little real scientific justification for such huge sensitivities – certainly not in current and historical observations of climate change.
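To get a feel for what the jump in TCR means for projections, here is a minimal scaling sketch. By definition TCR is the warming at the time of CO2 doubling under a 1%/yr CO2 ramp, so for a forcing change dF the transient warming scales roughly as TCR × dF / F_2x. The 3.7 Wm-2 doubled-CO2 forcing and the scenario forcing below are standard illustrative assumptions of mine, not Met Office numbers:

```python
# Rough scaling sketch (not the models' actual projection machinery):
# dT ~ TCR * dF / F_2x for a transient forcing change dF.

F_2X = 3.7   # W m-2, canonical radiative forcing for doubled CO2 (assumption)
dF = 3.7     # W m-2, assume a scenario that reaches doubled-CO2 forcing

for name, tcr in [("HadGEM3-GC2", 1.9), ("GC3.1 / UKESM1", 2.7)]:
    dT = tcr * dF / F_2X
    print(f"{name}: TCR = {tcr} C -> transient warming ~ {dT:.1f} C")
```

At the same forcing, the new models warm roughly 40% more in the transient – before any equilibrium effects of the even larger ECS jump are felt.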
But the problem with using models of such huge sensitivity is that you still have to get them to fit historical observations, and the only way scientists can do this is via cooling from anthropogenic aerosols.
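The arithmetic behind that constraint can be sketched with a zero-dimensional energy balance. The observed-warming and GHG-forcing figures below are round-number assumptions of mine, and the equilibrium approximation ignores ocean heat uptake – this is a back-of-envelope illustration, not anyone's published calculation:

```python
# Sketch of the trade-off: to reproduce the same observed historical warming,
# a more sensitive model needs a smaller net forcing, and with GHG forcing
# fixed, the extra offset must come from aerosol cooling.

F_2X = 3.7    # W m-2, forcing for doubled CO2 (assumption)
DT_OBS = 1.0  # C, rough observed warming since pre-industrial (assumption)
F_GHG = 2.8   # W m-2, rough historical GHG forcing (assumption)

for ecs in (3.2, 5.5):                    # ECS values quoted in this post
    lam = F_2X / ecs                      # feedback parameter, W m-2 per C
    f_net_needed = lam * DT_OBS           # net forcing matching DT_OBS
    f_aer_needed = f_net_needed - F_GHG   # implied aerosol forcing
    print(f"ECS = {ecs} C -> net forcing ~ {f_net_needed:.2f} W m-2, "
          f"implied aerosol forcing ~ {f_aer_needed:.2f} W m-2")
```

With these assumed numbers the high-ECS case demands roughly half a watt per square metre more aerosol cooling, landing in the neighbourhood of the -2.2 Wm-2 aerosol ERF discussed below.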
The paper on UKESM1 which Annan was cynically referring to above states:
We document the development of the first version of the United Kingdom Earth System Model UKESM1. The model represents a major advance on its predecessor HadGEM2-ES, with enhancements to all component models and new feedback mechanisms.
Overall the model performs well, with a stable pre-industrial state, and good agreement with observations in the latter period of its historical simulations. However, global mean surface temperature exhibits stronger-than-observed cooling from 1950 to 1970, followed by rapid warming from 1980 to 2014.
This tweet from Gavin Foster illustrates just how much this model overstates mid 20th century cooling.
UKESM1 massively overstates mid 20th century cooling, but it has to if it is to get the rest of the historical record more or less correct with such a ridiculously high sensitivity built in. Note that it is indeed overestimated aerosol cooling which is responsible for this 20th century mismatch, because it is much more pronounced in the Northern Hemisphere, where most of the heavy industry was and still is.
The Met Office admit that they have had to play around with aerosol forcing and aerosol cloud interactions in order to get their high sensitivity models to fit historical temperature trends:
UKESM1 is developed on top of the coupled physical model, HadGEM3-GC3 (hereafter GC3). GC3 consists of the Unified Model (UM) atmosphere, JULES land surface scheme, NEMO ocean model and the CICE sea ice model. The UM atmosphere in GC3 is Global Atmosphere version 7 (GA7). Inclusion in GA7 of both a new cloud microphysics parameterization and the new GLOMAP aerosol scheme led to a concern the model might exhibit a strong negative historical aerosol radiative forcing (i.e. a strong aerosol-induced cooling due to increasing anthropogenic emission of aerosol and aerosol precursors over the past ~150 years) with potentially detrimental impacts on the overall historical simulation of both GC3 and UKESM1.
A protocol was therefore developed to assess the Effective Radiative Forcing (ERF) of the main climate forcing agents over the historical period (~1850 to 2000), namely: well mixed greenhouse gases (GHGs), aerosols and aerosol precursors, tropospheric ozone and land use change. This protocol follows that of the CMIP6 RFMIP project (Andrews 2014, Pincus et al. 2016). The aim was to assess the change in the mean top-of-atmosphere (TOA) ERF between average pre-industrial (~1850 in our experiments) and present-day (~2000) conditions. In particular, to assess the aerosol ERF, with a requirement that the total (all forcing agents) historical ERF be positive. Initial tests revealed an aerosol ERF of -2.2 Wm-2, significantly stronger than the -1.4 Wm-2 simulated by HadGEM2-A (Andrews 2014) and also outside the IPCC AR5 5-95% range of -1.9 to -0.1 Wm-2. As a result of the large (negative) aerosol ERF, the total ERF diagnosed over the historical period was approximately 0 Wm-2.
We therefore investigated aspects of GA7 that could be causing this strong aerosol forcing and, where possible, introduced new processes and/or improved existing process descriptions to address these. The goal of this effort was to develop an atmosphere model configuration solidly based on GA7.0 that:

1. Had a less negative aerosol ERF and thereby a total historical ERF of > +0.5 Wm-2.
2. Had a pre-industrial top-of-atmosphere radiative balance of 0.0 ± 0.5 Wm-2.
3. Did not degrade the many performance improvements delivered by GA7.0.
In September 2016 a GA7.1 configuration was defined, comprised of GA7 plus a suite of parameterization improvements and new process descriptions not originally in GA7, that resulted in a reduction in the aerosol ERF from -2.2 Wm-2 to -1.4 Wm-2 (with an implied total ERF for 2000–1850 of ~+0.8 Wm-2) and a pre-industrial TOA radiation balance of -0.04 Wm-2. The 3 primary developments leading to the aerosol ERF reduction were:
1. A new parameterization representing the observed sensitivity of cloud droplet size distributions to the pollutant (aerosol) content of the atmosphere, referred to as the cloud droplet spectral dispersion effect (Rotstayn and Liu 2009).
2. An improved treatment of the refractive index for black carbon aerosol absorption, following Bond et al. (2013), combined with more detailed look-up tables for aerosol optical properties enabling more accurate spectral resolution of aerosol solar absorption.
3. Guided by observations in McCoy et al. (2015), inclusion of an oceanic source of marine organic particles through augmentation of the parameterized marine emission of oceanic dimethyl sulphide (DMS) and subsequent treatment of this increased aerosol in the GLOMAP scheme.
The majority of the aerosol ERF decrease occurs through a reduction in aerosol-cloud forcing.
GA7.1 is now the atmospheric component of HadGEM3-GC3.1 which forms the physical coupled model core of UKESM1.
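The quoted ERF numbers hang together as simple budget arithmetic, if one assumes the non-aerosol (GHG + ozone + land-use) ERF was left essentially unchanged by the GA7.1 modifications:

```python
# Budget check of the quoted ERF figures, assuming the non-aerosol ERF is
# the same before and after the GA7 -> GA7.1 tuning.

aerosol_before = -2.2   # W m-2, initial GA7 aerosol ERF (quoted)
total_before = 0.0      # W m-2, total historical ERF with that aerosol (quoted)
non_aerosol = total_before - aerosol_before   # implied GHG + ozone + land use

aerosol_after = -1.4    # W m-2, GA7.1 aerosol ERF (quoted)
total_after = non_aerosol + aerosol_after
print(f"non-aerosol ERF ~ {non_aerosol:+.1f} W m-2, "
      f"implied total after tuning ~ {total_after:+.1f} W m-2")
```

The implied total comes out at about +0.8 Wm-2, matching the paper's "implied total ERF for 2000–1850", so the headline aerosol change alone accounts for the whole improvement in the forcing budget.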
So basically what they did is tinker with aerosol-cloud forcing parameterizations to get the required historical effective radiative forcing. The problem is that this still left them with too much negative forcing during the mid 20th century. That seems not to have bothered them too much, as they still claim that the “model performs well, with a stable pre-industrial state, and good agreement with observations in the latter period of its historical simulations.”
Climate models can never be wrong, even though they keep predicting ever greater future warming.