Thanks to a comment by Rachel, I’ve been made aware of a recent paper by Myles Allen called “Tests of a decadal climate forecast”. The paper is actually a short comment to the Editor and presents a comparison of the HadCM2 climate model predictions for the last decade (2001 to 2010) with observed global surface temperature anomalies.
One of the main talking points of those who claim that there is no global warming, or that the science isn’t settled, is that the climate models have failed. This claim is mainly based on comparing the IPCC’s CMIP5 ensemble with observations, often drawing on a leaked draft of the next IPCC report. So what do Myles Allen & Co. actually do in this paper? Well, what they seem to do is take their climate model (HadCM2), set it up using data only up until 1996, and then run it forward until 2012. They use the IS92a scenario of relatively high greenhouse gas and anthropogenic sulphate forcing. They then compare what the model predicts with what is observed. They also compare the observations with what would be expected if the temperature fluctuations were simply following a random walk.
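To make the random-walk comparison concrete, here is a minimal sketch of the idea, using entirely invented numbers (the step size, horizon, and observed warming are illustrative assumptions, not values from Allen et al.): simulate many random walks for the annual anomaly and ask what range of decadal-mean changes the null model produces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers for illustration only (not from the paper):
step_sd = 0.1    # assumed year-to-year variability of the anomaly, in K
n_years = 15     # roughly 1996 onwards, covering the forecast decade
n_sims = 10_000

# Random-walk null: each year's anomaly is last year's plus noise,
# starting from the pre-forecast baseline (zero by construction).
steps = rng.normal(0.0, step_sd, size=(n_sims, n_years))
walks = np.cumsum(steps, axis=1)

# Distribution of the final decadal-mean anomaly under the null.
decadal_mean = walks[:, -10:].mean(axis=1)
lo, hi = np.percentile(decadal_mean, [5, 95])
print(f"random-walk 5-95% range: {lo:+.2f} to {hi:+.2f} K")

# A hypothetical observed decadal-mean warming would be judged against
# this range: if it falls outside, the random walk is rejected at ~10%.
observed = 0.25
print("outside random-walk range:", not (lo <= observed <= hi))
```

The point of such a null model is that even trendless noise wanders, so an apparent decadal change only counts as evidence against the random walk if it exceeds what wandering alone would produce.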
So, the main figure is shown below. The left-hand panel shows the model prediction, with the black line being the ensemble mean and the grey area the 5%–95% confidence interval. The red line is the running decadal mean of the observations, while the yellow diamonds are the annual temperatures during the forecast period. The middle panel is simply the period 1990–2040 shown relative to the period 1986–1996. The green vertical lines are the range based on a random walk, while the blue vertical lines are estimates from the IPCC’s CMIP5 model ensemble. The right-hand panel simply shows the distribution of predicted temperatures (green – random walk, black – Allen et al., blue – CMIP5).
So, as far as I can tell, the HadCM2 model has done a remarkably good job of predicting global temperatures for the last decade using only data prior to 1996. It’s clear (from the right-hand panel in the above figure) that the CMIP5 ensemble does worse than HadCM2 alone but, according to Allen et al. (2013), still does a better job than a simple random walk. Just in case you think there is something funny going on here, I include below the figure from Allen et al. (1999) on which this current comparison is based. As far as I can see, it is essentially the same as the left-hand panel of the figure above, so they haven’t done anything to adjust the model prediction that they made in their 1999 paper.
So, Allen et al. (2013) seem to show – quite convincingly – that climate models have done a remarkably good job of predicting the evolution of the global surface temperatures over the last decade. I’ll finish with a quote from Allen et al. (2013) about what would need to happen in order for these model predictions to be falsified at the 10% level,
Even if temperatures for the decade 2007–2016 remain no higher than those for the decade 2002–2011, the 1999 forecast would still not be falsified at the 10% confidence level. However, it would no longer be substantially better than the random walk. If, however, temperatures have still not risen above those of the most recent decade by 2017–2026, in the absence of an explosive volcanic eruption, asteroid strike, nuclear exchange or other neglected short-term climate forcing, then the observations will fall outside the range of the dotted lines in Fig. 1b and the forecast will have been falsified at the 10% level.
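The falsification criterion described in the quote amounts to a two-sided check of the observations against the ensemble’s 5%–95% band. A minimal sketch of that check, with a made-up stand-in ensemble (member count, trend, and noise level are all assumptions for illustration, not the HadCM2 setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up stand-in for an ensemble of model runs: 50 members,
# 20 years of global-mean anomalies with an assumed 0.02 K/yr trend
# plus year-to-year noise.
years = np.arange(1996, 2016)
trend = 0.02 * (years - years[0])
members = trend + rng.normal(0.0, 0.1, size=(50, years.size))

# The plotted band is, roughly, the pointwise percentile range
# across ensemble members.
ensemble_mean = members.mean(axis=0)
lo, hi = np.percentile(members, [5, 95], axis=0)

# The forecast is "falsified at the 10% level" in this sense if the
# observations fall outside the two-sided 5-95% band.
obs = 0.02 * (years - years[0])  # pretend the observations track the trend
consistent = (obs >= lo) & (obs <= hi)
print("all years consistent with forecast:", consistent.all())
```

Because 5% of the probability sits in each tail, an observation outside the band is a two-sided rejection at the 10% level, which is the sense in which the paper uses the phrase.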