Watt about Bob?

Bob Tisdale has contributed a post to Watts Up With That in which he does a model-data analysis with trend maps. What he discusses is a comparison of the GISTEMP temperature anomalies with the CMIP5 model output. The figure he uses (credit: Bob Tisdale) is shown below. Personally, I think the fit looks quite impressive. However, I accept that simply looking at it isn’t a particularly scientific way of assessing the quality of the fit.

A comparison between the CMIP5 model and GISTEMP data (credit: Bob Tisdale)


What Bob Tisdale did was to divide the time period into 4 intervals, which are labelled on the figure above. As already mentioned by HotWhopper, there seems to be no real justification for choosing these particular divisions, and the dividing lines seem to lie on peaks or troughs. What Bob Tisdale then does is calculate the model trend and GISTEMP trend for each interval. He then compares these trends and concludes that the comparison indicates that the model isn’t a particularly good fit to the data. Here’s where I have a problem. He does absolutely no error analysis. I find this a little ironic given that the prime reason for claiming that there has been a pause in global warming for the last 16 years is that the error in the temperature anomaly trend is large enough that we can’t rule out that the trend could be 0°C per decade.
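
To make the point concrete, here is a minimal sketch (in Python, with invented anomaly values, and not Bob Tisdale’s or the Skeptical Science calculator’s code) of estimating a linear trend and its 2σ uncertainty with an ordinary least squares fit. If the 2σ interval includes zero, a zero trend can’t be ruled out at that confidence level, which is exactly the reasoning behind the “no warming for 16 years” claim.

```python
# A minimal sketch: ordinary least squares trend and its 2-sigma uncertainty
# for an annual temperature anomaly series. The anomalies below are invented
# purely for illustration.
import numpy as np
from scipy import stats

years = np.arange(1997, 2013)  # a hypothetical 16-year window
rng = np.random.default_rng(0)
anoms = 0.4 + 0.005 * (years - 1997) + rng.normal(0.0, 0.1, years.size)

fit = stats.linregress(years, anoms)
trend = fit.slope * 10           # degrees C per decade
two_sigma = 2 * fit.stderr * 10  # 2-sigma uncertainty on the trend

print(f"trend = {trend:.3f} +/- {two_sigma:.3f} C per decade (2-sigma)")
# If (trend - two_sigma) < 0 < (trend + two_sigma), a zero trend cannot be
# ruled out. Note that a simple OLS error like this ignores autocorrelation,
# so the Skeptical Science calculator's uncertainties will be larger.
```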

Now, I can’t actually find errors for the CMIP5 trends, but using the Skeptical Science trend calculator I can determine the 2σ errors for the GISTEMP trends. I’ve produced a table, below, that shows the 4 trends together with the 2σ errors on the GISTEMP trends. Clearly, the two earlier intervals are still not consistent with each other, but the two later intervals compare quite well. This does, however, ignore the errors in the CMIP5 trends. I don’t know how to work these out, but I did find the figure, below, that shows the CMIP3 and CMIP5 models, both with errors (blue and yellow regions), together with the observations. The model errors look as though they would produce trend errors similar to (or even bigger than) the errors in the measured (GISTEMP) trends. If so, the trends in all 4 intervals would be statistically consistent, although perhaps only marginally so for the 2 earlier intervals. The main point, however, is that you can’t do this kind of analysis without considering the errors in your calculations. Ignoring the errors is very shoddy and, given how often skeptics use the error in the trend to claim that warming has paused for the last 16 years, they can’t claim not to know about errors.
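
As a rough illustration of the consistency check described above (with placeholder numbers rather than the values in the table below), two trends can be treated as statistically consistent when their difference is smaller than their combined uncertainties added in quadrature:

```python
# A minimal sketch of a trend-consistency check. The trends and errors below
# are placeholders, not the values from the table.
import math

def trends_consistent(trend_a, err_a, trend_b, err_b):
    """True if two trends agree within their combined (2-sigma) uncertainties."""
    combined = math.hypot(err_a, err_b)  # sqrt(err_a**2 + err_b**2)
    return abs(trend_a - trend_b) <= combined

# Hypothetical model trend vs observed trend, in C per decade:
print(trends_consistent(0.20, 0.08, 0.12, 0.06))  # True  -> consistent
print(trends_consistent(0.20, 0.02, 0.05, 0.03))  # False -> inconsistent
```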

A table showing the trends in the GISTEMP data and CMIP5 model data for the 4 intervals shown in the first figure. Also shown is the error in the GISTEMP trends.


A plot showing the CMIP3 and CMIP5 models (with errors) together with observations (black line).



12 Responses to Watt about Bob?

  1. Rachel says:

    It’s probably futile, but have you tried pointing this out in the comments on this post at WUWT? That they choose to use or ignore the errors when it suits their point of view? I must admit I’m too scared to comment there and I don’t particularly like reading the comments. They just make me cross.

  2. I have tried commenting on WUWT under a different name and was quite vitriolically attacked by some of the more extreme commentators. It was quite an unpleasant experience and is probably what made me decide to write this blog. It does seem that anyone who tries to point out even a basic mistake gets quickly challenged and “encouraged” to consider not commenting anymore.

  3. Rachel says:

    Hmm. I thought as much. They’re a bunch of lying, cheating scumbags.

  4. What I found quite interesting is that on one occasion I made a fairly benign comment. I was intentionally trying not to be controversial and simply pointed out that they seemed to have misinterpreted what a particular study was doing. Even that got shouted down. I was accused of being a “troll” who was “spreading falsehoods” and “misleading and disrupting the thread”.

    It’s my impression that anyone who makes comments that don’t toe what seems to be the party line is quickly silenced, especially if what they say seems reasonable and well thought out (and I’m not referring to my comments here). I had wondered if one could do some kind of scientific study: get various people to make relatively uncontroversial but scientifically reasonable comments and see how often someone else (probably from a small minority of regular commentators) accuses them of “lying” and of being a “troll”. I think it would happen very regularly, but I guess one would need to do the study to be sure.

  5. Pingback: Why don’t I comment? | Wotts Up With That Blog

  6. Sou says:

    Bob’s compounded his folly. He thinks that a difference between annual observations and modeled output of up to +/- 0.1 degree shows “poor simulation”. One commenter says it’s “shocking”. (For most of the 130 years the model is within +/- 0.1 degree – in the early years and for last year it is within +/- 0.2 degrees).

    This is despite the fact that the difference between one year’s observed anomaly and the next can be +/- 0.2 degrees and more!

    IMO the models are getting so much better with each new generation.

  7. Yes, I noticed his new post. It seems like a remarkably good fit to me, but clearly not to most who read WUWT. I was looking yesterday to see if there is a proper statistical technique for comparing models with observations. A chi-squared statistic might give you some information (although it’s really for discriminating between possible models rather than determining the goodness of fit of a single model); a rough sketch of what I mean appears after the comments below. You would probably also need to consider the errors, which makes it rather more difficult. What Bob has done, however, is clearly without much justification.

  8. Pingback: Watt about Bob’s book? | Wotts Up With That Blog

  9. CDL says:

    It should be made clear (by Bob) which models he used for his CMIP5 analysis. Not all climate modeling centers did an RCP6.0 run. In fact, he probably should have used the RCP4.5 scenario for 2006-2012, which would be the mid-range emission scenario, rather than the higher emission scenario of RCP6.0.

  10. At least he does say RCP6.0 in the figure, which is more than some do. You’re right though, he should probably have considered the other scenarios.

  11. CDL says:

    Most figures posted on “sceptic/denier” blogs don’t give a proper citation for what data is used, or where and when it comes from. Also, while scatterplots may seem simple, the underlying details of the data used in the plot are often ignored. It is maddening. To be properly verified, results must be replicable. This is why peer review is important and necessary. Why would anyone put their faith in anything that hasn’t been tested multiple times and proven somewhat consistent?

  12. Indeed, what’s ironic is that they use the uncertainties in the temperature anomaly data to argue that the trend since 1998 is not statistically significant and then ignore uncertainties when comparing model results with observations.
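
As a rough sketch of the chi-squared comparison I mentioned in comment 7 above: all the numbers below are invented, and it ignores autocorrelation and the model spread, so it is only a first-pass indication rather than a rigorous model-observation comparison.

```python
# A rough sketch of a chi-squared model-observation comparison, weighting each
# year's model-observation difference by an assumed observational uncertainty.
# All values are invented for illustration.
import numpy as np

obs = np.array([0.40, 0.45, 0.38, 0.52, 0.48])    # hypothetical observed anomalies
model = np.array([0.42, 0.41, 0.44, 0.47, 0.50])  # hypothetical model-mean anomalies
sigma = np.full(obs.size, 0.05)                   # assumed 1-sigma observational errors

chi2 = np.sum(((obs - model) / sigma) ** 2)
reduced_chi2 = chi2 / obs.size  # values near 1 suggest a fit consistent with the errors

print(f"chi^2 = {chi2:.2f}, reduced chi^2 = {reduced_chi2:.2f}")
```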
