Watt about the averages?

Just a quick post to point out that Bob Tisdale doesn’t read things very carefully, or doesn’t take much time to think about what he has read. In a recent Watts Up With That (WUWT) post called Why the NOAA global temperature product doesn’t comply with WMO standards, Bob quotes a NOAA press release:

According to NOAA scientists, the globally averaged temperature for June 2013 tied with 2006 as the fifth warmest June since record keeping began in 1880. It also marked the 37th consecutive June and 340th consecutive month (more than 28 years) with a global temperature above the 20th century average. The last below-average June temperature was June 1976 and the last below-average temperature for any month was February 1985.

He then provides a link to the NOAA temperature anomaly data and comments:

the “last below-average temperature for any month” was in reality December 1984, not February 1985. Makes one wonder, if they can’t read a list of temperature anomalies, should we believe they can read thermometers?

Hold on a minute, Bob. The temperature anomaly data presented here is, I believe, relative to the 1900-1999 period (the 20th century average). The press release, as far as I can tell, is referring to the average over the entire instrumental period (1880-2013). The mean anomaly over this entire period is 0.0234 °C, and hence, as far as I can see, February 1985 was indeed the last month with a temperature anomaly below this average. I will accept that the press release is not worded as carefully as it could have been, but a little thought might have led Bob to realise that he had misunderstood what was being said, rather than assuming that they had made some kind of silly mistake.
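To make the arithmetic concrete, here is a minimal sketch of that calculation (assuming the NOAA anomaly series has been saved locally; the file name and column names are hypothetical, and the anomalies are taken to be relative to the 1900-1999 average, as above):

```python
# Minimal sketch: find the last month whose anomaly falls below the
# mean over the full instrumental period (1880-2013), rather than
# below the 1900-1999 baseline (where the mean is zero by construction).
# Assumes a hypothetical local file "noaa_anomalies.csv" with columns
# "date" (e.g. "1985-02") and "anomaly" (degC, relative to 1900-1999).
import csv

dates, anomalies = [], []
with open("noaa_anomalies.csv") as f:
    for row in csv.DictReader(f):
        dates.append(row["date"])
        anomalies.append(float(row["anomaly"]))

# The full-period mean is the ~0.0234 degC value quoted above; it is
# positive because recent warm decades outweigh the cooler early record.
full_period_mean = sum(anomalies) / len(anomalies)

# Walk backwards through the series for the most recent below-mean month.
last_below = next(d for d, a in zip(reversed(dates), reversed(anomalies))
                  if a < full_period_mean)

print(f"Full-period mean anomaly: {full_period_mean:.4f} degC")
print(f"Last month below that mean: {last_below}")
```

Because the full-period mean sits slightly above zero, a few mid-1980s months that are below it are still above the 1900-1999 baseline, which is exactly how the “last below-average month” moves from December 1984 to February 1985.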

This entry was posted in Anthony Watts, Bob Tisdale, Climate change, Global warming, Watts Up With That. Bookmark the permalink.

4 Responses to Watt about the averages?

  1. What I didn’t comment on was the latter part of Bob’s post, in which he criticises NOAA for using a 1900-1999 baseline rather than the WMO baseline of 1981-2010, apparently because using the latter baseline would produce smaller temperature anomalies. So what? It doesn’t change anything. It simply changes the baseline.

  2. Currently Bob seems to be having an argument with Izen about the base years. Izen pointed out that the choice of base years makes no difference to the magnitude of the anomalies. What Izen means is that the anomaly for a given month (for example) is really just the difference between the average temperature for that particular month and some long-term average value for that month. Typically, base periods of 30 years are chosen: sometimes 1951-1980, sometimes 1961-1990, and now, it appears, increasingly 1981-2010. However, it doesn’t really matter which base period you use. It might change the actual value of each anomaly in the dataset, but it doesn’t change which months are hotter than the average for the 20th century (for example), and it doesn’t change how the anomaly varies with time.

    Bob has just replied, dismissing Izen’s remark as trollish and laughably wrong, and saying that changing the base period does change the value of the anomalies. Technically true, but ultimately irrelevant. Anomalies are always relative to some base period (a long-term average), and so any analysis requires you to be aware of this. If one is to compare anomalies from datasets with different base periods, you need to correct for the difference between those base periods; you couldn’t simply conclude that one is predicting a smaller anomaly than the other. The fact that Bob seems to think that the base period has some actual relevance rather suggests that he doesn’t quite understand what the temperature anomalies actually are. A sketch below makes this concrete.
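    Here is a minimal sketch with made-up numbers (the temperature series and both baseline values are invented, purely for illustration) showing that switching base periods shifts every anomaly by the same constant and leaves the shape of the series untouched:

    ```python
    # Illustrative only: a short, made-up monthly temperature series (degC).
    temps = [14.1, 14.3, 14.0, 14.6, 14.8, 14.7]

    # Two hypothetical base-period means (the values are invented).
    base_a = 14.2  # e.g. a 1951-1980-style baseline
    base_b = 14.5  # e.g. a 1981-2010-style baseline

    anoms_a = [round(t - base_a, 2) for t in temps]
    anoms_b = [round(t - base_b, 2) for t in temps]

    # Every anomaly shifts by the same constant, base_b - base_a = 0.3 degC,
    # so the trend and the month-to-month ordering are identical.
    offsets = [round(a - b, 2) for a, b in zip(anoms_a, anoms_b)]
    print(anoms_a)  # [-0.1, 0.1, -0.2, 0.4, 0.6, 0.5]
    print(anoms_b)  # [-0.4, -0.2, -0.5, 0.1, 0.3, 0.2]
    print(offsets)  # [0.3, 0.3, 0.3, 0.3, 0.3, 0.3]
    ```

    This is also why, when comparing datasets published on different baselines, one first subtracts the (constant) baseline difference rather than comparing the raw anomaly values directly.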

  3. There’s really no secret here.

    The anomaly approach is used because it is robust to varying data record lengths, data gaps, changes in global station coverage, etc. If we had a continuous, full-length record from every temperature station (i.e. continuous data from 1880 (or whatever) to the present, with no data gaps, no missing samples, no changes in the station mix over time, etc.), we could dispense with the anomaly approach and just average the absolute temperature data.

    But in cases of missing data, the “average the absolute data” approach can break down. If we have stations that have missing winter data for some years, then the absolute temperature average would skew high for those years. Likewise, if new stations in hot, tropical regions start reporting data at the same time that stations in cooler regions go out of operation, the absolute temperature average would skew high as a result of that change in station mix.

    But if we compute independent monthly baselines for each individual station and simply average the anomalies *relative to those independent station/month baselines*, then our averaging procedure will be much more robust to data discontinuities/gaps as well as to changes in the station mix over time.

    It’s really not all that complicated a concept when you think about it a bit, and the fact that so many “skeptics” have failed to grasp this in spite of spending *years* supposedly scrutinizing the global-temperature record suggests to me that they are in way over their heads here. (A toy numerical sketch of the anomaly method is included below.)
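    As a toy illustration (all numbers invented, and the two-station setup purely hypothetical), the per-station/per-month baseline approach described above might look like this:

    ```python
    # Toy sketch of the anomaly method: each station gets its own baseline
    # for each calendar month, and only departures from those baselines are
    # averaged across stations. All numbers here are invented.
    from statistics import mean

    # Hypothetical absolute June temperatures (degC) for two stations:
    # one tropical (hot), one high-latitude (cool), over four years.
    station_june = {
        "tropical": [28.0, 28.1, 28.3, 28.2],
        "cool":     [10.0, 10.2, 10.3, 10.5],
    }

    # Independent per-station (and, in general, per-month) baselines.
    baselines = {name: mean(temps) for name, temps in station_june.items()}

    # Anomalies relative to each station's own baseline.
    anomalies = {name: [t - baselines[name] for t in temps]
                 for name, temps in station_june.items()}

    # Averaging absolute temperatures mixes the ~28 degC and ~10 degC levels,
    # so adding or dropping a station shifts the result even with no real
    # warming; averaging anomalies is insensitive to that change in mix.
    for year in range(4):
        global_anom = mean(anomalies[s][year] for s in anomalies)
        print(f"year {year}: mean anomaly = {global_anom:+.3f} degC")
    ```

    In this toy example, dropping the cool station from an absolute-temperature average would jump the “global” value by roughly 9 °C with no real warming at all, while dropping it from the anomaly average barely moves the result.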

  4. Thanks, a really clear explanation of why anomalies are used rather than actual temperatures. Yes, it’s very surprising that skeptics seem to so regularly misunderstand, or misuse, these temperature anomalies.

Comments are closed.