I thought it would be appropriate for my first post to discuss the recent paper by Shaun Marcott (Marcott, Shakun, Clark & Mix, 2013, Science, 339, 1198-1201). This is a paper that uses 73 different proxies to determine a global temperature history for the Holocene (i.e., the last 11,300 years). Their main figure is shown below and presents the proxy-determined temperature anomalies for the last 2000 years (left-hand panel) and for the last 11,300 years (right-hand panel). The basic conclusion they draw is that since the late 1800s, global surface temperatures have risen from amongst the coolest of the last 11,300 years to amongst the warmest. Furthermore, if they continue to rise – as predicted by climate models – by 2100 the mean global surface temperature will be higher than at any time during the Holocene.
As already mentioned by HotWhopper, Watts Up With That already has something like 27 different posts that try to dismiss the work of Marcott et al. (2013). Now, if you wanted to convince a scientifically literate person (like myself – I would suggest) that there is a fundamental and major problem with a particular piece of research, you wouldn't do it by writing a large number of posts attacking various aspects of that research. That seems obsessive, rather than objective. You would write a few posts that highlight the problems you have encountered with the work. You might even consider mentioning some of its positive aspects, because the chance that a piece of research has absolutely no merit at all is vanishingly small.
So, what are some of the supposed issues with Marcott et al. (2013)? Their main figure (shown above) shows an uptick in the 20th century. They quite openly acknowledge that this is not statistically robust. Why is this? As explained in a post called The Tick on Open Mind, it is because as you get closer to the present you start to lose proxies, and so the reconstruction becomes less statistically significant. This seems to lead critics to complain that the conclusions are therefore flawed. However, the conclusions are based on comparing the known instrumental record with the full 11,300-year reconstruction. The 20th century portion of the reconstruction doesn't really influence the conclusions in any way. Some then criticise the comparison of a proxy-based reconstruction with an instrumental record. This doesn't really make sense to me. If the reconstruction is properly calibrated, then there's nothing wrong with comparing it to another temperature record. There's nothing unscientific about this; it's standard practice in many areas. The authors are completely open about all of this, so there can be no real complaint of dishonesty.
Another criticism is that the Marcott et al. (2013) reconstruction has a time resolution of about 300 years. This is partly because the individual proxies have time resolutions of between 40 and 120 years, so when you combine them you end up with a signal in which short-period perturbations tend to be smeared out. I believe the uncertainty in some of the age estimates also led them to run a series of Monte Carlo simulations in which the age estimates are perturbed to see what effect that has on the result. This, too, reduces the effective time resolution. The claim is then that their reconstruction could miss large changes in the temperature anomaly lasting up to, but less than, 300 years. There is some merit in this claim, but it is not that simple. Firstly, what are used are proxies for past temperature. This is not quite the same as simply sampling past temperature at particular instants in time. If one were simply sampling past temperatures at particular instants, then it is possible that one could miss some perturbations with a period shorter than the sampling period. Of course, if there were lots of large perturbations, then they would likely be present in some of the samples.
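To get a feel for how perturbing age estimates smears a signal, here is a minimal sketch of the idea. This is not the authors' actual method or data – the spike size, sample spacing and 100-year dating uncertainty are purely illustrative numbers – but it shows why stacking many age-perturbed realisations acts like an extra smoothing filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" temperature history: flat, with one sharp 0.8 C
# warm spike lasting 100 years (hypothetical numbers for illustration).
years = np.arange(0, 11300, 10)                  # 10-year grid
truth = np.zeros(years.size)
truth[(years >= 5000) & (years < 5100)] = 0.8

# A proxy records the truth at dated points every ~120 years, but each
# date carries uncertainty (assume a 100-year 1-sigma, purely illustrative).
sample_ages = np.arange(0, 11300, 120)
n_mc = 1000
stack = np.zeros((n_mc, years.size))
for i in range(n_mc):
    # Perturb the ages: each sample actually reads the truth at the
    # (slightly wrong) jittered age, then is placed at its nominal age.
    jittered = sample_ages + rng.normal(0, 100, sample_ages.size)
    values = np.interp(jittered, years, truth)
    stack[i] = np.interp(years, sample_ages, values)

mean_recon = stack.mean(axis=0)

# The spike survives, but its peak is reduced and its width is spread
# over several hundred years.
print(f"true peak: {truth.max():.2f} C over 100 yr")
print(f"reconstructed peak: {mean_recon.max():.2f} C")
print(f"width above 0.1 C: {(mean_recon > 0.1).sum() * 10} yr")
```

The point is that the averaging does not make the spike vanish; it trades amplitude for duration, which matters for the argument below.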
However, proxies are not sampling the temperature at a particular instant in time. A proxy, as far as I can tell, is more like a thermometer that measures the temperature continuously but only records the average over some time period. Even if a warm period had a duration less than the resolution of the proxy, it should still be present; the influence it has on the proxy will depend on its duration and amplitude. If there were many large temperature perturbations, then they should influence the reconstruction: either they would widen the range of the variation during the Holocene, or we would see perturbations that are smeared but still present. Something else that confuses me about this criticism is that we are about 130 years into a fairly large warming period. The global mean temperature has increased by about 0.8 degrees in the last century, which is similar to the total variation during the Holocene, according to Marcott et al. (2013). No one (I believe) would suggest that temperatures will suddenly plummet back to their mid-1800s values. Even if the current warming trend isn't driven by our CO2 emissions and is about to end, it would still take a substantial amount of time to return to pre-warming values. If so, what we are going through is a substantial perturbation with a duration of 200 – 300 years. This is similar to the resolution of the Marcott et al. (2013) data, so if something like this had happened before, surely it would be present in their reconstruction?
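The time-averaging thermometer analogy can be made concrete with a few lines of arithmetic. Again, these are made-up illustrative numbers (a 0.8 C warm spell lasting 200 years, read by a recorder that averages over 300-year bins), not the paper's data:

```python
import numpy as np

# Model a proxy as a recorder that reports the mean temperature over
# non-overlapping 300-year windows (the resolution quoted for the
# reconstruction; everything else here is hypothetical).
years = np.arange(0, 3000)
temps = np.zeros(years.size)
temps[1400:1600] = 0.8      # a 0.8 C warm spell lasting 200 years

window = 300
binned = temps.reshape(-1, window).mean(axis=1)

# The warm spell straddles two bins, each containing 100 warm years,
# so each of those bins reads 0.8 * 100/300, i.e. about 0.27 C.
print(binned)  # bins 4 and 5 read ~0.27 C, the rest 0
```

So a sub-resolution perturbation is attenuated, not erased: the averaging recorder still shows a bump, just a shallower and wider one.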
This is getting a little long, so I will make one more comment. The work of Marcott et al. (2013) has also been criticised for being over-hyped in the media. I will acknowledge that I have some sympathy with this view in general. The role of a university press officer is to get their university's research into the media, and they know that this requires making it sound as interesting (and controversial?) as possible. This doesn't excuse researchers who allow their research to be over-hyped, but it doesn't immediately invalidate the research either. Some might argue that it indicates the character of the researcher, but I think it is so common at the moment as to be effectively irrelevant. That's not to say that it's a good thing, and I would rather we were more honest about the value of our research when engaging with the media. Having said that, I don't actually think that the Marcott et al. (2013) work was particularly over-hyped. I also believe that journalists have some obligation to check the research being presented to them, and not simply accept a university press release that they presumably know is going to present the work in a manner aimed at attracting media attention.

Well, I think that is all I was going to say. It's a little long, but nothing compared to the amount written on "Watts Up With That". Although I will always acknowledge that there may be issues (known or unknown) with any particular piece of research, I think that Marcott et al. (2013) is a very good bit of work that will play an important role in helping us to understand how our current climate compares to that of the past. New work will clearly update and refine this understanding, but the kind of attacks that have been directed at this work (and at its authors) by posts and comments on "Watts Up With That" are – in my opinion – completely unwarranted.