Watt about doing revolutionary science?

Watts Up With That (WUWT) has a new post called "can the IPCC do revolutionary science?". The post suggests that the IPCC should take account of work published after the cut-off date for its Summary for Policymakers (SPM):

But the SPM has been sidelined by momentous climate change events that occurred after its March cut-off date – and even after the date the draft was circulated.

The post is referring to recent papers on climate sensitivity and on the temperature standstill. In fairness, an element of flexibility may be quite reasonable. However, the IPCC is an organisation that is heavily criticised by some involved in the climate change/global warming debate, and so sticking to its rules may be sensible. It’s partly a lose-lose for them, but sticking to the rules is at least harder to criticise than breaking them.

When it comes to climate sensitivity, two of the recent papers highlighted in the WUWT post are one by Otto et al. and another by Nic Lewis. Both of these find climate sensitivities lower than those of many other studies. If I remember correctly, both of these studies use recent observational data to estimate climate sensitivity (Added 2/9/2013 – it seems I did not remember correctly. Nic Lewis’s paper is a re-analysis of a Bayesian climate parameter study and is based on an earlier paper by Forest et al. 2006. Hence it doesn’t use the equations I describe below, and my analysis below therefore doesn’t apply to Nic Lewis’s paper. If you want to know more, there is a Skeptical Science post about it.). The equations used for the Transient Climate Response (TCR) and equilibrium climate sensitivity (ECS) are

$$\mathrm{TCR} = \Delta F_{2\times}\,\frac{\Delta T}{\Delta F}, \qquad \mathrm{ECS} = \Delta F_{2\times}\,\frac{\Delta T}{\Delta F - \Delta Q}$$
In a sense these are quite clever approximations. Consider some time period over which the temperature is measured to change by an amount ΔT. Use the change in CO2 concentration over the same time period to calculate the change in forcing ΔF. Use the known change in forcing ΔF2x for a doubling of CO2 to then calculate the TCR. Additionally, one can use the change in ocean heat content to determine the forcing, ΔQ, that has gone into heating the ocean. The difference between ΔF and ΔQ then gives the amount that has heated the surface during the time period considered, and can be used to estimate the ECS.
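To make the arithmetic concrete, here is a minimal Python sketch of the two estimates. The input numbers are placeholders of roughly the right order of magnitude rather than the values used by Otto et al. or Lewis, so it only illustrates how the equations are applied, not either paper's result.

```python
# Energy-budget estimates of TCR and ECS from changes over some period.
# All input values are illustrative placeholders, not the numbers used
# in Otto et al. (2013) or any other specific study.

F_2x = 3.7       # forcing for a doubling of CO2 (W/m^2), commonly quoted value
delta_T = 0.75   # surface temperature change over the period (K)
delta_F = 1.9    # change in total forcing over the same period (W/m^2)
delta_Q = 0.65   # change in system heat uptake, mostly the ocean (W/m^2)

TCR = F_2x * delta_T / delta_F              # transient climate response
ECS = F_2x * delta_T / (delta_F - delta_Q)  # equilibrium climate sensitivity

print(f"TCR ~ {TCR:.1f} K, ECS ~ {ECS:.1f} K")
```

Notice that the ECS estimate has ΔF − ΔQ in the denominator, so it is very sensitive to the ocean heat uptake numbers; small changes in ΔQ can move the estimate a long way, which is worth keeping in mind for the discussion in the comments below.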

I can see a number of issues with using the above equations to estimate the TCR and ECS. One is that you have to assume that the forcings and feedbacks associated with the rising CO2 concentrations during the time period considered are a good representation of what will happen over the entire period during which CO2 doubles. They may not be, and the shorter the time period considered, the more likely there is to be a mismatch. Another issue is that we now know that the surface temperature has a long-term anthropogenic trend on top of which short-term variations (due to natural variability) are superimposed. There are periods during which the surface temperature trend is faster than the anthropogenic component, and other times when it is slower. Therefore the equations above should ideally use a time period over which the short-term natural variations average out. If not, the equations will either over-estimate or under-estimate the likely values of the climate sensitivities.
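A toy example makes the endpoint problem concrete. The sketch below builds a purely artificial temperature series from a steady forced trend plus a 60-year sinusoidal wiggle (none of the numbers represent real data) and shows that two windows of the same 50-year length, starting at different phases of the wiggle, give noticeably different TCR estimates.

```python
import numpy as np

# Toy series: linear forced warming plus a 60-year sinusoidal "internal
# variability" term. Purely illustrative numbers, not fitted to observations.
years = np.arange(1900, 2011)
forced = 0.01 * (years - 1900)                       # 0.1 K/decade forced trend
wiggle = 0.1 * np.sin(2 * np.pi * (years - 1900) / 60.0)
temp = forced + wiggle

F_2x = 3.7
forcing = 0.02 * (years - 1900)                      # assume forcing grows linearly (W/m^2)

def tcr_estimate(y0, y1):
    """Energy-budget TCR using single-year endpoints y0 and y1."""
    i0, i1 = y0 - 1900, y1 - 1900
    dT = temp[i1] - temp[i0]
    dF = forcing[i1] - forcing[i0]
    return F_2x * dT / dF

print(tcr_estimate(1905, 1955))       # endpoints catch the wiggle one way
print(tcr_estimate(1935, 1985))       # same window length, opposite phase
print(F_2x * 0.01 / 0.02)             # "true" TCR implied by the forced trend alone
```

In this artificial setup the forced trend alone implies a TCR of 1.85 K, but the two 50-year windows give roughly 1.5 K and 2.2 K purely because of where their endpoints sit on the oscillation.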

Another issue is that the equations above ignore any paleo-climatological evidence. This evidence suggests that the ECS is likely to be at least 2°C. In a similar vein, there are slow feedbacks that produce an Earth system sensitivity (ESS), which is expected to be higher than the ECS. In some sense, I would imagine that many would regard the equations above as a sensible sanity check rather than as truly independent estimates that should be seen as equivalent to those from much more detailed models. So, even though there are recent papers suggesting that the TCR and ECS could be lower than expected, the general view is (I believe) that it is unlikely that the ECS can be below 2°C.

Another thing that the recent WUWT post focuses on is recent work addressing the surface temperature “hiatus”. A paper they mention is one by von Storch & Zorita called "Can climate models explain the recent stagnation in global warming?". Again, something I find frustrating about such papers is that they use the term global warming to mean surface warming. Ocean heat content data tell us that global warming has not stagnated; it’s only surface temperatures that are rising more slowly than expected. What this paper seems to do is establish how many climate model simulations have trends – over the same time period – that are the same as or lower than the observed trend. They find that only 2% satisfy this condition. It does, however, seem to have chosen the lowest of the possible observed trends (NCDC, HADCRUT4, GISS) and also doesn’t really seem to have considered the errors in the observed trends (I sketch this kind of comparison a little further down). The WUWT post summarises the von Storch & Zorita paper by saying

concludes that ‘natural’ internal variability and/or external forcing has probably offset the anthropogenic warming during the standstill. Overestimated sensitivity may also have contributed.

This makes it seem as though it is a combination of ‘natural’ internal variability, some unknown external forcing and overestimated sensitivity. The paper actually suggests these as three independent possibilities, although it does suggest that all three could contribute. However, the recent paper that blew Judith Curry’s mind seems to suggest that the surface temperature “hiatus” is most likely a consequence of reduced sea surface temperatures in the central to eastern tropical Pacific. In other words, the suggestion is that anthropogenic global warming continues as expected, but internal variability means that we’re going through a period of reduced surface warming, not reduced global warming. So, there’s no real evidence of some unknown external forcing slowing global warming, and not much evidence that the climate sensitivity is really lower than expected.
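For what it's worth, the model-versus-observation bookkeeping mentioned above can be sketched quite simply. Everything in this snippet is a synthetic stand-in: the "ensemble" of simulated trends is drawn from an arbitrary normal distribution and the observed trend and its standard error are placeholder numbers, so it only illustrates why allowing for the uncertainty in the observed trend changes the headline fraction, not what von Storch & Zorita actually computed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an ensemble of simulated 15-year trends (K/decade).
# An arbitrary distribution chosen purely to illustrate the bookkeeping.
model_trends = rng.normal(loc=0.21, scale=0.08, size=500)

observed_trend = 0.04   # placeholder observed 15-year trend (K/decade)
observed_se = 0.07      # placeholder standard error of that trend

# Naive comparison: fraction of runs at or below the central observed value.
frac_naive = np.mean(model_trends <= observed_trend)

# Allowing for observational uncertainty: fraction of runs at or below the
# upper end of the observed trend's ~2-sigma range.
frac_with_error = np.mean(model_trends <= observed_trend + 2 * observed_se)

print(f"naive: {frac_naive:.1%}, allowing for trend error: {frac_with_error:.1%}")
```

With placeholders like these, the fraction goes from a couple of per cent to several tens of per cent once the ~2σ range of the observed trend is allowed for, which is why ignoring that uncertainty matters.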

The WUWT post concludes by saying

These new papers devastate the IPCC orthodoxy that current and future global temperatures are mostly driven by greenhouse gas emissions, and will reach dangerous levels later this century. On the other hand, all older papers are blindsided by their apparent failure to take account of the recent data (standstill).

I think this rather over-emphasises the significance of these recent papers. The ones on climate sensitivity might suggest slightly lower climate sensitivities than other studies, but their estimates are still consistent with earlier work, and their method may be less reliable than more detailed calculations. Also, the von Storch & Zorita paper doesn’t really seem to conclude anything by itself, and one of the suggestions made in the paper is what most suspect explains the surface temperature hiatus – internal variability producing a phase of slower than expected surface warming despite global warming continuing as expected.

So, I can’t see any real reason why the IPCC should change its rules and include these recent papers in the report it will be releasing later this month. They seem like perfectly fine bits of work, but they’re nowhere near as paradigm shifting as those at WUWT would like to believe.


34 Responses to Watt about doing revolutionary science?

  1. KR says:

    “…that the equilibrium climate sensitivity is unlikely to be at least 2oC” – Shouldn’t that be “unlikely to be less than”?

  2. You’ve caught me out. I hit publish by mistake and so am still editing the post. Will correct that, thanks.

  3. Lars Karlsson says:

    WUWT: “On the other hand, all older papers are blindsided by their apparent failure to take account of the recent data (standstill).”

    Many older (and recent) papers consider other periods, so for those the ‘standstill’ is irrelevant.

  4. johnrussell40 says:

    The heat going into the oceans is a bit like paying some of your earnings into your bank account and the balance into a shoebox under the bed. Your accountant and the internal revenue might think you’re poorer than you really are but if they take a good look at the big picture — your long-term lifestyle — the truth will out.

  5. Martin says:

    von Storch and Zorita have already responded to, let’s say, tendentious readings of their paper:

    http://klimazwiebel.blogspot.fr/2013/08/hans-von-storch-and-eduardo-zorita-on.html#more

  6. Good summary. Additionally, the von Storch and Zorita paper hasn’t even been peer-reviewed (they just posted it online). Moreover, Watts is cherry picking the few studies that support his desired conclusion while ignoring other recent publications that are in line with the IPCC consensus.

  7. Interesting, I hadn’t realised that the von Storch & Zorita paper was not peer-reviewed.

  8. BBD says:

    Nor were you meant to.

    🙂

  9. Rattus Norvegicus says:

    Au contraire, mon frère, it was peer reviewed and rejected.

  10. Yes, I discovered that on reading through the comments on Hans von Storch’s Die Klimazwiebel post.

  11. Martin says:

    The authors do not exactly conceal all that. The info box above the PDF on academia.edu reads:

    “Publication Name: submitted to nature, but rejected; thus unpublished manuscript”

    They also say so in the above-linked weblog.

  12. I don’t know if that was aimed at me or not, but I wasn’t implying that it was hidden. I was simply responding to a comment that pointed out that it had indeed been peer-reviewed.

  13. Yes, a good analogy. Difficult to hide this kind of thing forever.

  14. Martin says:

    This was aimed at nobody but the endless beauty of the universe.

    More specifically: I am rather astonished that the skeptic blogs did not reveal it, as the authors have been rather overt about it. Usually they try to generate some oomph by pointing out that something has been peer reviewed (even while misrepresenting it), and they get worked up if something is ‘not even peer reviewed’ (remember the BEST kerfuffle?). But I might be wrong; this is just an impression.

    Apart from that, the paper has to be evaluated on its own. Being rejected by Nature can happen for a lot of reasons (which, as they point out, cannot be revealed), quite a few of which are entirely compatible with the paper being of high quality (no hidden opinion on this specific paper intended – just, again, my basic stance that one shouldn’t fill in information where there is none through insinuation and personal impressions about others’ motivations).

  15. Indeed, getting rejected from Nature is not at all unusual. There’s a running joke in my field that if you do end up getting published in Nature, your paper’s almost certainly wrong 🙂

    In fairness to those on skeptic blogs, when I went to academia.edu to find the information you said was there, I couldn’t initially find it. I then noticed the small “more” link next to the title, which expanded all the information below the title, including the note that it had been rejected by Nature. So, it’s not necessarily immediately obvious if you simply go to academia.edu.

  16. BBD says:

    Martin

    (no hidden opinion on this specific paper intended – just, again, my basic stance that one shall not fill in information where there is none by insinuation and personal impressions about others’ motivations).

    Have you pointed this out recently in comments at, for example, WUWT?

  17. I have to get an early night tonight, so won’t be able to do any moderating. Can I assume that you two will behave yourselves in my absence 🙂

  18. BBD says:

    Wotts

    I will do my level best. FWIW I’m curious as to whether Martin is committing a tu quoque or if I have entirely misunderstood where he is coming from.

  19. Martin says:

    @ Wotts

    Well, yes. I do not remember if it was Nature or Science, but there was this story about bacteria that metabolise arsenic instead of phosphorus (or can at least switch) that turned out to be bunk. Again, this is rather an impression, but this kind of stuff seems to happen more often than one should expect at journals whose influence (and impact factor) demand very high scrutiny. At Language Log this was partly explained thus (about Science, specifically): “We know that whenever they review a paper it’s always with an eye to what Wired Magazine or The New York Times are going to do with that paper.”

    http://languagelog.ldc.upenn.edu/nll/?p=4652

    Thinking about it, the authors could have given some indication of the manuscript’s status by simply writing it in big, light grey letters in the background (as one sometimes finds with unedited manuscripts that are sent around).

  20. Rattus Norvegicus says:

    I have my guess as to why it was rejected: mostly that it was not that interesting. Several other papers published in the last 2 or 3 years have made essentially the same point using different methodology. More interesting approaches to the problem are evident in the series of papers published recently which look at exactly *why* the “hiatus”, or whatever you want to call it, is occurring. Just saying the CMIP runs do it some fraction of the time is not all that interesting.

  21. Martin says:

    @ Rattus Norvegicus

    This is something also sorta-kinda alluded to in the comment section of the Klimazwiebel weblog by von Storch/Zorita. Specifically, von Storch says in a rather straightforward manner that reviewers found it not really innovative. They also mention the “Overestimated global warming over the past 20 years” paper that has been published by Nature and that does, in fact, discuss causes for the hiatus in surface warming – a question von Storch said they simply did not seek to answer in their paper. (“…we mostly interested in the ability of scenario simulations in describing the present stagnation, not in explaining the stagnation. That is quite different.”).

  22. Dave Werth says:

    One other typo I noticed in the first line: “can the IPCC do revolutionart science?”
    Maybe an interesting Freudian slip.

  23. Marco says:

    Slight correction, Martin: the Fyfe et al paper was published in Nature Climate Change. Nature family journal, yes, but not Nature.

  24. Thanks, subconscious at best 🙂

  25. Martin says:

    Oops! Thanks, that was my wondrous copy-paste talent taking that faux info directly out of the Klimazwiebel comment section. I retract everything and blame it on the guy with an email address who got it wrong first!

  26. Marco says:

    Martin, that would be Paul Matthews.

  27. Tom Curtis says:

    Wotts, the Nic Lewis paper published in BAMS does not make use of the formulas you quote in determining climate sensitivity. The methods are partly explained here. Nic Lewis did write an earlier article, not published in any peer-reviewed format, that did use the second formula to find a low climate sensitivity. Further, according to Lewis, the Otto et al paper also uses the same method.

  28. Thanks, Tom. I’ve just looked at the Otto et al. paper and their equations 1 and 2 look identical to the ones I’ve included here. Also, the Bishop Hill post written by Nic Lewis seems to have these equations too, so maybe I’ve misunderstood your comment, but it seems like these equations were used.

  29. Tom Curtis says:

    You link to two papers. One was by Otto et al (with Nic Lewis among the et al). That paper does use the two equations, as did an earlier non-peer-reviewed article by Nic Lewis. The second paper you link to was by Lewis alone. It is not a rehash of his earlier non-peer-reviewed article, and does not use the equations described above. Ergo, when you say “The equations they [meaning both papers] use …”, you are only half right.

    There are two Bishop Hill posts by Lewis, and both use the equations. The former is Lewis’s non-peer-reviewed article, while the latter is his discussion of Otto et al. I also linked to Lewis’s WUWT discussion of his BAMS article (to which you link above), and as you will see from that discussion, the equations above do not feature.

  30. Okay, I see what you mean. Yes, you’re quite right. It seems I did not remember correctly :-). Having looked at the Lewis paper again, I see he’s doing a Bayesian re-analysis of climate sensitivities from climate models, rather than using the equations I included. I should add a correction.

  31. Tom is correct. Nic’s paper uses a Bayesian approach similar to Forest et al. 2006. Otto et al. used equations 1 and 2. While there is a bit of an issue with Nic’s aerosol forcing estimate (I’ve also heard of reproducibility issues, but that’s mere speculation so far), there are more issues with the Otto et al. paper, or, to be precise, with its interpretation. The paper only tested whether the available observational data for the last four decades can be reconciled with theory. It can! However, the results may look entirely different for a different period of time. Unfortunately, we can’t test that because we don’t have the data to do so (e.g. deep ocean heat content). The results may also look different for other ocean heat content estimates. The recently published Balmaseda et al. 2013 data provide such an example. They change the conclusions of the paper quite significantly.

    All in all, I wouldn’t bet a penny on the ECS estimate from either of these papers. It does not contain particularly meaningful information. It’s a different story for the TCR estimate, as it is independent of ocean heat data. With the best of our current forcing understanding (which is highly uncertain for the last decade, as well as for anthropogenic aerosols), the TCR hovers around 1.4-1.5 K. The slightly lower central value of 1.3 K in the Otto et al. paper is based on an aerosol estimate which is very likely too low. I tend to think of the 1.3 K as the most optimistic scenario. Note that most of my judgement is based on Alex Otto’s (the principal author’s) own interpretation (I had a lengthy chat with him a while ago). Some of these points weren’t made clear enough in the paper.

    Btw, as far as ECS estimates are concerned, I am with Andy Dessler. Global land temperatures are highly suggestive of an ECS > 2 K. In fact, I consider it virtually certain. Whether it is ultimately going to be 2.5 K, 3 K, or even more does not matter in the end. It’s a scientifically interesting question, but certainly not a policy-relevant question anymore.

  32. Indeed, I’ve just have a quick read through Nic Lewis’s paper and I had got it wrong. I did have a quick look at it last night when I was writing this, but still managed to think that it was using the same method as Otto et al. Anyway, teach my to write things on a Sunday night after a busy weekend.

    What you say is very interesting and it is certainly my impression that the general view is that the ECS is greater than 2oC. As you mention, it’s not really a policy relevant question anymore, but is scientifically interesting.

  33. Tom Curtis says:

    I’ve made an interesting exploration of the equation for transient climate sensitivity above using the GISS forcings and GISTEMP from 1880-2011. First I applied the equation to every 50 year, 70 year, and 90 year interval in the data using the value of individual years. Here are the summary statistics:

    Annual values     TCS (50 yr)   TCS (70 yr)   TCS (90 yr)
    Number:                82            62            42
    Max:                29.21          10.7         23.49
    Min:               -29.43         -18.5         -3.86
    Mean:                1.59           1.5          2.27
    Trend:              -0              0.03         0
    StDev:               5.69           3.55         3.65

    The values are so erratic as to be near worthless, and certainly so when applied to a single pair of data points. Using the 50-year data as an example, being told that the TCS lies between -9.79 and 12.97 is hardly informative, and even the 90-year data (2.27 +/- 7.3) is not particularly useful.

    I then repeated the analysis using 10 year means of the data rather than individual years:

    10-year means     TCS (50 yr)   TCS (70 yr)   TCS (90 yr)
    Number:                73            53            33
    Max:                48.68         22.39          2.31
    Min:               -47.37          0.59          0.68
    Mean:                1.17          2.79          1.72
    Trend:               0.01         -0.02          0.04
    StDev:               9.58          3.19          0.49

    That’s a bit more tractable, but note the 96 point range (within six years of each other, as it happens) in the 50 year trends. In fact, the lack of variability in the 90 year values is solely a function of not including the sixties in the end points. The 50 and 70 year values are equally tractable if you restrict them to the last 33 values. In fact, most of the variability in values comes from having end points in the 1960s, start points in the 1910s, or (for individual years) lying close to a volcanic eruption.

    Further, it should be remembered that the standard deviations underestimate the uncertainty in this case, because they do not incorporate the uncertainty in the forcings and temperature data. Come to that, Nic Lewis’s exploration of climate sensitivity using this method does incorporate the uncertainty in the data (forcings, temperatures, OHC), but does not incorporate the variability that the method shows when applied to different time spans.

    Regardless, this little exploration shows that as a means of tightly constraining the transient climate response from real world data, this method is a bust.
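    In case anyone wants to repeat this kind of exercise, the bookkeeping amounts to something like the Python sketch below. It assumes annual net forcing and temperature anomaly series sitting in a CSV file (the file name and column order here are just placeholders) and computes the energy-budget TCR for every window of a given length, from either single-year endpoints or 10-year means at each end.

```python
import numpy as np

F_2x = 3.7  # forcing for a doubling of CO2 (W/m^2)

def windowed_tcr(forcing, temp, window, end_mean=1):
    """Energy-budget TCR for every interval of length `window` years.

    `end_mean` is the number of years averaged at each endpoint
    (1 = single years, 10 = decadal means)."""
    n = len(temp)
    estimates = []
    for start in range(0, n - window - end_mean + 1):
        i0 = slice(start, start + end_mean)
        i1 = slice(start + window, start + window + end_mean)
        dT = temp[i1].mean() - temp[i0].mean()
        dF = forcing[i1].mean() - forcing[i0].mean()
        estimates.append(F_2x * dT / dF)
    return np.array(estimates)

# Hypothetical input file: year, net forcing (W/m^2), temperature anomaly (K).
data = np.loadtxt("forcing_and_temperature_1880_2011.csv", delimiter=",", skiprows=1)
forcing, temp = data[:, 1], data[:, 2]

for window in (50, 70, 90):
    for end_mean in (1, 10):
        tcr = windowed_tcr(forcing, temp, window, end_mean)
        print(f"{window}-yr windows, {end_mean}-yr endpoint means: "
              f"n={len(tcr)}, mean={tcr.mean():.2f}, sd={tcr.std():.2f}")
```

    With 132 years of data (1880-2011) this gives 82/62/42 single-year windows and 73/53/33 decadal-mean windows for the 50-, 70- and 90-year cases, matching the counts in the tables above.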

  34. I discovered the same issue when I plugged in the other available OHC estimate, from Balmaseda et al. 2013 (B13), and extended things backward in time. Not only can no meaningful ECS results be obtained with B13, but the Levitus et al. 2012 (L12) data can’t really be used before the 1970s either. Well, they can, but as you rightly say, the ECS estimates fluctuate wildly from extremely negative to massively positive. Otto et al. were quite lucky to obtain some apparently meaningful results. Had they used the Skeie forcing rather than the CMIP5 ensemble forcing, things would have looked this way: Forcing estimates vs OHC

    It’s a rather busy plot in which the coloured dots are the decadal (trailing) average for forcing and OHC, complemented by the GISS temperature response. It basically contains all the numbers such that they can be readily plugged into equation 1 and 2. For TCR, it would be the brownish dots (total forcing) and the temperature. And indeed, the TCR estimates would be no less variable as evident from the negative (average) forcing in the 1960s alone. After all, the results are begging the question whether TCR should expected to be constant even for otherwise climatically relevant time intervals of more than 30 years under very strong forcing scenarios. I’m not so sure whether this is the case (the Kosaka and Xie paper seems to reinforce that notion). ECS should remain constant, but unfortunately, we don’t have the data to prove it.

    The next logical step would be to extend the Otto et al. exercise over the entire time period (1880-2010), using modelled OHC data which hopefully agree with observed estimates in the later part (for this, a POGA-like setup would be beneficial).

    Anyway, one thing is for sure: while Otto et al. made an interesting effort to reconcile observations with theory to some extent, it is far from a robust result and certainly nothing that anyone should bet on. As I’ve already mentioned, the authors are well aware of this, though they might have made the limitations of the study a tad clearer. To their credit, they did mention the omission of non-linearities in the ECS estimate, causing it to be an underestimate (as is apparent from the unrealistically low ECS to TCR ratio). But that doesn’t make it any more robust a result 😉
