Watts Up With That (WUWT) has another post about John Cook’s consensus paper. I must admit that I find this all a little odd. In my experience, if people think a paper is ridiculously wrong, they typically just ignore it. That this paper has generated so much interest in certain quarters certainly implies that it must have hit some kind of nerve. Also in my experience, a paper that had generated as much interest as Cook et al. would be regarded by most as a success, even if it turned out to be wrong. I’m not suggesting that people encourage work that’s wrong; simply that even if such a paper turned out to be wrong, the interest it generated would almost certainly have led to an improved understanding of the subject and hence the contribution would be seen as positive. Of course, I don’t work in a field where people try to tear papers down simply because the results don’t suit their political ideology.
Anyway, the most recent WUWT post is from Bjorn Lomborg’s Facebook page and is called Cook’s 97% consensus paper crumbles upon examination. So, what does Bjorn say? Well, he starts with
Virtually everyone I know in the debate would automatically be included in the 97% (including me, but also many, much more skeptical).
Well, Bjorn, the Cook et al. paper surveyed abstracts of peer-reviewed, scientific papers. The 97% in Cook et al. refers to the percentage of abstracts, among those that stated a position with respect to anthropogenic global warming (AGW), that endorsed AGW. You can’t be part of Cook et al.’s 97% unless you’re an abstract. Given that I believe you’re actually a human being, you aren’t part of Cook et al.’s 97%.
Bjorn goes on to say
They put people who agree into three different bins — 1.6% that explicitly endorse global warming with numbers, 23% that explicitly endorse global warming without numbers and then 74% that “implicitly endorse” because they’re looking at other issues with global warming that must mean they agree with human-caused global warming.
Voila, you got about 97% (actually here 98%, but because the authors haven’t released the numbers themselves, we have to rely on other quantitative assessments).
As I mentioned above, it is abstracts not people, but let’s not go through that again. You seem confused about where the 97% comes from because supposedly the authors haven’t released the numbers. Let me see if the paper helps. The Cook et al. paper says that their search of the Web of Science database resulted in 12465 papers that satisfied their search criteria. They removed 521 because they either weren’t climate related, weren’t peer-reviewed, or didn’t have an abstract. They then rated the remaining 11944 according to whether the abstract endorsed AGW, rejected AGW, or stated no position with respect to AGW.
The results of the abstracts ratings were that 3896 abstracts endorsed AGW, 78 rejected AGW, 40 were uncertain about AGW, and 7930 had no position with respect to AGW. Therefore a total of 4014 abstracts made some statement with respect to AGW. The fraction that endorsed AGW was 3896/4014 = 0.971. If I multiply this by 100, I get 97.1%. There you go. Straight from the paper and really not all that complicated.
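For anyone who wants to check, the whole calculation fits in a few lines of Python (the counts are taken straight from the paper):

```python
# Abstract counts reported in Cook et al. (2013)
endorse, reject, uncertain, no_position = 3896, 78, 40, 7930

# Abstracts that stated some position with respect to AGW
stated_position = endorse + reject + uncertain  # 4014

# Percentage of position-stating abstracts that endorsed AGW
consensus = 100 * endorse / stated_position
print(round(consensus, 1))  # 97.1
```

Straightforward, and exactly the figure reported in the paper.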
Bjorn then adds that
Now, Richard Tol has tried to replicate their study and it turns out they have done pretty much everything wrong. And they don’t want to release the data so anyone else can check it. Outrageous.
No, Richard hasn’t tried to replicate the study in any way whatsoever. What Richard has done is run various statistical tests to show that there are problems with the Cook et al. paper. I also wrote the previous sentence quite carefully, as it really does seem as though Richard’s goal is not to establish whether there are problems with the Cook et al. paper; it is to show that there are problems with the Cook et al. paper. I’ve written a number of times about Richard Tol’s attempts to discredit the Cook et al. survey (here and here, for example) and, as far as I’m concerned, the tests he’s running aren’t really suitable for what he is trying to do and he isn’t really showing anything of any particular significance.
Bjorn’s post finishes with a letter Richard Tol has written to the Vice Chancellor of Queensland University complaining about John Cook not releasing all the data when Richard wanted it (despite the fact that replication was quite possible simply with the information in the paper itself). I find this absolutely remarkable. It does seem, however, that Richard has form. Seems like Richard gets upset when he doesn’t get his own way and then writes letters to people’s “bosses” complaining that they haven’t done as he wanted. It’s a good thing that the people Richard Tol annoys don’t write to his university, as the mail room there would be rather inundated if they did.
Anyway, Bjorn, most of what you say appears to be largely nonsense. The Cook et al. paper certainly doesn’t crumble under the analysis you’ve done. Why don’t you try reading the paper before complaining that it’s all wrong?
Given that I have been accused of being one of the defenders of the Cook et al. paper, I have become a little concerned that maybe some fundamental flaw will be identified and that I will then never hear the end of it. I wasn’t involved in the paper in any way whatsoever, so I obviously don’t know if the analysis was done in a completely appropriate way. I’m defending it because it is a published paper that is consistent with earlier work and because it is technically possible to replicate it (despite what Richard might say) given the information in the paper. For example, it only took me a few minutes to extract the abstracts from the Web of Science database.
Given that I wasn’t involved, I’ve been doing some abstract rating of my own to see if I can replicate the Cook et al. results. So far I’ve rated 133 abstracts and the results are
Endorse – 50
No position – 83
Reject – 0
So my basic result is that 37.6% of the abstracts I’ve rated endorse AGW, 62.4% have no position and none reject AGW. The results in the paper were 32.6%, 66.4%, and 1% (I’ve added reject and uncertain to get the 1%). So, maybe I should have found 1 reject/uncertain abstract by now, but the fact that I haven’t isn’t all that surprising. Also, the other percentages aren’t exactly the same, but the difference isn’t surprising given that I’ve only rated 133 abstracts and the paper rated 11944. I should add that I haven’t done this to convince anyone but myself that the Cook et al. survey results seem reasonable. I’ll probably carry on for a little while longer, but I’m reasonably convinced, now, that the Cook et al. results are reasonable.
Numerical similarity of a sufficient statistic does not constitute validation.
For those of us who aren’t as sophisticated in statistics as we should be, what would constitute validation?
It is indeed remarkable how invested some are in this paper and with what alacrity they try to steer the discussion. Comment #1 arrived with breathtaking speed.
And on it goes!
Yet how odd this all is, as Wotts points out:
According to my peers I’m “extremely well-read”. In reality, I read one paper completely during my afternoon coffee break each work day. I may read the abstracts and skim the results and discussion sections of many additional papers of interest on any particular day. But it’s impossible for me to keep up with all the literature in my field (immunology) and do all the other stuff necessary for a successful career as a (research) scientist. The best I can do is read selectively and then expand my review of a particularly interesting study by very selective searches of literature on that (usually very narrow) topic.
I do not know of any individual or group of individuals in the immunology field who have reviewed 12,000+ publications to address a single question. It’s simply a Herculean effort. Which may explain why the detractors of the paper rant about it rather than try to validate the study independently.
It seems like Lomborg can’t think for himself. Dare I suggest that he might be a Tol sockpuppet? 😉
“Virtually everyone I know in the debate would automatically be included in the 97% (including me, but also many, much more skeptical).”
No, given that as far as I’m aware Lomborg has never published a peer-reviewed climate science paper, he could not be included in the 97%. Moreover I’ve debunked the ‘even deniers agree with the 97% consensus’ myth several times, including here when Roy Spencer made the same claim (turns out he’s in the < 3%. Whoops!).
"And they don’t want to release the data so anyone else can check it."
We released our abstract ratings and the author self-ratings, and gave Tol all other *relevant* data that he asked for, as the ERL board has agreed.
"Now, Richard Tol has tried to replicate their study"
As you note, Tol has tried to do no such thing. Had he spent his time actually trying to replicate our study instead of performing irrelevant statistical tests and harassing ERL and the University of Queensland, he could have replicated a significant sample of abstract ratings by now.
I didn't have a high opinion of Lomborg prior to this incident, and he's certainly managed to lower it further with this pathetic effort.
Multiple objections raised – and then shown to be themselves invalid or irrelevant, or dropped – certainly do not constitute invalidation. I’m speaking of non-representative sampling, trends of unrelated orderings, claims that order statistics were due to incompetence or malfeasance when they are entirely attributable to structures in the data (https://wottsupwiththatblog.wordpress.com/2013/07/26/richard-tol-and-the-97-consensus/comment-page-1/#comment-2700), incorrect use of kappa, and in fact accusations of fatigue – when any trends showing in the rating ordered data seem to indicate increasing expertise rather than fatigue (as I predicted at, again, https://wottsupwiththatblog.wordpress.com/2013/07/26/richard-tol-and-the-97-consensus/comment-page-1/#comment-2700).
Your initial reactions – https://twitter.com/RichardTol/status/334949069078282240, “A real advance in understanding: John Cook discovers a tautology that is only 97.1% true”, https://twitter.com/RichardTol/status/336012483733094402, “@ezraklein for starters, because that opening 97% is a load of nonsense @maliniw90th” – precede, as I understand it, your possession of any of the raw data or even the full list of abstracts.
[ Note: strong and personal opinions follow ]
What I am seeing here (IMO) is not a case of you seeing an error in approach, in survey structure, in criteria, in either methods or data, but rather an initial judgement on your part that the Cook et al conclusions were in error, followed by an extensive search for something, anything, to support that claim. And when such objections are answered, looking for anything else, including statistics like kappa that are outside your field and incorrectly applied, to throw at the paper.
That’s not an academic or professional objection, in my view. It is simply motivated reasoning, which as Wikipedia succinctly puts it can be described as “rather than search rationally for information that either confirms or disconfirms a particular belief [i.e., a particular objection], people actually seek out information that confirms what they already believe.” I’m rather stunned that an academic of your stature and record would engage in such a clearly ill-founded approach.
On the other hand, similar values reached by independent researchers, using different methods and samplings, provide a convergence of evidence that strongly supports those conclusions. Which, given Oreskes, Doran, Anderegg, etc, is certainly the case with Cook et al. That is the essence of replication, not hounding researchers for every post-it note, in an apparent search for a misplaced comma to complain about.
This has been a very sad train-wreck to watch…
The nerve that was hit is the one you always hit when you have something that is easy to communicate. This whole “debate” isn’t about the science; it’s about winning the communication campaign to prevent or delay action. For whatever reason or motivation.
As soon as you have something that is easily communicated like the results from the Cook et al paper everything is done to discredit the research. No matter if it is valid criticism or not, or if the results are valid.
Just look at the hockey stick and how long it and Mann have been under attack, despite the result having been confirmed and no fundamental errors having been found.
Richard, if you’re referring to what I did, I wasn’t trying to validate. I was simply applying the same criteria as Cook et al. to a sample of their abstracts to see what result I would get. Given that the results I get seem similar to theirs, I have some confidence that their ratings of the abstracts are reasonable. Of course, my sample is small and I haven’t actually compared my ratings directly with their ratings, so I’m not claiming that it is definitive. I’m simply suggesting that it gives me more confidence that their results have merit.
In many cases, where and when somebody chooses to speak is far more informative than what they choose to say. In that respect, this business with Tol reminds me of his rush to defend Pielke Jr. at ClimateCentral when Pielke made rash and unfounded accusations against the IPCC chair. Considering all the irons in all the fires Tol must tend, it was a conspicuous expenditure of time and effort. Why there? Why then? And why this?
Data set A validates data set B if the hypothesis A(i) = B(i) cannot be rejected (say at the 5% significance level) in more than, say, 5% of cases. Typically, people use Cohen’s Kappa for this (although this may be objectionable to some in the SkS crowd). Essentially, you take the difference of A and B and see whether that difference can be explained by chance.
Wotts just compared the proportions and said “yeah they look similar”. She collapsed 133 numbers into 3, losing a lot of information. Even if she had, like you and me, only those three numbers, she should have done a test for the equality of proportions. I did this for her: Chi2 = 2.2. Proportions are the same for Wotts’ data and Cook’s data.
Unfortunately, Wotts did not report her selection procedure. If her re-test sample is representative for the entire data set, her test validates Cook’s results. If, on the other hand, her sample is unrepresentative, her test invalidates Cook’s results.
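For what it’s worth, the test for equality of proportions that Richard describes can be sketched in a few lines of pure Python. This is an illustrative re-implementation, not his actual calculation (he reports 2.2; the statistic one gets depends slightly on the exact counts plugged in — here I use the raw counts from the post and the paper):

```python
def chi_squared(table):
    """Pearson chi-squared statistic for a contingency table
    (list of rows), comparing observed counts to the counts expected
    if both rows were drawn from the same distribution."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Columns: endorse / no position / reject+uncertain
table = [[50, 83, 0],          # the 133 re-rated abstracts from the post
         [3896, 7930, 118]]    # Cook et al. abstract ratings (78 + 40 = 118)
chi2 = chi_squared(table)
print(round(chi2, 2))  # ≈ 2.63, well below the 5.99 critical value (df = 2, 5% level)
```

Either way, the statistic is nowhere near significance, which is Richard’s point: the proportions are statistically indistinguishable.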
Richard, as I pointed out in the post, my little bit of testing was to convince myself, not others. As I think I may have mentioned, I simply used The Rate Abstracts section on Skeptical Science’s Consensus project webpage. This allows me to rate 5 abstracts at a time. It then stores my ratings, but doesn’t store the abstract ID for each rating (or, at least, doesn’t provide that information for me). I have no idea if the 133 abstracts I rated were a representative sample as I don’t know how the 5 abstracts are extracted from the full set. I assume random, but I don’t know.
I may carry on a little longer and do a more complete analysis. Then again, I may not have the time to do much more.
Great post, Wotts. It’s my understanding, and correct me if I’m wrong, that if someone finds a mistake in a published paper, then the usual thing to do would be to publish their own paper outlining the mistake and possible new conclusions. Is this right?
The Cook paper is not the first to establish a consensus. Did Naomi Oreskes generate a similar amount of discussion with her paper on the scientific consensus? Perhaps someone needs to develop a secure online voting system for climate scientists only, in which they can vote on whether or not they endorse or reject anthropogenic climate change. The number of climate scientists in the world must number in the thousands, which is significantly less than the number of votes in a typical general election, so it shouldn’t be too hard to manage. Each scientist could be mailed a unique pin code they would need to enter at the time of voting.
I agree, I find this all quite amazing. I’ve written a number of posts related to this and that may seem as though I’m a little obsessed, but the truth is that I’ve found it all quite remarkable. I’ve never encountered such behaviour before and I find it quite fascinating that people can behave in this way. No real attempt even to hide their biases or to disassociate themselves from those who are clearly regarded as having strong biases. Either they don’t think these biases exist or they don’t really care and feel that any exposure is good exposure.
If, on the other hand, her sample is unrepresentative, her test invalidates Cook’s results.
This statement makes no sense. How can an unrepresentative sample invalidate anything???
So, you would let yourself be convinced by an inappropriate statistical test?
Well, Oreskes’ paper did not generate the exact same amount of discussion, but that was before the advent of several pseudoskeptic blogs. Peiser tried and failed:
Monckton and his proxy Schulte tried, too (see link above)
Lubos Motl attacked the paper, and so did SPPI.
You can still hear the occasional echo “Peiser showed Oreskes wrong”, but most then link to a poster-better-not-named-or-he-will-show-up-here with a list of supposedly x-hundred papers that reject the consensus (one includes the possibility that climate sensitivity is much larger than 3 degrees per doubling – yep, definitely goes against the consensus on that point…)
I’m not sure who you’re aiming that question at, but the answer is obviously no. Suggesting that failing an inappropriate statistical test does not prove that there is a problem with some data, does not imply that passing an inappropriate statistical test would prove that the data was fine. This is not all that complicated a concept.
Thanks, Rachel. Essentially the way you understand it is how I would understand it. If the authors notice an error in a paper they can publish a correction and explain how this would affect their results. If another group disagrees with some work, they would normally do their own work and publish a paper explaining how they think it should be done (and pointing out the error in the other work) and then presenting their own results. That wouldn’t prove that the first piece of work was wrong, but as more people worked on it, it would become clearer which ideas had most credibility.
I’m actually going through something like this myself. I think that a couple of papers published by another group are essentially wrong. Although most who understand the details agree, these papers are generating quite a lot of interest as they are presenting what seem to be some quite interesting results. I’ve published one paper pointing out the issues and another has just been submitted. My work is not generating that much interest yet (which I would argue is because it’s boring to point out an error in a piece of work that others quite like) but the truth will eventually get out. Additionally, this group has a bit of a reputation for over-hyping their work, so I suspect that there is a certain level of suspicion in the community. However, I certainly haven’t considered the possibility of writing a letter to their university’s Vice-Chancellor. I’ll just keep plugging away and have confidence that the scientific method will eventually establish which ideas are more likely to be correct.
It did rather. It seems fairly clear that nothing I, or anyone else, can say will convince Richard to take a step back and maybe give this a little more thought. It appears that he has invested too much into this to suddenly decide that maybe his strategy is inappropriate. It’s clear that the only way forward for Richard is to plug away until he finds something that appears to show some kind of major error in the Cook et al. survey. Of course, most will realise that one could do this to almost any piece of work if one tried hard enough and was willing to be sufficiently pedantic. Many, however, will not realise this and will be convinced that the Cook et al. survey is fundamentally flawed.
Indeed. The effort is quite remarkable and my first impression was “Wow, that must have taken quite a bit of effort”. However, I’ve managed to rate 133 abstracts in a few hours a day for a few days. If the detractors were genuinely interested, they could probably do 1000 fairly easily and that would probably be enough to see if the basic Cook et al. result was robust. Of course, they probably aren’t interested in knowing if the result is robust or not. They probably want people to believe that it is not and hence are following the strategy that they are.
Probably the most convincing argument that I’ve heard against the Cook et al. survey is that it isn’t really helping. Given how it’s been attacked and used by those on the pseudo-skeptical side of the debate, this argument may have some merit. However, I would tend to disagree as the response from the pseudo-skeptics, in itself, tells us something about their motivations and, also, their objectivity.
Wotts, Richard, using the “rate abstracts” facility, the abstracts rated are selected randomly. For the first abstract rated, the chance of any given abstract being chosen is 1/N, where N is the number of abstracts. For the second abstract, the chance of any abstract other than the first rated is 1/(N-1), while the first rated cannot be chosen; and so on. So, while it is possible that the final sample is not representative, it is unlikely.
I await Tol’s excuse to back away from his claim that, “If [Wotts’] re-test sample is representative for the entire data set, her test validates Cook’s results.”
I would have thought that “if” is unambiguous.
Wotts’ re-test sample is representative, so the proportions test validates Cook’s data.
The proportions test is not very powerful, however. Wotts should report kappa instead.
If I get the opportunity to do more, I may well do so.
They obviously believe that their (inappropriately named) “Honest Broker”-universe reflects reality, while it is in fact (science) fiction, clinging to the notion that those (scientists) who speak out are in it for their own political interests. In so doing, they consistently fail to understand that most (if not all) of those scientists in question are indeed simply and honestly concerned. It seems to me that they can’t get their head around the fact that being concerned and doing objective research is not only perfectly possible, but that it is what is actually happening every day (including the typical range of exceptions). In accusing others of being biased, they have been entirely overlooking their own (strong) biases. Truly remarkable!
Took me a while to get to this point, but Richard is doing everything he can to prove me right …
The kappa statistic is used to evaluate agreement of raters on categorical assessments when examining the same data.
As many have pointed out to you, Dr. Tol, the set of abstracts is not the set of papers. On a one-to-one basis abstracts are a proper subset of the information contained in the papers, a subset representing perhaps an order of magnitude less content than the papers as a whole (missing background sections, methods. etc.). They present related but distinctly non-identical information for categorization. Kappa comparisons for rater evaluations don’t tell you anything across differing survey sets, as there is no way to separate rater disagreements from survey set differences.
Cohen’s kappa is not typically used for this, as it is by definition an inappropriate evaluation across differing surveys.
Your application of kappa demonstrates that you are unfamiliar with the statistic and its application. Your use in this context seems more a case of searching for any statistic that can be used to attack a conclusion, than of objectively evaluating a paper.
To quote you: “So, you would let yourself be convinced by an inappropriate statistical test?”
Re ‘reason or motivation’:
Recently Richard Tol joined the Dutch libertarian lobby club ‘De Groene Rekenkamer’ (The Green Court of Audit – a name about as apt as that of the comparable lobby club Friends of Science) and he signed off on a typical misinformation/scare-mongering piece about wind turbines.
This fits the Green Court of Audit, as it has a long history of lobbying against any climate/environment-related government incentive or regulation, often using misinformation. Any serious academic must surely have very strong libertarian views to get associated with this lobby club.
So I’m afraid I have to agree with you and KR: motivated reasoning appears very strong in Mr. Tol.
To clarify my previous reply:
Classification agreement can be used to compare differing surveys, differing tests, for identity measures – but when the survey material is not identical this is comparing all of raters, survey material, amount of information, etc, at the same time. Rater bias is not separable. And, as stated before, complete agreement is _never_ expected across different survey sets, presenting different information. Whether comparing different methods for material failure prediction, for medical prognosis, whatever, a comparative measure requires an independently established ground truth (actual failure times, actual medical outcomes) to be used to determine the best tests.
[ Note: Dr. Tol has declined to measure consensus himself, despite the available data ]
When studying just inter-rater agreement, identical survey material is required – and in those cases and those alone kappa can be used to determine underlying biases in the rater groups.
In the case at hand, the abstract and full paper ratings are distinctly different sets of information, and category disagreement is fully expected. With respect to a nominal ground truth, I would personally expect full paper ratings to be more accurate – more information. But a non-unity kappa value on every rating doesn’t invalidate considering abstracts for the purpose of identifying percentage support for the consensus – for that you need to evaluate the final percentages, and the 97.1% and 97.2% respective measures indicate that both surveys agree.
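For reference, Cohen’s kappa itself is easy to compute when two raters genuinely rate the same material, which is the only setting KR describes as appropriate. Here is a minimal pure-Python sketch; the two raters and their E/N/R (endorse/neutral/reject) ratings are entirely made up for illustration:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters judging the SAME items:
    observed agreement corrected for chance agreement."""
    n = len(ratings_a)
    # observed agreement: fraction of items rated identically
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters rating the same ten abstracts
rater_1 = ['E', 'E', 'N', 'N', 'R', 'E', 'N', 'E', 'N', 'N']
rater_2 = ['E', 'E', 'N', 'E', 'R', 'E', 'N', 'N', 'N', 'N']
print(round(cohens_kappa(rater_1, rater_2), 3))  # 0.655
```

The key point stands regardless of the arithmetic: this is only meaningful when both rating lists refer to identical items, not to two different survey instruments.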
KR, thanks, you’ve certainly explained that very clearly. As you may know, though, this has been pointed out to Richard numerous times in the past and it appears that he still thinks that it is appropriate to use the kappa statistic to compare the abstract ratings with the paper ratings. You may, of course, have more luck in getting through to Richard than others have had in the past.
Exactly, I’d say that’s evidence to the contrary. Deniers wouldn’t attack the paper if they weren’t worried about its effectiveness.
Moreover, there’s strong evidence in social science research that communicating the consensus does help in terms of support for climate solutions.
You’re trying to have your cake and eat it.
wottsupwiththatblog, the real misuse is claiming that a sufficiently non-unity kappa statistic somehow invalidates Cook et al 2013. Differences in categorization were fully expected, presented in the paper, and are in fact informative.
As expected, both the endorse and reject categories are considerably higher percentages of the total (considering more information from which to judge) – while the proportion of papers expressing a position that endorse the consensus stays the same.
I have noticed that Tol has not mentioned that higher percentage of position expression in self-rated papers: 64.5% stating a position as compared to only 37.5% for the abstracts (this will certainly affect individual categorization statistics). Again, these are significantly different survey instruments – but they give the same endorsement percentages for expressing a position on AGW, meaning they support each other and the paper’s conclusions regarding that scientific consensus.
WRT Dr. Tol: at this time I do not expect him to be convinced by anything, any evidence, although I would enjoy being proven wrong on that point. But I consider it valuable for the community at large (commenters, lurkers, Google searchers) to point out the errors involved in his complaints. Poor arguments such as his should not be left unchallenged in a discussion as important as the one on climate change.
Things cannot be comparable and incomparable at the same time.
Tol is also a member of the “academic advisory council” of the UK misinformation lobby group GWPF. And look who else is in there 🙂
Those unfamiliar with this organisation need only know that it is a libertarian/right fake charity that feeds the UK Daily Mail with distortions and misrepresentations which are then broadcast to the general public. Lord Lawson, former chancellor to Margaret Thatcher, founded the GWPF with Benny Peiser (yes, our Benny) in 2009, which, by an odd coincidence, was the year some emails were hacked, IIRC.
Those with an eye for trivia will be delighted to learn that Lord Lawson’s son, the right-wing journalist Dominic Lawson, is married to Lord Monckton’s sister, Rosa.
It’s a small world!
The very same Benny Peiser who attempted at length (and failed, with unforced errors) to discredit Oreskes 2004, upon which Cook et al 2013 builds? Fascinating – I had not realized that connection.
“it is abstracts not people”
Good, you noticed. You’d better tell the clowns at sks
BTW I got an email yesterday saying that another paper on which Cook was an author
“was not accepted for final publication in ESD”
Here is what climate scientist Tom Wigley said about Oreskes:
“Analyses like these by people who don’t know the field are useless. A good example is Naomi Oreskes work.”
You mean like Watts and Tol?
I think of this as “weasels within weasels”. With apologies to wheels and mustelids everywhere.
A. It’s an email, as if that’s a fact.
B. It shows peer review works. 🙂
Here’s what Marco says about Paul Matthews: People who do not even know the difference between Nature and Nature Climate Change are useless (hint: Von Storch & Zorita vs Fyfe et al).
I note Paul ignores the Doran and Zimmerman results, explicitly mentioned on SkS as reference for the 97% climate experts.
In Bishop Hill world, that stuff is really important, you know. That paper showed a LOT of contrarian papers to be just a load of nonsense, so the fact it will not be published in ESD will be used to discredit it.
Correcting myself: the Pielke/wingman fiasco transpired at ClimateProgress.
I don’t remember any retractions of any of those accusations, come to think of it. Wotts is certainly correct about how hard it is to keep a civil tongue in this matter. That in itself is unfortunately a victory for polemicists and impressionists working the subject of climate change; the hotter the discussion, surely the more genuinely controversial the topic? True also of meta-discussions such as this.
Different people have wildly different notions of what “winning” looks like.
Indeed they do!
More (redundant) evidence that Libertarian Physics can and does lead people to very strange places.
This is quite funny. Cook used the self-rating to volunteer rating comparison to validate his findings. A more appropriate method of comparison between the two, however, is not valid.
kappa is possibly one of the simplest tests that can be done to compare two processes performing the same action. The test itself will tell you nothing. The interpretation is in your hands.
wotts, go back and read your post. It has no quantitative analysis. You have no basis to accept or reject the Cook paper. Only an opinion. Opinions don’t count.
The Doran ‘result’ is 75/79 out of thousands surveyed. The Anderegg ’97’ is 97% of top 50. Paul is good to ignore both of these.
Firstly, did you actually read the aside? Secondly, the Cook et al. paper has already been accepted – by a journal. I’m neither trying to accept nor reject it.
Shub, why don’t you try reading the link provided by KR at 1.29pm that explains the kappa statistic.
wotts, I use the kappa statistic frequently. It is not I who should be reading about it.
Look at the percent agreement – ~33%. Only in the field of climate science would you see a percent disagreement of 67% to support a claim of 97% consensus.
@BBD; Thanks for sharing this little GWPF gem! Would you believe it! All of a sudden things start making sense …
No you don’t, you old bluffer you!
Richard Tol, out of interest I calculated kappa for Cook et al. First, I considered only three categories (“endorse”, “neutral”, and “reject”). That is because differences among ratings 1-3, or among ratings 5-7, distinguish solely on the basis of the quality of the evidence available for making the classification, and hence are irrelevant to the reported result of the paper. That is, a difference in rating between a 1 and a 2 does not count as a disagreement for the reported results of the paper. Second, I excluded cases where author ratings were “endorse” or “reject” but abstract ratings were “neutral”, while including cases where author ratings were “neutral” but abstract ratings were “endorse” or “reject”. That is because, where there are two raters, a and b, and rater a has less information on which to make a determination, rater a determining that they do not have enough information to make an assessment (neutral rating) does not count as a disagreement with rater b determining that they do have enough information to make the assessment; but the reverse does count as a disagreement.
Given these strictures, kappa for the comparison between abstract ratings and author ratings is 0.748. This is not an ideal comparison; that would involve determining kappa for the three bins for two independent sets of abstract ratings.
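For concreteness, the collapsing and exclusion rules described above can be sketched as follows (the bin labels and helper names are mine, introduced for illustration, not from Cook et al.):

```python
def collapse(rating):
    """Collapse the 7-point endorsement scale to three bins:
    1-3 -> "endorse", 4 -> "neutral", 5-7 -> "reject"."""
    if rating <= 3:
        return "endorse"
    if rating == 4:
        return "neutral"
    return "reject"

def comparable_pairs(author_ratings, abstract_ratings):
    """Drop pairs where the abstract rater (who had less information)
    said "neutral" while the author said "endorse" or "reject";
    keep the reverse case, which does count as a disagreement."""
    return [(a, b) for a, b in zip(author_ratings, abstract_ratings)
            if not (b == "neutral" and a != "neutral")]
```

Kappa would then be computed over the pairs that survive the filter, rather than over all rated items.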
I am, quite frankly, not interested in your opinion on my methods. You have shown quite clearly, and repeatedly, that your intent is to tear down Cook et al rather than to analyze it, and that your opinions on the subject are far more informed by your intent than by your expertise. I am only interested in whether or not you would consider a kappa of 0.748 grounds to consider a result as dubious, or validated.
I see Shub claims to use Kappa but fails logic (and arithmetic) turning 0.7% or 1.9% (depending which sample one uses to count the abstracts that rejected AGW) into 67%. Tol will grasp anything no matter how absurd to try to claim something (who knows what?) about the paper or its authors.
Deniers are a weird mob.
Own goal, Shub. Guess which people those 77/79 in one question and 75 out of 77 in another were? That’s right – the climate experts in their larger sample. Now, what did SkS claim again? 97% of climate experts. SkS 1 Shub Niggurath (and Paul Matthews) 0.
Thanks. I hadn’t tried that.
Your result suggests that the fine grain of the data is invalid, but that the data become valid when coarsened.
My bias analysis shows the same thing: Most misclassified papers were misclassified by 1 category.
If you and I are right, then the “consensus” becomes really one on “there is some human influence”.
“That is, a difference in rating between a 1 and a 2 does not count as a disagreement for the reported results of the paper.”
Ha ha. Funnier still.
a) This implies that when classifying papers you can use a separation as fine-grained as a 7-point system and still be fine, but when verifying it you must collapse it to a three-point system. Whose point do you think that supports?
b) The 7-point system, as it stands, cannot be collapsed any further, by the authors’ own description. Categories 1-3, 4, and 5-7 cannot be collapsed across each other because they have opposite and/or differing implications. Within categories 1-3 and 5-7, no further collapsing is possible because two of the three categories in each are defined on the basis of finding *explicit statements* in abstracts.
I.e., category 1 cannot possibly be confused for ‘2’, and neither ‘1’ nor ‘2’ can be mistaken for ‘3’, because they are defined by the presence (or otherwise) of explicit statements which were clearly defined and laid out for volunteer raters to look for. An explicit acceptance of the orthodox position with quantification (>50% etc.) *cannot* be mistaken for explicit acceptance without quantification. In other words, as a classification scheme, ratings ‘1’ and ‘2’ cannot be collapsed.
The rating system performs so badly because its pre-determined framework does not realistically reflect the ‘consensus content’ in the abstracts.
Sou, you are not following. Get the ratings for the ones where the scientists provided them for their own abstracts. Compare them with the Cook group ratings. There is only a 33% point-to-point match (like say, abstract # X being rated ‘4’ by both author and Cook team). But Cook et al conveniently forgot to report this in their paper. Why do you think this would be?
Shub, this may explain your confusion. The authors didn’t rate their own abstracts. They rated their own papers. Hence you can’t really use the kappa statistic to compare the volunteer abstract ratings with the author paper ratings. I’ll quote from the description of the kappa statistic that KR linked to above
Yeah, that’s what I meant. Compare the paper rating by self to the abstract rating by volunteer. One can understand if some low kappa values are obtained (not anywhere near the fantasy 0.78 that Tom gets), as the examined entities are purportedly slightly different (paper vs abstract), but such low values as obtained only indicate failure of the rating system. After all, the whole premise of the paper is that abstracts are a good indicator/proxy of the content of papers.
And the fact that papers and abstracts are not the same data doesn’t concern you in the slightest? Happy to go ahead and use the kappa statistic despite these, strictly speaking, not being the same data?
Wotts, that’s not confusion – it’s deliberate obfuscation.
Richard, the consensus is that “Most (>50%) of recent warming is anthropogenic in origin”. “Coarsening” the data does not alter this because all of ratings 1-3 only apply if the abstract (or paper for self ratings) endorse that statement at some level. Rating 1 applies if the abstract explicitly endorses both the anthropogenic origin and the numerical value. Rating 2 applies if the abstract explicitly endorses the human origin, but only implicitly endorses the numerical value. Rating 3 applies if the abstract only implicitly endorses the human origin, and implicitly endorses the numerical value. The converse applies to ratings 5-7, which only apply if the abstract endorses the claim that less than 50% of recent warming is anthropogenic in origin.
If you merge cats 1-3 and 5-7, then you merge them on the lowest common denominator: 3 and 5.
If not, you’d be adding information.
Besides, cat 3 is most common in 1-3, and cat 5 in 5-7.
Richard, I am aware of that. However, no abstract belongs in Cat 3 unless it endorses implicitly the proposition that “Most of recent warming was anthropogenic in origin.” The ratings are consistent if and only if you interpret the ratings system this way. It is impermissible, however, to interpret the ratings as inconsistent when a consistent interpretation exists. Hence the ratings must be interpreted this way (unless you are only interested in arguing against a strawman).
For what it is worth, I have rated about 50 papers using my strict interpretation described above. So far agreement is good, with a kappa across individual ratings of about 0.8-0.9. (Individual comparisons are not recorded, and I need to start again, tracking the direct comparison, to give exact figures.)
Cat 3: “Implies humans are causing global warming. E.g., research assumes greenhouse gas emissions cause warming without explicitly stating humans are the cause”
I’d agree with the sentiment Wotts alluded to. If this kerfuffle were a valid laboratory experiment to test Dan Kahan’s claim about how people process evidence, he’d be right in about 100 percent of the cases: absolutely everyone aligns exactly according to their priors, everybody claims to have the evidence on their side; exactly zero opinions have been changed, or at least been informed by anything other than what they had already been before. (Note that this is not intended to say that both sides are equally right.)
As to Tol’s behavior, I am still a bit surprised that you (or anybody, really) are surprised. I linked two recent economics episodes in the last thread (Krugman vs. Rogoff, Acemoglu vs. Sachs) that are in no way better. Immediate over-the-top accusations (say, of dishonesty, incompetence, etc.) are thrown around, and publicly so, on an everyday basis, and it’s quite normal. It would be interesting to know why this is so. But then, perhaps not.
But the really eerie thing here is how this specific Tol-related discussion has already taken place about a dozen times. Ask me: has everything from slightly important to not-at-all important to the discussion been brought in? Not quite, yet… The Tol-Ackerman clash? Check. (Though only the most recent one; the earlier one has yet to be revealed.) Tol is on the advisory board of the GWPF and NOW everything makes sense? Check. What lacks is “Tol got two papers retracted”, and “TOLGATE!!! AAAAAAAAAAAAAAAAHHH!!”, and “Tol is a lousy economist, his success is an illusion created by him citing himself so much. No, really.” I am also a bit disappointed that the language here hasn’t yet reached the usual lows before the episode goes dormant (until the pissing contest starts and gets repeated in more or less the exact same way, say, half a year later; in another comment section, with some new, but also some old commenters, again revealing all those interesting and damning facts about Tol everybody should know, to help the climate debate, of course). Disappointed, because we did already have the “Tol burns his reputation” meme in the last thread. Where is “Tol gets ripped apart” and the figurative speech containing words like “blood”, and all that? Apparently, Tol has not yet deployed his full capacity to make everybody constantly think about him and tell the internet about it.
I’ll call it the Tol cycle. Someone should write a paper about THAT! It’s quite a phenomenon.
“However, no abstract belongs in Cat 3 unless it endorses implicitly the proposition that “Most of recent warming was anthropogenic in origin.”
What? Category 3 rated abstracts contain no statement by themselves. Whatever position is accorded is solely by interpretation of the volunteer, and there is no obligation to check/quantify how much warming is anthropogenic.
How is it possible to discuss if people who were volunteers themselves disagree so drastically from what the paper states its methods were?
Moreover, how would one even divine an abstract implies “most” of warming to be anthropogenic? A good illustration of the degree of subjectivity the process entails.
Thinking about it, forget about Tol, Krugman and Acemoglu. The Nobel for econ-trolling and insulting behavior goes to Berkeley:
Brad DeLong wins hands down in the obnoxious economist contest.
Yes, but then he is 10 years your senior, so this is not over.
The instructions given to authors for self ratings were:
The meaning of the instructions for Category 3 must be interpreted from that context. In particular, endorsement must in all categories be interpreted to mean the endorsement of “the proposition that “anthropogenic greenhouse gases” are causing “the increase in [global] temperature”. (Note: “the increase”, not “some of the increase”) You do not get to pretend that the specific category is independent of the overall instructions.
Further, the specific instructions for the category state that it implies that “humans are causing global warming”. Absent specific qualifiers, such a statement is only true if humans are the main cause of global warming. Pretending that the instruction reads “paper implies humans are causing some of global warming” inserts a qualifier that simply is not there.
So, quite apart from logical considerations, your interpretation is plainly flawed.
Further, from your initial complaints about abstract ratings misrepresenting your papers – complaints matched by your own “self ratings” in response to the survey – your current interpretation is clearly not your initial interpretation. In fact, you continue to inconsistently maintain that some of your papers were incorrectly rated as endorsing AGW when they should have been (according to you) rated as neutral, while maintaining that Cat 3 is satisfied by any paper that endorses (in effect) that any anthropogenically emitted gas is a greenhouse gas.
You cannot have it both ways, and trying to will just further confirm many people’s low opinion of you (including mine).
Just out of curiosity, Richard, when you validate the economic portion of the FUND model, does it predict the Great Depression? Or the global financial crisis? Or don’t you trust it enough to run such validation tests?
I have no interest in the business cycle.
Why exactly should a model concerned with long term growth predict the business cycle?
I might ask how an economic model can be validated if it does not predict major events in the business cycle. But perhaps that is unfair. Can you point me to the publication where FUND predicts economic growth over the twentieth century?
Click and you shall find my publication list.
I’m not going through a series of questions to exclude all the areas of economics in which I have no interest or expertise.
Well, even those major events are just fluctuations around what is steady growth in the long run:
Why should FUND predict the business cycle? How would it matter even if it did?
Forget my comment in moderation, it’s actually all on wikipedia. Even major downturns are just fluctuations around long run growth (rate) – see figures 1 and 3 here:
How would it even matter if FUND could predict the business cycle? Why should it be validated against data that are not important for the question it seeks to answer?
The costs of emission reduction vary over the business cycle, as do the costs of natural disasters. There is no reason to believe that climate policy or climate change would affect the business cycle.
I understand that not everything has to be repeated every time. But to me it is relevant.
Richard Tol is widely regarded as an authority by sceptics and he makes strong public statements, so I must decide whether to take his ideas seriously. As a non-climate scientist, I have to rely mostly on authorities for the best balanced information, because I, like most people, don’t have the expertise to make the necessary careful, detailed assessment myself. So I have to try to determine whether someone is trustworthy, and I assume this applies to many readers of this blog.
When someone, like Mr. Tol, associates himself with fringe ideological lobby groups that have a long history of spreading misinformation, I have no choice but to dismiss him and look to others for the best balanced information. I think readers in the same position should be aware of these associations.
Richard, there are climate ostriches that claim the climate models are wrong because they do not reproduce the natural variability sufficiently accurately; see all the discussions around the recent “pause”.
If I understand it right, your FUND model does not model the business cycle (natural variability), but still in your perspective does give reasonable long-term results (global warming).
You thus seem to have a unique perspective. How would you go about explaining your colleague climate ostriches that not modelling natural variability (perfectly) is no dealbreaker?
Then there shouldn’t be a problem releasing the data behind it?
For those who are interested, my questions are trick questions, and Tol’s reticence is revealing. FUND in fact does not model the key aspects of economic activity. Instead, it sets GDP per capita and population growth rates by decree for various regions and various time periods. This would be equivalent to GCMs setting changes in temperature and precipitation for various regions and times by decree. The consequence is that FUND’s core economic assumptions are not validated, and cannot be validated by comparing its operation with historical periods.
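To illustrate what “set by decree” means in practice: a scenario-driven model simply compounds stipulated growth rates forward, rather than deriving growth from anything the model computes. A hypothetical sketch (the function name and numbers are mine, not FUND’s actual scenario values):

```python
def gdp_per_capita_path(gdp0, decadal_annual_rates):
    """Compound per-capita GDP forward one decade at a time, using
    annual growth rates fixed in advance for each decade. Nothing
    computed inside the model feeds back into these rates -- which
    is the criticism being made above."""
    path = [gdp0]
    for rate in decadal_annual_rates:
        path.append(path[-1] * (1 + rate) ** 10)
    return path
```

With, say, `gdp_per_capita_path(100.0, [0.02, 0.015])`, the trajectory is fully determined by the two stipulated rates; no climate impact in the model can alter it.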
I just find it curious that a person who has built his life’s work on a model that cannot be validated should be so concerned about the validation of others’ work.
Martin, I think you’ve made a similar point before and I do have some sympathy with your view (if I understand what you’re suggesting). It’s a very tricky issue. Do you ignore these things and hope they go away? Possibly, but some of these views are being expressed by reasonably mainstream media commentators in the UK (Andrew Neil, James Delingpole, for example) so providing an alternative view has some merit. It has the side effect of maybe increasing the exposure and maybe giving a sense of credibility to ideas that may well be wrong. A difficult balancing act and I can’t claim to necessarily be getting it right.
Tom, don’t know much about the FUND model but if it is indeed a model that tends to model long-term changes rather than short-term variations, it would indeed be ironic if those who use it then criticise climate models for not getting the short-term variations absolutely right.
FUND does not predict economic growth. It never has. I never claimed it did. It does not need to.
Richard, to be clear I certainly wasn’t claiming that anyone has ever claimed that FUND did something it doesn’t, or that it is not a perfectly suitable model for what it is intended. I believe Tom was trying to point out that one of the main reasons for running global climate models is to understand what the long-term trends in the climate are likely to be. Hence there is an argument that could be made that criticising them for not correctly modelling short term variations is essentially criticising them for not doing something that is not actually one of the main goals of the models (although it does appear that people are working, and have been working, to include these short-term variations in the models). Tom was, I think, simply using your FUND model as an illustration of a model that may be designed to model long-term trends but could be, disingenuously, criticised for not modelling short-term trends correctly. I, however, don’t know if such an analogy is appropriate or not as I don’t really know how your FUND model works or what it is mainly intended to represent.
Tol, essentially you are claiming that FUND can, with reasonable accuracy, model the costs of climate change (and mitigation) without needing to model the impact of those costs on economic growth rates, to which I am tempted to respond “bullshit!”.
However, if you think the point I am making is unfair, please point to some aspect that FUND does model, and show how you validated the model for that aspect.
Tom tried to play the man rather than the ball, and missed the man too.
But that’s his modus operandi. The entire blog is opinion, with no analysis, evidence, facts or data. And yet he claims to be a scientist at a leading university. That’s what makes the blog so funny.
Very ironic that Tol should complain about somebody playing the man. It is, however, true that that is what I am doing on this point. I find it extraordinarily hypocritical that a man who is willing to advise governments on the basis of a model that can’t be validated is so obsessive about validation for a study whose results he does not like. Given his monomania about validation, surely he has made the wrong career choice.
Paul, you’re entitled to your opinion of course but I think if you read more (and to be clear, I’m not encouraging you to do so) you may find that it isn’t simply opinion. Of course, as you may well know, being a scientist at a leading university doesn’t, in itself, necessarily mean anything 🙂
I am just impressed how the pattern repeats so reliably. Look at Tom Curtis below: Do you think he is interested in FUND? Or rather in Tol? And the language gets tougher, too (finally), but he is just tempted to say “bullshit”, he does not yet say it, you know. Because if you want to know anything about the economics of climate change, and especially about FUND, you don’t really ask Tol something, pfffff. No! You put forward your expertise and ask “trick questions”. That will show him.
I’m happy to answer any question about the validity or otherwise of my research. To ask such questions, you will need to familiarize yourself with what I do.
Martin, Tol’s career is built on an economic model that stipulates that the growth rate of the economy of Queensland is independent of the existence or not of the Great Barrier Reef. It further stipulates that the economic growth rate of Brazil is independent of the ecological health of the Amazon rainforest; and that the economic growth rate of Egypt is independent of the rate and sizes of floods in the Nile, and the rate at which the Nile delta is submerged by sea level rise. Just for good measure, it stipulates that population growth rates are independent of increases in mortality rate due to global warming. Given these facts, and given that the model is not validated in any way (or at least, not validated in any manner resembling the methods in which climate models are validated), if you still want to trust FUND as a reliable indicator of the actual costs of global warming, well more fool you.
But this *is* relevant. Highly relevant. If you claim that it is not relevant, then I must conclude that you are an apologist for Tol and that you endorse his tactic of generating fake controversy about Cook et al. I must conclude that you share his aims, his beliefs and those of the GWPF.
What other conclusion can I reach, Martin?
If you want to critique my work, get your facts straight. FUND assumes no such thing.
At this point I do not even know whether you want FUND to predict economic downturns (that’s what you first asked) or to model lasting effects of economic shocks. Perhaps that’s part of the trickiness of your trick question? I’d also be interested to see these assumptions you talk about in FUND. Why did you link this homepage about the Great Barrier Reef?
There is a discussion about the validity of IAMs, what you suggest is not part of it.
BBD, I sincerely do not care what conclusion you are forced to reach, unfortunately, even if you warn me (but thanks!). My opinion on Tol is really simple, and my part in these discussions is usually to repeat it. First, I think his reputation is here (but apparently there is an analysis that it’s really all about FUND, I didn’t know that…):
Second, he has called for a carbon tax for two decades now. Third, he does not deny a consensus on what causes global warming – he thinks Cook et al. does not establish it (he might be wrong, but I don’t see that it’s “fake”). Fourth, what this GWPF thing means depends on what advice he gives them. Is there any evidence that he advances false information there? If not, you might not like the affiliation, but that’s it. Fifth, I do not care about his behavior, especially as about 90 percent of his critics (not including Wotts, btw, but certainly including Tom Curtis) easily top him in terms of over-the-top shrillness, arrogance, lack of self-criticism and claiming expertise where they have none.
I don’t really think you grasp what is going on here at all firmly.
Tol is an affiliate, no, he is an advisor to a misinformation-producing libertarian/right fake educational charity which is distorting the public understanding of climate science and energy policy here in the UK.
This association is enough to discredit Tol utterly.
But matters are far worse than that. RT engages tirelessly in creating a fake controversy intended to undermine the clear findings of Cook et al. despite being shown at every turn that what he is doing is nothing more than obfuscation.
You are clearly endorsing RT’s actions and by doing so have defined your own stance here.
You may not care what I think, but I care what you *do* and I care what RT does. Because I do not like people working against the democratic process with the ultimate aim of distorting public policy.
Best try to understand that before we go any further.
Click to access FundTechnicalDescription.pdf
Checking EMF 14, we find that OECD nations excluding the US and EU (hence including Australia and Queensland) have per capita GDP growth rates of:
These figures are used regardless of climate impacts on the Great Barrier Reef. Similar charts exist for the other regions and hence my point is correct.
The best that you can say for the model is that if you were investigating explicitly the impact of the loss of the GBR, you could specify a different scenario with different growth rates. The costs associated with the reduced economic growth would have been stipulated rather than being an outcome of the model.
Martin, Tol’s credentials are sufficiently summed up by the fact that, on being shown that a certain set of data was ordered by year of publication, then alphabetically by title, he acknowledged that it was impossible for the ordering of the data to provide information about rater biases, fatigue etc., and then proceeded to analyze the ordering of the data with the intent of making findings about rater biases, fatigue etc.
Richard, by quoting Matthew 9:10-13 are you associating the GWPF with the sick (tax collectors and sinners) and yourself with Jesus? I’m not a biblical scholar by any stretch of the imagination, so may well – of course – have mis-interpreted what you were intending with your Matthew 9:10-13 comment.
@ Richard Tol
If you want to do something about those who seek to subvert democracy and distort public policy, you would be more plausible as an outspoken critic than an affiliate.
Your choice of text is bizarre; your protestation unconvincing.
We’ll get to who you are by elimination. You’re not a physicist, a statistician, an economist, or a biblical scholar.
Richard, can I ask why you are trying to get to who I am and who do you mean by we? I’d love to think that you want to work out who I am so that you can compliment me on my blog. My gut feeling is that that is unlikely to be the reason.
No, I never “endorsed” Tol’s “actions”. I also did not do so ‘in effect’, or ‘by implication’, if you are thinking along those lines. W/r/t his behavior, I simply think it is a really common dispute attitude among economists. This might be an interesting topic regarding the sociology of the economics profession. However, it is also not a topic I am interested in. As regards his motives, I am not, at all, interested in participating in the collective tea-leaf reading going on here.
You know absolutely nothing about my stance (on what?) apart from what I said. If you think you must waste your time by ascribing motives to everyone, go on. As I said, I do not care about what your petty worldview forces you to believe about others.
Working against the democratic process, srsly. Wott, do you have any intent to call out anybody else but Tol on nefarious nonsense?
That was sufficient. As I said earlier, you don’t seem to have thought this through.
Martin, to be fair I’m not that comfortable with people making accusations that they can’t back up with actual evidence. So, whatever I may personally feel about the GWPF and what it promotes, I can’t and wouldn’t claim that Richard Tol isn’t giving them perfectly reasonable advice, or that his intentions as an academic adviser aren’t entirely above board. So, yes, I would rather others did the same and would ask that they avoid making unsubstantiated accusations.
Having said that, in my opinion the GWPF does mis-represent the science associated with global warming and climate change. Whether they do so intentionally or because they are poorly advised (and I would hope that Richard is not giving advice about the actual science, given that he is an economist) I really cannot say.
While this is reasonable enough, I did earlier suggest that you look at the *other* members of the academic advisory council. Context is a valuable tool here. Do you really think RT is the lone voice of reason? Let’s be serious. This is serious.
Really? You surprise me profoundly.
Wotts, I agree 100 percent that the whole Peiser bunch is an embarrassment for intellectual decency. But I have really no idea about the inner workings of the GWPF, and I have no idea about Tol’s specific involvement there (being an adviser really does not say a lot). I have not the slightest evidence that he gave them false advice, nor, for that matter, that the GWPF actually uses his advice, or in which instances it has done so. No idea, at all. I refuse to let my imagination run wild and make connections that are based on nothing else but a vague idea about what is evil in this world. I think we do not really disagree on that.
The specific accusation here has now shifted from “Tol has an inappropriate proximity to skeptics” to “Tol helps to undermine democracy”. And I do not make that up: it’s really there. And all this clown can come up with, when I call this the nonsense it is, is that I haven’t thought things through, so that I might see Teh Truth, too.
Indeed, I have looked at who the other advisers are. Given who they are, it’s not that surprising that they present the science in the way that they do. I completely agree that it’s serious and I do have huge issues with organisations like the GWPF and the influence that they have. My opinion about them does largely coincide with yours. However, we still have to be careful about how we express our views on such organisations. Additionally, despite whatever views I may have with regards to Richard, that he actually comments here does make me think that treating him with some respect (whether deserved or not) is at least reasonable. I would hate the dialogue to drop to the level of that at WUWT 🙂
BBD, I may not be able to say for sure, but I certainly have my suspicions 🙂
For pointing out that lying down with dogs = fleas?
Since your parsing isn’t great, I will put it in terms similar to your own:
The specific accusation here has not shifted at all. It is and remains that Tol has an inappropriate proximity to vested interest front groups that are trying to undermine democracy by concerted, deliberate misrepresentation of climate science.
I do hope this clarifies things for you.
BBD, as much as I sympathise with your views, maybe I could ask that we let this discussion end. It is becoming rather heated and I can’t see any good coming from continuing. I also suspect that your and Martin’s views are less at odds than maybe it seems, although maybe Martin could have put a bit more effort into trying to illustrate that (although maybe I am wrong about them being less at odds) and Martin’s use of the term “clown” clearly hasn’t helped. BBD is a regular and thoughtful commenter and although the comments here may have gone a little over the top, they are not entirely unfounded.
No, really not. I don’t care who BBD is, or if he is usually thoughtful. There is a very specific claim about threatening democracy by misrepresenting climate science. And I take issue with the default position that the other one hasn’t “thought things through” or not understood something unless he has reached his conclusion. This is condescending. If you feel that you have to single out my “clown” you are putting form over content. You can do that, but if so, don’t pretend to speak for me.
You will have seen Wotts’ comment above, so you will forgive me if we disengage now.
Martin, I’m certainly not pretending to speak for anyone. Okay, I agree that claiming that someone hasn’t thought things through because they’ve disagreed with your position is unfortunate (and something that is often levelled at me, so I understand your frustration). I also agree that accusations of threatening democracy are somewhat over the top (to be fair, though, the kind of lobbying that goes on is certainly not helping democracy). My previous comment was really just an attempt to defuse the situation without taking any particular side. I’ve clearly failed.
You and BBD have both commented here before and have both made quite reasonable and thoughtful comments. I don’t really see the point in the discussion continuing given that it is getting heated. It would be nice if some kind of pleasant resolution could be reached, but given that that is unlikely, it seems more sensible to just let it go.
I have ceased and desisted!
BTW, re threatening democracy and over-statement, have you read Mark Bowen’s Censoring Science: Inside the political attack on Dr James Hansen?
BBD, I haven’t. I shall have to add it to my ever increasing reading list 🙂
Undermining democracy? By writing reports and commenting in the press? Really?
How about trying to shut up people because they believe the wrong thing?
Richard, I’m not sure what you’re implying by the latter part of your comment but I’ve been trying to end this discussion given that it has included claims that are, indeed, somewhat over the top (although, to be clear, it is my opinion that organisations like the GWPF are not making a positive contribution to the global warming/climate change debate). I’m not sure, however, that we’ll achieve much by continuing with this discussion.
More background for anyone curious about the relationship between the GWPF and the UK Daily Mail.
Misinforming the electorate inhibits effective democracy.
Sorry, as usual, we crossed. No further comment on this sub-topic, but do, please explore the CB link and the numerous related articles you will find there.
Point out an opinion you did not hold before, but arrived at by performing mathematical/statistical analysis.
You say you are in academia and have guided PhD candidates. You know it is not easy or straightforward to put together a project and get it right. You are telling me that Cook et al., a bunch of amateurs, got together, put up a project and carried it out, with no flaws, right at the first go? Yet you have litigated every point of criticism of this paper. Which means you think this paper is perfect?
Seriously, Shub! Nowhere have I ever said the Cook et al. paper is perfect. If you’re going to make strawman arguments, go and make them somewhere else.
Eli also had some observations: http://rabett.blogspot.co.uk/2013/09/richard-tol-calls-richard-tols-122.html
Pingback: Watt about Monckton and the 97%? | Wotts Up With That Blog
Pingback: The 97% Climate Science Consensus Reality » Real Sceptic