John Cook, who is one of the authors at Skeptical Science, has developed a survey to measure the consensus in climate research. Anthony Watts over at Watts Up With That (WUWT) has already decided that it is a fraudulent survey, designed to be biased from the start.
One of the reasons is apparently that someone called Brandon, who writes for The Blackboard, tried to be generous. He asked John Cook to explain the method behind the survey and received the following response:
I use an SQL query to randomly select 10 abstracts. I restricted the search to only papers that have received a “self-rating” from the author of the paper (a survey we ran in 2012) and also to make the survey a little easier to stomach for the participant, I restricted the search to abstracts under 1000 characters. Some of the abstracts are mind-bogglingly long (which seems to defeat the purpose of having a short summary abstract but I digress). So the SQL query used was this:
SELECT * FROM papers WHERE Self_Rating > 0 AND Abstract != '' AND LENGTH(Abstract) < 1000 ORDER BY RAND() LIMIT 10;
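For readers who don’t speak SQL, the clauses map directly onto the restrictions he describes. Here is a commented restatement – my annotated reading, not Cook’s own code – assuming the MySQL dialect, since RAND() is MySQL syntax (strictly, MySQL’s LENGTH() counts bytes rather than characters, though for plain-ASCII abstracts the two are the same):

SELECT *                        -- return every column for the chosen papers
FROM papers                     -- the table of papers in the database
WHERE Self_Rating > 0           -- only papers self-rated by their authors in the 2012 survey
  AND Abstract != ''            -- skip papers with no abstract on record
  AND LENGTH(Abstract) < 1000   -- only abstracts under 1000 characters
ORDER BY RAND()                 -- shuffle the eligible rows into a random order
LIMIT 10;                       -- serve the participant the first 10

Since ORDER BY RAND() shuffles the whole eligible pool before the first ten rows are taken, every eligible abstract has the same chance of being shown to any given participant.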
Brandon’s interpretation is that there are about 12000 papers, from which John Cook selects those that have an abstract of fewer than 1000 characters and that have been self-rated by their author. From these papers he selects 10 whose abstracts the participant needs to assess. Brandon then concludes that this survey draws on a much smaller pool than John Cook suggests and hence that John Cook is lying (a little ironic, given that his judgement of whether or not John Cook is lying is based on information provided by John Cook himself).
Anyway, I interpreted John Cook’s response differently – although I could be wrong. This could be cleared up very easily, so maybe someone who knows better could clarify. If you go to Web of Knowledge and search for papers published between 1991 and 2011 on the topics of “global warming” or “climate change”, you find about 73000 papers. I assumed that the 12000 John Cook is talking about are those with abstracts shorter than 1000 characters and which have been self-rated by their authors. That is a pretty large sample, so the size seems fine. The only problem would be if there were some reason why papers with short abstracts that have been self-rated are not a representative sample. I can’t see one, but maybe there is.
Alternatively, if John Cook does mean that he is selecting from the 12000 – and therefore that the actual pool (papers with abstracts of fewer than 1000 characters that have been self-rated) is smaller than 12000 – then it would be good to know how big that pool really is; it may still be a perfectly fine sample of papers. I’ve done the survey and I recommend that others do it too, and that they do it honestly. I must admit that I didn’t quite understand what I was doing at first, so I would recommend making sure you understand what the survey is asking of you and reading the paper abstracts carefully.
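Coming back to the sample-size question, it could presumably be settled with one more query over the same table. A minimal sketch, again assuming the papers table and MySQL dialect implied by the query above (the eligible_papers alias is just mine for illustration):

SELECT COUNT(*) AS eligible_papers  -- how many papers the survey can draw from
FROM papers
WHERE Self_Rating > 0               -- self-rated by the author
  AND Abstract != ''                -- abstract on record
  AND LENGTH(Abstract) < 1000;      -- under 1000 characters

That single number would tell us whether the 12000 is the eligible pool itself or a superset of it, and would resolve the disagreement without any need for speculation.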