There’s a recent post on Watts Up With That (WUWT) called the 97% consensus myth busted by a real survey. It discusses a recent paper that reports on meteorologists’ views about global warming. The survey was of all American Meteorological Society (AMS) members who had email addresses. What’s got Anthony excited is that – as shown in the table below – of those who responded, only 52% agreed that global warming was primarily caused by humans.
What Anthony doesn’t really highlight is that the survey was commissioned by the AMS’s Committee to Improve Climate Communication and was motivated by tension among AMS members who hold different views on the topic. Anthony suggests that the authors were surprised by the result, but that’s a little odd given that such tensions were known to exist. As far as I can tell, the goal of the paper was not actually to determine what fraction of AMS members accept AGW; it was to try to understand the reasons why members might have different views about AGW. As such, the paper had four basic hypotheses:
- H1: As compared with professionals with less expertise in climate change, professionals with more expertise will have higher levels of personal certainty that global warming is happening.
- H2: As compared with professionals with a more conservative political orientation, professionals with a more liberal political orientation will have higher levels of personal certainty that global warming is happening.
- H3: As compared with professionals who perceive less scientific consensus about global warming, professionals who perceive more scientific consensus will have higher levels of personal certainty that global warming is happening.
- H4: As compared with professionals who perceive less conflict about global warming within the membership base of their professional society, professionals who perceive more conflict will report lower levels of personal certainty that global warming is happening.
The survey also asked whether respondents regarded global warming as likely to be harmful or beneficial. The paper concluded that all of these factors influence whether someone regards global warming as anthropogenic and whether they expect it to be beneficial or harmful. The authors essentially confirmed all four of their hypotheses and say
Confirmation of our four hypotheses shows that meteorologists’ views about global warming observed in the last 150 years are associated with, and may be causally influenced by, a range of personal and social factors. In other words, the notion that expertise is the single dominant factor shaping meteorologists’ views of global warming appears to be simplistic to the point of being incorrect.
So, the goal of the paper was to try and understand what might influence meteorologists’ views about AGW so as to inform how best to communicate climate science to members of the AMS. It’s fairly clear that the authors accept the consensus view on AGW and, as the abstract itself says,
[w]e suggest that AMS should: attempt to convey the widespread scientific agreement about climate change; acknowledge and explore the uncomfortable fact that political ideology influences the climate change views of meteorology professionals
As far as I can tell, the survey does indeed show that only around 52% of AMS members accept AGW, but this figure largely reflects respondents’ particular expertise, their political views, their perception of the consensus, and whether or not they perceived significant conflict about the topic. It doesn’t imply anything with respect to AGW itself, other than that the AMS should probably do something about how climate science is communicated to its members. To finish, I thought I would highlight this relevant, but possibly controversial, post by David Appell called climate scientists are the opposite of weathermen.
I meant to add that Collin Maessen also has a post about this.
Here’s another take by NW at Judy’s
As I said on the other thread – opinions about climate science by meteorologists are not climate science. It’s also been pointed out (above and elsewhere) that the sample may be skewed by the fact that contrarians were more likely to respond than those accepting the mainstream scientific position.
That’s partly why I included the link to David Appell’s post about climate scientists being the opposite of weathermen. As I think you often say, and then there’s physics.
Should be your line, really 😉
I think I may have already started using it 🙂
Key points:
1) The framing of the key question in the survey is very poor. They ask about whether the planet has warmed over the last 150 years, and the degree of anthropogenic influence on that warming over those 150 years. Based on the best information currently available, the planet cooled slightly in the first 50 of those 150 years, mostly due to natural causes in the form of a string of volcanoes. It then warmed rapidly over the middle fifty years, mostly due to natural causes in the form of an absence of volcanoes plus a rapidly warming Sun, and then warmed rapidly again in the last fifty of those 150 years, primarily due to anthropogenic factors. The last of those fifty-year intervals is the only one on which the IPCC expresses a consensus view that most of the warming was anthropogenic. By including all 150 years in the question, that issue is substantially blurred. To the authors’ credit, they recognize this issue, writing:
It should be noted that if six felt strongly enough to email about this issue, probably significantly more than six would also have changed their answer for a 50-year time frame, and given the number of respondents, particularly among the “climate experts”, that would have a large effect on the percentages. Further, I would speculate that a higher proportion of “experts” on climate would be aware of this issue and be influenced by the change in the number of years, so that expert opinion in favour of an anthropogenic cause of recent warming would be more understated than non-expert opinion.
2) The sample over represents retired and elderly members of the AMS. Again, from the paper:
Given that older people are more likely to reject AGW than younger people, and particularly so after retirement (anecdotally), this suggests the second of the two hypotheses proposed above is true, i.e., that AMS members sceptical of AGW were more likely to respond than those who accepted it.
3) The test for expertise in climate (>50% of papers published on climate) is weak, and the category may include a small but significant number of non-experts (given the low sample size). AMS membership is not limited to scientists, but can be obtained by broadcast meteorologists as well. Typically, broadcast meteorologists would require at least an undergraduate degree in meteorology to obtain full membership, and associate members were excluded from the survey. However, it is arguable that publishing a scientific paper would meet the requirement of “having demonstrable professional or scholarly expertise in the atmospheric or related sciences”, so that, e.g., Anthony Watts could now be a full member. The point here is that, particularly in the non-publishing category, membership of the AMS does not represent significant expertise, or even knowledge, on climate. Even in the published category, a non-expert who happens to get their name on a scientific paper or two would be categorized in the most expert category in this survey.
Weathermen are also not the same as researchers working on meteorology. From them I would expect more sensible responses.
I’m a bit confused by Williard’s link also, which is dated 2012 but the paper has only just been published. I think it’s because the survey results were released back then but now the paper has been officially published. Would that be right?
I’m a little bit heartened to learn of the low response rate, not because all those people couldn’t be bothered completing the survey but because it goes some way to explaining the 52% which is really quite astonishing. It’s also telling that there are more older, white males in the mix as this group does seem to be overrepresented in the AGW-rejection crowd.
Rachel, I suspect that’s right. I think I remember reading about the survey a while ago, so presumably they released the results and have only just finished the paper.
I would agree that the demographics of the sample leave something to be desired given the conclusions reached. It might be more informative if an analysis were performed on the published-climate-scientist sub-sample.
BBD makes an assumption that the survey MIGHT have been skewed. Equally, it MIGHT NOT have been. There is a tendency to make this claim in every survey where results do not meet reader expectations. Is it not possible that perhaps there are simply more skeptics than believers?
Concerning the response rate, 23% is well in order for an online survey:
Hamilton (2010) produced a white paper that analyzed 199 surveys. The total response rate of these surveys, calculated using the total number of invitations sent out across the 199 surveys and the total number of responses received, was 13.35%. He noted that large invitation lists (>1000) tend to be associated with lower individual response rates.
Hamilton, Michael Braun (2010) ‘Online Survey Response Rates and Times: Background and Guidance for Industry.’ http://www.supersurvey.com/papers/supersurvey_white_paper_response_rates.pdf (accessed 12.02.2010).
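Hamilton’s aggregate figure is a pooled rate (total responses over total invitations), which is not the same as the average of the individual survey rates. A minimal sketch, with purely hypothetical counts, illustrates why large invitation lists with low individual rates drag the pooled figure down:

```python
# Pooled vs. mean response rate for a set of surveys.
# The counts below are hypothetical, chosen only to illustrate the calculation.
surveys = [
    (5000, 400),   # (invitations sent, responses received)
    (300, 150),
    (1200, 240),
]

total_sent = sum(sent for sent, _ in surveys)
total_resp = sum(resp for _, resp in surveys)

pooled_rate = total_resp / total_sent                      # Hamilton-style pooled rate
mean_rate = sum(r / s for s, r in surveys) / len(surveys)  # mean of individual rates

# The large low-rate survey dominates the pooled figure,
# so it sits well below the mean of the individual rates.
print(f"pooled: {pooled_rate:.2%}, mean: {mean_rate:.2%}")
```

This is why a 13.35% pooled figure across 199 surveys is compatible with many individual surveys clearing 20%.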
Visser et al. (1996) showed that surveys with lower response rates (near 20%) tended to produce more accurate results than surveys with higher response rates.
Visser, Penny S., Jon A. Krosnick, Jesse Marquette and Michael Curtin (1996) ‘Mail Surveys for Election Forecasting? An Evaluation of the Columbus Dispatch Poll.’ Public Opinion Quarterly 60: 181–227.
Holbrook et al (2007) concluded that a low response rate does not necessarily equate to a lower level of accuracy but simply indicates a risk of lower accuracy.
Holbrook, Allyson, Jon Krosnick and Alison Pfent (2007) ‘The Causes and Consequences of Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms’ in Advances in Telephone Survey Methodology, ed. James M. Lepkowski, N. Clyde Tucker, J. Michael Brick, Edith D. de Leeuw, Lilli Japec, Paul J. Lavrakas, Michael W. Link, and Roberta L. Sangster. New York: Wiley.
Harris Interactive, a well-established organization specializing in web-based surveys, used a convenience sample of 70,932 California residents in a survey of attitudes towards healthcare. An email was sent to potential respondents with a link to a web survey and non-respondents received one reminder email. The response rate for the Harris Interactive survey was 2%.
Harris Interactive. (2008) ‘Climate Scientists Agree on Warming, Disagree on Dangers, and Don’t Trust Media’s Coverage of Climate Change.’ Statistical Assessment Service, George Mason University. (http://stats.org/stories/2008/global_warming_survey_apr23_08.html)
Stewart et al. (1992), a SCIENCEnet electronic survey, received 118 responses from “a computer-based network … which has over 4000 subscribers” (p. 2); the National Defense University Study (1978) based its conclusions on the responses of 21 experts.
National Defense University (NDU) (1978) ‘Climate Change to the Year 2000: A Survey of Expert Opinion.’ National Defense University, 109 pp.
The Slade Survey (1989) based its conclusions on responses from 21 respondents.
Slade, D. H. (1990) ‘A Survey of Informed Opinion Regarding the Nature and Reality of a Global “Greenhouse” Warming.’ Climatic Change 16(1).
The Global Environmental Change Report Survey (1990) had a response rate of approximately 20% from a sample of 1500.
Morgan and Keith (1995) drew on a sample of 16 US climate scientists.
Morgan, M. G., and D. W. Keith (1995) ‘Subjective Judgements by Climate Experts.’ Environmental Science and Technology 29: 468A–476A.
Re self-selection bias: aren’t those people who respond to online surveys ALWAYS self-selecting? A similar bias could also occur with a researcher-selected sample. Granted, surveys of special groups are seldom, if ever, truly random.
As a general comment on sampling and response rates, sampling special groups (scientists) often results in a comparatively difficult sample selection and a comparatively low response rate. The difficulty of selecting such a sample is discussed in Committee on Assessing Fundamental Attitudes of Life Scientists as a Basis for Biosecurity Education, National Research Council’s (2009) report ‘A Survey of Attitudes and Actions on Dual Use Research in Life Sciences’.
Committee on Assessing Fundamental Attitudes of Life Scientists as a Basis for Biosecurity Education, National Research Council (2009) ‘A Survey of Attitudes and Actions on Dual Use Research in the Life Sciences.’
As I mentioned in a previous comment, I am sure there are many researchers who would welcome input in developing the perfect survey. If all of the critics out there who find time to criticize other surveys could take all of their critiques, rectify them, produce a good survey instrument, find the perfect sample, and then get the perfect response rate, then perhaps everyone would be satisfied – but I doubt it.
Dennis Bray, actually an analysis of the sub-sample of respondents having published at least five papers on climate science would be very interesting using the results from the current survey. I cannot think of any reason to think such a sample would be biased, although the result would still be biased relative to other surveys by the 150-year interval (as per my first point above).
Tom, interesting comment. Thanks.
Dennis, aren’t there two separate issues here? Surveys like those done by Cook et al. and Oreskes (for example) are explicitly intended to illustrate the consensus. The argument for doing such surveys is to counter claims that no such consensus exists. One could argue about whether or not such surveys are worthwhile, but that’s still their motivation.
The AMS survey, on the other hand, seems to be an attempt to try and understand why people might hold different views. The results are not intended to reflect the actual consensus (many of those surveyed would not be regarded as climate scientists) but the survey was intended as a step towards understanding why people might hold different views and what could be done to try and communicate climate science more effectively.
It would seem important to me to understand the role of a survey when designing how to undertake such a survey.
I fully agree. The AMS survey does not appear, as you say, to be designed to investigate anything ‘consensus’. That is what others have taken from the analysis. The AMS survey simply demonstrates what it set out to investigate; however, the ‘consensus’, as always, becomes a main external focus. I simply meant that for those concerned about consensus it would be nice to see a disaggregated analysis. Having stated their intention, I think the AMS addresses the issues that were of interest and concern as stated. But there is nothing to stop others from making their own interpretation of the findings. This tends to shift the intended focus of the survey and resurrect old and tired conflicts.
Dennis, indeed the AMS survey results are being used to claim things that are not actually what the paper set out to illustrate. It would be interesting to have some kind of analysis of consensus that all could agree on. If one is talking about consensus amongst experts or within the literature, then I would be very surprised if a different analysis produced anything particularly different to the surveys that have already been done, but maybe that’s my bias.
Wotts, you’ve linked to the read more hashtag for the WUWT page. You’ll need to remove the ‘#more-97796’ from the link to let the page open at the beginning.
I’m also a bit puzzled by how Watts talks about the results of the survey:
They’re not correcting like you said but taking stock of what the potential causes are for the different acceptance levels.
Collin, thanks. I noticed that and didn’t know what it meant 🙂
Exactly, the survey was intended to understand what might influence people’s views on AGW, not to illustrate the consensus view itself.
Well, I can say that the results of my 2013 survey will soon be made available online and that the level of consensus is the same, if not a little higher, than in the survey of 2008. But, as always, the survey will contain some things that people agree with and some they do not, and as such they will go out of their way to find fault with the survey and attempt to discredit the results and, in doing so, implicitly discredit the findings they agree with. I guess people see only what they want to see. Online, the results will be afforded no analysis, only the presentation of descriptive statistics, making no claims and fully transparent (I guess that trademark was assigned me earlier in this post :-)). Anyway, the point being, there will likely be multiple interpretations and fitting extractions – which is what appears to have happened with the AMS survey.
As for any analysis of consensus that all agree on, the first step would be an operational definition (or set of definitions) of ‘consensus concerning what’ and ‘by whom’. ‘Climate scientists’ is a very broad category: those working with GCMs, regional modellers, impact analysts, those with a research focus on abc or xyz … . All are not experts in everything. Can someone who works with GCMs be considered an expert on the social costs and risks associated with climate change? It would seem the AMS has extended the boundary of ‘expert’ and produced a homogenized result. An interesting result, as I said previously, would be the levels of consensus among the different demographic categories – if one is interested in consensus. For example, is consensus strongest within the group who are called published climate scientists; is consensus weakest among experts in other fields, non-experts, etc.? And if the level of consensus is highest only amongst published climate scientists, does this necessarily mean that this consensus is the most accurate?
Dennis:
This survey has nothing whatsoever to do with the physical basis for forced climate change. Nothing.
It tells us that some weathermen don’t understand the physical basis for forced climate change and suggests that they may also have a political bias which leads them to reject the strong scientific consensus on the physical basis for forced climate change.
Talking about surveys tells us nothing at all about climate science. Only about the value-laden bias of those who would prefer to talk about surveys instead of considering the physical basis for forced climate change as a scientific topic in its own right.
Tom makes several good points. In addition, the low level of acceptance of consensus among meteorologists isn’t news (or this study wouldn’t have been conducted). For example, in Doran & Zimmerman (2010), meteorologists had the 2nd-lowest level of consensus acceptance (behind mineral geologists).
Expertise clearly makes a big difference though. If you examine the table included in the above post, of meteorologists with climate expertise mostly publishing in climate science, 78% agreed with the consensus vs. just 2% saying GW is mostly natural. The overall 52% is mostly due to the non-publishing meteorologists (presumably a lot of TV weathermen), among whom only 38% agree with the consensus (and 8% deny that GW is happening at all). So it’s the non-publishing meteorologists who skew the results. Those who are publishing in climate science represent the smallest sample size by far.
But the key to this study is probably the correlation between acceptance of AGW and political ideology. Without looking at the numbers in detail, my guess is that a lot of non-publishing meteorologists are politically conservative, which biases them to reject AGW.
This made me smile:
DB:
Well, yes. Extending the definition of “expert” to include non-experts with a political bias against the scientific consensus will do that for you!
This point is probably already clear to everyone reading here, but I wanted to highlight it: The purpose of this survey was not to quantify the views of AMS members re climate change, but to try to characterize disagreements within the AMS membership so as to focus internal educational efforts. As such, the self-selecting sample was a feature, not a bug.
BBD:
That is too dismissive. Among those “weathermen” are people like Kevin Trenberth, who certainly know what they are talking about. The 52% figure is irrelevant to assessing the consensus among experts, but the 73% (n=231) among members who publish the majority of their papers on climate is certainly relevant, and a low figure. 73% do not a consensus make. As noted above, there are good reasons to think that figure is an underestimate. Of those, by far the most important is the 150-year time span. The number of members who are essentially weathermen who have had their name on just one or two papers would be small (my point 3 above). I mentioned it for completeness, rather than because it is a major factor. Likewise, among the “experts”, the age skew is likely to be a minor factor. It may be a significant factor in the 52% figure, but is unlikely to be a major factor in the 73% figure.
The problem is that we do not know how much the 150-year interval skews the results. Technically we don’t even know the sign of the skew, although given the known temperature and forcing history, I think a negative skew is overwhelmingly more likely. At most, if only the last 50 years had been mentioned, the result would have been lifted to 92% among experts (73% anthropogenic, plus 10% equal, plus 9% insufficient), but it is a stretch to assume the skew is that large, and the figure would more likely have been in the low 80% range, or possibly less.
Like Bray and von Storch, this survey is well conducted and appears to tell a different result to the common consensus message. Also like Bray and von Storch, there are reasons to think there is a negative skew on the crucial question based on the wording of that question. Both, however, are legitimate surveys and should be weighed in our actual assessment of the likely “consensus” value, even if that leads us to conclude that the “consensus” is only a super-majority.
Tom, you seem to be taking this survey a bit more seriously (in terms of consensus at least) than I have. It seemed to simply be survey to try and understand why people might hold different views with respect to AGW. It’s true, I guess, that the 150 year vs 50 year issue could have skewed it, but it still seems as though the main goal was to understand why different people hold different views, rather than to actually determine the consensus itself. As such, I had assumed that it was interesting, but not – directly – all that relevant to the consensus issue.
Tom
We will have to differ on the weighting placed on the AMS survey. I go with Wotts on this one.
When I say “some” weathermen, I exclude experts like Trenberth! I rather hoped this would be self-evident, but if it was not clear enough, then rest assured I agree with your point, although I hope you can also agree that I was not, in fact, making a sweepingly dismissive statement.
It’s too bad age data was not included. From past experience with other disciplines/societies, there is often an age skew, although of course that can correlate with political lean.
But any time, in any discipline, where people have built long careers on empirical experience, and new methods (especially computer-based ones that seem to replace expertise) come along, those with more experience tend to resist the new more than those with less. Anecdotally, I’ve seen this effect among vintners, automotive engineers, computer designers, oil exploration folks, doctors, and rock geologists.
Given how weathermen get beaten up for poor forecasts a few days off, it especially grates on them to hear climate scientists talk about decades away. (I’ve seen that sort of comment a few times.)
The age skew in this likely had other factors, since there’s no obvious reason why climate science should threaten retired nuclear physicists.
Just to let you know, the descriptive results of the 2013 survey of climate scientists are now available online at https://hzg.academia.edu/DennisBray
It is not perfect and there are no claims that it is, but …
Of interest in relation to the AMS survey is the comparison of questions 27 and 31. Question 31 asks:
Respondents’ answers are binned as 0, 10, 20, …, 100%, with 81.6% indicating that greater than 50% of the warming in the last 150 years was anthropogenic. That compares with the 73% of “experts” in the AMS survey who think that most global warming – defined as “the premise that the world’s average temperature has been increasing over the past 150 years, may be increasing more in the future, and that the world’s climate may change as a result” – is anthropogenic.
Question 27 asks:
In response, 2.5% were “Not at all convinced” (1); 16.6% were at least a little convinced (2–4); and 80.9% were significantly convinced (5–7). The strongest level of conviction that could be registered was “Very much convinced” (7). The very similar results for people significantly convinced (question 27) and for people who felt that most of the warming over the last 150 years was anthropogenic (question 31) in Bray and von Storch’s survey suggest the extended period was not a major bias in the AMS survey.
It is interesting to note that the percentage of respondents significantly convinced “that most of recent or near future climate change is, or will be, a result of anthropogenic causes” has declined from 83.5% in 2008. The decline is too small to be statistically significant.
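Whether a drop from 83.5% to 80.9% clears statistical significance depends on the sample sizes, which aren’t quoted here. A hedged sketch of a standard two-proportion z-test, using hypothetical sample sizes of a few hundred per survey wave (the real n’s would come from the survey reports):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference of two proportions,
    using the pooled estimate under the null of no difference."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical sample sizes for the 2008 and 2013 waves.
z = two_proportion_z(0.835, 370, 0.809, 280)

# |z| < 1.96 means the decline is not significant at the 5% level.
print(round(z, 2))
```

With samples of this size, a 2.6-point decline is comfortably within sampling noise, which is consistent with the comment’s conclusion; much larger samples would be needed before a gap that small became significant.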
Thanks for the analysis Tom. A much more pleasant way to conduct discourse than the last time.
Dennis, you’re the one who started in on the ad hominems last time. If you can restrain yourself, I will have no desire to respond in kind.
I am not going to go there.
Thanks Dennis 🙂
I noticed one of the commenters on Die Klimazwiebel was astonished by the positive responses about the ability of climate models to predict the future and to model regional effects. I noticed you responded there, but wondered if you were willing to elaborate a bit more. I can believe that maybe there is some bias (conscious or not), but it’s very hard to believe that a largish group of scientists would rate something as quite good when in fact it is actually quite bad. Maybe they make it seem slightly better than it actually is, but it would seem that the positive response has some validity.
So the new survey is titled “A survey of the perceptions of climate scientists 2013” but oddly includes nothing at all about paleo, and for that matter seems to place far too great an emphasis on models relative to observations.
But regarding the models, at this point isn’t “global model” a somewhat imprecise term, GCMs, ESMs and EMICs being rather different animals? Also, I would have been very interested in a question about the implications of the model failure on polar amplification.
And “36. Some scientists present extreme accounts of catastrophic impacts related to climate change in a popular format with the claim that it is their task to alert the public. How much do you agree with this practice?” Loaded question much?
Heartland Institute spreads disinformation about the AMS survey.