I was having an online debate recently with someone who is clearly skeptical of Anthropogenic Global Warming (AGW). During the debate they claimed that Roy Spencer had shown that there was some kind of strong negative feedback (possibly associated with CO2). I looked this up and came across an article on Roy Spencer’s website called “Strong negative feedback from the latest CERES radiation budget measurements”.
What he includes in this article is a figure (shown below) in which he plots the Radiative Flux Change (W m⁻²) against tropospheric temperature change (°C). The radiative flux change is measured by the CERES satellite, and what he is plotting (I believe) is the change (from some mean) of the radiative flux from the Earth. What he discovers is that the best-fit line has a gradient of 5.8 W m⁻² K⁻¹. He interprets this as evidence of a strong negative feedback.
I think I know what he’s done here and why it is wrong. If anyone would like to correct me, feel free. If one has a blackbody at temperature T, the radiative flux from this body (energy per square meter per second) is
F = σT⁴,
where σ is the Stefan-Boltzmann constant. The gradient of this flux (rate of change with temperature) is
dF/dT = 4σT³.
As shown here, the effective temperature of the Earth (as viewed from space) is −18 °C, or 255 K. If I plug T = 255 K into my equation for the gradient, I get 3.8 W m⁻² K⁻¹. Hence, I think, what Roy Spencer is suggesting is that because the gradient he measures is bigger than this, there must be some strong negative feedback. Basically, he’s suggesting that we would expect a 0.1 °C increase in temperature to produce a 0.38 W m⁻² increase in flux, but instead it is producing a 0.58 W m⁻² increase (and a correspondingly enhanced drop in flux for a 0.1 °C decrease in temperature).
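As a quick sanity check, the gradient above is easy to evaluate numerically. Here is a minimal Python sketch (the function and variable names are my own, not anything from Spencer’s article):

```python
# Stefan-Boltzmann constant, W m^-2 K^-4
SIGMA = 5.670374419e-8

def flux_gradient(T):
    """dF/dT = 4 * sigma * T^3 for a blackbody at temperature T (in K)."""
    return 4.0 * SIGMA * T**3

T_eff = 255.0  # effective (as seen from space) temperature of the Earth, K

print(flux_gradient(T_eff))        # ~3.76 W m^-2 K^-1
print(0.1 * flux_gradient(T_eff))  # ~0.38 W m^-2 flux change for 0.1 K of warming
```

which reproduces the 3.8 W m⁻² K⁻¹ and 0.38 W m⁻² figures quoted above.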
Let’s see if this makes sense. Let’s consider the planet’s actual surface (rather than the radiative surface). The Earth has an average surface temperature of about 290 K. If I plug this into my equation for the gradient, I get 5.53 W m⁻² K⁻¹. Therefore, if the surface temperature increases by 0.1 °C, I would expect the surface flux to increase by 0.55 W m⁻². What Roy Spencer seems to be suggesting is that, because the gradient at the top of the atmosphere is 3.8 W m⁻² K⁻¹, equilibrium would require that an increase of 0.55 W m⁻² at the Earth’s surface should produce an increase of only 0.38 W m⁻² at the top of the atmosphere. I would argue that this is fairly obviously wrong. Equilibrium requires that the flux increase be the same at both levels. The gradients, however, can differ. In order for the flux change to be the same, the change in temperature at the top of the atmosphere must be bigger than the change in temperature at the surface.
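The same sketch makes the equilibrium argument concrete (again, a minimal illustration with my own variable names; the 290 K and 255 K values are the ones used in the text):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_gradient(T):
    """dF/dT = 4 * sigma * T^3 for a blackbody at temperature T (in K)."""
    return 4.0 * SIGMA * T**3

g_surface = flux_gradient(290.0)  # ~5.53 W m^-2 K^-1 at the actual surface
g_toa = flux_gradient(255.0)      # ~3.76 W m^-2 K^-1 at the radiative surface

# A 0.1 K surface warming raises the surface flux by ~0.55 W m^-2.
dF_surface = 0.1 * g_surface

# For the top-of-atmosphere flux to rise by that same amount,
# the temperature there must change by more than 0.1 K:
dT_toa = dF_surface / g_toa  # ~0.15 K
```

In other words, matching the flux change requires a larger temperature change where the gradient is smaller, which is the point being made above.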
What I think Roy Spencer has done is to plot the radiative flux changes at the top of the atmosphere against temperature changes that effectively represent changes in the surface temperature, not changes in the top-of-atmosphere temperature (which can be different). The gradient he gets is largely consistent with what one would expect given this change in surface temperature. If he were able to measure the top-of-atmosphere temperature changes, he would presumably get a gradient closer to 3.8 W m⁻² K⁻¹. He also comments that if he plots the radiative flux change against sea surface temperature change, he gets a gradient of 11 W m⁻² K⁻¹. Well, yes: the sea surface temperature changes little, because of the ocean’s high heat capacity, and so it isn’t a good proxy for changes in the mean surface temperature. What I think he has basically shown is that the gradient is pretty much what one would expect. There might be some feedback, but it’s hard to imagine that a 0.1 °C change in surface temperature could drive a feedback so strong that the radiative flux change at the top of the atmosphere is more than 50% bigger than expected. That would be remarkable.
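To put a number on that last claim (same sketch assumptions as before), compare Spencer’s measured gradient with the expected top-of-atmosphere gradient:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

expected = 4.0 * SIGMA * 255.0**3  # expected top-of-atmosphere gradient, ~3.76
measured = 5.8                     # the gradient Spencer reports, W m^-2 K^-1

# Fractional excess of the measured gradient over the expected one:
excess = measured / expected - 1.0  # ~0.54, i.e. a bit over 50% bigger
```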