The Great Global Warming Blunder: How Mother Nature Fooled the World’s Top Climate Scientists by Roy Spencer was given to me for review by the publisher.
Clouds
The trick Spencer says Mother Nature played on the world’s top climate scientists was to pull the cotton over their eyes. Cotton, I say, as in clouds. Spencer says other climatologists don’t understand clouds the way he does. Everybody has noticed that, at times, there have been fewer clouds hanging about. Spencer’s special understanding impels him to claim that fewer clouds cause the higher temperatures we have also seen. The other fellows insist that higher temperatures drove the clouds away. Who is right?
Let the battle commence!
We can’t just consider clouds, but must also investigate various other forces that might change the climate. However, there are only minor skirmishes over forcing. All agree that, on average, more CO2, and other similar gases, pumped into the atmosphere means warmer weather. But how much warmer? If climate models are run at twice the pre-industrial levels of CO2, the direct warming effect is predicted to be only about 1 degree C. “And since atmospheric convection typically causes more warming at high altitudes than near the surface, the surface warming can amount to only 0.5 C.” Half a degree? A pittance! So why fret?
Because positive feedback might take that half degree and ramp it up into two, three, even four or more degrees at which point we’d face…well, we’d face something all right. Anybody paying attention to press reports might guess this something will be an environmental apocalypse, but never mind that. It’s feedback where the real fighting occurs.
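The arithmetic behind that ramping up is the standard feedback-gain relation: divide the no-feedback warming by one minus the net feedback factor. A minimal sketch, with feedback factors chosen by me purely for illustration (they are not numbers from the book):

```python
# Standard feedback-gain relation: total warming = direct warming / (1 - f),
# where f is the net feedback factor. The factors below are illustrative only.
direct_warming = 0.5  # degrees C, the no-feedback surface warming quoted above
for f in (0.0, 0.75, 0.833, 0.875):
    total = direct_warming / (1.0 - f)
    print(f"feedback factor {f:.3f} -> total warming {total:.1f} C")
```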
Spencer spends a couple of chapters laying out the plan of his attack, first drawing the differences between forcing and feedback, writing for an audience who have had no experience in such matters. The examples are fine, but can be skipped by anybody who is looking for the heavy artillery, which is in Chapters 5 and 6.
Feedback and Forcing
All climate models - doing what they are designed to - predict the atmosphere will warm. But how much of the warming predicted by models have we seen so far? If anybody gives you a number which he swears to, don’t believe him. The manner and the places at which we measure temperature have changed and changed again, and are changing more even now. Even the weather satellites in “fixed” orbits have a nasty habit of wandering from their appointed paths. Turns out the uncertainty in the measurements from all these disparate sources is larger than the suspected change in temperature. Yet it is still the satellites from which we derive our most reliable data.
From satellites we can measure both temperature and cloud cover, and we can estimate the various forcings and feedbacks affecting the climate system. One possible positive feedback says that as the temperature warms, low-level clouds decrease, which in turn lets in more sunlight, which causes more warming, which…well, you get the idea. Is this feedback genuine? There have been observations of fewer clouds, but the feedback could have worked in a negative direction, too. Fewer clouds could have let in more sun, which caused heating which led to fewer clouds, and so on.
But how do researchers know that “warmer temperatures caused a decrease in cloud cover, rather than the decrease in cloud cover causing warmer temperatures?” They do not: it is merely assumed. If the feedback is positive, we might have some worrying to do; but if the feedback is negative, we’ll have to find another subject over which to fret.
Spencer and a colleague decided to check which direction the feedback worked by examining the data - and not relying on a model. Plotting the radiative energy imbalance against the observed temperature change is one way to estimate the direction and magnitude of feedback. But only just over seven years of reliable data exist from the CERES satellite, which is not a lot. This means our certainty, no matter what is discovered, cannot be high. Spencer does not emphasize this, but neither do the folks on the other side. Over-certainty is rampant in this field.
Figure 14 in the book shows that, very roughly, when the energy imbalance (due to forcings) is positive, the temperature increases; likewise when the imbalance is negative, temperature decreases. But this relationship is noisy. So noisy that in more than a third of cases when the imbalance is positive, temperature decreases, and when the imbalance is negative, temperature increases. It is from this very highly variable relationship that feedback is estimated.
Spencer re-examines his data and notes that “month-to-month line segments are preferentially aligned along a” different slope than the regression line fitted to the raw measurements. The line fitted to the raw measurements implies positive feedbacks are important. The line fitted to the month-to-month line segments says that negative feedbacks are.
This strategy is unusual, so I ran a simple experiment to investigate it. I first generated random points with approximately the normal distribution of the temperature changes, and then simulated the regression line given in his picture with uncorrelated residuals (slope 2.5; the parameters chosen by eye; I stress the exact values do not matter). I then computed the ordinary regression line and also found how the “month-to-month line segments are preferentially aligned.” The simulated regression line - the true line in this case - indicates positive feedback. The month-to-month line segments will have a larger slope, which indicates negative feedback.
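A minimal sketch of the kind of experiment I mean is below. The numbers are invented, and “preferentially aligned” is summarized - by my own assumption, not Spencer’s method - as a regression on the month-to-month differences; it is only one way to do it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 86                                   # roughly seven years of monthly points
true_slope = 2.5                         # slope of the simulated "true" line
temp = rng.normal(0.0, 0.3, n)           # temperature-change points, K (invented)
flux = true_slope * temp + rng.normal(0.0, 1.5, n)   # uncorrelated residuals

# Ordinary regression line through the raw monthly points
ols_slope = np.polyfit(temp, flux, 1)[0]

# One way (an assumption) to summarize the month-to-month line segments:
# regress the month-to-month differences, i.e. the segments' rise over run
seg_slope = np.polyfit(np.diff(temp), np.diff(flux), 1)[0]

print(f"OLS slope on raw points:         {ols_slope:.2f}")
print(f"slope from month-to-month diffs: {seg_slope:.2f}")
```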
Using ordinary regression, this month-to-month line is wrong: that is, the negative feedback implied by it is false. BUT - and I want everybody to pay attention here - the ordinary regression line might very well be the wrong statistical model. That is, the month-to-month line-segments approach, at least to my ears, sounds like a better approximation to the physics than a linear regression. At the very least, more sophisticated time series models should be tried.
The data from month to month are correlated, obviously. But it is not clear to me that Spencer and other workers are properly accounting for this correlation when estimating feedback via regression. In my simulation I added in positively correlated data, to better approximate the real atmosphere. The situation is much the same: only in this case, we will be mis-estimating the actual regression line if we do not account for the correlation. There is really no excuse for neglecting the correlation, because methods for computing regressions in its presence are well known. Why they are so little used is anybody’s guess.
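For the record, a minimal sketch of one such well-known method - a one-pass Cochrane-Orcutt style correction for AR(1) residuals - looks like this. The data and autocorrelation coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi, sigma, rng):
    # AR(1) series, to mimic month-to-month persistence
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

n = 86
true_slope = 2.5
temp = ar1(n, 0.7, 0.2, rng)                        # correlated temperature changes, K
flux = true_slope * temp + ar1(n, 0.7, 1.0, rng)    # correlated residuals, W/m^2

# Naive ordinary least squares, ignoring the serial correlation
b1, b0 = np.polyfit(temp, flux, 1)

# Estimate the residual autocorrelation, quasi-difference, and refit (one pass)
resid = flux - (b1 * temp + b0)
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
b1_adj = np.polyfit(temp[1:] - rho * temp[:-1], flux[1:] - rho * flux[:-1], 1)[0]

print(f"OLS slope, correlation ignored: {b1:.2f}")
print(f"slope after AR(1) adjustment:   {b1_adj:.2f}")
```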
This doesn’t end it because Spencer, like many others, then decides the raw data look too noisy - why oh why do people feel compelled to prettify their data! - and so smooths them with “running three-month averages” and then recomputes his feedback parameter. This unwise maneuver affects the regression estimates! The final results depend on the exact nature of the smoothing. Experiments I ran show the naive, regression-estimated feedback parameter can veer either direction, higher or lower depending on the amount of smoothing and correlation. The month-to-month line segments can, too. In other words, smoothing is nuts.
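A minimal sketch of that smoothing experiment, again on invented data, is this: smooth first, then regress, and compare the slope with the one from the raw points.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 86
true_slope = 2.5
temp = rng.normal(0.0, 0.3, n)                       # invented monthly values, K
flux = true_slope * temp + rng.normal(0.0, 1.5, n)   # invented imbalances, W/m^2

def running_mean(x, k):
    # Simple running k-month average (the kind of smoothing described above)
    return np.convolve(x, np.ones(k) / k, mode="valid")

raw_slope = np.polyfit(temp, flux, 1)[0]
smoothed_slope = np.polyfit(running_mean(temp, 3), running_mean(flux, 3), 1)[0]

# The two estimates generally differ; by how much, and in which direction,
# depends on the noise, its correlation, and the width of the smoothing window.
print(f"feedback slope from raw data:      {raw_slope:.2f}")
print(f"feedback slope from smoothed data: {smoothed_slope:.2f}")
```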
I want to stress - and stress again and stress some more - that even if Spencer and other workers used the correct statistical methods, a simple glance at the raw data is enough to convince us that any pronouncements about estimated feedback parameters must be accompanied by more than a healthy dose of uncertainty. This uncertainty is rarely given; Spencer does not give it. As it stands, it could be either negative or positive, each about equally likely.
Spencer realizes this in part, and so built a toy climate model (I use the word “toy” as physicists do, not as denigration, but as proof-of-concept) which incorporates his ideas about feedback to examine how the estimation methods work when the feedback mechanism is known exactly. What his results mean
for the diagnosis of feedbacks from satellite data is that when there is a mixture of radiative and nonradiative forcings of temperature occurring, natural cloud fluctuations in the climate system will cause a bias in the diagnosed feedback in the direction of positive feedback, thus giving the illusion of an overly sensitive climate system.[emphasis in original]
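To make the quoted result concrete, here is a toy model of my own - not Spencer’s, and with every parameter invented: a zero-dimensional energy balance with a known feedback parameter, forced by a persistent cloud-like radiative term and a non-radiative term, with the feedback then diagnosed by the usual regression of the imbalance on temperature. Whether the diagnosed value falls short of the known one - the bias Spencer describes - can be checked directly by running it.

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1(n, phi, sigma, rng):
    # Persistent (AR(1)) forcing series; clouds do not reshuffle every month
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def diagnosed_feedback(n_months=84, lam=3.0, c=0.006, rng=rng):
    # One run of a zero-dimensional energy balance: c is roughly the warming
    # (K per W/m^2 per month) of a ~100 m ocean mixed layer; lam is the true,
    # known feedback parameter. All values are invented for illustration.
    f_rad = ar1(n_months, 0.9, 1.0, rng)    # cloud-like radiative forcing, W/m^2
    f_non = ar1(n_months, 0.9, 1.0, rng)    # non-radiative forcing, W/m^2
    T = np.zeros(n_months)
    N = np.zeros(n_months)
    for t in range(n_months - 1):
        N[t] = f_rad[t] - lam * T[t]                 # the imbalance a satellite sees
        T[t + 1] = T[t] + c * (N[t] + f_non[t])      # energy-balance update
    N[-1] = f_rad[-1] - lam * T[-1]
    # Diagnose feedback the usual way: regress the imbalance on temperature
    return -np.polyfit(T, N, 1)[0]

estimates = [diagnosed_feedback() for _ in range(300)]
print("true feedback parameter:    3.00 W/m^2/K")
print(f"mean diagnosed feedback:    {np.mean(estimates):.2f} W/m^2/K")
```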
Very well. It’s Spencer’s model against the models of the IPCC. Who will win? Who knows? We do know that the IPCC models purposely incorporate positive feedback - and when the results are examined, they say, “Look at this dangerous positive feedback! Positive feedback, since it shows in the results of our models, must be real.” Circular thinking, of course.
PDO
Spencer, like many climatologists, indulges in some misplaced teleological language when discussing the PDO, the Pacific Decadal Oscillation, and its role in the climate. The PDO is - yes, it’s true - based upon a statistical model, a function of sea surface temperatures. Now, sometimes SSTs go up, sometimes they go down; the PDO attempts to capture these comings and goings in a single-number index. Experience has shown that the PDO oscillates in a rough, thirty-or-so-year cycle. Some have found that these oscillations are correlated with various changes in the climate: not just temperature, but other weather-important variables.
These correlations should not be surprising: whatever is causing the SSTs to change will be causing other changes in the climate system either directly or indirectly. It would be shocking if this were not so. But it would be wrong to say, as many do say, that the PDO itself causes changes in the climate. This admonition holds also for ENSO. Thus, it’s no good pursuing PDO or ENSO saying that they can account for the observed warming. They cannot. They might be used in a statistical predictive sense, but are of little use as explanations of why the climate changes.
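To make plain what “a function of sea surface temperatures” means, here is a minimal sketch of a PDO-like index on invented data: anomalies of a gridded SST field summarized by their leading principal component. The real PDO calculation differs in its details; the point of the sketch is only that such an index is a summary statistic, not a physical agent.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented monthly SST field: 480 months x 200 grid points, with a slow shared
# oscillation (roughly a thirty-year cycle) plus noise, standing in for an ocean basin
months, points = 480, 200
shared = np.sin(2 * np.pi * np.arange(months) / 360.0)
pattern = rng.normal(0.0, 1.0, points)
sst = np.outer(shared, pattern) + rng.normal(0.0, 1.0, (months, points))

# Anomalies (remove each grid point's mean), then the leading principal
# component: a single number per month summarizing the whole field
anom = sst - sst.mean(axis=0)
_, _, vt = np.linalg.svd(anom, full_matrices=False)
index = anom @ vt[0]
index /= index.std()

print("first five values of the PDO-like index:", np.round(index[:5], 2))
```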
Matters Miscellaneous
I can well understand Spencer’s frustration when encountering True Belief, a malignancy in activists and a near fatal affliction in some climatologists. But his frequent, plaintive reminders that nobody has yet acknowledged his work put me in mind of the cry, “Fools! I’ll destroy them all!” I won’t say that Spencer is to climatology what Stephen Wolfram is to computer science or Gregory Chaitin is to information theory, but we get it already: climatologists are in love with their models and unfriendly towards the contrary evidence Spencer offers, evidence which may well prove true. A “little more humility might be appropriate” (p. 120) if he wants unconvinced audiences to take him seriously.
There’s a cute chapter on common logical fallacies rife in this heated science. My favorite is appeal to authority, most often invoked from the peanut gallery crying “Peer review!” when they hear a criticism they don’t like. To non-scientists, peer review must sound like magic, an assurance of correctness. But we on the inside know it for what it is: a weak filter of quality. It is a paper sword.
I wish Spencer would have left out the editorializing about matters economic and political. It’s unwise to commit resources to other fronts when the lines in front of you are not secure. Enemies will exploit the weaknesses of these secondary arguments and then trumpet their success in finding flaws. Ordinary observers will only hear that mistakes have been found and will dismiss the entire work, if that is most comforting to them.
Overall
This book isn’t the last word in climate science, nor can it be used as the only word, but it does contain some good words. Spencer’s climate theories cannot be ignored and should be understood by all modelers, and for that reason alone, the book is worth reading.