As climate science advances, forecasts are likely to become less - not more - precise, making it more difficult to convince the public of the reality of climate change.
I think I can predict right now the headlines that will follow publication of the next report from the Intergovernmental Panel on Climate Change (IPCC), due in 2013. “Climate scientists back off predicting rate of warming: ‘The more we know the less we can be sure of,’ says UN panel.”
That is almost bound to be the drift if two-time IPCC lead author Kevin Trenberth and others are right about what is happening to the new generation of climate models. And with public trust in climate science on the slide after the various scandals of the past year over emails and a mistaken forecast of Himalayan ice loss, it hardly seems likely scientists will be treated kindly.
It may not matter much who is in charge at the IPCC by then: whether or not current chairman Rajendra Pachauri keeps his job, the reception will be rough. And if climate negotiators have still failed to do a deal to replace the Kyoto Protocol, which lapses at the end of 2012, the fallout will not be pretty, either diplomatically or climatically.
Clearly, concerns about how climate scientists handle complex issues of scientific uncertainty are set to escalate. They were highlighted in a report about IPCC procedures published in late August in response to growing criticism of IPCC errors. The report identified distortions and exaggerations in IPCC reports, many of which stemmed from failing to correctly represent the uncertainty surrounding specific predictions.
But efforts to rectify the problems in the next IPCC climate-science assessment (AR5) are likely to further shake public confidence in the reliability of IPCC climate forecasts.
Last January, Trenberth, head of climate analysis at the National Center for Atmospheric Research in Boulder, Colo., published a little-noticed commentary in Nature online. Headlined “More Knowledge, Less Certainty,” it warned that “the uncertainty in AR5’s predictions and projections will be much greater than in previous IPCC reports”. He added that “this could present a major problem for public understanding of climate change”. He can say that again.
This plays out most obviously in the critical estimate of how much warming is likely between 1990, the baseline year for most IPCC work, and 2100. The current AR4 report says it will be between 1.8 and 4.0 degrees Celsius. But the betting is now that the range offered next time will be wider, especially at the top end.
The public has a simple view about scientific uncertainty. It can accept that science doesn’t have all the answers, and that scientists try to encapsulate those uncertainties with devices like error bars and estimates of statistical significance. What even the wisest heads will have trouble with, though, is the notion that greater understanding results in wider error bars than before.
Trenberth explained in his Nature commentary why a widening is all but certain. “While our knowledge of certain factors [responsible for climate change] does increase,” he wrote, “so does our understanding of factors we previously did not account for or even recognize”. The trouble is, this sounds dangerously like what Donald Rumsfeld, in the midst of the chaos of the Iraq War, famously called “unknown unknowns”. I would guess that the IPCC will have even less luck than he did in explaining what it means by this.
The latest climate modeling runs are trying to come to grips with a range of factors ignored or only sketchily dealt with in the past. The most troubling is the role of clouds. Clouds have always been recognised as a ticking time bomb in climate models, because nobody can work out whether warming will change them in a way that amplifies or moderates warming - still less how much. And their influence could be very large. “Clouds remain one of the largest uncertainties in the climate system’s response to temperature changes,” says Bruce Wielicki, a scientist at NASA’s Langley Research Center who is investigating the impact of clouds on the Earth’s energy budget.
An added problem in understanding clouds is the role of aerosols from industrial smogs, which dramatically influence the radiation properties of clouds. “Aerosols are a mess,” says Thomas Charlock, a senior scientist at the Langley Research Center and co-investigator in a NASA project known as Clouds and the Earth’s Radiant Energy System (CERES). “We don’t know how much is out there. We just can’t estimate their influence with calculations alone.”
Trenberth noted in Nature, “Because different groups are using relatively new techniques for incorporating aerosol effects into the models, the spread of results will probably be much larger than before”.
A second problem for forecasting is the potential for warming to either enhance or destabilise existing natural sinks of carbon dioxide and methane in soils, forests, permafrost, and beneath the ocean. Again these could slow warming through negative feedbacks or - more likely, according to recent assessments - speed up warming, perhaps rather suddenly as the planetary system crosses critical thresholds.
The next models will be working hard to take these factors into better account. Whether they go as far as some preliminary runs published in 2005, which suggested potential warming of 10 degrees C or more, is not clear. Of course, uncertainty is to be expected, given the range of potential feedbacks that have to be taken into account. But it is going to be hard to explain why, when you put more and better information into climate models, they do not home in on a more precise answer.
Yet it will be more honest, says Leonard Smith, a mathematician and statistician at the University of Oxford, England, who warns about the “naïve realism” of past climate modeling. In the past, he says, models have been “over-interpreted and misinterpreted. We need to drop the pretense that they are nearly perfect. They are getting better. But as we change our predictions, how do we maintain the credibility of the science?”
The only logical conclusion for a confused and increasingly wary public may be that if the error bars were wrong before, they cannot be trusted now. If they do not in some way encapsulate the “unknowns,” what purpose do they have?
Despite much handwringing, the IPCC has never worked out how to make sense of uncertainty. Take the progress of those error bars in assessing warming between 1990 and 2100.
The panel’s first assessment, published back in 1990, predicted a warming of 3 degrees C by 2100, with no error bars. The second assessment, in 1995, suggested a warming of between 1 and 3.5 degrees C. The third, in 2001, widened the bars to project a warming of 1.4 to 5.8 degrees C. The fourth assessment in 2007 contracted them again, from 1.8 to 4.0 degrees C. I don’t think the public will be so understanding if they are widened again, but that now seems likely.
Trenberth is nobody’s idea of someone anxious to rock the IPCC boat. He is an IPCC insider, having been lead author on key chapters in both 2001 and 2007, and recently appointed as a review editor for AR5. Back in 2005 he made waves by directly linking Hurricane Katrina to global warming. But in the past couple of years he has taken a growing interest in highlighting uncertainties in the climate science.
Late last year, bloggers investigating the “climategate” emails highlighted a message he sent to colleagues in which he said it was a “travesty” that scientists could not explain cool years like 2008. His point, made earlier in the journal Current Opinion in Environmental Sustainability, was that “it is not a sufficient explanation to say that a cool year is due to natural variability”. Such explanations, he said, “do not provide the physical mechanisms involved”. He wanted scientists to do better.
In his Nature commentary, Trenberth wondered aloud whether the IPCC wouldn’t be better off getting out of the prediction business. “Performing cutting edge science in public could easily lead to misinterpretation,” he wrote. But the lesson of climategate is that efforts to keep such discussion away from the public have a habit of backfiring spectacularly.
All scientific assessments have to grapple with how to present uncertainties. Inevitably they make compromises between the desire to convey complexity and the need to impart clear and understandable messages to a wider public. But the IPCC is caught on a particular dilemma because its founding purpose, in the late 1980s, was to reach consensus on climate science and report back to the world in a form that would allow momentous decisions to be taken. So the IPCC has always been under pressure to try to find consensus even where none exists. And critics argue that that has sometimes compromised its assessments of uncertainty.
The last assessment was replete with terms like “extremely likely” and “high confidence”. Critics charged that these judgments often lacked credibility. And last August’s blue-chip review of the IPCC’s performance, by the InterAcademy Council, seemed to side with the critics.
The council’s chairman, Harold Shapiro of Princeton, said existing IPCC guidelines on presenting uncertainty “have not been consistently followed”. In particular, its analysis of the likely impacts of climate change “contains many statements that were assigned high confidence but for which there is little evidence”. The predictions were not plucked from the air. But the charge against the IPCC is that its authors did not always correctly portray the uncertainty surrounding the predictions or present alternative scenarios.
The most notorious failure was the claim that the Himalayan glaciers could all have melted by 2035. This was an egregious error resulting from cut-and-pasting a non-peer-reviewed claim from a report by a non-governmental organisation. So was a claim that 55 per cent of the Netherlands lies below sea level. But other errors were failures to articulate uncertainties. The review highlighted a claim that even a mild loss of rainfall over the Amazon could destroy 40 per cent of the rainforest, even though only one modeling study has predicted this.
Another headline claim in the report, in a chapter on Africa, was that “projected reductions in [crop] yield in some countries could be as much as 50 per cent by 2020”. The only source was an 11-page paper by a Moroccan named Ali Agoumi that covered only three of Africa’s 53 countries (Morocco, Tunisia, and Algeria) and had not gone through peer review. It simply asserted that “studies on the future of vital agriculture in the region have shown ... deficient yields from rain-based agriculture of up to 50 per cent during the 2000-2020 period”. No studies were named. And even Agoumi did not claim the changes were necessarily caused by climate change. In fact, harvests in North Africa already differ by 50 per cent or more from one year to the next, depending on rainfall. In other words, Agoumi’s paper said nothing at all about how climate change might or might not change farm yields across Africa. None of this was conveyed by the report.
In general, the InterAcademy Council’s report noted a tendency to “emphasise the negative impacts of climate change,” many of which were “not supported sufficiently in the literature, not put into perspective, or not expressed clearly”. Efforts to eliminate these failings will necessarily widen the error bars on a range of predictions in the next assessment.
We are all - authors and readers of IPCC reports alike - going to have to get used to greater caution in IPCC reports and greater uncertainty in imagining exactly how climate change will play out. This is probably healthy. It is certainly more honest. But it in no way undermines the case, backed by ample evidence we are already observing, that the world is on the threshold of profound and potentially catastrophic warming. And it in no way undermines the urgent need to do something to halt the forces behind the warming.
Some argue that scientific uncertainty should make us refrain from action to slow climate change. The more rational response, given the scale of what we could face, is the precise opposite.