Climate modelling is at a tipping point. Desperate to counter increasing assaults on its reputation, the discipline is making a controversial new pitch. It is promising (one day) to overcome the “scale challenge” and produce accurate “fine-scale (regional and local) climate projections” (not predictions).
But there is a “truly scary” catch: millions of additional research dollars will be needed to fund soaring computational costs, deal with a looming data-storage nightmare, and reverse the “worrying” decline in quantitative and technologically capable graduates, more of whom prefer investment banking to trying to save humankind from itself – and alleged climate oblivion.
The kookaburras were laughing outside the University of Western Australia’s Alexander Lecture Theatre. Inside, an audience of about 80 heard much ado about “the decisions we make as a species”, how the science of climate change “was nailed 50 years ago”, and a lament over the “complete disconnect between public understanding and the science”, as Professor Andrew Pitman, director of the ARC Centre of Excellence for Climate System Science at the University of New South Wales, presented this year’s UWA Joseph Gentilli Memorial Lecture last month.
Pitman, a lead author on the Intergovernmental Panel on Climate Change’s third and fourth assessment reports and a Review Editor of the fifth assessment report (AR5), scheduled for release on 27 September, kicked off with a riff on atmospheric carbon dioxide content. It was now “way above the natural level of 300ppm” at about 400ppm.
IPCC’s AR4 (2007) expressed “considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above”, on timescales of 50 to 100 years.
Pitman’s enthusiasm for this view was no surprise. ARC’s focus is developing “extraordinarily sophisticated” coupled climate models “because crystal balls don’t work very well”. The problem is the models are not working very well either.
One big elephant in the IPCC room is the (unpredicted) global temperature standstill since the mid-1990s. According to a recent post by Barry Brill, the IPCC relies on an array of models “without any curiosity about why they have been so wrong for so long.”
IPCC lead author Hans von Storch is, ironically, on the same page. He recently told Der Spiegel that “if things continue as they have been, in five years, at the latest, we will need to acknowledge that something is fundamentally wrong with our climate models...”
There are other issues too. John McLean is 95% certain the IPCC is untrustworthy. For him, scientists who claim they are “95% sure” humankind is to blame for climate change are guilty of “a conglomeration of fraud, denial and lack of logic”, owing to the way IPCC reports are “written, edited, re-written and then, at the very end, apt to be recast all over again in the interests of politics and political expedience.”
Pitman and his colleagues know that to remain relevant, and to justify the billions of climate research dollars spent globally each year, they have to offer something new. Alarmist rhetoric about vulnerability to climate change in half a century and beyond will no longer do the trick. They have to convince governments that more grant money will deliver something useful, such as “much more detailed temporal [daily, hourly] information around extremes”.
For him, the basic science is “settled at the big picture scale”. The “phenomenally interesting” challenge now is to re-engineer global simulations to make predictions at the catchment, regional and paddock scales (43:40 min.).
According to other researchers, however, there is a long way to go. Extreme weather event (EWE) analysis is a tricky business. Reluctant to be caught out over-promising (again), the orthodoxy admits it also should “manage expectations”.