But bigger computers don't necessarily lead to greater accuracy in real-world outputs. The authors argue that where the outputs are short-term, say tomorrow's weather, and predictions are easily tested, then crunching large numbers may give you a better handle on the variables. But in 'climate' (conventionally, the average of thirty years' worth of weather), testing predictions about the future may mean waiting another thirty years, or even more. Jumping to conclusions here on the basis of what models and simulations say will most likely lead to bad real-world policy decisions.
The authors have done some predictive work themselves in the domains of weather, energy pricing and nuclear stewardship, and offer some advice to potential users. For example, they use a 72-hour accumulation of knowledge to decide whether a humanitarian crisis is likely after a severe weather event. They warn against using the 'best available' model unless it is also arguably adequate for the purpose. And they pose a set of questions that ought to be asked, and answered, every time a model is put forward as a solution to a real-world issue. Among them:
...is it possible to construct severe tests for extrapolation (climate-like) tasks? Is the system reflexive; does it respond to the forecasts themselves? How do we evaluate models: against real-world variables, or against a contrived index, or against other models? Or are they primarily evaluated by means of their epistemic or physical foundations? Or, one step further, are they primarily explanatory models for insight and understanding rather than quantitative forecast machines? Does the model in fact assist with human understanding of the system, or is it so complex that it becomes a prosthesis of understanding in itself?
There are at least two ways to escape from model-land. One is repeatedly to challenge the model to make out-of-sample predictions and see how well it performs. This is possible for weather, or weather-like issues, where the forecast lead-time is much shorter than the model's likely lifetime. You could in principle keep using the model to forecast today's weather a year from now, but you'd probably do better just to predict that it will be rather like today's weather.
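To make that concrete, here is a minimal sketch of the kind of out-of-sample check being described: score a model's forecasts against what actually happened, and compare that skill with the naive persistence forecast ("tomorrow will be rather like today"). The data, the model's errors and the error measure below are illustrative assumptions of mine, not anything drawn from the paper.

```python
# Illustrative sketch: compare out-of-sample forecast skill against a
# naive persistence baseline. All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily temperatures: a seasonal cycle plus weather noise.
days = np.arange(365)
observed = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 3, days.size)

# Hypothetical model forecasts for the same days (truth plus assumed model error).
model_forecast = observed + rng.normal(0, 2, days.size)

# Persistence baseline: forecast each day using the previous day's observation.
persistence_forecast = np.roll(observed, 1)

def mean_abs_error(forecast, truth):
    """Average absolute difference between the forecast and what actually happened."""
    return np.mean(np.abs(forecast - truth))

# Skip day 0, where persistence has no prior observation to copy.
print("Model MAE:      ", mean_abs_error(model_forecast[1:], observed[1:]))
print("Persistence MAE:", mean_abs_error(persistence_forecast[1:], observed[1:]))
```

If the model cannot beat persistence on data it has never seen, its numbers are still model-land numbers; the long lead-times of climate-like questions are precisely what makes this sort of test so hard to run.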
The other, for climate-like issues, is to employ expert judgment, which is what the IPCC did in its last Assessment Report. Here we also need to consider uncertainty, something Judith Curry has written about for several years: not the uncertainty of the expert judgment itself, but the uncertainty that lies between model-land and the real world.
This is a most interesting paper. The authors stress that their aim is not to discard models and simulations, but to make them more effective. They conclude:
More generally, letting go of the phantastic mathematical objects and achievables of model-land can lead to more relevant information on the real world and thus better-informed decision-making. Escaping from model-land may not always be comfortable, but it is necessary if we are to make better decisions.
To which I say, 'Hear, hear!'